The goal of this project is to build an autonomous driving robot commanded by voice. Hardware and approach:
- 360° LIDAR
- 9 DOF IMU
- Triple microphone array
- Rotary encoder
- PPM output to drive the motors
- Use an external API for voice recognition and speech synthesis; use the resulting high-level data to understand commands.
- Use wheels to move
- Use the sensor inputs to "locate" itself in space; the triple microphone array lets it estimate where commands are coming from.
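The microphone-based direction finding mentioned above is usually done with time-difference-of-arrival (TDOA): a sound reaches each microphone at a slightly different instant, and the delay between a pair of microphones maps to an angle. A minimal sketch of that mapping for one microphone pair (the function name and far-field assumption are mine, not from the project notes):

```python
import math

# Speed of sound in air at roughly 20 °C, in m/s.
SPEED_OF_SOUND = 343.0

def direction_from_tdoa(delay_s: float, mic_spacing_m: float) -> float:
    """Estimate the angle of arrival (degrees) from the time-difference-
    of-arrival between two microphones.

    Assumes a far-field source, so the wavefront is planar and
    delay = spacing * cos(theta) / c. 90° means the source is broadside
    to the pair; 0° means it is on the axis of the pair.
    """
    # Clamp to the physically valid range before taking acos, since
    # measurement noise can push the ratio slightly outside [-1, 1].
    cos_theta = max(-1.0, min(1.0, delay_s * SPEED_OF_SOUND / mic_spacing_m))
    return math.degrees(math.acos(cos_theta))
```

With three microphones, two such pairwise angles are enough to resolve the bearing without front/back ambiguity. Measuring the delay itself is typically done by cross-correlating the two audio signals.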
Idea of the application
- Microphones => Web audio recognition API => Neural network
- IMU (gyroscope, accelerometer, compass) => Neural network
- Rotary wheel encoders => Neural network
- 360° LIDAR sweep => Neural network
- Neural network (time-delayed input) => data => Text-to-speech => Speakers
- Neural network => impulses => Motors
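The pipeline above feeds each sensor into a network whose input covers several past time steps. A minimal sketch of that time-delayed input buffering (all class and field names here are illustrative; the "network" is a stub that just builds the input vector):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SensorFrame:
    """One time step of raw inputs (field names are illustrative)."""
    imu: List[float]        # gyroscope, accelerometer, compass readings
    encoders: List[float]   # rotary wheel encoder ticks
    lidar: List[float]      # ranges from one 360° LIDAR sweep
    command: str            # text from the speech-recognition API

class TimeDelayedController:
    """Buffers the last N sensor frames and flattens them into one
    feature vector, standing in for the time-delayed neural network
    input in the diagram above."""

    def __init__(self, history: int = 3):
        self.history = history
        self.frames: List[SensorFrame] = []

    def push(self, frame: SensorFrame) -> List[float]:
        # Keep only the most recent `history` frames.
        self.frames.append(frame)
        self.frames = self.frames[-self.history:]
        # Concatenate the buffered frames into the network's input.
        features: List[float] = []
        for f in self.frames:
            features.extend(f.imu + f.encoders + f.lidar)
        return features
```

In the full system this vector would be fed to the trained network, whose outputs are then split between the text-to-speech path and the motor-impulse path.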