Stay updated and follow us on Instagram: @mutt.mentor
Our first project on Hackster was also our first to run machine learning models on resource-limited devices like the Raspberry Pi. We called that project PoochPak.
At the time, simply setting up TensorFlow on a Pi was part of the challenge. But now it's possible to deploy models trained in TensorFlow on devices like an Arduino!
In this project, we come full circle to teach an old hack a new trick.
Specifically, we use the PoochPak information gathering platform to collect and annotate data for our new smart dog collar: the Mutt Mentor.
The idea behind the Mutt Mentor is simple: it's an automatic dog clicker.
Practically speaking, this smart dog collar identifies when your pooch performs a trick you want to teach, then it emits a reinforcing marker tone.
We use TensorFlow to train a small neural network to identify an action based on a small time window of accelerometer data while our dog performs a trick.
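Before training, the raw accelerometer stream has to be sliced into fixed-length examples. The sketch below shows the idea in plain Python; the window length and stride are illustrative assumptions, not the exact values used in the project.

```python
# Hypothetical sketch: slice a stream of (ax, ay, az) accelerometer
# readings into fixed-length, overlapping windows for training.
WINDOW = 50   # samples per example (e.g. ~0.5 s at 100 Hz) -- assumed
STRIDE = 25   # 50% overlap between consecutive windows -- assumed

def make_windows(samples, window=WINDOW, stride=STRIDE):
    """Return a list of fixed-length windows from a sample stream."""
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, stride)]

stream = [(0.0, 0.0, 1.0)] * 100   # 100 dummy readings
windows = make_windows(stream)
print(len(windows))  # 3 windows, starting at samples 0, 25, and 50
```

Overlapping windows squeeze more training examples out of each recording session, which matters when every sample requires a cooperative dog.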
Using the Arduino Nano 33 BLE Sense, we can bring this model into a featherweight form factor.
Since the board comes equipped with an accelerometer and LEDs, we only need to wire up a buzzer to emit a tone for feedback.
Since we are also training a machine learning model, we will need a source of labeled training data.
We need to gather data in realistic settings, including on the go, so we use PoochPak to gather accelerometer data.
By connecting a PS3 Sixaxis controller to the Raspberry Pi via Bluetooth, we can annotate these data points in real time.
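The annotation scheme can be sketched as a simple button-to-label mapping: a press on the controller tags the most recent accelerometer window with a trick label. The button names and mapping below are illustrative; the real assignment depends on the controller driver.

```python
# Hypothetical annotation sketch: each controller button maps to a
# trick label, and a press tags the most recent window of readings.
BUTTON_LABELS = {
    "cross": "sit",        # button assignments are assumptions
    "circle": "jump",
    "triangle": "rollover",
}

def annotate(window, button):
    """Pair a window of accelerometer readings with a button's label."""
    label = BUTTON_LABELS.get(button)
    if label is None:
        return None          # unmapped button: discard the window
    return {"label": label, "data": window}

example = annotate([(0.1, 0.0, 0.9)] * 50, "cross")
print(example["label"])  # sit
```

Discarding windows from unmapped buttons keeps accidental presses from polluting the training set.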
Check out our updated PoochPak data collection prototype:
With enough training examples, we train the model and shrink it down to fit on the Arduino.
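A typical size reduction for microcontroller deployment is post-training quantization: mapping 32-bit float weights to 8-bit integers with a per-tensor scale. In practice TensorFlow's TFLite converter handles this; the plain-Python sketch below just illustrates why the model shrinks roughly 4x.

```python
# Minimal illustration of the idea behind int8 post-training
# quantization (the real work is done by the TFLite converter).
def quantize(weights):
    """Map float weights onto signed 8-bit integers plus a scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.02]
q, scale = quantize(weights)
print(q)  # [50, -127, 2]
```

Each weight drops from 4 bytes to 1, at the cost of a small rounding error that is usually tolerable for a classifier this size.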
With just a few sessions' worth of data, we reached ~82% accuracy classifying three movements: jump, sit, and roll over. This should improve as we collect more samples.
Designing the Mutt Mentor
After reaching satisfactory model performance and deploying the model to the device, we perform some qualitative evaluation before finalizing the design and testing the Arduino-based prototype.
In our application, moves like sit, jump, and rollover are indicated with different lights and buzzer tones for feedback.
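The feedback logic amounts to a lookup from predicted class to LED color and buzzer tone, gated on prediction confidence. The tone frequencies, colors, and threshold below are placeholder assumptions, not the project's actual values.

```python
# Illustrative mapping from a predicted trick to feedback outputs.
# Class names match the post; colors, frequencies, and the
# confidence threshold are assumptions.
FEEDBACK = {
    "sit":      {"led": "red",   "tone_hz": 440},
    "jump":     {"led": "green", "tone_hz": 660},
    "rollover": {"led": "blue",  "tone_hz": 880},
}

def feedback_for(prediction, confidence=1.0, threshold=0.8):
    """Return the marker tone/LED for a confident prediction, else None."""
    if confidence < threshold:
        return None          # too unsure: stay silent rather than misfire
    return FEEDBACK.get(prediction)

print(feedback_for("sit")["tone_hz"])  # 440
```

Staying silent on low-confidence predictions matters for training: an incorrectly timed marker tone can reinforce the wrong behavior.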
By consistently pairing each activity we want to reinforce with its own marker tone (a short piezo buzzer tune), we can automatically assist dog training using the context of our dog Sweetpea's physical response.
Check out our video:
Next, we plan to add speech recognition for additional context, conditioning the reward marker tone on whether a command keyword like "sit, " "jump, " or "rollover" was spoken.