Project in progress
Where's my stuff?? - Find your misplaced things with Alexa!

© Apache-2.0

Find all your misplaced stuff with Amazon Alexa. Next time, you won't need to search the entire house to locate your phone, remote, and other items!


Components and supplies

Apps and online services

About this project


The idea of this project is to help people find belongings they left somewhere in the house but can't seem to locate when needed.

To save you time searching for these items, we've developed a system in which you just ask Alexa for the location of an object and get its precise location, be it under a pile of clothes on your bed or inside a cupboard. Alexa to the rescue! It is especially useful when you're running late and can't find your mobile phone, wallet, or some other item you need immediately.

The Raspberry Pi camera keeps track of these small objects (specified by the user) in the room with respect to bigger things such as the table, chair, and bed. It can also help you make sure your possessions are where they should be!

Setting up the system

The Raspberry Pi (RPi) is connected to the RPi camera, an Arduino Uno, and a server machine via the Internet. The RPi continuously transmits the video recorded by the camera to the server, which tracks the positions of the smaller objects already marked as important and saves their last known locations in an online database. The Arduino Uno is connected to the RPi to control a servo motor that manages the orientation of the camera, ensuring that the maximum area is covered and all objects are recorded in the best possible conditions. The camera is mounted on the motor.
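The "with respect to bigger things" idea can be sketched in a few lines of Python: given the bounding box of a detected small object and the boxes of large landmark objects, report the landmark whose centre is closest. This is a minimal illustrative sketch, not the project's exact server code; the box format and names are assumptions.

```python
# Minimal sketch: describe a small object's location relative to large
# "landmark" objects (table, chair, bed) using bounding-box centres.
# The (left, top, right, bottom) box format is assumed for illustration.

def centre(box):
    left, top, right, bottom = box
    return ((left + right) / 2.0, (top + bottom) / 2.0)

def nearest_landmark(item_box, landmarks):
    """Return the name of the landmark whose centre is closest to the item."""
    ix, iy = centre(item_box)
    def dist(entry):
        lx, ly = centre(entry[1])
        return (ix - lx) ** 2 + (iy - ly) ** 2
    return min(landmarks.items(), key=dist)[0]

landmarks = {"table": (0, 200, 300, 400), "bed": (400, 150, 640, 480)}
print(nearest_landmark((50, 250, 90, 280), landmarks))   # -> table
```

The server can store this landmark name alongside a timestamp as the item's "last known location" record.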

The Darknet implementation of YOLO, trained on the COCO dataset, is used for object recognition and can be set up on any x86/x64 machine by following the excellent tutorial by Joseph Redmon.

If you need to train the model on your own object classes, keep following the code; we will try to add a well-documented version of that process very soon!

Launch the Object Finder skill on your Alexa device with the following command:

"Alexa, open Object Finder."

You can then simply ask it about the whereabouts of your stuff. Alexa will connect to our database and answer with the item's last known location.
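The spoken answer can be sketched as a simple formatting step over the last-known-location record fetched from the database. The record fields ("item", "near", "time") below are hypothetical; the real database schema may differ.

```python
# Sketch of how the skill might phrase Alexa's answer from a
# last-known-location record (field names are illustrative).

def build_answer(record):
    if record is None:
        return "Sorry, I haven't seen that item yet."
    return "Your {item} was last seen near the {near} at {time}.".format(**record)

record = {"item": "wallet", "near": "table", "time": "6 PM"}
print(build_answer(record))
# -> Your wallet was last seen near the table at 6 PM.
```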

Altering the YOLO code

You need to alter some of the files in the darknet repository when setting up YOLO for this project (in particular, darknet/src/image.c).

The altered program can be found in the code section of this project. The altered lines simply save the coordinates of the bounding boxes of the recognized objects to a text file. The modified function from image.c is also included separately in the code section, with useful comments wherever necessary!
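On the server side, the text file written by the modified image.c can then be parsed into labels and boxes. A minimal sketch, assuming each line holds a label followed by four bounding-box coordinates; the exact format depends on how you write the records in image.c.

```python
# Parse detections written by the modified draw_detections(), assuming
# one "label left top right bottom" record per line (format assumed).

def parse_detections(text):
    detections = []
    for line in text.strip().splitlines():
        parts = line.split()
        label = " ".join(parts[:-4])          # labels may contain spaces
        box = tuple(int(v) for v in parts[-4:])
        detections.append((label, box))
    return detections

sample = "cell phone 120 340 180 400\nremote 20 60 70 95\n"
print(parse_detections(sample))
```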

Creating an Amazon Alexa Skill for this Project

Internal Implementation

  • Create an Alexa Skill in the Alexa Skill Builder portal and define the interaction models as well as the intent schema. This sets up the framework for the behavior of the skill.
  • Create an AWS Lambda function that interfaces with Amazon Alexa and translates the various intents into requests that can be passed on. The Lambda function is written in Node.js using the Alexa Skills Kit. The Axios module is used in the Lambda back end to connect to our smart home device, and the Dashbot module is used for analytics.
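The handler essentially maps an intent plus an item slot to a database lookup and a spoken reply. The actual function is written in Node.js; the flow is sketched below in Python for brevity, with an in-memory dict standing in for the online database (all names here are illustrative).

```python
# Illustrative flow of the Lambda back end: resolve the "item" slot,
# look up its last known location, and return the speech text.
# FAKE_DB stands in for the project's online database.

FAKE_DB = {"wallet": "near the table", "phone": "on the bed"}

def handle_find_intent(slots):
    item = slots.get("item", "").lower()
    location = FAKE_DB.get(item)
    if location is None:
        return "Sorry, I haven't seen your {} yet.".format(item or "item")
    return "Your {} is {}.".format(item, location)

print(handle_find_intent({"item": "wallet"}))
# -> Your wallet is near the table.
```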

How to get it up and running?

Setting up Your Alexa Skill in the Developer Portal

To link it with your Amazon Echo Device, go to your Amazon developer console.

  • Create a new skill. Call it Where's my stuff??. Give the invocation name as where's my stuff. Click next.
  • Click on the Launch Skill Builder (Beta) button. This launches the new Skill Builder dashboard.
  • Click on the "Code Editor" item under Dashboard on the top left side of the skill builder.
  • In the text field provided, replace any existing code with the code from speechAssets/intentSchema.json and click "Apply Changes" or "Save Model".
  • Click on the Save Model button, and then click on the Build Model button.
  • If your interaction model builds successfully, click the Configuration button. We will now create our Lambda function in the AWS developer console, but keep this browser tab open, because we will return to it later.

Setting Up A Lambda Function Using Amazon Web Services

  • Sign in to the AWS console and choose the Lambda service from the search box. AWS Lambda only works with the Alexa Skills Kit in two regions: US East (N. Virginia) and EU (Ireland). Choose one of them.

  • Click the "Create a Lambda function" button. Choose "Blueprints", then choose the blueprint named "alexa-skill-kit-sdk-factskill". And give your function a name.

  • Set your Lambda function's role to "lambda_basic_execution" and click Create Function.
  • Configure your trigger. Look at the column on the left called "Add triggers", and select Alexa Skills Kit from the list.
  • After you create the function, the ARN value appears in the top right corner. Copy this value for now.
  • Scroll down to the field called "Function code", and replace any existing code with the code provided in the lambda/index.js. You can also copy the code to your local machine, run npm install and upload the zip file containing your index.js, package.json and node_modules using Upload a .zip file in the "Function code" section.
  • Make sure you've copied the ARN value from the top right corner; you will need it in the next section.

Connecting Your Voice User Interface To Your Lambda Function

  • Open the "Configuration" tab on the left side, if you didn't keep it open as mentioned earlier, and select the "AWS Lambda ARN" option for your endpoint.

  • Select "North America" or "Europe" as your geographical region and paste your Lambda's ARN (Amazon Resource Name) into the text box provided.
  • Click Save and Next.

Your skill is up and running now and ready to test!


Object Tracking

Once an object has been recognized in an image, our system saves its whereabouts in real time; from then on, it is only a matter of keeping track of its movements!

Simply put, locating an object in successive frames of a video is called tracking.

For object tracking, there are many different approaches that can be used. These include:

  • Dense Optical flow: These algorithms help estimate the motion vector of every pixel in a video frame.
  • Sparse optical flow: These algorithms, like the Kanade-Lucas-Tomasi (KLT) feature tracker, track the location of a few feature points in an image.
  • Kalman Filtering: A very popular signal processing algorithm used to predict the location of a moving object based on prior motion information. One of the early applications of this algorithm was missile guidance! Also as mentioned here, “the on-board computer that guided the descent of the Apollo 11 lunar module to the moon had a Kalman filter”.
  • Meanshift and Camshift: These are algorithms for locating the maxima of a density function. They are also used for tracking.
  • Single object trackers: In this class of trackers, the first frame is marked using a rectangle to indicate the location of the object we want to track. The object is then tracked in subsequent frames using the tracking algorithm. In most real life applications, these trackers are used in conjunction with an object detector.
  • Multiple object track finding algorithms: In cases when we have a fast object detector, it makes sense to detect multiple objects in each frame and then run a track finding algorithm that identifies which rectangle in one frame corresponds to a rectangle in the next frame.

For the purposes of this tutorial, we will stick with the Kalman filter. Other algorithms can easily be switched to via these lines of code:

tracker_types = ['BOOSTING', 'MIL','KCF', 'TLD', 'MEDIANFLOW', 'GOTURN']
tracker_type = tracker_types[2]

We then use the RPi camera's real-time feed along with YOLO's predicted bounding boxes to track the objects with ease! The rest of the well-commented code is included in the GitHub repository!

Demo Video

The demo videos of this skill in action can be found here:


Modifications made in the image.c file of darknet (C/C++)
void draw_detections(image im, int num, float thresh, box *boxes, float **probs, float **masks, char **names, image **alphabet, int classes)
{
    // Pointer for file handling
    FILE *fptr;
    int i, j;
    // Open program.txt in write access mode
    fptr = fopen("program.txt", "w+");
    for(i = 0; i < num; ++i){
        // labelstr will eventually contain the label of the recognized object
        char labelstr[4096] = {0};
        int class = -1;
        for(j = 0; j < classes; ++j){
            if (probs[i][j] > thresh){
                if (class < 0) {
                    strcat(labelstr, names[j]);
                    class = j;
                } else {
                    strcat(labelstr, ", ");
                    strcat(labelstr, names[j]);
                }
                printf("%s: %.0f%%\n", names[j], probs[i][j]*100);
            }
        }
        if(class >= 0){
            int width = im.h * .006;
            int offset = class*123457 % classes;
            float red = get_color(2,offset,classes);
            float green = get_color(1,offset,classes);
            float blue = get_color(0,offset,classes);
            float rgb[3];
            //width = prob*20+2;
            rgb[0] = red;
            rgb[1] = green;
            rgb[2] = blue;
            box b = boxes[i];
            int left  = (b.x-b.w/2.)*im.w;
            int right = (b.x+b.w/2.)*im.w;
            int top   = (b.y-b.h/2.)*im.h;
            int bot   = (b.y+b.h/2.)*im.h;
            if(left < 0) left = 0;
            if(right > im.w-1) right = im.w-1;
            if(top < 0) top = 0;
            if(bot > im.h-1) bot = im.h-1;
            if(fptr == NULL)
                goto X;
            // Add object details in program.txt for further processing
            // (record format assumed for illustration: label, then box coordinates)
            char buf[100];
            sprintf(buf, "%s %d %d %d %d\n", labelstr, left, top, right, bot);
            fprintf(fptr, "%s", buf);
            // printf("Bounding Box: Left=%d, Top=%d, Right=%d, Bottom=%d\n", left, top, right, bot);
X:          draw_box_width(im, left, top, right, bot, width, red, green, blue);
            if (alphabet) {
                image label = get_label(alphabet, labelstr, (im.h*.03)/10);
                draw_label(im, top + width, left, label, rgb);
            }
            if (masks){
                image mask = float_to_image(14, 14, 1, masks[i]);
                image resized_mask = resize_image(mask, b.w*im.w, b.h*im.h);
                image tmask = threshold_image(resized_mask, .5);
                embed_image(tmask, im, left, top);
            }
        }
    }
    // Finally, close the file
    if (fptr) fclose(fptr);
}
Where's my stuff?? Alexa Skill
Please refer to the steps above to learn how to implement this code.
RPi & server communication for object recognition and tracking


Servo motor connections with Arduino Uno
Analog input to servo


Similar projects you might like

Alexa Doorman: Who Is at My Door?

Project tutorial by MD R. Islam

  • 28 respects

Intelligent Door Lock

Project in progress by Md. Khairul Alam

  • 95 respects

Automated Chess Play Using Alexa

Project in progress by Team Automaters

  • 53 respects

Android Things Andy Robot Raspberry Pi 3 and Arduino

Project showcase by Dwayne Hoang

  • 7 respects

Animated Smart Light with Alexa and Arduino

Project tutorial by Bruno Portaluri

  • 23 respects

Enable Alexa Control to your Ceiling Fan

Project tutorial by Jithin Thulase

  • 9 respects