Project in progress

Intelligent Door Lock © GPL3+

An Alexa enabled door lock with face recognition and remote control.


Components and supplies

Raspberry Pi with official camera module
Arduino
Servo motor
Push button switch
Speaker with audio amplifier

Necessary tools and machines

3drag 3D Printer (generic)

Apps and online services

Amazon Alexa
Amazon Web Services (IoT, Lambda, S3, Rekognition, DynamoDB, SNS, Polly)

About this project

Overview

Security and accessibility are major concerns in today's world. We always try to keep our house secure, and at the same time we want our home devices to be easily accessible, even from a remote location. Imagine a guest is waiting at your front door while you are away from home, but you want to let him into your house. Or you are doing important work at your desk and want to know who is at the front door without leaving your seat. Alexa can do all of this for you!

Yes, I made an intelligent door lock that can recognize a guest, greet the guest by name, notify the owner about the guest, and remember an unknown guest. The house owner can learn the name of the guest by asking Alexa, "Alexa, who is at the front door?" You can also ask Alexa to open or close the door. I made a custom Alexa skill for this; using the skill, you can identify your guest and welcome him into your house without leaving your seat.

My skill is live at Amazon Store (Skill ID: amzn1.ask.skill.4ba64998-cb8f-461d-8712-16c5dfcfc9d3)

Demo

Before going into the details, please watch the demo videos:

Step by step instructions

In this tutorial I will show you how you can make such an intelligent device yourself. I assume you have some previous experience with Arduino and Raspberry Pi and some basic knowledge of Python programming.

In this project I used several AWS services (e.g. IoT, Lambda, S3, Polly, SNS), so you will need an Amazon AWS account.

Before going into the detailed instructions, let me first explain how it works. I call this device the Intelligent Door Lock. To build it I used a Raspberry Pi with the official camera module, plus an Arduino with a servo motor for controlling the lock.

When a guest comes to your door and presses the calling button, the Raspberry Pi performs three tasks:

  • It takes a picture of the guest and uploads it to an AWS S3 bucket; the S3 bucket triggers an SNS notification to a specific topic.
  • It sends an email with the photo to the house owner.
  • It sends a greeting text to AWS Polly and then plays the audio greeting returned by Polly for the guest.

After getting the notification from AWS SNS or the email, the house owner can ask Alexa to introduce the guest by invoking the custom skill "Door Guard" and saying:

Alexa, ask door guard who is at the front door? or

Alexa, ask door guard who came?

Alexa triggers a Lambda function, and the Lambda function does the following jobs:

  • Reads the image uploaded to the S3 bucket.
  • Sends a face search request for the image to AWS Rekognition.
  • After getting the face match results returned by Rekognition, searches AWS DynamoDB for the name and returns it to Alexa if found.

Alexa provides the name to the house owner, and the owner can then ask Alexa to open the door for the guest. In that case, Lambda sends an open-door command to a specific AWS IoT topic. The Raspberry Pi receives the command and forwards it to the Arduino over the serial port, and the Arduino controls the lock accordingly. The following block diagram can help with understanding.

Work Flow

  • Preparing the Raspberry Pi (installing required libraries)
  • Writing the program for the Raspberry Pi (capturing an image on button press, uploading the image to S3, sending an email to the owner, receiving messages from the MQTT broker, greeting the guest, sending control signals to the Arduino)
  • Setting up AWS services (AWS S3 Bucket, AWS DynamoDB, AWS Lambda, AWS SNS, AWS Rekognition)
  • Writing a program for uploading images of known persons and storing the face index in the DynamoDB table
  • Making the custom Alexa skill and writing the code for the Lambda function
  • Writing the code for the Arduino
  • Connecting all the hardware
  • Testing & debugging

Setting up the Raspberry Pi

Prepare your Raspberry Pi with the latest Raspbian operating system and get ready to do some programming. If you are new to Raspberry Pi, read this how to get started with Raspberry Pi guide. You can plug a mouse, keyboard, and monitor into your Pi, or access it using an SSH client like PuTTY. To learn how to connect with PuTTY, you may read this tutorial.

Install the Python serial module using the command:

sudo apt-get install python-serial

Install the AWS IoT SDK using the following command:

sudo pip install AWSIoTPythonSDK

Details of the AWSIoTPythonSDK are here.

Installing & Configuring AWS CLI

The AWS Command Line Interface (CLI) is a unified tool that allows you to control AWS services from the command line. The AWS CLI helps you create any AWS object from the command line without using the GUI. If you already have pip and a supported version of Python (included with the latest Raspbian OS), you can install the AWS CLI with the following command:

pip install awscli --upgrade --user

You need to configure the AWS CLI with your Access Key ID, Secret Access Key, AWS region name, and command output format before getting started with it.

Follow this tutorial for completing the whole process.
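
If you have not configured the CLI yet, run aws configure and enter your credentials when prompted. The values below are placeholders; I used eu-west-1 as the region throughout this project:

aws configure
AWS Access Key ID [None]: <your access key id>
AWS Secret Access Key [None]: <your secret access key>
Default region name [None]: eu-west-1
Default output format [None]: json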

Setting up Amazon S3 Bucket, Amazon Rekognition and Amazon DynamoDB

Amazon Rekognition is a sophisticated deep learning based service from Amazon Web Services (AWS) that makes it easy to add powerful visual search and discovery to your own applications. With Rekognition using simple APIs, you can quickly detect objects, scenes, faces, celebrities and inappropriate content within images. Amazon Rekognition also provides highly accurate facial analysis and facial recognition. You can detect, analyze, and compare faces for a wide variety of user verification, cataloging, people counting, and public safety use cases.

Amazon Rekognition is based on the same proven, highly scalable, deep learning technology developed by Amazon’s computer vision scientists to analyze billions of images and videos daily, and requires no machine learning expertise to use. Amazon Rekognition is a simple and easy to use API that can quickly analyze any image or video file stored in Amazon S3.

Amazon Rekognition can store information about detected faces in server-side containers known as collections. You can use the facial information stored in a collection to search for known faces in images, stored videos and streaming videos. Amazon Rekognition supports the IndexFaces operation, which you can use to detect faces in an image and persist information about facial features detected into a collection.

The face collection is the primary Amazon Rekognition resource; each face collection you create has a unique Amazon Resource Name (ARN). You create each face collection in a specific AWS Region in your account.

We start by creating a collection within Amazon Rekognition. A collection is a container for persisting faces detected by the IndexFaces API. You might choose to create one container to store all faces or create multiple containers to store faces in groups. You can use AWS CLI to create a collection or use the console. For AWS CLI, you can use the following command:

aws rekognition create-collection --collection-id guest_collection --region eu-west-1

The above command creates a collection named guest_collection. Note that the example CLI commands in this section use generic names (guest_collection, guest-images), while the code later in the article uses the names from my own environment (family_collection, taifur12345bucket, taifur12345table); whichever names you pick, use them consistently.
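
You can confirm that the collection exists with the list-collections command:

aws rekognition list-collections --region eu-west-1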

The user or role that executes the commands must have permissions in AWS Identity and Access Management (IAM) to perform those actions. AWS provides a set of managed policies that help you get started quickly. For our example, you need to apply the following minimum managed policies to your user or role:

  • AmazonRekognitionFullAccess
  • AmazonDynamoDBFullAccess
  • AmazonS3FullAccess
  • IAMFullAccess

Next, we create an Amazon DynamoDB table. DynamoDB is a fully managed cloud database that supports both document and key-value store models. In our example, we’ll create a DynamoDB table and use it as a simple key-value store to maintain a reference of the FaceId returned from Amazon Rekognition and the full name of the person.

You can use either the AWS Management Console, the API, or the AWS CLI to create the table. For the AWS CLI, use the following command:

aws dynamodb create-table --table-name guest_collection \
--attribute-definitions AttributeName=RekognitionId,AttributeType=S \
--key-schema AttributeName=RekognitionId,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1 \
--region eu-west-1
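
The table takes a few seconds to become active. You can check its status with the describe-table command:

aws dynamodb describe-table --table-name guest_collection --region eu-west-1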

For the IndexFaces operation, you can provide the images as bytes or make them available to Amazon Rekognition inside an Amazon S3 bucket. In our example, we upload the images (images of the known guests) to an Amazon S3 bucket.

Again, you can create a bucket either from the AWS Management Console or from the AWS CLI. Use the following command:

aws s3 mb s3://guest-images --region eu-west-1
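
You can list your buckets to confirm that it was created:

aws s3 ls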

Although all the preparation steps so far were performed from the AWS CLI, we also need to create an IAM role that grants our Lambda function the rights to read objects from Amazon S3, invoke the IndexFaces operation of Amazon Rekognition, and create entries in our Amazon DynamoDB key-value store mapping each FaceId to the person's full name.

To grant this access, create a file named access-policy.json with the following content:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-name/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem"
            ],
            "Resource": [
                "arn:aws:dynamodb:aws-region:account-id:table/family_collection"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "rekognition:IndexFaces"
            ],
            "Resource": "*"
        }
    ]
}

For the access policy, ensure you replace aws-region, account-id, and the actual name of the resources (e.g., bucket-name and family_collection) with the name of the resources in your environment.

Now, attach the access policy to the role using the following command:

aws iam put-role-policy --role-name LambdaRekognitionRole \
--policy-name LambdaPermissions --policy-document file://access-policy.json
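
Note that the put-role-policy command above assumes the role LambdaRekognitionRole already exists. If it does not, you can create it first with a trust policy that allows Lambda to assume the role. A minimal sketch (the file name trust-policy.json is my own choice):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "lambda.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}

aws iam create-role --role-name LambdaRekognitionRole \
--assume-role-policy-document file://trust-policy.json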

Our AWS environment is almost fully configured. We can now upload our images to Amazon S3 to seed the face collection. For this example, we again use a small piece of Python code that iterates through a list of items containing the file location and the name of the person in each image.

Before running the code you need to install Boto3. Boto3 is the Amazon Web Services (AWS) SDK for Python; it allows Python developers to write software that makes use of services like Amazon S3 and Amazon EC2. You can find the latest documentation at Read the Docs, including a list of supported services.

Install the Boto3 library using the following command:

sudo pip install boto3

Now, run the following Python code to upload the images to the S3 bucket. Before running the code, make sure all the images and the Python file are in the same directory. (Boto3 automatically uses the credentials you configured earlier for the AWS CLI.)

import boto3

s3 = boto3.resource('s3')

# List of (file name, full name) pairs for indexing
images = [('afridi.jpg', 'Shahid Afridi'),
          ('sakib.jpg', 'Sakib Al Hasan'),
          ('kohli.jpg', 'Birat Kohli'),
          ('masrafi.jpg', 'Mashrafe Bin Mortaza'),
          ('ganguli.jpg', 'Sourav Ganguly')
          ]

# Iterate through the list to upload the objects to S3,
# storing each person's full name as object metadata
for image in images:
    file = open(image[0], 'rb')
    object = s3.Object('taifur12345bucket', image[0])
    ret = object.put(Body=file,
                     Metadata={'FullName': image[1]})
    #print(image[0])
    #print(image[1])

Now, add the face index for every image to AWS DynamoDB along with the person's full name using the following Python code.

import boto3

BUCKET = "taifur12345bucket"
KEY = "sample.jpg"
IMAGE_ID = KEY  # S3 key as ImageId
COLLECTION = "family_collection"

dynamodb = boto3.client('dynamodb', "eu-west-1")
s3 = boto3.client('s3')

# Note: you have to create the collection first!
# rekognition.create_collection(CollectionId=COLLECTION)

def update_index(tableName, faceId, fullName):
    # Store the FaceId returned by Rekognition together with the
    # person's full name in the DynamoDB key-value table
    response = dynamodb.put_item(
        TableName=tableName,
        Item={
            'RekognitionId': {'S': faceId},
            'FullName': {'S': fullName}
        }
    )
    #print(response)

def index_faces(bucket, key, collection_id, image_id=None, attributes=(), region="eu-west-1"):
    rekognition = boto3.client("rekognition", region)
    response = rekognition.index_faces(
        Image={
            "S3Object": {
                "Bucket": bucket,
                "Name": key,
            }
        },
        CollectionId=collection_id,
        ExternalImageId="taifur",
        DetectionAttributes=attributes,
    )
    if response['ResponseMetadata']['HTTPStatusCode'] == 200:
        faceId = response['FaceRecords'][0]['Face']['FaceId']
        print(faceId)
        # Read back the FullName metadata stored with the S3 object
        ret = s3.head_object(Bucket=bucket, Key=key)
        personFullName = ret['Metadata']['fullname']
        print(personFullName)
        update_index('taifur12345table', faceId, personFullName)
    return response['FaceRecords']

for record in index_faces(BUCKET, KEY, COLLECTION, IMAGE_ID):
    face = record['Face']
    # details = record['FaceDetail']
    print "Face ({}%)".format(face['Confidence'])
    print "  FaceId: {}".format(face['FaceId'])
    print "  ImageId: {}".format(face['ImageId'])

Once the collection is populated, we can query it by passing in other images that contain faces. Using the SearchFacesByImage API, you need to provide at least two parameters: the name of the collection to query, and the reference to the image to analyze. You can provide a reference to the Amazon S3 bucket name and object key of the image, or provide the image itself as a bytestream.

In the following example, I used the code below in the Lambda function to search for a face, taking the image from the S3 bucket. In response, Amazon Rekognition returns a JSON object containing the FaceIds of the matches; using the FaceId, the function retrieves the full name from DynamoDB.
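
The relevant part of that Lambda code looks like this (a condensed sketch of the guest_search function from the full listing in the code section; the bucket, key, collection, and table names are the ones from my environment):

import boto3

rekognition = boto3.client('rekognition', region_name='eu-west-1')
dynamodb = boto3.client('dynamodb', region_name='eu-west-1')

# Search the collection for faces matching the captured image in S3
response = rekognition.search_faces_by_image(
    CollectionId='family_collection',
    Image={"S3Object": {"Bucket": "taifur12345bucket", "Name": "sample.jpg"}}
)

# Look up the full name stored in DynamoDB for each matched FaceId
for match in response['FaceMatches']:
    face = dynamodb.get_item(
        TableName='taifur12345table',
        Key={'RekognitionId': {'S': match['Face']['FaceId']}}
    )
    if 'Item' in face:
        print(face['Item']['FullName']['S'] + " is waiting at the door.")
        break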

Creating Custom Alexa Skill

1. Sign in to https://developer.amazon.com and click on Create Skill.

2. Give the skill a name and click on Next.

3. Select Custom and then click on Create skill

4. Select JSON Editor

5. Drag and drop the JSON file attached in the code section, or paste the code into the editor window.

6. Save and Build the model.

Your custom skill is almost ready. We will come back here after creating a Lambda function.

Creating Lambda Function

1. Go to the AWS Management Console, select Lambda from the Services tab, and click on Create function.

2. Give the function a name, select Python 2.7 as the runtime, and from Role select Choose an existing role. (We will use the role we created earlier from the AWS CLI.)

3. Select the LambdaRekognitionRole we created using the AWS CLI and click on Create function at the bottom right corner.

4. From the Add triggers tab select Alexa Skills Kit.

Alexa Skills Kit will be added to your Lambda function.

5. Go to the top right corner and copy the ARN to your clipboard.

6. Go back to the Alexa developer console and click on the Endpoint tab. Paste the ARN into the Default Region text box (or into a specific region box if you want to target a specific location).

7. Copy the Application ID (Skill ID) to your clipboard and go back to the Lambda console.

8. Paste the Skill ID into the text box and click Add.

9. Click on Save.

10. The configuration for the Lambda function is almost complete.

11. Create a Thing in AWS IoT and download the certificate, private key, and root CA files. (Follow the link to create an AWS IoT thing.)

12. Download the code file for the Lambda function from the code section and replace the skill ID with your own. Download the AWSIoTPythonSDK from the GitHub link and make a .zip file including everything (Lambda code, certificate file, private key file, root CA file, and the SDK module directory).

13. Go to the Lambda function again from the AWS console, choose Upload a ZIP file in the code section, browse to the zip file you created, and then click on Save.

Your Custom Skill with a Lambda function is now ready to test.
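
You can test the function directly from the Lambda console by configuring a test event that mimics an Alexa request. A minimal example for the CheckDoorIntent (the session and request IDs are placeholders):

{
    "session": {
        "new": false,
        "sessionId": "SessionId.example-session-id",
        "application": {
            "applicationId": "amzn1.ask.skill.your-skill-id"
        }
    },
    "request": {
        "type": "IntentRequest",
        "requestId": "EdwRequestId.example-request-id",
        "intent": {
            "name": "CheckDoorIntent",
            "slots": {}
        }
    }
}

If everything is wired correctly, the function returns a speech response such as "The door is closed."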

Making the Hardware

The Raspberry Pi is connected to the camera module and sends data to the Arduino over a serial cable. A short USB cable was used to connect the Arduino to the Raspberry Pi.
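
Before mounting everything, you can sanity-check the Pi-to-Arduino serial link with a few lines of Python. This assumes the Arduino enumerates as /dev/ttyACM0, the same port used in aws-iot-receive.py:

import time
import serial

# Same port and baud rate as in aws-iot-receive.py
ser = serial.Serial('/dev/ttyACM0', 9600)
time.sleep(2)         # give the Arduino time to reset after the port opens
ser.write('open\n')   # the sketch moves the servo to the unlocked position
time.sleep(3)
ser.write('close\n')  # and back to the locked position
ser.close()

If the servo moves on each command, the serial link and the Arduino sketch are working.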

A test setup was made for initial testing, to check whether everything works correctly.

After the initial testing I set up all the devices on a door using some hot glue. This setup is for demonstration purposes only. To make the demonstration easy I placed all the components on the same side of the door; in practice, the camera and the button switch would be on the outer side of the door. I did not attach a speaker here, but one is required to play the greetings for the guest. The demo lock was printed using a 3D printer. For the full design of the lock, see my previous tutorial.

Acknowledgement

Special thanks to Mr. Christian Petters for his nice tutorial, Build Your Own Face Recognition Service Using Amazon Rekognition. It was really helpful, and I copied some instructions and commands directly from his writing.

This GitHub link also helped me develop the program.

Code

capture-button-upload-email.py (Python)
This Python code snippet captures a photo, uploads it to the S3 bucket, and sends the photo to your email address with a notification.
import time
import smtplib

import picamera
import boto3
import RPi.GPIO as GPIO

from email.MIMEMultipart import MIMEMultipart
from email.MIMEText import MIMEText
from email.MIMEBase import MIMEBase
from email import encoders

s3 = boto3.resource('s3')

GPIO.setmode(GPIO.BCM)
GPIO.setup(4, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)


def gpio_callback(channel):  # RPi.GPIO passes the pin number that triggered the event
	capture_image()
	time.sleep(0.3)
	print('Captured')
	upload_image()
	time.sleep(2)
	send_email()
	

GPIO.add_event_detect(4, GPIO.FALLING, callback=gpio_callback, bouncetime=3000)


def capture_image():
	with picamera.PiCamera() as camera:
		camera.resolution = (640, 480)
		camera.start_preview()
		camera.capture('sample.jpg')
		camera.stop_preview()
		camera.close()
		return
		
				
def upload_image():
	file = open('sample.jpg','rb')
	object = s3.Object('taifur12345bucket','sample.jpg')
	ret = object.put(Body=file,
			Metadata={'FullName':'Guest'}
			)
	print(ret)
	return


def send_email(): 
    fromaddr = "Put From Which Email"
    toaddr = "put to which email"
     
    msg = MIMEMultipart()
     
    msg['From'] = fromaddr
    msg['To'] = toaddr
    msg['Subject'] = "New Guest"
     
    body = "A new guest is waiting at your front door. Photo of the guest is attached."
     
    msg.attach(MIMEText(body, 'plain'))
     
    filename = "sample.jpg"
    attachment = open("/home/pi/sample.jpg", "rb")
     
    part = MIMEBase('application', 'octet-stream')
    part.set_payload((attachment).read())
    encoders.encode_base64(part)
    part.add_header('Content-Disposition', "attachment; filename= %s" % filename)
     
    msg.attach(part)
     
    server = smtplib.SMTP('smtp.gmail.com', 587)
    server.starttls()
    server.login(fromaddr, "password of your email")
    text = msg.as_string()
    server.sendmail(fromaddr, toaddr, text)
    server.quit()

# Keep the script running so the GPIO callback can fire on button presses
while True:
    time.sleep(0.1)
aws-iot-receive.py (Python)
This Python code snippet receives messages from AWS IoT, sends commands to the Arduino, and plays the greeting message for the guest.
import os
import sys
import time

import boto3
import serial
from contextlib import closing
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

myMQTTClient = AWSIoTMQTTClient("doorLock")
myMQTTClient.configureEndpoint("a3jra11pv5kiyg.iot.eu-west-1.amazonaws.com", 8883)
myMQTTClient.configureCredentials("./rootCA.pem", "./privateKEY.key", "./certificate.crt")

# Serial connection to the Arduino that drives the lock servo
ser = serial.Serial('/dev/ttyACM0', 9600)
guest_name = None

# Polly client used to synthesize the spoken greeting
client = boto3.client('polly', 'eu-west-1')
	
# Triggered when an MQTT message arrives; forwards the command to the Arduino
def customOnMessage(message):
    global guest_name
    print("Received a new message: ")
    msg = message.payload
    ser.write(msg)
    ser.write('\n')
    if msg == 'open' and guest_name != None:
        play_greeting()
        guest_name = None 
    if msg.find('#') + 1:
        msg = msg.translate(None, '#')
        guest_name = msg
        print('removed #')		
    print(msg)
    print(guest_name)
	
# Suback callback
def customSubackCallback(mid, data):
    print("Received SUBACK packet id: ")
    print(mid)
    print("Granted QoS: ")
    print(data)
    print("++++++++++++++\n\n")

# AWSIoTMQTTClient connection configuration
myMQTTClient.configureAutoReconnectBackoffTime(1, 32, 20)
myMQTTClient.configureDrainingFrequency(2)  # Draining: 2 Hz
myMQTTClient.configureConnectDisconnectTimeout(10)  # 10 sec
myMQTTClient.configureMQTTOperationTimeout(5)  # 5 sec
myMQTTClient.onMessage = customOnMessage

myMQTTClient.connect()
myMQTTClient.subscribeAsync("test/door", 1, ackCallback=customSubackCallback)

def play_greeting():
    global guest_name
    # Ask Polly to synthesize the greeting as an MP3 stream
    response = client.synthesize_speech(
        OutputFormat='mp3',
        Text='Welcome ' + guest_name + '. The door is open for you.',
        #Text='Welcome. The door is open for you.',
        TextType='text',
        VoiceId='Emma'
    )
    guest_name = None
    #print response
    if "AudioStream" in response:
        with closing(response["AudioStream"]) as stream:
            output = "welcome.mp3"
            try:
                # Open a file for writing the output as a binary stream
                with open(output, "wb") as file:
                    file.write(stream.read())
            except IOError as error:
                # Could not write to file, exit gracefully
                print(error)
                sys.exit(-1)
        # Play the synthesized greeting through the speaker
        os.system('omxplayer welcome.mp3')
        print('played')

# Keep the script running to receive MQTT messages
while True:
    time.sleep(0.1)
alexa-skill.txt (JSON)
Use this JSON code for the Alexa Skill.
{
    "languageModel": {
        "invocationName": "door guard",
        "intents": [
            {
                "name": "AddGuestIntent",
                "slots": [],
                "samples": [
                    "Add guest in your memory",
                    "Add person in your memory",
                    "Add the person in your memory",
                    "Add the guest to your memory",
                    "Remember the man",
                    "Remember the guy",
                    "Remember the person",
                    "Remember the guest",
                    "Remember him",
                    "Remember her",
                    "Save the guest",
                    "Save the person",
                    "Save the guy",
                    "Remember",
                    "Save"
                ]
            },
            {
                "name": "AMAZON.CancelIntent",
                "slots": [],
                "samples": []
            },
            {
                "name": "AMAZON.HelpIntent",
                "slots": [],
                "samples": []
            },
            {
                "name": "AMAZON.StopIntent",
                "slots": [],
                "samples": []
            },
            {
                "name": "CheckDoorIntent",
                "slots": [],
                "samples": [
                    "Is the door unlocked",
                    "Is the door locked",
                    "Check door lock",
                    "Check door ",
                    "Is the door closed",
                    "What is the door condition",
                    "Is my door open",
                    "Check my door",
                    "Is the door open",
                    "Check the door",
                    "Is the door open or closed"
                ]
            },
            {
                "name": "CloseDoorIntent",
                "slots": [],
                "samples": [
                    "Lock door",
                    "Close door ",
                    "Lock the door ",
                    "Close the door ",
                    "Close my door",
                    "Make the door lock",
                    "Make the door close",
                    "Lock"
                ]
            },
            {
                "name": "DescribeGuestIntent",
                "slots": [],
                "samples": [
                    "Tell some details about the man",
                    "Tell some details about the guy",
                    "Give some details about the man",
                    "Give some details about the guy",
                    "Give some details about the person",
                    "Tell some details about the person",
                    "Tell some details about the guest",
                    "Give some details about the guest",
                    "Introduce the person",
                    "Introduce the guest",
                    "Give some details",
                    "Give details",
                    "Explain him",
                    "Explain her",
                    "How is he",
                    "How is she",
                    "How he look",
                    "How she looks"
                ]
            },
            {
                "name": "GiveAccessIntent",
                "slots": [],
                "samples": [
                    "Let the guest come",
                    "Let her come",
                    "Let him come",
                    "Allow her",
                    "Allow him",
                    "Let the guy enter",
                    "Let the person enter",
                    "Let the guest enter",
                    "Allow the guy",
                    "Allow the person",
                    "Allow the guest",
                    "Open the door",
                    "Open",
                    "Allow",
                    "Open the lock",
                    "Open lock",
                    "Let him enter",
                    "Let her enter"
                ]
            },
            {
                "name": "IdentifyGuestIntent",
                "slots": [],
                "samples": [
                    "Who want to meet",
                    "Who came",
                    "Who is waiting ",
                    "Who is at the front door",
                    "Who",
                    "Who is he",
                    "Who is she",
                    "Who is outside"
                ]
            }
        ],
        "types": []
    }
}
lambda-uploaded.py (Python)
This is the code for the Lambda function. You need to include this Python file with the AWSIoTPythonSDK in a zip file and then upload it to Lambda. Link: https://github.com/aws/aws-iot-device-sdk-python
"""
This sample demonstrates a simple skill built with the Amazon Alexa Skills Kit.
The Intent Schema, and Sample Utterances for this skill, as well
as testing instructions are located at http://amzn.to/1LzFrj6
 
The code is developed by:
	Md. Khairul Alam
	February, 2018
For additional samples, visit the Alexa Skills Kit Getting Started guide at
http://amzn.to/1LGWsLG
"""
 
from __future__ import print_function
import urllib2
import xml.etree.ElementTree as etree
from datetime import datetime as dt

from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient
myMQTTClient = AWSIoTMQTTClient("doorLock")
myMQTTClient.configureEndpoint("a3jra11pv5kiyg.iot.eu-west-1.amazonaws.com", 8883)
myMQTTClient.configureCredentials("./rootCA.pem", "./privateKEY.key", "./certificate.crt")

# AWSIoTMQTTClient connection configuration
myMQTTClient.configureAutoReconnectBackoffTime(1, 32, 20)
#myMQTTClient.configureOfflinePublishQueueing(-1)  # Infinite offline Publish queueing
myMQTTClient.configureDrainingFrequency(2)  # Draining: 2 Hz
myMQTTClient.configureConnectDisconnectTimeout(10)  # 10 sec
myMQTTClient.configureMQTTOperationTimeout(5)  # 5 sec

myMQTTClient.connect()
#myMQTTClient.connectAsync()


import boto3
import io
import time

BUCKET = "taifur12345bucket"
KEY = "sample.jpg"
COLLECTION = "family_collection"
IMAGE_ID = KEY

door_state = False

rekognition = boto3.client('rekognition', region_name='eu-west-1')
dynamodb = boto3.client('dynamodb', region_name='eu-west-1')
s3 = boto3.client('s3')

def update_index(tableName, faceId, fullName):
    response = dynamodb.put_item(
        TableName=tableName,
        Item={
            'RekognitionId': {'S': faceId},
            'FullName': {'S': fullName}
        }
    )
    #print(response)

def index_faces(bucket, key, collection_id, image_id=None, attributes=(), region="eu-west-1"):
    rekognition = boto3.client("rekognition", region)
    response = rekognition.index_faces(
        Image={
            "S3Object": {
                "Bucket": bucket,
                "Name": key,
            }
        },
        CollectionId=collection_id,
        ExternalImageId="taifur",
        DetectionAttributes=attributes,
    )
    if response['ResponseMetadata']['HTTPStatusCode'] == 200:
        faceId = response['FaceRecords'][0]['Face']['FaceId']
        print(faceId)
        ret = s3.head_object(Bucket=bucket, Key=key)
        personFullName = ret['Metadata']['fullname']
        #print(ret)
        print(personFullName)
        update_index('taifur12345table', faceId, personFullName)

    # Print response to console.
    #print(response)
    return response['FaceRecords']


def lambda_handler(event, context):
    """ Route the incoming request based on type (LaunchRequest, IntentRequest,
    etc.) The JSON body of the request is provided in the event parameter.
    """
    print("event.session.application.applicationId=" +
          event['session']['application']['applicationId'])
          
    #myMQTTClient.connect()
 
    """
    Uncomment this if statement and populate with your skill's application ID to
    prevent someone else from configuring a skill that sends requests to this
    function.
    """
    # if (event['session']['application']['applicationId'] !=
    #         "amzn1.echo-sdk-ams.app.[unique-value-here]"):
    #     raise ValueError("Invalid Application ID")
 
    if event['session']['new']:
        on_session_started({'requestId': event['request']['requestId']},
                           event['session'])
 
    if event['request']['type'] == "LaunchRequest":
        return on_launch(event['request'], event['session'])
    elif event['request']['type'] == "IntentRequest":
        return on_intent(event['request'], event['session'])
    elif event['request']['type'] == "SessionEndedRequest":
        return on_session_ended(event['request'], event['session'])
 
 
def on_session_started(session_started_request, session):
    """ Called when the session starts """
 
    print("on_session_started requestId=" + session_started_request['requestId']
          + ", sessionId=" + session['sessionId'])
 
 
def on_launch(launch_request, session):
    """ Called when the user launches the skill without specifying what they
    want
    """
 
    print("on_launch requestId=" + launch_request['requestId'] +
          ", sessionId=" + session['sessionId'])
    # Dispatch to your skill's launch
    return get_welcome_response()
 
 
def on_intent(intent_request, session):
    """ Called when the user specifies an intent for this skill """
 
    print("on_intent requestId=" + intent_request['requestId'] +
          ", sessionId=" + session['sessionId'])
 
    intent = intent_request['intent']
    intent_name = intent_request['intent']['name']
 
    # Dispatch to your skill's intent handlers
    if intent_name == "IdentifyGuestIntent":
        return guest_search(intent, session)
    elif intent_name == "GiveAccessIntent":
        return give_access(intent, session)
    elif intent_name == "CloseDoorIntent":
        return close_door(intent, session)
    elif intent_name == "CheckDoorIntent":
        return check_door(intent, session)
    elif intent_name == "DescribeGuestIntent":
        return describe_guest(intent, session)
    elif intent_name == "AddGuestIntent":
        return add_guest(intent, session)
    elif intent_name == "AMAZON.HelpIntent":
        return get_welcome_response()
    elif intent_name == "AMAZON.StopIntent" or intent_name == "AMAZON.CancelIntent":
        return session_end(intent, session)
    else:
        raise ValueError("Invalid intent")

 
def on_session_ended(session_ended_request, session):
    """ Called when the user ends the session.
 
    Is not called when the skill returns should_end_session=true
    """
    print("on_session_ended requestId=" + session_ended_request['requestId'] +
          ", sessionId=" + session['sessionId'])
    # add cleanup logic here
 
# --------------- Functions that control the skill's behavior ------------------
 
 
def get_welcome_response():
    """ If we wanted to initialize the session to have some attributes we could
    add those here
    """
 
    session_attributes = {}
    card_title = "Welcome"
    speech_output = "Welcome to the door lock application. " \
                    "You can ask me, who is at the front door or" \
                    " open the door"
    # If the user either does not reply to the welcome message or says something
    # that is not understood, they will be prompted again with this text.
    reprompt_text = "Please ask me for checking the door by telling, " \
                    "Who is at the front door?"
    should_end_session = False
    #myMQTTClient.publish("test/door", "welcome", 0)
    return build_response(session_attributes, build_speechlet_response(
        card_title, speech_output, reprompt_text, should_end_session))

def give_access(intent, session):
    """ If we wanted to initialize the session to have some attributes we could
    add those here
    """
    #myMQTTClient.publish("test/door", "open", 0)
    myMQTTClient.publishAsync("test/door", "open", 1, ackCallback=None)
    global door_state
    session_attributes = {}
    card_title = "Opening Door"
    speech_output = "The door is now open."
    # If the user either does not reply to the welcome message or says something
    # that is not understood, they will be prompted again with this text.
    reprompt_text = ""
    should_end_session = True
    #myMQTTClient.connectAsync()
    #myMQTTClient.publish("test/door", "open", 0)
    time.sleep(2)
    door_state = True
    return build_response(session_attributes, build_speechlet_response(
        card_title, speech_output, reprompt_text, should_end_session))
 
def close_door(intent, session):
    """ If we wanted to initialize the session to have some attributes we could
    add those here
    """
    #myMQTTClient.publish("test/door", "close", 0)
    myMQTTClient.publishAsync("test/door", "close", 1, ackCallback=None)
    global door_state
    session_attributes = {}
    card_title = "Closing Door"
    speech_output = "The door is now closed."
    
    # If the user either does not reply to the welcome message or says something
    # that is not understood, they will be prompted again with this text.
    reprompt_text = ""
    should_end_session = True
    door_state = False
    time.sleep(2)
    return build_response(session_attributes, build_speechlet_response(
        card_title, speech_output, reprompt_text, should_end_session))


def check_door(intent, session):
    """ If we wanted to initialize the session to have some attributes we could
    add those here
    """
    global door_state
    session_attributes = {}
    card_title = "Checking Door"
    if door_state == True:
        speech_output = "The door is open."
    else:
        speech_output = "The door is closed."
    # If the user either does not reply to the welcome message or says something
    # that is not understood, they will be prompted again with this text.
    reprompt_text = ""
    should_end_session = True
    #myMQTTClient.publish("test/door", "open", 0)
    return build_response(session_attributes, build_speechlet_response(
        card_title, speech_output, reprompt_text, should_end_session))
        
def describe_guest(intent, session):
    """ If we wanted to initialize the session to have some attributes we could
    add those here
    """
    session_attributes = {}
    card_title = "Guest Details"
    speech_output = "The guest is waiting with smiling face."
    
    # If the user either does not reply to the welcome message or says something
    # that is not understood, they will be prompted again with this text.
    reprompt_text = ""
    should_end_session = True
    return build_response(session_attributes, build_speechlet_response(
        card_title, speech_output, reprompt_text, should_end_session))
        
def add_guest(intent, session):
    """ If we wanted to initialize the session to have some attributes we could
    add those here
    """
    for record in index_faces(BUCKET, KEY, COLLECTION, IMAGE_ID):
	    face = record['Face']
    #face = record['Face']
    session_attributes = {}
    card_title = "Adding Guest"
    speech_output = "The guest's details is stored for next time."
    
    # If the user either does not reply to the welcome message or says something
    # that is not understood, they will be prompted again with this text.
    reprompt_text = ""
    should_end_session = True
    return build_response(session_attributes, build_speechlet_response(
        card_title, speech_output, reprompt_text, should_end_session))
 
		
def guest_search(intent, session):

    card_title = "Guest's Identity"
    session_attributes = {}
    should_end_session = True
    speech_output = "I don't know the person."
    reprompt_text = ""
    # Search the face collection for faces matching the photo in S3
    response = rekognition.search_faces_by_image(
        CollectionId='family_collection',
        Image={
            "S3Object": {
                "Bucket": BUCKET,
                "Name": KEY,
            }
        },
    )
    #print(response)
    for match in response['FaceMatches']:
        print (match['Face']['FaceId'], match['Face']['Confidence'])

        # Look up the matched FaceId in DynamoDB to get the full name
        face = dynamodb.get_item(
            TableName='taifur12345table',
            Key={'RekognitionId': {'S': match['Face']['FaceId']}}
        )

        if 'Item' in face:
            guest = face['Item']['FullName']['S']
            speech_output = guest + " is waiting at the door."
            reprompt_text = ""
            break
        else:
            print ('no match found in person lookup')

    return build_response(session_attributes, build_speechlet_response(
        card_title, speech_output, reprompt_text, should_end_session))


def session_end(intent, session):
    """ If we wanted to initialize the session to have some attributes we could
    add those here
    """
    session_attributes = {}
    card_title = "End"
    speech_output = "Thank you for calling me. Have a nice day!"
    
    # If the user either does not reply to the welcome message or says something
    # that is not understood, they will be prompted again with this text.
    reprompt_text = ""
    should_end_session = True
    return build_response(session_attributes, build_speechlet_response(
        card_title, speech_output, reprompt_text, should_end_session))        
 
# --------------- Helpers that build all of the responses ----------------------
 
 
def build_speechlet_response(title, output, reprompt_text, should_end_session):
    return {
        'outputSpeech': {
            'type': 'PlainText',
            'text': output
        },
        'card': {
            'type': 'Simple',
            'title': title,
            'content': output
        },
        'reprompt': {
            'outputSpeech': {
                'type': 'PlainText',
                'text': reprompt_text
            }
        },
        'shouldEndSession': should_end_session
    }
 
 
def build_response(session_attributes, speechlet_response):
    return {
        'version': '1.0',
        'sessionAttributes': session_attributes,
        'response': speechlet_response
    }
arduino-door-guard.ino (Arduino)
This Arduino sketch receives the command for controlling the lock over the USB serial connection. A servo motor is used to control a 3D printed lock.
#include <Servo.h>

String inputString = "";         // a string to hold incoming data
boolean stringComplete = false;  // whether the string is complete

//servo motor is used to control the lock
Servo myservo;  // create servo object to control a servo

void setup() {
  // initialize serial:
  Serial.begin(9600);

  inputString.reserve(200);
  myservo.attach(9);  // attaches the servo on pin 9 to the servo object
}

void loop() {
  
  if (stringComplete) {
    //lcd.clear();
    //lcd.print(inputString);
    if(inputString == "open"){
        openDoor();
        delay(20);
      }
    else if(inputString == "close"){
        closeDoor();
        delay(20);
      }  
    // clear the string:
    inputString = "";
    stringComplete = false;
  }
}

/*
  SerialEvent occurs whenever a new data comes in the
 hardware serial RX.  This routine is run between each
 time loop() runs, so using delay inside loop can delay
 response.  Multiple bytes of data may be available.
 */

void serialEvent() {
  while (Serial.available()) {
    // get the new byte:
    char inChar = (char)Serial.read();
    // if the incoming character is a newline, set a flag
    // so the main loop can do something about it:
    if (inChar == '\n') {
      stringComplete = true;
    }
    else {
      // otherwise add it to the inputString:
      inputString += inChar;
    }
  }
}

void openDoor(){
  myservo.write(0); //move the servo to 0 degrees to open the lock
  delay(100);   
}

void closeDoor(){
  myservo.write(65); //move the servo to 65 degrees to fully close the lock
  delay(100); 
}
capture-n-upload.py (Python)
This code snippet automatically captures a photo of the guest when he presses the calling button, uploads the photo to the S3 bucket, and sends a notification to the house owner.
import time
import picamera

import boto3
s3 = boto3.resource('s3')

import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
GPIO.setup(4, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

#this function is called automatically on button press and calls the other two functions
def gpio_callback(channel):
	capture_image()
	time.sleep(0.3)
	print('Captured')
	upload_image()
	time.sleep(2)
	
#detect a button press and call gpio_callback function
GPIO.add_event_detect(4, GPIO.FALLING, callback=gpio_callback, bouncetime=2000)


#this function capture an image from pi camera and save it as sample.jpg
def capture_image():
	with picamera.PiCamera() as camera:
		camera.resolution = (640, 480)
		camera.start_preview()
		camera.capture('sample.jpg')
		camera.stop_preview()
		camera.close()
		return
		
				

#this function uploads the sample.jpg to AWS S3 Bucket with a name as Guest
def upload_image():
	file = open('sample.jpg','rb')
	object = s3.Object('taifur12345bucket','sample.jpg')
	ret = object.put(Body=file,
			Metadata={'FullName':'Guest'}
			)
	print(ret)
	return

# Keep the script running so the GPIO callback can fire on button presses
while True:
	time.sleep(0.1)
index-face-and-store-db.py (Python)
Use this code to index a face from the S3 bucket and store the index in DynamoDB with the full name.
import boto3

BUCKET = "taifur12345bucket"
KEY = "sample.jpg"
IMAGE_ID = KEY  # S3 key as ImageId
COLLECTION = "family_collection"

dynamodb = boto3.client('dynamodb', "eu-west-1")
s3 = boto3.client('s3')
# Note: you have to create the collection first!
# rekognition.create_collection(CollectionId=COLLECTION)

def update_index(tableName, faceId, fullName):
    response = dynamodb.put_item(
        TableName=tableName,
        Item={
            'RekognitionId': {'S': faceId},
            'FullName': {'S': fullName}
        }
    )
    #print(response)

def index_faces(bucket, key, collection_id, image_id=None, attributes=(), region="eu-west-1"):
    rekognition = boto3.client("rekognition", region)
    response = rekognition.index_faces(
        Image={
            "S3Object": {
                "Bucket": bucket,
                "Name": key,
            }
        },
        CollectionId=collection_id,
        ExternalImageId="taifur",
        DetectionAttributes=attributes,
    )
    if response['ResponseMetadata']['HTTPStatusCode'] == 200:
        faceId = response['FaceRecords'][0]['Face']['FaceId']
        print(faceId)
        ret = s3.head_object(Bucket=bucket, Key=key)
        personFullName = ret['Metadata']['fullname']
        #print(ret)
        print(personFullName)
        update_index('taifur12345table', faceId, personFullName)

    # Print response to console.
    #print(response)
    return response['FaceRecords']


for record in index_faces(BUCKET, KEY, COLLECTION, IMAGE_ID):
    face = record['Face']
    # details = record['FaceDetail']
    print "Face ({}%)".format(face['Confidence'])
    print "  FaceId: {}".format(face['FaceId'])
    print "  ImageId: {}".format(face['ImageId'])
upload-multiple-image-with-name.py (Python)
Use this code snippet to upload multiple images with full names to the S3 bucket.
import boto3

s3 = boto3.resource('s3')

# Get list of objects for indexing
images=[('afridi.jpg','Shahid Afridi'),
        ('sakib.jpg','Sakib Al Hasan'),
        ('kohli.jpg','Birat Kohli'),
        ('masrafi.jpg','Mashrafe Bin Mortaza'),
        ('ganguli.jpg','Sourav Ganguly')
       ]

# Iterate through list to upload objects to S3   
for image in images:
    file = open(image[0],'rb')
    object = s3.Object('taifur12345bucket',image[0])
    ret = object.put(Body=file,
                    Metadata={'FullName':image[1]}
                    )
    #print(image[0])
    #print(image[1])

Schematics

Block diagram of the System

Demo Setup

VUI Diagram

Arduino and Servo Lock connection

Arduino & Raspberry Pi Connection

Raspberry Pi Circuit
Raspberry Pi is equipped with a camera module, audio amplifier, and a button switch.
