Here is how you can use OpenCV with AWS Rekognition to match faces from a live camera stream:
- Make sure your configured AWS account has access to Rekognition and S3.
- Create a Rekognition collection. This is the collection of faces the camera stream will be matched against:

```shell
aws rekognition create-collection --collection-id "faces" --region us-east-1
```
- List collections to make sure that the collection was created:

```shell
aws rekognition list-collections --region us-east-1
```
- Create an S3 bucket and upload a photo of the face you'd like your camera to recognize.
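For example, with the AWS CLI (the bucket and file names below are placeholders; bucket names must be globally unique):

```shell
# Create the bucket in the same region you will use for Rekognition
aws s3 mb s3://BucketName --region us-east-1

# Upload the reference photo of the face to match
aws s3 cp FileName.png s3://BucketName/FileName.png
```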
- Index the faces in your S3 bucket:

```shell
aws rekognition index-faces \
  --image '{"S3Object":{"Bucket":"BucketName","Name":"FileName.png"}}' \
  --collection-id "faces" \
  --region us-east-1
```

- The following Python code should work:
```python
import io

import boto3
import cv2
from PIL import Image

frame_skip = 12  # analyze every 12th frame to cut down on Rekognition API calls
threshold = 80
BUCKET = "BUCKETNAME"          # bucket where the face you're trying to match lives
COLLECTION = "COLLECTIONNAME"  # collection of faces you have already indexed


def search_faces_by_image(bin_img, collection_id, threshold=80, region="us-east-1"):
    rekognition = boto3.client("rekognition", region)
    try:
        response = rekognition.search_faces_by_image(
            Image={"Bytes": bin_img},
            CollectionId=collection_id,
            FaceMatchThreshold=threshold,
        )
        return response["FaceMatches"]
    except rekognition.exceptions.InvalidParameterException:
        # Rekognition raises this when it detects no face in the frame
        print("no faces found")
        return None


vidcap = cv2.VideoCapture(0)
cur_frame = 0
success = True
while success:
    success, frame = vidcap.read()  # get next frame from the camera
    if success and cur_frame % frame_skip == 0:  # only analyze every n-th frame
        print("frame: {}".format(cur_frame))
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV frames are BGR; PIL expects RGB
        pil_img = Image.fromarray(rgb)  # convert the numpy frame into a PIL Image
        stream = io.BytesIO()
        pil_img.save(stream, format="JPEG")  # encode the frame as JPEG bytes
        bin_img = stream.getvalue()
        matches = search_faces_by_image(bin_img, COLLECTION, threshold)
        print(matches)
    cur_frame += 1
```
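`search_faces_by_image` returns Rekognition's `FaceMatches` list, where each entry carries a `Similarity` score and a `Face` record. Here is a minimal sketch of picking the strongest match; `best_match` is a hypothetical helper, and the sample payload is hand-built to mirror the response shape, not a real API result:

```python
# Hand-built sample shaped like Rekognition's SearchFacesByImage FaceMatches
face_matches = [
    {"Similarity": 99.2, "Face": {"FaceId": "abc-123", "ExternalImageId": "alice.png"}},
    {"Similarity": 81.5, "Face": {"FaceId": "def-456", "ExternalImageId": "bob.png"}},
]


def best_match(matches, threshold=80):
    """Return the highest-Similarity match at or above threshold, else None."""
    candidates = [m for m in matches or [] if m["Similarity"] >= threshold]
    return max(candidates, key=lambda m: m["Similarity"]) if candidates else None


match = best_match(face_matches)
if match:
    print(match["Face"]["ExternalImageId"])  # prints: alice.png
```

`ExternalImageId` is whatever label was attached when the face was indexed (by default, Rekognition can derive it from the S3 object name), so it is a convenient way to map a match back to a known person.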