awslabs/lambda-refarch-imagerecognition

Name: lambda-refarch-imagerecognition

Owner: Amazon Web Services - Labs

Description: The Image Recognition and Processing Backend reference architecture demonstrates how to use AWS Step Functions to orchestrate a serverless processing workflow using AWS Lambda, Amazon S3, Amazon DynamoDB and Amazon Rekognition.

Created: 2017-01-25 19:26:55.0

Updated: 2018-01-13 18:31:38.0

Pushed: 2018-01-10 16:52:34.0

Size: 9420

Language: JavaScript

README

Serverless Reference Architecture: Image Recognition and Processing Backend

The Image Recognition and Processing Backend demonstrates how to use AWS Step Functions to orchestrate a serverless processing workflow using AWS Lambda, Amazon S3, Amazon DynamoDB and Amazon Rekognition. This workflow processes photos uploaded to Amazon S3 and extracts metadata from the image such as geolocation, size/format, time, etc. It then uses image recognition to tag objects in the photo. In parallel, it also produces a thumbnail of the photo.

This repository contains sample code for all the Lambda functions depicted in the diagram below as well as an AWS CloudFormation template for creating the functions and related resources. There is also a test web app that you can run locally to interact with the backend.

(architecture diagram)

Walkthrough of the architecture
  1. An image is uploaded to the PhotoRepo S3 bucket under the “Incoming/” prefix.
  2. The S3 upload event triggers the ImageProcStartExecution Lambda function, which kicks off an execution of the ImageProc state machine in AWS Step Functions, passing in the S3 bucket and object key as input parameters.
  3. The ImageProc state machine performs the following sub-steps:
    • Read the file from S3 and extract image metadata (format, EXIF data, size, etc.).
    • Based on the output of the previous step, validate that the uploaded file is a supported format (png or jpg); if not, throw a NotSupportedImageType error and end the execution.
    • Store the extracted metadata in the ImageMetadata DynamoDB table.
    • In parallel, kick off two processes simultaneously:
      • Call Amazon Rekognition to detect objects in the image file; if any are detected, store the tags in the ImageMetadata DynamoDB table.
      • Generate a thumbnail and store it under the “Thumbnails/” prefix in the PhotoRepo S3 bucket.
Test web app

You can use the test web app to upload images and see the result of the image recognition and processing workflow.

Running the Example
Option 1: Launch the CloudFormation Template in US West - Oregon (us-west-2)

The backend infrastructure can be deployed in US West - Oregon (us-west-2) using the provided CloudFormation template. Click Launch Stack to launch the template in the US West - Oregon (us-west-2) region in your account:

Launch the Image Recognition and Processing Backend into Oregon with CloudFormation

(On the last page of the wizard, make sure to:

  1. Check the boxes that give AWS CloudFormation permission to “create IAM resources” and “create IAM resources with custom names”
  2. Follow the instructions to “Create Change Set”
  3. Click “Execute”)
Option 2: Launch the CloudFormation Template in a different region than US West - Oregon (us-west-2)

If you would like to deploy the template to a different region (it must be a region that supports Amazon Rekognition and AWS Step Functions, e.g. US East (N. Virginia) or EU (Ireland)), you need an S3 bucket in the target region, and you must package the Lambda functions into that S3 bucket using the aws cloudformation package utility.

First, in a terminal, go to the lambda-functions folder and prepare the npm dependencies for the following Lambda functions:

cd lambda-functions
cd create-s3-event-trigger-helper && npm install && cd ../thumbnail && npm install && cd ../extract-image-metadata && npm install && cd ..

Set environment variables for later commands to use:

REGION=[YOUR_TARGET_REGION]
S3BUCKET=[REPLACE_WITH_YOUR_BUCKET]

Then go to the cloudformation folder and use the aws cloudformation package utility:

cd ./cloudformation

python inject_state_machine_cfn.py -s state-machine.json -c image-processing.serverless.yaml -o image-processing.complete.yaml

aws cloudformation package --region $REGION --s3-bucket $S3BUCKET --template image-processing.complete.yaml --output-template-file image-processing.output.yaml

Finally, deploy the stack with the resulting YAML file (image-processing.output.yaml) through the CloudFormation console or the command line:

aws cloudformation deploy --region $REGION --template-file image-processing.output.yaml --stack-name photo-sharing-backend --capabilities CAPABILITY_IAM
Testing the example

You can use the test web app to see the backend working in action.

Configuring the web app

The web app needs references to the resources created by the CloudFormation template above. To configure them, follow these steps:

  1. Go to the CloudFormation console
  2. Go to the Outputs section of the stack you just launched in the previous section
  3. Open the Config.ts file in the webapp/app/ folder and fill in the corresponding values from the CloudFormation stack outputs
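The exact output names vary by stack, so the sketch below uses made-up property names purely to show the shape of the file; copy the real keys and values from your stack's Outputs tab.

```javascript
// Hypothetical sketch of webapp/app/Config.ts. Every property name here is
// an illustrative placeholder, not necessarily what the app expects; fill in
// the actual values from the CloudFormation stack outputs.
const Config = {
  Region: 'us-west-2',                              // region the stack is deployed in
  S3PhotoRepoBucket: 'REPLACE_WITH_BUCKET_NAME',    // PhotoRepo bucket
  DDBImageMetadataTable: 'REPLACE_WITH_TABLE_NAME', // ImageMetadata table
  StateMachineArn: 'REPLACE_WITH_STATE_MACHINE_ARN' // ImageProc state machine
};

module.exports = Config; // the real Config.ts uses TypeScript's `export default Config;`
```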
Running the web app
Prerequisite: node and npm

This web app is built using Angular2 and TypeScript, which relies heavily on node and npm.

Verify that you are running at least node v4.x.x and npm 3.x.x by running node -v and npm -v in a terminal/console window.

Get it now if it's not already installed on your machine.

Run the web app locally

In a terminal, go to the webapp folder, then type

npm install
npm start

This compiles the application, starts a local server, and opens a browser that loads the test web application.

Using the web app

Login

Pick any username to log in. (This is a test app to showcase the backend, so it does not use real user authentication. In a real app, you could use Amazon Cognito to manage user sign-up and login.)

The username is used to store ownership metadata for the uploaded images.

Album list

Create new or select existing albums to upload images to.

Photo gallery

Upload images and see status updates when:

  1. Upload to S3 bucket succeeds
  2. The AWS Step Functions execution is started. The execution ARN is shown in the UI so you can easily look up its details in the Step Functions console
  3. The AWS Step Functions execution completes

A sample of the extracted image metadata and recognized tags, along with the thumbnail generated by the Step Functions execution, is displayed for each uploaded image.

Below is the diagram of the state machine that is executed every time a new image is uploaded (you can explore it in the Step Functions console):

state machine diagram

Cleaning Up the Application Resources

To remove all resources created by this example, do the following:

  1. Delete all objects from the S3 bucket created by the CloudFormation stack.
  2. Delete the CloudFormation stack.
  3. Delete the CloudWatch log groups associated with each Lambda function created by the CloudFormation stack.
CloudFormation template resources

The following sections explain all of the resources created by the CloudFormation template provided with this example.

Storage
Image recognition and processing state machine
S3 Upload event trigger
Resources for the test web app
IAM roles

To reduce the number of IAM roles it takes up in your account, this CloudFormation template consolidates permissions into shared roles rather than creating one IAM role per Lambda function. When developing your own application, consider creating an individual IAM role for each Lambda function to follow the principle of least privilege.

License

This reference architecture sample is licensed under Apache 2.0.

