aws-samples/reinvent-fine-tune-mxnet-model-for-ios-coreml

Name: reinvent-fine-tune-mxnet-model-for-ios-coreml

Owner: AWS Samples

Description: Python code, deep learning models and associated jupyter notebooks to be used by the participants of the deep learning workshop at re:Invent 2017 (MCL311).

Created: 2017-11-17 18:46:42.0

Updated: 2017-12-05 03:43:03.0

Pushed: 2017-12-03 19:26:58.0

Homepage:

Size: 12193

Language: Jupyter Notebook


README

Workshop: Accelerating Apache MXNet Models on Apple Platforms Using Core ML

Introduction

This repository contains the Python code, deep learning models and the associated Jupyter notebook to be used by participants of the deep learning workshop at re:Invent 2017 (MCL311).

With the release of Core ML by Apple at WWDC 2017, iOS, macOS, watchOS and tvOS developers can now easily integrate a machine learning model into their app. This enables developers to bring intelligent new features to users with just a few lines of code. Core ML makes machine learning more accessible to mobile developers. It also enables rapid prototyping and the use of different sensors (like the camera, GPS, etc.) to create more powerful apps than ever.

Members of the MXNet community, including contributors from Apple and Amazon Web Services (AWS), have collaborated to produce a tool that converts machine learning models built using MXNet to the Core ML format. This tool makes it easy for developers to build apps powered by machine learning for Apple devices. With this conversion tool, you now have a fast pipeline for your deep-learning-enabled applications. You can move from scalable and efficient distributed model training in the AWS Cloud using MXNet to fast run-time inference on Apple devices.
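As a rough illustration of that pipeline, the snippet below shells out to the converter's command-line script from Python once a fine-tuned checkpoint has been saved. It is a minimal sketch, not part of the workshop materials: the checkpoint prefix, epoch, label file, and output name are placeholders, and the flag names follow the converter's published examples (verify them against the mxnet-to-coreml package you install).

import subprocess

# Sketch only: convert a saved MXNet checkpoint (prefix-symbol.json plus
# prefix-NNNN.params) into a Core ML .mlmodel file. All file names below
# are placeholders, not files shipped with this workshop.
subprocess.run([
    "mxnet_coreml_converter.py",
    "--model-prefix=catsdogs",                # checkpoint prefix written by MXNet
    "--epoch=10",                             # which saved epoch to convert
    '--input-shape={"data":"3,224,224"}',     # channels, height, width the network expects
    "--mode=classifier",
    "--class-labels=labels.txt",              # one label per line, e.g. cat / dog
    "--output-file=CatsDogs.mlmodel",         # drag this file into the Xcode project
], check=True)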

In this workshop, we will take a model trained on millions of examples from one dataset and fine-tune it on a much smaller dataset for a new problem: inferring whether an image shows a cat or a dog. We will then convert this model from MXNet to Core ML and import it into a sample iOS app written in Swift. The iOS app feeds a picture to the model, which predicts whether the image is of a cat or a dog. For best performance, we recommend running the app on a physical iOS device (e.g., an iPhone) running iOS 11, but you can also try it on the simulator that comes with Xcode 9.0, using the test images provided with the sample iOS app.

Instructions
1. Download instructions and artifacts:

2. Create a Training Compute Instance:

In the Amazon EC2 console, launch an instance. For step-by-step instructions, see Launching an AWS Marketplace Instance in the Amazon EC2 User Guide for Linux Instances. As you follow the steps, use the following values:

3. Connect

Once the instance is running (Status Check: 2/2), create an SSH tunnel to your instance. Enter the following into the terminal:

ssh -L 8888:localhost:8888 -i <YOUR_CERT>.pem ubuntu@<DNS of your EC2 Instance>
4. Starting the Jupyter Notebook

5. Run, Train & Model

Open notebook fine-tuning-catsvsdogs.ipynb
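The notebook walks through this step in full; the cell below is only a compressed sketch of the fine-tuning idea, assuming a ResNet-50 checkpoint and RecordIO files whose names are placeholders (the 'flatten0_output' layer name also depends on the particular pre-trained network you use).

import mxnet as mx

# Load a pre-trained network (placeholder checkpoint prefix and epoch).
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-50', 0)

# Replace the final classifier with a new 2-way (cat vs. dog) output layer.
net = sym.get_internals()['flatten0_output']
net = mx.sym.FullyConnected(data=net, num_hidden=2, name='fc_new')
net = mx.sym.SoftmaxOutput(data=net, name='softmax')

# Keep pre-trained weights for every layer except the new one.
new_args = {k: v for k, v in arg_params.items() if not k.startswith('fc_new')}

# Placeholder RecordIO iterators for the smaller cats-vs-dogs dataset.
train_iter = mx.io.ImageRecordIter(path_imgrec='train.rec', data_shape=(3, 224, 224), batch_size=32)
val_iter = mx.io.ImageRecordIter(path_imgrec='val.rec', data_shape=(3, 224, 224), batch_size=32)

# Fine-tune on the GPU instance launched earlier (use mx.cpu() if no GPU).
mod = mx.mod.Module(symbol=net, context=mx.gpu(0), label_names=['softmax_label'])
mod.fit(train_iter, eval_data=val_iter,
        arg_params=new_args, aux_params=aux_params,
        allow_missing=True,                 # the new layer starts from random initialization
        num_epoch=5, optimizer_params={'learning_rate': 0.01})

# Save the fine-tuned checkpoint so it can be converted to Core ML.
mod.save_checkpoint('catsdogs', 5)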

6. Creating the iOS App

We will be using a sample iOS app to test our newly created Core ML model.
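If you want to sanity-check the converted model before opening Xcode, Apple's coremltools package can load and query a .mlmodel from Python on a Mac. This is an optional sketch with placeholder file names; prediction only works on macOS, and the input/output names depend on how the model was converted.

import coremltools
from PIL import Image

# Load the converted model (placeholder file name) and print its input/output spec.
model = coremltools.models.MLModel('CatsDogs.mlmodel')
print(model)

# Quick prediction on a local test image, resized to the network's input size.
# 'data' is assumed to be the image input name; check the printed spec above.
img = Image.open('test_cat.jpg').resize((224, 224))
print(model.predict({'data': img}))   # e.g. a class label and per-class probabilities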


IMAGE ATTRIBUTION

The following works of great photography were used in the iOS app:

