IBM/keras-binary-classifier

Name: keras-binary-classifier

Owner: International Business Machines

Description: A sequential CNN binary image classifier written in Keras

Created: 2018-04-11 17:18:36

Updated: 2018-04-16 05:31:11

Pushed: 2018-04-16 21:47:55

Homepage: null

Size: 48385

Language: Jupyter Notebook

GitHub Committers

User | Most Recent Commit | # Commits

Other Committers

User | Email | Most Recent Commit | # Commits

README

seeFOOD CNN, a binary classifier written in Keras and converted to CoreML

This repo walks you through how to use GPU hardware in the cloud with Nimbix to quickly train and deploy a Convolutional Neural Network model that can tell you whether your lunchtime nutritional choice is the right one - all with the camera of the mobile phone in your pocket. All you need are some photos and descriptions of them, and you can be up and running with a model to stream video through in no time flat.
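The "photos plus descriptions" setup above usually means a folder-per-class dataset, where the folder name is the label. A minimal stdlib-only sketch of that convention (the `hot_dog`/`not_hot_dog` folder names here are illustrative assumptions, not this repo's exact layout):

```python
from pathlib import Path
import tempfile

def labeled_examples(root):
    """Map each image under root/<class_name>/ to a (filename, label) pair.

    The parent folder name is the class: 1 for 'hot_dog', 0 for anything else.
    """
    pairs = []
    for img in sorted(Path(root).glob("*/*.jpg")):
        label = 1 if img.parent.name == "hot_dog" else 0
        pairs.append((img.name, label))
    return pairs

# Build a toy dataset on disk to show the convention.
root = Path(tempfile.mkdtemp())
for cls, names in {"hot_dog": ["a.jpg"], "not_hot_dog": ["b.jpg", "c.jpg"]}.items():
    (root / cls).mkdir()
    for n in names:
        (root / cls / n).touch()

print(labeled_examples(root))  # [('a.jpg', 1), ('b.jpg', 0), ('c.jpg', 0)]
```

Keras-style image loaders infer labels from exactly this kind of directory structure, which is why no separate annotation file is needed.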

I'm sure you've seen the episode of Silicon Valley, but to give you an idea of the amazing technology we are going to share with you today here's a clip:

So you want to identify hot dogs - great! Summer is just around the corner, and you can never be too careful with what you're eating. You too can develop an app that identifies Hot Dog and the alternative… Not Hot Dog.

Overview

This repo will walk you through the steps and technologies needed to train a Deep Learning model using a Convolutional Neural Network, evaluate its accuracy, and save it into a format that can be loaded on an iOS device. With a model converted to Apple's CoreML format, we will load a .mlmodel into an open-source project: Lumina. Within Lumina you can quickly import and activate your .mlmodel, and stream object predictions in real time from the camera feed… Let me repeat, you can stream object predictions from the camera feed in real time - and you can do this with one line of code.

Demo

Technologies
Lumina

Lumina is an iOS camera framework written in Swift that can use any CoreML model for object recognition, and can also stream video, capture images, and scan QR/bar codes.

Lumina

CoreMLTools

CoreMLTools converts trained machine learning models to Apple's Core ML format so they can be integrated into your iOS app.

CoreML

Keras

Keras.io is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano.

Keras
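As a rough aside on the "convolutional" part: each conv or pooling layer shrinks the spatial size of its input in a predictable way, output = floor((n + 2p - k) / s) + 1 for input size n, kernel k, stride s, and padding p. A small stdlib sketch tracing that arithmetic (the 224-pixel input and the three conv/pool rounds are illustrative, not this repo's exact architecture):

```python
def out_size(n, k, s=1, p=0):
    """Spatial output size of a conv/pool layer:
    n = input size, k = kernel size, s = stride, p = padding."""
    return (n + 2 * p - k) // s + 1

# Trace an illustrative stack: 224x224 input,
# three rounds of 3x3 conv (no padding) followed by 2x2 max-pool.
n = 224
for _ in range(3):
    n = out_size(n, k=3)       # conv 3x3, stride 1: 224 -> 222, ...
    n = out_size(n, k=2, s=2)  # max-pool 2x2, stride 2: 222 -> 111, ...
print(n)  # 26
```

This is why a deep CNN can end in a small dense layer: the spatial dimensions collapse while the channel depth carries the learned features.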

Nimbix

Nimbix provides supercomputing in the cloud.

Nimbix

PowerAI

PowerAI takes advantage of the NVLink interconnect between CPU and GPU (a high-bandwidth pipe between CPU, GPU, and memory) to help support and load larger deep learning models than ever before. You can train on datasets that could never be trained before, utilizing system memory without bottlenecks.

PowerAI

Steps

Follow these steps to set up and run this phenomenon sweeping the vegan meat industry. The steps are described in detail below.

  1. Get 24-hours of free access to the PowerAI platform
  2. Access and start the Jupyter notebook
  3. Run the notebook
  4. Save and share your model
  5. Implement your model with Lumina
  6. End your trial
1. Get 24-Hours of free access to the PowerAI platform

IBM has partnered with Nimbix to provide cognitive developers a trial account that provides 24-Hours of free processing time on the PowerAI platform. Follow these steps to register for access to Nimbix to try the PowerAI Cognitive Code Patterns and explore the platform.

Go to the IBM Marketplace PowerAI Portal, and click Request Trial.

On the IBM PowerAI Trial page, shown below, enter the required information to sign up for an IBM account and click Continue. If you already have an IBM ID, click Already have an account? Log in, enter your credentials and click Continue.

On the Almost there? page, shown below, enter the required information and click Continue to complete the registration and launch the IBM Marketplace Products and Services page.

Your IBM Marketplace Products and Services page displays all offerings that are available to you; the PowerAI Trial should now be one of them. From the PowerAI Trial section, click Launch, as shown below, to launch the IBM PowerAI trial page.

The Welcome to IBM PowerAI Trial page provides instructions for accessing the trial, as shown below. Alternatively, you will receive an email confirming your registration with similar instructions that you can follow to start the trial.


2. Access and start the Jupyter notebook

Use git clone to download the example notebook, dataset, and retraining library with a single command.

git clone https://github.com/justinmccoy/keras-binary-classifier
3. Run the notebook

When a notebook is executed, what is actually happening is that each code cell in the notebook is executed, in order, from top to bottom.

Each code cell is selectable and is preceded by a tag in the left margin. The tag format is In [x]:. Depending on the state of the notebook, the x can be:

  * A blank, indicating that the cell has never been executed.
  * A number, indicating the relative order in which this cell was executed.
  * A *, indicating that the cell is currently executing.

There are several ways to execute the code cells in your notebook:

  * One cell at a time: select the cell and press Shift+Enter (or the Play button in the toolbar).
  * Batch mode, in sequential order: from the Cell menu, choose Run All.

4. Save and share your model
How to save your work:

Because this notebook is running temporarily on a Nimbix Cloud server, use the following options to save your work:

Under the File menu, there are options to:

  * Save and Checkpoint the notebook on the server (note that the trial server is temporary).
  * Download as… a Notebook (.ipynb) or Python (.py) file to your local machine.

5. Implement Your Model With Lumina

You'll need to start an iOS project that uses the Lumina framework. You can either clone the repository linked above and use the LuminaSample app in the main workspace, or you can make your own iOS app using the framework. Watch this video for more information on using Lumina.

Once you have a project open with Lumina integrated, make sure you implement a camera with at least the following code:

camera = LuminaViewController()
camera.delegate = self
camera.streamingModelTypes = [seefood()]
present(camera, animated: true)

At this point, your iOS app is already making use of the CoreML functionality embedded in Lumina. Now, you need to actually do something with the data returned from it.

Extend your class to conform to LuminaDelegate like so:

extension ViewController: LuminaDelegate {
    func streamed(videoFrame: UIImage, with predictions: [LuminaRecognitionResult]?, from controller: LuminaViewController) {

    }
}

Results streamed from each video frame are given to you in this delegate method. In this example, you have created a binary classifier, so you should only expect one result with either a 1.0 or 0.0 result. Lumina has a built in text label to use as a prompt, so update your method to make use of it here like so:

func streamed(videoFrame: UIImage, with predictions: [LuminaRecognitionResult]?, from controller: LuminaViewController) {
    guard let predicted = predictions else {
        return
    }
    guard let value = predicted.first?.predictions?.first else {
        return
    }
    if value.confidence > 0 {
        controller.textPrompt = "\(String(describing: predicted.first?.type)): Not Hot Dog"
    } else {
        controller.textPrompt = "\(String(describing: predicted.first?.type)): Hot Dog"
    }
}

Run your app, and point the camera at a hot dog, then at anything that isn't a hot dog. The results speak for themselves!
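Under the hood, the binary decision shown in the Swift snippet boils down to thresholding a single sigmoid output from the network. A stdlib-only Python sketch of that logic (the 0.5 cutoff and the high-score-means-Not-Hot-Dog mapping mirror the idea in the snippet above, not exact Lumina output):

```python
import math

def sigmoid(x):
    """Squash a raw network output (logit) into a 0..1 score."""
    return 1.0 / (1.0 + math.exp(-x))

def classify(logit, threshold=0.5):
    """Collapse the network's single output into a label plus score."""
    score = sigmoid(logit)
    label = "Not Hot Dog" if score > threshold else "Hot Dog"
    return label, score

label, score = classify(2.0)
print(label, round(score, 3))  # Not Hot Dog 0.881
```

Because there is only one output neuron, a confidence near 1.0 picks one class and a confidence near 0.0 picks the other - which is why the delegate method only ever needs to inspect the first prediction.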

6. End your trial

When you are done with your work, please cancel your subscription by issuing the following command in your ssh session or by visiting the Manage link on the My Products and Services page.

 poweroff --force
Links
License

Apache 2.0

