IBM/Predictive-Industrial-Visual-Analysis

Name: Predictive-Industrial-Visual-Analysis

Owner: International Business Machines

Description: Predictive Industrial Visual Analysis using Watson Visual Recognition, IBM Cloud Functions and Cloudant database

Created: 2017-10-23 19:25:42.0

Updated: 2018-04-27 18:02:47.0

Pushed: 2018-04-27 18:02:47.0

Homepage: http://industrial-visual-analysis.mybluemix.net/

Size: 32438

Language: JavaScript

README

Skill Level: Beginner
N.B.: All services used in this repo are Lite plans.

Industrial Visual Analysis

In this code pattern, we will use machine learning classification techniques to identify various kinds of damage to industrial equipment from visual inspection images. Using Watson Visual Recognition, we will analyze each image against a classifier trained to inspect oil and gas pipelines with six identifiers: Normal, Burst, Corrosion, Damaged Coating, Joint Failure, and Leak. For each image we will provide a percent match against each category, indicating how closely the image matches one of the damage identifiers or the Normal identifier. This data can then be used to build a dashboard that ranks pipelines from needing immediate attention to needing no attention.
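
For illustration only, the per-image result from the trained classifier has roughly the following shape. The class names follow the positive_examples names used in the training command later in this README; the scores are invented (Visual Recognition reports values between 0 and 1, which the dashboard can present as a percent match):

// Illustrative result shape for one image; scores are made up, not real output.
const exampleResult = {
  classifier_id: 'OilPipeCondition_XXXXXXXXX',   // placeholder id
  name: 'OilPipeCondition',
  classes: [
    { class: 'Normal_Condition', score: 0.12 },
    { class: 'Corroded_Pipe',    score: 0.81 },  // highest score, so this pipe needs attention
    { class: 'Pipe_Leak',        score: 0.34 }
  ]
};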

The image data is stored in a Cloudant database, which makes it easy to connect remote devices (including drones) that capture images. The database can also store additional properties of each image, such as location and description. This code pattern demonstrates how IBM Cloud Functions (OpenWhisk) triggers a microservice as an image is added to the Cloudant database. The microservice performs the Visual Recognition analysis and updates the Cloudant database with the analysis data.

When the reader has completed this code pattern, they will understand how to train a Watson Visual Recognition classifier, store image data in Cloudant, trigger analysis with IBM Cloud Functions, and display the results in a web dashboard.

Architecture Flow

  1. User uploads the image through the web UI
  2. The image data is sent to the Cloudant database
  3. As the image is added to the database, Cloud Functions triggers the microservice
  4. The microservice analyzes the image using the trained Watson Visual Recognition service
  5. The analyzed data is fed back into the Cloudant database (a sketch of such a document follows this list)
  6. The dashboard on the web UI displays the Visual Recognition analysis and images requiring attention
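
As a rough sketch of the data involved (field names other than type are assumptions, not taken from the repo; the type value matches the Cloudant view created in the setup steps below), an image document could look like this before and after the microservice runs:

// Illustrative document shapes only; every field except "type" is an assumption.
const uploadedImage = {
  _id: 'pipe-image-001',
  type: 'image_db.image',          // the Cloudant view below filters on this type
  location: 'Pipeline segment 7',  // example of an extra property stored with the image
  description: 'Drone survey photo'
};

const analyzedImage = {
  ...uploadedImage,
  analysis: [                      // hypothetical field written back by the microservice
    { class: 'Corroded_Pipe', score: 0.81 },
    { class: 'Normal_Condition', score: 0.12 }
  ]
};
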
Included Components
Featured technologies

Running the Application

Follow these steps to set up and run the application. The steps are described in detail below.

Steps
  1. Watson Visual Recognition Setup
  2. Cloudant NoSQL DB Setup
  3. IBM Cloud Functions Setup
  4. Run Web Application
1. Watson Visual Recognition Setup

Create the Watson Visual Recognition service in IBM Cloud. You will need the API Key.

Here we will create a classifier using the zipped images to train the Watson Visual Recognition service. The images in each zipped folder are used to familiarize the Watson VR service with the images that relate to the different categories (Corrosion, Leak, etc.). Run the following command to submit all 6 sets of images to the Watson service classifier:

curl -X POST -u "apikey:{INSERT-YOUR-IAM-APIKEY-HERE}" -F "Bursted_Pipe_positive_examples=@Burst_Images.zip" -F "Corroded_Pipe_positive_examples=@Corrosion_Images.zip" -F "Damaged_Coating_positive_examples=@Damaged_Coating_Images.zip" -F "Joint_Failure_positive_examples=@Joint_Failure_Images.zip" -F "Pipe_Leak_positive_examples=@Leak_Images.zip" -F "Normal_Condition_positive_examples=@Normal_Condition.zip" -F "name=OilPipeCondition" "https://gateway.watsonplatform.net/visual-recognition/api/v3/classifiers?version=2018-03-19"

The response from the above command will provide a status on the submission and a CLASSIFIER_ID; copy this for later use. After executing the command, you can check whether the Watson service has finished training on the images you submitted, like this:

curl -X GET -u "apikey:{INSERT-YOUR-IAM-APIKEY-HERE}" "https://gateway.watsonplatform.net/visual-recognition/api/v3/classifiers/{INSERT-CLASSIFIER-ID-HERE}?version=2018-03-19"
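
The JSON returned by this status call looks roughly like the following (all values here are placeholders); training is complete when status changes from training to ready:

// Illustrative response body; the id, date, and class list are placeholders.
const exampleStatus = {
  classifier_id: 'OilPipeCondition_XXXXXXXXX',
  name: 'OilPipeCondition',
  status: 'ready',               // reads 'training' while the classifier is still being built
  created: '2018-03-19T00:00:00.000Z',
  classes: [
    { class: 'Bursted_Pipe' },
    { class: 'Corroded_Pipe' },
    { class: 'Damaged_Coating' },
    { class: 'Joint_Failure' },
    { class: 'Pipe_Leak' },
    { class: 'Normal_Condition' }
  ]
};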

You can find more information on working with your classifier in the Watson Visual Recognition documentation.

2. Cloudant NoSQL DB Setup

Create the Cloudant NoSQL service in IBM Cloud.

Create a new database in Cloudant called image_db

Next, create a view on the database with the design name image_db_images, index name image_db.images, and use the following map function:

function (doc) {
  if ( doc.type == 'image_db.image' ) {
    emit(doc);
  }
}
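
To sanity-check the view from Node.js, a minimal sketch is shown below. It assumes the nano Cloudant/CouchDB client and the CLOUDANT_URL value from the .env file described in the next step; this is not code from the repo:

// Minimal sketch using the "nano" client (an assumption, not this repo's own code).
const nano = require('nano')(process.env.CLOUDANT_URL);
const imageDb = nano.db.use('image_db');

// Query design document "image_db_images", view "image_db.images" created above.
imageDb.view('image_db_images', 'image_db.images', { include_docs: true })
  .then((body) => {
    body.rows.forEach((row) => console.log(row.id, row.doc && row.doc.type));
  })
  .catch(console.error);
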
3. IBM Cloud Functions Setup

We will now set up IBM Cloud Functions (OpenWhisk) using the Bluemix CLI.

Set up and download the Bluemix CLI.

API Authentication and Host

We will need the API authentication key and host.

N.B.: Make note of which plan (Lite, etc.) you are associating when creating this service.

Configure .env file

You will need to provide the credentials for your Cloudant NoSQL database and Watson Visual Recognition service, along with the Cloud Functions host/auth information retrieved in the previous step, in a .env file. Copy the sample .env.example file using the following command:

cp .env.example .env

and fill in your credentials and your VR Classifier name.

# from cloudant NoSQL database
CLOUDANT_USERNAME=
CLOUDANT_PASSWORD=
CLOUDANT_HOST=
CLOUDANT_URL=
CLOUDANT_DB=image_db
# from Watson Visual Recognition Service
VR_KEY=
VR_URL=
VR_CLASSIFIERS=default,OilPipeCondition_1063693116
# from OpenWhisk Functions Service in IBM Cloud
FUNCTIONS_APIHOST=
FUNCTIONS_AUTHORIZATION=
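
For reference, a Node.js app typically picks these values up with the dotenv package. A minimal sketch using the variable names from the .env file above (this is not the repo's actual startup code):

// Minimal sketch: load the .env file and read the credentials it defines.
require('dotenv').config();

const cloudantUrl = process.env.CLOUDANT_URL;                        // Cloudant NoSQL database
const vrKey = process.env.VR_KEY;                                    // Watson Visual Recognition
const classifiers = (process.env.VR_CLASSIFIERS || '').split(',');   // classifier names to query

console.log('Cloudant set:', Boolean(cloudantUrl), '| VR key set:', Boolean(vrKey), '| classifiers:', classifiers);
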
Run setup_functions.sh

We will now run the setup_functions.sh file to set up the microservice which triggers the Visual Recognition analysis as an image is added to the Cloudant database.

chmod +x setup_functions.sh
./setup_functions.sh --install

The above command will set up the OpenWhisk actions for you; there should be no need to do anything else if you see an Install Complete message with green OK signs in the CLI.
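
Conceptually, the action installed by setup_functions.sh reacts to a Cloudant change, classifies the image, and writes the result back. The outline below is only a sketch of that flow, not the repo's actual action code; classifyImage and saveAnalysis are stub helpers standing in for the Visual Recognition call and the Cloudant update:

// Conceptual IBM Cloud Functions (OpenWhisk) action outline; not the repo's actual code.
function classifyImage(doc) {
  // Stub: a real implementation would send the image to Watson Visual Recognition.
  return Promise.resolve([{ class: 'Corroded_Pipe', score: 0.81 }]);
}

function saveAnalysis(doc, analysis) {
  // Stub: a real implementation would update the document in the image_db database.
  return Promise.resolve(Object.assign({}, doc, { analysis }));
}

function main(params) {
  const doc = params; // the changed Cloudant document delivered by the trigger

  if (doc.type !== 'image_db.image' || doc.analysis) {
    return { skipped: true }; // ignore non-image documents and already-analyzed ones
  }

  return classifyImage(doc)
    .then((analysis) => saveAnalysis(doc, analysis))
    .then((updated) => ({ ok: true, analysis: updated.analysis }));
}

exports.main = main;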

Explore IBM Cloud Functions

In IBM Cloud, look for Functions in the Catalog.

There you will see a UI to manage and monitor the service. In addition, it has information on getting started and on developing actions.

4. Run Web Application
Run locally

To run the app, go to the `Industrial-Visual-Analysis` folder and run the following commands.

npm install
npm start

Test your application by going to: http://localhost:3000/
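
Locally the UI is served on port 3000, as the URL above shows. A minimal sketch of that kind of binding (assuming an Express server; this is not the repo's actual app.js), including the PORT fallback used when the app is deployed to IBM Cloud:

// Minimal sketch of the port binding, assuming Express (not the repo's actual code).
const express = require('express');
const app = express();

const port = process.env.PORT || 3000; // IBM Cloud injects PORT; default to 3000 locally
app.listen(port, () => console.log('Listening on http://localhost:' + port));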

Deploy to IBM Cloud

You can push the app to IBM Cloud by first editing the `manifest.yml` file and then using Cloud Foundry CLI commands.

Edit the manifest.yml file in the folder that contains your code and replace the name value with a unique name for your application. The name that you specify determines the application's URL, such as your-application-name.mybluemix.net. Additionally, update the service names so they match what you have in IBM Cloud. The relevant portion of the manifest.yml file looks like the following:

applications:
- path: .
  memory: 256M
  instances: 1
  domain: mybluemix.net
  name: {industrial-visual-analysis}
  disk_quota: 1024M
  services:
  - {cloudant}
  - {visual-recognition}
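
When the app is pushed with these services entries, Cloud Foundry binds the named cloudant and visual-recognition instances and exposes their credentials through the VCAP_SERVICES environment variable. A minimal sketch of reading them (a general Cloud Foundry mechanism; the labels shown are the usual ones for these services, not something defined by this repo):

// Minimal sketch: read bound-service credentials from VCAP_SERVICES on IBM Cloud.
const vcap = JSON.parse(process.env.VCAP_SERVICES || '{}');

// Keys are service labels; each entry is an array of bound instances with a credentials object.
const cloudantCreds = vcap.cloudantNoSQLDB && vcap.cloudantNoSQLDB[0].credentials;
const vrCreds = vcap.watson_vision_combined && vcap.watson_vision_combined[0].credentials;

console.log('Cloudant bound:', Boolean(cloudantCreds), '| Visual Recognition bound:', Boolean(vrCreds));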

In the command line use the following command to push the application to IBM Cloud:

bx app push YOUR_APP_NAME
Application

The app provides a web UI for uploading images and a dashboard that displays the Visual Recognition analysis, highlighting the images requiring attention.

Extending the pattern with Drone

This code pattern can be extended by adding a drone to take images. A DJI drone can be used to capture images and configured to send them to our Cloudant database. As each image is received by the Cloudant database, the Visual Recognition analysis and image details can be displayed through the web UI.
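
A rough sketch of how a remote device could add an image document and its attachment to the database, again assuming the nano client and the Cloudant credentials from the .env file (field names other than type are illustrative):

// Rough sketch only: insert a new image document plus attachment into image_db.
const fs = require('fs');
const nano = require('nano')(process.env.CLOUDANT_URL);
const imageDb = nano.db.use('image_db');

const doc = {
  type: 'image_db.image',           // the type the image_db view filters on
  location: 'Pipeline segment 7',   // illustrative extra properties
  description: 'Drone survey photo'
};

imageDb.insert(doc)
  .then((result) =>
    imageDb.attachment.insert(result.id, 'image.jpg',
      fs.readFileSync('image.jpg'), 'image/jpeg', { rev: result.rev }))
  .then(() => console.log('Image uploaded for analysis'))
  .catch(console.error);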

Troubleshooting
Visual Recognition

If you invoke GET /classifiers with verbose=1, what do you see? If that list is empty and you still get an error when creating a classifier, you should open an IBM Cloud support ticket. If it's not empty, you should use DELETE /classifiers/{classifier_id} to remove an existing classifier so that you can create your new one.

IBM Cloud Functions

The setup_functions.sh script has commands to uninstall, re-install, or update the IBM Cloud Functions setup, and to view the env credentials used by IBM Cloud Functions.

IBM Cloud application

To troubleshoot your IBM Cloud application, use the logs. To see the logs, run:

bx app logs <application-name> --recent

Learn more

License

Apache 2.0

