Name: MAX
Owner: International Business Machines
Description: null
Created: 2018-03-01 23:27:23.0
Updated: 2018-05-12 06:16:02.0
Pushed: 2018-03-12 06:37:21.0
Homepage: null
Size: 55
Language: null
Keras is a deep learning library that you can use in conjunction with TensorFlow and several other deep learning libraries. This is the Keras Inception V3 model, with weights pre-trained on ImageNet. Since its first introduction, Inception has been one of the best-performing families of models on the ImageNet dataset.
| Domain | Industry | Framework | Datasets | Data Format |
| ------ | -------- | --------- | -------- | ----------- |
| Vision | General | TensorFlow and Theano | ImageNet | channels_first, channels_last |
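The two data formats in the table differ only in axis ordering. As a minimal sketch (assuming NumPy and Inception V3's default 299x299 input size):

```python
import numpy as np

# channels_last (TensorFlow's default): (height, width, channels)
img_hwc = np.zeros((299, 299, 3), dtype=np.float32)

# channels_first (Theano's default): (channels, height, width)
img_chw = np.transpose(img_hwc, (2, 0, 1))

print(img_hwc.shape, img_chw.shape)  # (299, 299, 3) (3, 299, 299)
```

Keras reads which ordering to expect from the `image_data_format` setting in `~/.keras/keras.json`.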
| Component | License | Link |
| --------- | ------- | ---- |
| This repository | Apache 2.0 | LICENSE |
| Model Weights | CC BY License | ABC |
| Model Code (3rd party) | MIT | ABC |
| Test assets | Various | Asset README |

## Quick Start
Download and run the image from Docker Hub (Docker Hub link and running instructions to follow).
- `docker`: The Docker command-line interface (https://www.docker.com/)
- `S3 CLI`: The command-line interface to configure your Object Storage
The minimum recommended capacity for this model is 2 GB of memory and 2 CPUs.
Clone the MAX repository locally. In a terminal, run the following command:

```
$ git clone https://github.com/IBM/MAX.git
```
Change directory into the repository base folder:

```
$ cd MAX
```
To build the Docker image locally, run:

```
$ docker build -t mae-keras -f Dockerfile .
```
The Docker image uses the nvidia/cuda base image, so Docker will need to pull that from Docker Hub, which may take some time. Note that this Docker image is currently CPU only (GPU support is planned).
To run the Docker image, which automatically starts the model serving API, run:

```
$ docker run -it -v $PWD:$PWD -w $PWD -v $HOME/.keras/models:/root/.keras/models -p 5000:5000 mae-keras
```
When run for the first time, the model files will automatically be downloaded.
The API server automatically generates an interactive Swagger documentation page. Go to http://localhost:5000
to load it. From there you can explore the API and also create test requests.
Use the `model/predict` endpoint to load a test image (you can use one of the test images from the `assets` folder) and get predicted labels for the image from the API.
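You can also call this endpoint from a Python script using only the standard library. The sketch below is illustrative: the `build_multipart` helper is our own, and the field name `image`, URL, and file path in the commented usage are assumptions based on the API description in this README.

```python
# Hypothetical helper for posting a file to the /model/predict endpoint
# without third-party dependencies such as requests.
import io
import mimetypes
import urllib.request
import uuid


def build_multipart(field_name, file_name, file_bytes):
    """Build a multipart/form-data body and the matching Content-Type header."""
    boundary = uuid.uuid4().hex
    content_type = mimetypes.guess_type(file_name)[0] or "application/octet-stream"
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        f'Content-Disposition: form-data; name="{field_name}"; '
        f'filename="{file_name}"\r\n'.encode()
    )
    body.write(f"Content-Type: {content_type}\r\n\r\n".encode())
    body.write(file_bytes)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return body.getvalue(), f"multipart/form-data; boundary={boundary}"


# Usage (assumes the API server from the steps above is running locally):
# with open("assets/dog.jpg", "rb") as f:
#     body, ctype = build_multipart("image", "dog.jpg", f.read())
# req = urllib.request.Request(
#     "http://127.0.0.1:5000/model/predict", data=body,
#     headers={"Content-Type": ctype}, method="POST",
# )
# print(urllib.request.urlopen(req).read())
```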
You can also test it on the command line, for example:

```
$ curl -F "image=@assets/dog.jpg" -XPOST http://127.0.0.1:5000/model/predict
```
You should see a JSON response like that below:

```json
{
  "status": "ok",
  "predictions": [
    {"label": "beagle", "probability": 0.9201778173446655, "label_id": "n02088364"},
    {"label": "Walker_hound", "probability": 0.010086667723953724, "label_id": "n02089867"},
    {"label": "English_foxhound", "probability": 0.009787781164050102, "label_id": "n02089973"},
    {"label": "bluetick", "probability": 0.006095303222537041, "label_id": "n02088632"},
    {"label": "Eskimo_dog", "probability": 0.0025898281019181013, "label_id": "n02109961"}
  ]
}
```
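A response in this shape can be consumed programmatically. A minimal sketch that extracts the most likely label, using the example response values above:

```python
import json

# The example /model/predict response shown above, as a raw string.
response_text = (
    '{"status": "ok", "predictions": ['
    '{"label": "beagle", "probability": 0.9201778173446655, "label_id": "n02088364"}, '
    '{"label": "Walker_hound", "probability": 0.010086667723953724, "label_id": "n02089867"}, '
    '{"label": "English_foxhound", "probability": 0.009787781164050102, "label_id": "n02089973"}, '
    '{"label": "bluetick", "probability": 0.006095303222537041, "label_id": "n02088632"}, '
    '{"label": "Eskimo_dog", "probability": 0.0025898281019181013, "label_id": "n02109961"}]}'
)

response = json.loads(response_text)
assert response["status"] == "ok"

# The predictions appear sorted by probability; max() works either way.
top = max(response["predictions"], key=lambda p: p["probability"])
print(top["label"], round(top["probability"], 2))  # beagle 0.92
```

The `label_id` values are WordNet synset IDs, as used by the ImageNet class labels.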
To run the Flask API app in debug mode, edit `app.py` to set the last line to:

```
app.run(debug=True, host='0.0.0.0')
```
You will then need to rebuild the Docker image (following steps 2 and 3 above).
Watson Studio Deep Learning instructions go here