NaturalHistoryMuseum/semantic-segmentation

Name: semantic-segmentation

Owner: Natural History Museum

Description: Semantic instance segmentation of specimen images using semi-supervised learning

Created: 2018-02-13 11:02:09.0

Updated: 2018-05-16 08:23:46.0

Pushed: 2018-04-26 17:08:40.0

Homepage:

Size: 801

Language: Jupyter Notebook


README

Semi-supervised semantic and instance segmentation

Image segmentation is a process that breaks an image down into smaller segments according to some criteria. For semantic segmentation this means that all pixels of one segment represent an object (or objects) of a single class, where each class is predefined to represent a type of object of interest. For example, in the slide image shown below, the objects can be classified as being a specimen (in the centre), regular labels (either side of the specimen), type labels (red circle), barcode labels, or otherwise as being part of the 'background'.

[Figure: slide image]

Segments can be broken down further into instances, which represent distinct objects separately, even if they share a class. For example, the labels on either side of the slide image form a single semantic segment (even though it is not contiguous) but are treated as separate instances. A corresponding representation of instances for the slide image can be seen below, where each unique colour maps pixels to a specific instance.

[Figure: instance segmentation of the slide image]
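The distinction above can be made concrete with a toy example (a sketch, not code from this repository): a semantic map assigns a class id to every pixel, and splitting one class into instances amounts to connected-component labelling of that class's mask.

```python
import numpy as np
from scipy import ndimage

# Toy semantic map: each pixel holds a class id.
# Hypothetical ids mirroring the slide example: 0 = background,
# 1 = label, 2 = specimen.
semantic = np.array([
    [1, 1, 0, 2, 2, 0, 1, 1],
    [1, 1, 0, 2, 2, 0, 1, 1],
])

# All pixels of class 1 form a single semantic segment...
label_mask = semantic == 1

# ...but connected-component labelling separates the two
# non-contiguous regions into distinct instances.
instances, n = ndimage.label(label_mask)
print(n)  # 2 distinct 'label' instances
```

Here `instances` holds a unique integer per instance, which is exactly the per-pixel colouring shown in the instance figure.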

In situations where labelled example data is hard to come by, as is the case for manually generated example segmentations for slides, it is desirable to use methods that can perform well with small datasets. Semi-supervised learning covers a number of techniques to leverage large datasets of unlabelled data to enhance the capability of models otherwise learned on small amounts of data.
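One common semi-supervised technique is self-training with pseudo-labels: a model fitted on the small labelled set labels the unlabelled pool, and its most confident predictions are folded back into the training set. The sketch below illustrates the idea on toy 1-D data with a nearest-centroid classifier; it is purely illustrative and not necessarily the method used in this project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small labelled set (two classes) and a larger unlabelled pool.
x_lab = np.array([0.0, 0.2, 2.0, 2.2])
y_lab = np.array([0, 0, 1, 1])
x_unl = rng.normal(loc=np.repeat([0.1, 2.1], 50), scale=0.3)

for _ in range(3):
    # Fit a nearest-centroid classifier on the current labelled set.
    centroids = np.array([x_lab[y_lab == k].mean() for k in (0, 1)])
    dists = np.abs(x_unl[:, None] - centroids[None, :])
    pred = dists.argmin(axis=1)
    # Confidence: margin between the two centroid distances.
    margin = np.abs(dists[:, 0] - dists[:, 1])
    keep = margin > 0.8
    # Pseudo-label confident points and move them into the labelled set.
    x_lab = np.concatenate([x_lab, x_unl[keep]])
    y_lab = np.concatenate([y_lab, pred[keep]])
    x_unl = x_unl[~keep]
```

Each round the labelled set grows with confidently pseudo-labelled points, so the centroids are re-estimated from far more data than the four original labels.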

This project combines approaches to both of these problems, primarily using the following prior works as reference:

Installation

The implementation is entirely in Python and all dependencies can be installed most straightforwardly by using the Anaconda package manager. Creating a new conda environment is done using the command:

conda env create -f environment.yml

which will create a new environment with the name segmentation. It is assumed that a PyTorch-supported GPU is available; notices of dropped GPU support can be found in the PyTorch release notes. For older GPUs it may still be possible to install PyTorch from source, although behaviour is not guaranteed. Adapting the code to run without a GPU should be straightforward but could be prohibitively slow, especially for training.
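After creating the environment, one way to confirm that PyTorch can see the GPU is to activate it and query CUDA availability (a quick sanity check, not a step from this README):

```shell
conda activate segmentation
python -c "import torch; print(torch.cuda.is_available())"
```

If this prints False, PyTorch will fall back to the CPU and training will be much slower.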

Usage

Notes on usage can be found in the wiki.


This work is supported by the National Institutes of Health's National Center for Advancing Translational Sciences, Grant Number U24TR002306. This work is solely the responsibility of the creators and does not necessarily represent the official views of the National Institutes of Health.