NVIDIA/apex

Name: apex

Owner: NVIDIA Corporation

Description: A PyTorch Extension

Created: 2018-04-23 16:28:52.0

Updated: 2018-05-23 18:02:24.0

Pushed: 2018-05-23 18:02:23.0

Homepage:

Size: 7572

Language: Python


README

Introduction

This repo is designed to hold PyTorch modules and utilities that are experimental and under active development. It is not intended as a long-term or production solution; things placed here are meant to eventually be moved upstream into PyTorch.

Requirements

  * Python 3
  * PyTorch 0.3 or newer
  * CUDA 9
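A quick local sanity check of these requirements might look like the following (a sketch; the version parsing is deliberately simple, and CUDA 9 is a build-time requirement that this only probes indirectly):

```python
import sys
import torch

# Require Python 3 and PyTorch >= 0.3; CUDA is needed when building the extension
assert sys.version_info[0] >= 3, "apex requires Python 3"
major, minor = (int(v) for v in torch.__version__.split(".")[:2])
assert (major, minor) >= (0, 3), "apex requires PyTorch 0.3 or newer"
print("CUDA runtime visible to PyTorch:", torch.cuda.is_available())
```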

Full Documentation

Quick Start

To build the extension, run the following command in the root directory of this project:

python setup.py install

To use the extension, simply run:

import apex

and, optionally (if required for your use case):

import apex._C as apex_backend

What's included

The current version of apex contains:

  1. Mixed precision utilities can be found here; examples of using them are available for the PyTorch ImageNet example and the PyTorch word language model example.
  2. Parallel utilities can be found here, and an example/walkthrough can be found here.
  3. apex/parallel/distributed.py contains a simplified implementation of PyTorch's DistributedDataParallel that is optimized for use with NCCL when each process drives a single GPU.
  4. apex/parallel/multiproc.py is a simple multi-process launcher that can be used on a single node/computer with multiple GPUs.
  5. A reparameterization function that allows you to recursively apply reparameterization to an entire module (including child modules).
  6. An experimental, in-development flexible RNN API.
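The mixed-precision recipe behind item 1 — FP32 "master" weights, an FP16 working copy for the forward/backward pass, and loss scaling to keep small gradients representable — can be sketched in plain PyTorch. This is a toy 1-D regression; the variable names, learning rate, and static loss scale of 1024 are illustrative assumptions, not apex's API:

```python
import torch

torch.manual_seed(0)

# FP32 "master" copy of the parameter; toy data for fitting y = 3 * x
master_w = torch.tensor(0.0)
x = torch.randn(64)
y = 3.0 * x

loss_scale = 1024.0   # static loss scale: keeps tiny FP16 gradients representable
losses = []

for step in range(100):
    # 1) make an FP16 working copy of the master weight for this iteration
    w16 = master_w.half().requires_grad_()
    # 2) forward in FP16; accumulate the loss in FP32
    pred = w16 * x.half()
    loss = ((pred - y.half()) ** 2).float().mean()
    losses.append(loss.item())
    # 3) backward on the scaled loss, so small gradients survive in FP16
    (loss * loss_scale).backward()
    # 4) unscale the gradient in FP32 and update the FP32 master weight
    grad32 = w16.grad.float() / loss_scale
    master_w -= 0.05 * grad32
```

In practice a dynamic loss scale, which backs off when an overflow is detected, is more robust than a fixed one.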
