NVIDIA/OpenSeq2Seq

Name: OpenSeq2Seq

Owner: NVIDIA Corporation

Description: Distributed (multi-GPU and multi-node) sequence-to-sequence learning

Created: 2017-09-08 20:53:07

Updated: 2018-03-25 22:58:46

Pushed: 2018-03-19 23:44:27

Homepage:

Size: 134 KB

Language: Python

README

OpenSeq2Seq

OpenSeq2Seq: a toolkit for distributed and mixed-precision training of sequence-to-sequence models

This is a research project, not an official NVIDIA product.

OpenSeq2Seq's main goal is to let researchers explore sequence-to-sequence models as effectively as possible. This efficiency comes from full support for distributed and mixed-precision training. OpenSeq2Seq is built on TensorFlow and provides all the building blocks needed to train encoder-decoder models for neural machine translation and automatic speech recognition. We plan to extend it to other modalities in the future.
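
To make the mixed-precision feature concrete, below is a minimal sketch of static loss scaling in TensorFlow 1.x, the core technique that keeps small fp16 gradients from underflowing on Volta GPUs. This is an illustration only, not the toolkit's actual API: the names mixed_precision_train_op and LOSS_SCALE, the optimizer choice, and the assumption that the model computes in fp16 with fp32 master weights are all hypothetical.

import tensorflow as tf

LOSS_SCALE = 128.0  # hypothetical static scale; a toolkit may also adjust it dynamically

def mixed_precision_train_op(loss, learning_rate=1e-3):
    # Scale the loss so small fp16 gradients do not flush to zero,
    # then unscale the (dense) gradients before the fp32 weight update.
    optimizer = tf.train.AdamOptimizer(learning_rate)
    grads_and_vars = optimizer.compute_gradients(loss * LOSS_SCALE)
    unscaled = [(g / LOSS_SCALE, v) for g, v in grads_and_vars if g is not None]
    return optimizer.apply_gradients(unscaled)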

Features
  1. Sequence-to-sequence learning: neural machine translation and automatic speech recognition
  2. Data-parallel distributed training: multi-GPU and multi-node (see the sketch after this list)
  3. Mixed precision training for NVIDIA Volta GPUs (illustrated above)
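
As a rough illustration of the data-parallel mode, the following sketch shows the classic tower-style gradient-averaging pattern in TensorFlow 1.x: the model is replicated once per GPU, each replica processes its own shard of the batch, and the averaged gradients update the shared variables. It is a generic example under assumed names (build_loss, batches), not OpenSeq2Seq's actual training loop; the real toolkit is driven through the configuration described in the documentation linked below.

import tensorflow as tf

def average_gradients(tower_grads):
    # Average the per-variable gradients computed independently on each GPU.
    averaged = []
    for grads_and_var in zip(*tower_grads):
        grads = [g for g, _ in grads_and_var]
        var = grads_and_var[0][1]  # the variable itself is shared across towers
        averaged.append((tf.reduce_mean(tf.stack(grads), axis=0), var))
    return averaged

def data_parallel_train_op(build_loss, batches, learning_rate=1e-3):
    # One tower per GPU: replicate the model with shared variables,
    # compute gradients per shard, then apply the average once.
    optimizer = tf.train.AdamOptimizer(learning_rate)
    tower_grads = []
    for gpu_id, batch in enumerate(batches):
        with tf.device('/gpu:%d' % gpu_id), \
             tf.variable_scope('model', reuse=(gpu_id > 0)):
            tower_grads.append(optimizer.compute_gradients(build_loss(batch)))
    return optimizer.apply_gradients(average_gradients(tower_grads))
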
Documentation

https://nvidia.github.io/OpenSeq2Seq/

Acknowledgments

The speech-to-text workflow uses some parts of the Mozilla DeepSpeech project.

The text-to-text workflow uses some functions from Tensor2Tensor and the Neural Machine Translation (seq2seq) Tutorial.
