Name: OpenSeq2Seq
Owner: NVIDIA Corporation
Description: Distributed (multi-GPU and multi-node) sequence-to-sequence learning
Created: 2017-09-08 20:53:07.0
Updated: 2018-03-25 22:58:46.0
Pushed: 2018-03-19 23:44:27.0
Size: 134
Language: Python
This is a research project, not an official NVIDIA product.
OpenSeq2Seq's main goal is to let researchers explore various sequence-to-sequence models as effectively as possible. Efficiency comes from full support for distributed and mixed-precision training. OpenSeq2Seq is built on TensorFlow and provides all the building blocks needed to train encoder-decoder models for neural machine translation and automatic speech recognition. We plan to extend it to other modalities in the future.
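Mixed-precision training typically keeps a float32 master copy of the weights while computing in float16, and uses loss scaling so that small gradients do not underflow in the reduced-precision backward pass. The following is a minimal NumPy sketch of static loss scaling, purely illustrative and not OpenSeq2Seq's actual implementation (the names `backward_fp16`, `unscale`, and `LOSS_SCALE` are hypothetical):

```python
import numpy as np

# Hypothetical illustration (not OpenSeq2Seq code): static loss scaling.
# Gradients smaller than float16's smallest subnormal (~6e-8) vanish,
# so the loss is multiplied by a scale factor before the float16
# backward pass and the gradients are divided by it again in float32.

LOSS_SCALE = 1024.0

def backward_fp16(true_grad, scale):
    """Stand-in for a float16 backward pass on a scale-multiplied loss."""
    return np.float16(np.asarray(true_grad) * scale)

def unscale(scaled_grads_fp16, scale):
    """Recover float32 gradients from the scaled float16 ones."""
    return scaled_grads_fp16.astype(np.float32) / scale

true_grad = [1e-8]              # underflows to 0.0 in raw float16
naive = np.float16(true_grad)   # signal lost without scaling
recovered = unscale(backward_fp16(true_grad, LOSS_SCALE), LOSS_SCALE)
```

In practice frameworks also check the scaled gradients for inf/NaN and adjust the scale dynamically, but the underlying idea is the same scale-then-unscale round trip shown here.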
https://nvidia.github.io/OpenSeq2Seq/
The speech-to-text workflow uses some parts of Mozilla's DeepSpeech project.
The text-to-text workflow uses some functions from Tensor2Tensor and the Neural Machine Translation (seq2seq) Tutorial.