Name: SageMaker_seq2seq_WordPronunciation
Owner: AWS Samples
Description: Sequence-to-sequence modeling has achieved strong performance on tasks where the input is a sequence of tokens (words, for example) and the output is also a sequence of tokens. The notebook provides an end-to-end example of training an English word pronunciation model.
Created: 2018-03-13 22:39:00.0
Updated: 2018-03-20 16:49:30.0
Pushed: 2018-03-16 18:49:22.0
Homepage: null
Size: 37
Language: Jupyter Notebook
Sequence-to-sequence modeling has achieved strong performance on tasks where the input is a sequence of tokens (words, for example) and the output is also a sequence of tokens. The notebook provides an end-to-end example of training and hosting an English word pronunciation model using the Amazon SageMaker built-in Seq2Seq algorithm.
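At a high level, the notebook's flow can be sketched with the SageMaker Python SDK. The snippet below is a minimal illustration rather than the notebook's exact code: the S3 prefix, instance types, and hyperparameter values are hypothetical placeholders, while the train/validation/vocab channel names and the JSON inference format follow the built-in Seq2Seq algorithm's documented conventions.

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

session = sagemaker.Session()
role = sagemaker.get_execution_role()

# Resolve the built-in Seq2Seq algorithm container for the current region.
container = sagemaker.image_uris.retrieve("seq2seq", session.boto_region_name)

# Hypothetical S3 location; replace with your own bucket/prefix.
s3_prefix = "s3://my-bucket/seq2seq/word-pronunciation"

estimator = Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",  # Seq2Seq training requires a GPU instance
    output_path=f"{s3_prefix}/output",
    sagemaker_session=session,
)

# Illustrative hyperparameters: source tokens are a word's letters and
# target tokens are phonemes, so both sequences are short.
estimator.set_hyperparameters(
    max_seq_len_source=60,
    max_seq_len_target=60,
    optimized_metric="bleu",
    batch_size=64,
)

# The built-in Seq2Seq algorithm reads RecordIO-protobuf data from the
# train and validation channels, plus vocabulary mappings from vocab.
estimator.fit({
    "train": f"{s3_prefix}/train",
    "validation": f"{s3_prefix}/validation",
    "vocab": f"{s3_prefix}/vocab",
})

# Host the model and request a pronunciation for a space-separated
# character sequence, using the algorithm's JSON inference format.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
predictor.serializer = JSONSerializer()
predictor.deserializer = JSONDeserializer()
print(predictor.predict({"instances": [{"data": "h e l l o"}]}))
```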
Jupyter notebook demonstrating an end-to-end example of training and hosting the English word pronunciation model.
Note: Training the model with the exact same setup will take ~2 hours.
Helper Python script to generate a RecordIO file from pairs of tokenized source and target sequences stored as NumPy arrays. See also Link.
Another helper Python script to generate a RecordIO file from pairs of tokenized source and target sequences stored as NumPy arrays. See also Link. A minimal sketch of the RecordIO framing these helpers produce appears below.
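For reference, the RecordIO file these helpers produce is a simple length-prefixed framing around serialized protobuf Record messages. The sketch below is a minimal illustration, not the helpers' actual code: it assumes a record_pb2 module compiled from SageMaker's record.proto is importable, and that the algorithm reads token IDs from features keyed "source" and "target" (check the repo's helper scripts for the exact schema).

```python
import struct

from record_pb2 import Record  # protobuf bindings for SageMaker's record.proto (assumed importable)

# Magic number used by the MXNet/SageMaker RecordIO framing.
_MAGIC = 0xCED7230A

def write_recordio(f, data):
    """Append one record to an open binary file: magic, length, payload, padding to 4 bytes."""
    length = len(data)
    f.write(struct.pack("I", _MAGIC))
    f.write(struct.pack("I", length))
    f.write(data)
    f.write(b"\x00" * ((4 - length % 4) % 4))

def write_pair(f, source_ids, target_ids):
    """Serialize one (source, target) pair of token IDs as a protobuf Record.

    The feature keys "source" and "target" are assumptions here; see the
    repo's helper scripts for the exact schema the algorithm expects.
    """
    record = Record()
    record.features["source"].int32_tensor.values.extend(source_ids)
    record.features["target"].int32_tensor.values.extend(target_ids)
    write_recordio(f, record.SerializeToString())

# Illustrative token IDs only: a word's characters on the source side,
# phonemes on the target side, both already mapped through their vocabularies.
with open("train.rec", "wb") as f:
    write_pair(f, [4, 8, 15, 15, 23], [16, 23, 4])
```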
This library is licensed under the Apache 2.0 License.