Name: yoctol-keras-layer-zoo
Owner: YOCTOL INFO INC.
Description: Some customized keras layers used in Yoctol NLU.
Created: 2017-04-17 07:11:18.0
Updated: 2018-01-08 14:21:05.0
Pushed: 2017-06-26 03:06:39.0
Homepage: null
Size: 126
Language: Python
Customized keras layers used in Yoctol NLU service.
Our customized layers support the masking function in the Keras framework.
Masking is used to deal with the unfixed lengths of natural language sentences.
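Conceptually, masking works like this: variable-length sentences are zero-padded to a common length, and a boolean mask marks which timesteps are real. The sketch below is a minimal pure-Python illustration of the idea, not the Keras implementation.

```python
def pad_and_mask(sentences, pad_value=0.0):
    """Zero-pad variable-length sequences and compute a per-timestep mask."""
    max_length = max(len(s) for s in sentences)
    padded, masks = [], []
    for s in sentences:
        pad = [pad_value] * (max_length - len(s))
        padded.append(list(s) + pad)
        masks.append([True] * len(s) + [False] * (max_length - len(s)))
    return padded, masks

padded, masks = pad_and_mask([[1.0, 2.0], [3.0, 4.0, 5.0]])
# padded -> [[1.0, 2.0, 0.0], [3.0, 4.0, 5.0]]
# masks  -> [[True, True, False], [True, True, True]]
```

Layers that respect the mask simply skip the `False` timesteps when computing outputs and gradients.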
Recurrent Neural Network
Convolutional Neural Network
Install:

```
pip install yoctol_keras_layer_zoo
```

Test:

```
python -m unittest
```
Reference: https://en.wikipedia.org/wiki/Long_short-term_memory
The implemented peephole LSTM outputs its hidden states, i.e. h.
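For reference, one common formulation of the peephole LSTM (following the Wikipedia article linked above; the library's exact parameterisation may differ) is:

```latex
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + p_i \odot c_{t-1} + b_i)\\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + p_f \odot c_{t-1} + b_f)\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c)\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + p_o \odot c_t + b_o)\\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```

The peephole weights p_i, p_f, p_o let the gates inspect the cell state directly; the layer's output sequence is the hidden states h_t.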
Usage:

```python
from keras.models import Model, Input
from keras.layers.core import Masking
from yklz import LSTMPeephole

inputs = Input(shape=(max_length, feature_size))
masked_inputs = Masking(0.0)(inputs)
outputs = LSTMPeephole(
    units=units,
    return_sequences=True
)(masked_inputs)
model = Model(inputs, outputs)
model.compile('sgd', 'mean_squared_error')
```
The RNNCell adds a Dense layer after its recurrent layer.
Usage:

```python
from keras.models import Model, Input
from keras.layers.core import Masking
from keras.layers import LSTM, Dense
from yklz import RNNCell

inputs = Input(shape=(max_length, feature_size))
masked_inputs = Masking(0.0)(inputs)
outputs = RNNCell(
    LSTM(
        units=hidden_units,
        return_sequences=True
    ),
    Dense(
        units=units
    ),
    dense_dropout=0.1
)(masked_inputs)
model = Model(inputs, outputs)
model.compile('sgd', 'mean_squared_error')
```
The RNN Encoder encodes sequences into a fixed-length vector and pads zero vectors after the encoded vector, so that masking still works in the Keras training framework.
Output tensor shape: (batch_size, timesteps, encoding_size)
Values: [[[encoded_vector], [0,...,0], ..., [0,...,0]], ...]
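The padding scheme can be sketched in pure Python (a conceptual illustration, not the layer's actual implementation): the encoded vector occupies the first timestep and the remaining timesteps are zero vectors.

```python
def encoder_output(encoded_vector, timesteps):
    """Place the encoded vector at timestep 0 and zero-pad the rest."""
    zero = [0.0] * len(encoded_vector)
    return [list(encoded_vector)] + [zero] * (timesteps - 1)

out = encoder_output([0.5, -1.2], timesteps=4)
# out -> [[0.5, -1.2], [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]
```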
You can use any recurrent layer in Keras with our RNN Encoder wrapper.
Usage:

```python
from keras.models import Model, Input
from keras.layers import GRU, Masking
from yklz import RNNEncoder

inputs = Input(shape=(max_length, feature_size))
masked_inputs = Masking(0.0)(inputs)
outputs = RNNEncoder(
    GRU(
        units=encoding_size,
    )
)(masked_inputs)
model = Model(inputs, outputs)
model.compile('sgd', 'mean_squared_error')
```
For bidirectional encoding, we provide a custom bidirectional RNN encoder wrapper.
```python
from keras.models import Model, Input
from keras.layers import LSTM, Masking
from yklz import BidirectionalRNNEncoder

inputs = Input(shape=(max_length, feature_size))
masked_inputs = Masking(0.0)(inputs)
outputs = BidirectionalRNNEncoder(
    LSTM(
        units=encoding_size,
    )
)(masked_inputs)
model = Model(inputs, outputs)
model.compile('sgd', 'mean_squared_error')
```
The custom RNN decoder decodes sequences produced by our RNN Encoder.
When the decoder receives a zero vector as input, it uses its previous output vector as the input instead.
That is why the RNN Encoder pads zero vectors after the encoded vector.
Note that your encoding size should be the same as the decoding size.
The decoder can also decode sequences of a different length if you specify the decoding length.
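The feed-previous behaviour described above can be sketched as follows. This is a conceptual sketch: the hypothetical `step` function stands in for the recurrent cell, which in the real layer carries hidden state.

```python
def decode(inputs, step, initial_output):
    """When an input timestep is all zeros, feed the previous output instead."""
    outputs, prev = [], initial_output
    for x in inputs:
        if all(v == 0.0 for v in x):
            x = prev  # zero padding signals "use previous output as input"
        prev = step(x)
        outputs.append(prev)
    return outputs

# Toy step function (identity) just to show the data flow.
seq = [[1.0, 2.0], [0.0, 0.0], [0.0, 0.0]]
decoded = decode(seq, step=lambda x: x, initial_output=[0.0, 0.0])
# decoded -> [[1.0, 2.0], [1.0, 2.0], [1.0, 2.0]]
```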
Usage:

Auto-encoder: decoding sequences whose length and mask are the same as the input sequences.

```python
from keras.models import Model, Input
from keras.layers import LSTM, Masking
from yklz import RNNEncoder, RNNDecoder

inputs = Input(shape=(max_length, feature_size))
masked_inputs = Masking(0.0)(inputs)
encoded_seq = RNNEncoder(
    LSTM(
        units=encoding_size,
    )
)(masked_inputs)
outputs = RNNDecoder(
    LSTM(
        units=decoding_size
    )
)(encoded_seq)
model = Model(inputs, outputs)
model.compile('sgd', 'mean_squared_error')
```
Seq2Seq: decoding sequences whose length differs from the input sequences.

```python
from keras.models import Model, Input
from keras.layers import LSTM, Masking
from yklz import RNNEncoder, RNNDecoder

inputs = Input(shape=(max_length, feature_size))
masked_inputs = Masking(0.0)(inputs)
encoded_seq = RNNEncoder(
    LSTM(
        units=encoding_size,
    )
)(masked_inputs)
outputs = RNNDecoder(
    LSTM(
        units=decoding_size
    ),
    time_steps=decoding_length
)(encoded_seq)
model = Model(inputs, outputs)
model.compile('sgd', 'mean_squared_error')
```
Note that our ConvNet layers use the channels-last data format.
MaskConv is a masking layer for 2D, 3D, or higher-dimensional input tensors.
```python
from keras.models import Model, Input
from yklz import MaskConv

inputs = Input(shape=(seq_max_length, word_embedding_size, channel_size))
masked_inputs = MaskConv(0.0)(inputs)
```
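Conceptually, a masking layer like this marks the positions whose value equals the mask value. The sketch below illustrates the idea element-wise on a 2D tensor; it is an assumption-laden simplification, not MaskConv's actual implementation.

```python
def conv_mask(tensor_2d, mask_value=0.0):
    """Return a boolean mask: True where the entry differs from mask_value."""
    return [[v != mask_value for v in row] for row in tensor_2d]

m = conv_mask([[1.0, 0.0], [0.0, 0.0]])
# m -> [[True, False], [False, False]]
```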
MaskPooling is a wrapper that adds masking support to Keras pooling layers.
```python
from keras.models import Model, Input
from keras.layers.pooling import MaxPool2D
from yklz import MaskConv
from yklz import MaskPooling

inputs = Input(shape=(seq_max_length, word_embedding_size, channel_size))
masked_inputs = MaskConv(0.0)(inputs)
pooling_outputs = MaskPooling(
    MaxPool2D(
        pooling_kernel,
        pooling_strides,
        pooling_padding,
    ),
    pool_mode='max'
)(masked_inputs)
```
MaskConvNet is a wrapper that adds masking support to Keras convolutional layers.
```python
from keras.models import Model, Input
from keras.layers import Conv2D
from yklz import MaskConv
from yklz import MaskConvNet

inputs = Input(shape=(seq_max_length, word_embedding_size, channel_size))
masked_inputs = MaskConv(0.0)(inputs)
conv_outputs = MaskConvNet(
    Conv2D(
        filters,
        kernel,
        strides=strides
    )
)(masked_inputs)
```
Use a convolutional neural network to extract text features and make predictions.
```python
from keras.models import Model, Input
from keras.layers import Dense
from keras.layers.pooling import MaxPool2D
from keras.layers import Conv2D
from yklz import MaskConv
from yklz import MaskConvNet
from yklz import MaskPooling
from yklz import MaskFlatten

inputs = Input(shape=(seq_max_length, word_embedding_size, channel_size))
masked_inputs = MaskConv(0.0)(inputs)
conv_outputs = MaskConvNet(
    Conv2D(
        filters,
        kernel,
        strides=strides
    )
)(masked_inputs)
pooling_outputs = MaskPooling(
    MaxPool2D(
        pooling_kernel,
        pooling_strides,
        pooling_padding,
    ),
    pool_mode='max'
)(conv_outputs)
flatten_outputs = MaskFlatten()(pooling_outputs)
outputs = Dense(
    label_num
)(flatten_outputs)
model = Model(inputs, outputs)
model.compile('sgd', 'mean_squared_error')
```
Encode text features with a ConvNet and decode them with an RNN.
MaskToSeq is a wrapper that transforms a 2D or 3D mask tensor into a timestep mask tensor.
ConvEncoder transforms a 2D or 3D tensor into a 3D timestep sequence and masks the sequence with the mask tensor from the MaskToSeq wrapper.
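One way to picture what MaskToSeq does is collapsing a 2D mask along the non-time axes: a timestep is kept if any of its entries are unmasked. The sketch below illustrates that idea under this assumption; it is not the wrapper's actual implementation.

```python
def to_timestep_mask(mask_2d):
    """Collapse a (timesteps, features) boolean mask to a per-timestep mask."""
    return [any(row) for row in mask_2d]

tm = to_timestep_mask([[True, False], [False, False], [True, True]])
# tm -> [True, False, True]
```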
```python
from keras.models import Model, Input
from keras.layers import LSTM
from keras.layers.pooling import MaxPool2D
from keras.layers import Conv2D, Dense
from yklz import MaskConv
from yklz import MaskConvNet
from yklz import MaskPooling
from yklz import RNNDecoder, MaskToSeq
from yklz import RNNCell
from yklz import ConvEncoder

inputs = Input(shape=(seq_max_length, word_embedding_size, channel_size))
masked_inputs = MaskConv(0.0)(inputs)
masked_seq = MaskToSeq(
    layer=MaskConv(0.0),
    time_axis=1,
)(inputs)
conv_outputs = MaskConvNet(
    Conv2D(
        filters,
        kernel,
        strides=strides
    )
)(masked_inputs)
pooling_outputs = MaskPooling(
    MaxPool2D(
        pooling_kernel,
        pooling_strides,
        pooling_padding,
    ),
    pool_mode='max'
)(conv_outputs)
encoded = ConvEncoder()(
    [pooling_outputs, masked_seq]
)
outputs = RNNDecoder(
    RNNCell(
        LSTM(
            units=hidden_size
        ),
        Dense(
            units=decoding_size
        )
    )
)(encoded)
model = Model(inputs, outputs)
model.compile('sgd', 'mean_squared_error')
```
Seq2Seq example
Usage:
```python
from keras.models import Model, Input
from keras.layers import LSTM
from keras.layers.pooling import MaxPool2D
from keras.layers import Conv2D
from keras.layers import Masking, Dense
from yklz import MaskConv
from yklz import MaskConvNet
from yklz import MaskPooling
from yklz import RNNDecoder
from yklz import RNNCell
from yklz import MaskToSeq
from yklz import ConvEncoder

inputs = Input(shape=(input_seq_max_length, word_embedding_size, channel_size))
masked_inputs = MaskConv(0.0)(inputs)
masked_seq = MaskToSeq(
    layer=MaskConv(0.0),
    time_axis=1,
)(inputs)
conv_outputs = MaskConvNet(
    Conv2D(
        filters,
        kernel,
        strides=strides
    )
)(masked_inputs)
pooling_outputs = MaskPooling(
    MaxPool2D(
        pooling_kernel,
        pooling_strides,
        pooling_padding,
    ),
    pool_mode='max'
)(conv_outputs)
encoded = ConvEncoder()(
    [pooling_outputs, masked_seq]
)
outputs = RNNDecoder(
    RNNCell(
        LSTM(
            units=hidden_size
        ),
        Dense(
            units=decoding_size
        )
    ),
    time_steps=decoding_length
)(encoded)
model = Model(inputs, outputs)
model.compile('sgd', 'mean_squared_error')
```
For more examples, visit our seq2vec repository: https://github.com/Yoctol/seq2vec
The seq2vec repository contains auto-encoder models which encode sentences into fixed-length feature vectors.