Seq2vec¶
Layers mapping sequences to vectors
Modules
YangAttention¶
Reduce the time dimension by applying attention using learned variables
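The layer presumably follows the word-level attention of Yang et al. (2016): each time step is scored against a learned context vector, the scores are softmax-normalized over time, and the output is the weighted sum of the input vectors. A minimal sketch of that computation, assuming this mechanism (the names W, b and u are illustrative, not tavolo's actual variable names):

import tensorflow as tf

def yang_attention_sketch(inputs, W, b, u):
    # inputs: (batch_size, time_steps, channels)
    # W: (channels, n_units), b: (n_units,), u: (n_units,)
    hidden = tf.tanh(tf.einsum('btc,cu->btu', inputs, W) + b)  # per-step hidden representation
    scores = tf.einsum('btu,u->bt', hidden, u)                 # one score per time step
    weights = tf.nn.softmax(scores, axis=1)                    # attention weights over time
    return tf.einsum('bt,btc->bc', weights, inputs)            # (batch_size, channels)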
Arguments¶
n_units (int): Number of units in the attention's learned variables
name (str): Layer name
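For example, the layer could be constructed with explicit arguments (the values below are illustrative, not defaults confirmed by the source):

import tavolo as tvl

attention = tvl.seq2vec.YangAttention(n_units=64, name='yang_attention')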
Input shape¶
(batch_size, time_steps, channels)
Output shape¶
(batch_size, channels)
Examples¶
import tensorflow as tf
import tavolo as tvl

vocab_size, max_sequence_length = 10000, 100  # example values

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 8, input_length=max_sequence_length),
    tvl.seq2vec.YangAttention()])  # (batch_size, time_steps, 8) -> (batch_size, 8)
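A quick sanity check, assuming the input and output shapes documented above, confirms that the time dimension is collapsed:

dummy = tf.zeros((32, max_sequence_length), dtype=tf.int32)  # batch of 32 padded sequences
assert model(dummy).shape == (32, 8)                         # one 8-dim vector per sequence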