- Encoder (Original Text)
- Decoder (Translated Text)
- Read Sequence Data
- SequenceExample proto to store sequence data + context
- Efficient Storage, Efficient parser
- tf.parse_single_sequence_example
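The idea behind a SequenceExample can be sketched without TensorFlow: a record holds fixed-size "context" features next to variable-length per-time-step "feature_lists", and the parser splits them apart. The field names `length` and `tokens` below are illustrative, not part of the proto.

```python
# Framework-free sketch of the SequenceExample layout: fixed-size
# context + variable-length feature lists (one entry per time step).

def make_sequence_example(tokens):
    return {
        "context": {"length": len(tokens)},          # per-sequence metadata
        "feature_lists": {"tokens": list(tokens)},   # one entry per time step
    }

def parse_single_sequence_example(example):
    # Mirrors the split tf.parse_single_sequence_example performs:
    # it returns the context and the sequence features as two dicts.
    return example["context"], example["feature_lists"]

context, sequence = parse_single_sequence_example(
    make_sequence_example(["the", "cat", "sat"]))
```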
- Static Padding - Pick Max Sequence length, Convert input to that length
- tf.train.batch(...)
- Dynamic Padding - tf.train.batch(..., dynamic_pad=True) pads each batch only to its own longest sequence
- Bucketing - tf.contrib.training.bucket_by_sequence_length(...,dynamic_pad=True)
- tf.contrib.training.batch_sequences_with_states(...)
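The three batching strategies above can be sketched in plain Python: static padding pads everything to one global maximum, dynamic padding pads each batch only to its own longest member, and bucketing groups sequences of similar length first so little padding is wasted. The pad value 0 and the bucket boundaries are assumptions for illustration.

```python
# Pure-Python sketch of static padding, dynamic padding, and bucketing.
PAD = 0

def pad_to(seq, length):
    return seq + [PAD] * (length - len(seq))

def static_pad(batch, max_len):
    # static padding: every sequence becomes max_len, regardless of batch
    return [pad_to(s, max_len) for s in batch]

def dynamic_pad(batch):
    # dynamic_pad=True: pad only to the longest sequence in this batch
    longest = max(len(s) for s in batch)
    return [pad_to(s, longest) for s in batch]

def bucket_by_length(seqs, boundaries):
    # bucket_by_sequence_length: group sequences into length buckets so
    # each bucket's batches need minimal padding
    buckets = {b: [] for b in boundaries}
    for s in seqs:
        buckets[min(b for b in boundaries if len(s) <= b)].append(s)
    return buckets
```

Bucketing trades a little shuffling randomness for far fewer wasted PAD steps on corpora with skewed length distributions.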
- An RNN is a unit of computation repeated over and over
- Inputs / Previous State / Intermediate calculation / Updated State
- LSTM (Long Short Term Memory)
- RNNCell (Provide Knowledge about specific RNN Architecture)
- Represent a time step as a layer
- Tensor Cells
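The "unit of computation repeated over and over" can be shown as a minimal NumPy sketch: one cell maps (input, previous state) to an updated state, and the same weights are reused at every time step. The tanh update rule and shapes are illustrative, not a specific RNNCell implementation.

```python
import numpy as np

def rnn_step(x, h, W_x, W_h, b):
    # one time step: new state from input and previous state
    return np.tanh(x @ W_x + h @ W_h + b)

def run_rnn(inputs, h0, W_x, W_h, b):
    # unroll the same cell (same weights) over the whole sequence
    h = h0
    for x in inputs:
        h = rnn_step(x, h, W_x, W_h, b)
    return h
```

An LSTM cell has the same interface; it just carries a richer state (cell + hidden) and gated intermediate calculations.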
- Fully Dynamic Computation
- tf.while_loop (dynamic loops + gradients)
- tf.TensorArray (dynamic Tensor slice access, gradients)
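A framework-free analog of the tf.while_loop + tf.TensorArray pattern: a loop whose trip count depends on runtime data, writing one slice per step into a growable array (in TensorFlow this pattern additionally stays differentiable). The `cumulative_sums` example is a toy stand-in for per-time-step RNN outputs.

```python
def while_loop(cond, body, state):
    # minimal analog of tf.while_loop(cond, body, loop_vars)
    while cond(*state):
        state = body(*state)
    return state

def cumulative_sums(xs):
    ta = []                       # plays the role of a tf.TensorArray
    def cond(i, total):
        return i < len(xs)        # trip count decided at run time
    def body(i, total):
        total = total + xs[i]
        ta.append(total)          # TensorArray.write(i, total)
        return (i + 1, total)
    while_loop(cond, body, (0, 0))
    return ta
```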
- Encoder (8 Stacked LSTM Layers)
- XLA fused time steps
- Manually fused time steps
- Works everywhere, fast with XLA (GPU, Android)
- tf.contrib.seq2seq
- Pick Sampling method
- Pick RNN Cells
- Pick attention mechanism, in-graph beam search
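In-graph beam search can be illustrated with a small standalone decoder: at each step, every surviving partial sequence is extended by every candidate token, and only the `beam_width` highest-scoring sequences are kept. The `score_fn` interface and the toy vocabulary in the test are assumptions, not the tf.contrib.seq2seq API.

```python
import math

def beam_search(score_fn, beam_width, steps, start=()):
    # beams hold (log-probability, partial sequence) pairs
    beams = [(0.0, start)]
    for _ in range(steps):
        candidates = []
        for logp, seq in beams:
            # score_fn returns {token: probability} for the next step
            for token, p in score_fn(seq).items():
                candidates.append((logp + math.log(p), seq + (token,)))
        # keep only the beam_width best partial sequences
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = candidates[:beam_width]
    return beams
```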
Happy Mastering DL!!!