Keraflow: Deep Learning for Python
keraflow.layers.recurrent.Recurrent

Base class for recurrent layers. Do not use this layer in your code.

Public Member Functions

    def __init__
def keraflow.layers.recurrent.Recurrent.__init__(self,
                                                 num_states,
                                                 output_dim,
                                                 return_sequences=False,
                                                 go_backwards=False,
                                                 stateful=False,
                                                 unroll=False,
                                                 **kwargs)
Parameters

    num_states        int. Number of states of the layer.
    output_dim        int. The output dimension of the layer.
    return_sequences  Boolean. Whether to return only the last output in the output sequence or the full sequence.
    go_backwards      Boolean. If True, process the input sequence backwards.
    stateful          Boolean. See the note on statefulness below.
    unroll            Boolean. If True, the network will be unrolled; otherwise a symbolic loop will be used. When using TensorFlow, the network is always unrolled, so this argument has no effect. Unrolling can speed up an RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.
    kwargs            See Layer.__init__.
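Since Recurrent is a base class, concrete recurrent layers are expected to fix num_states themselves and forward the remaining keyword arguments to Recurrent.__init__. The sketch below illustrates that pattern with a hypothetical subclass (MySimpleRNN is not part of Keraflow); only the constructor is shown, and the per-timestep step logic is deliberately omitted.

    from keraflow.layers.recurrent import Recurrent

    class MySimpleRNN(Recurrent):
        """Hypothetical vanilla RNN built on the Recurrent base class."""
        def __init__(self, output_dim, **kwargs):
            # A vanilla RNN carries a single state tensor (its hidden state),
            # so num_states=1; output_dim is also the size of that state.
            super(MySimpleRNN, self).__init__(num_states=1,
                                              output_dim=output_dim,
                                              **kwargs)
            # Weight creation and the per-timestep transition would be
            # implemented here; they are omitted from this constructor sketch.

    # The documented keyword arguments are forwarded unchanged to Recurrent.__init__:
    rnn = MySimpleRNN(output_dim=64, return_sequences=True, unroll=False)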
Note on statefulness: setting stateful=True means that the states computed for the samples in one batch will be reused as initial states for the samples in the next batch. This assumes a fixed batch size (specified by Input) and a one-to-one mapping between samples in successive batches.

Note on masking: to use masking, set the mask_value of Input. Usually you will concatenate an Input layer, an embedding layer, and then a recurrent layer.
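The two notes above correspond to a layer stack roughly like the following. This is only a sketch under stated assumptions: mask_value and the fixed-batch-size requirement come from the notes, but the import path, the Input/Embedding argument names, the Embedding signature, and the layer(tensor) composition style are assumptions about Keraflow's API, and MySimpleRNN is the hypothetical subclass from the previous sketch.

    from keraflow.layers import Input, Embedding   # import path assumed

    batch_size = 32   # stateful=True requires a fixed batch size, declared on Input
    seq_len = 100     # number of timesteps per sample

    # mask_value marks padded positions so the recurrent layer can skip them
    # (the masking note above); argument names other than mask_value are assumed.
    words = Input(shape=(seq_len,), batch_size=batch_size, mask_value=0)

    # Map integer ids to dense vectors before the recurrent layer; the
    # (vocabulary size, embedding dimension) signature is assumed.
    embedded = Embedding(10000, 128)(words)

    # stateful=True: the final states of batch i become the initial states of
    # batch i+1, so sample k of every batch must continue sample k of the
    # previous batch.
    outputs = MySimpleRNN(output_dim=64, stateful=True, return_sequences=True)(embedded)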