Keraflow
Deep Learning for Python.
Long Short-Term Memory unit - Hochreiter 1997.
Public Member Functions
    def __init__

Public Member Functions inherited from keraflow.layers.recurrent.Recurrent
    def __init__
Long Short-Term Memory unit - Hochreiter 1997.
For a step-by-step description of the algorithm, see this tutorial.
Input shape: 3D, (nb_samples, sequence_length, input_dim)

Output shape:
    if return_sequences=True: 3D, (nb_samples, sequence_length, output_dim)
    if return_sequences=False: 2D, (nb_samples, output_dim)

Weight shapes: W_*: (input_dim, output_dim), U_*: (output_dim, output_dim), bias: (output_dim,)

def keraflow.layers.recurrent.LSTM.__init__(self,
                                            output_dim,
                                            init='glorot_uniform',
                                            inner_init='orthogonal',
                                            forget_bias_init='one',
                                            activation='tanh',
                                            inner_activation='hard_sigmoid',
                                            dropout_W=0.,
                                            dropout_U=0.,
                                            return_sequences=False,
                                            go_backwards=False,
                                            stateful=False,
                                            unroll=False,
                                            **kwargs)
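To make the roles of the parameters concrete, here is a minimal NumPy sketch of a single LSTM step using the default activations above (hard_sigmoid for the gates, tanh on the output). The function and dictionary names are illustrative only, not the Keraflow implementation; the weight shapes match the ones listed above.

```python
import numpy as np

def hard_sigmoid(x):
    # Piecewise-linear sigmoid approximation, the default inner_activation.
    return np.clip(0.2 * x + 0.5, 0.0, 1.0)

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # W, U, b are dicts keyed by gate ('i', 'f', 'c', 'o'):
    #   W[g]: (input_dim, output_dim), U[g]: (output_dim, output_dim), b[g]: (output_dim,)
    i = hard_sigmoid(x_t @ W['i'] + h_prev @ U['i'] + b['i'])  # input gate
    f = hard_sigmoid(x_t @ W['f'] + h_prev @ U['f'] + b['f'])  # forget gate (bias init = one)
    c_cand = np.tanh(x_t @ W['c'] + h_prev @ U['c'] + b['c'])  # candidate cell state
    o = hard_sigmoid(x_t @ W['o'] + h_prev @ U['o'] + b['o'])  # output gate
    c_t = f * c_prev + i * c_cand        # new cell state
    h_t = o * np.tanh(c_t)               # new hidden state (activation='tanh')
    return h_t, c_t
```

dropout_W and dropout_U would randomly zero fractions of x_t and h_prev, respectively, before these matrix products; that is omitted here for clarity.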
Parameters
    output_dim        int. The output dimension of the layer.
    init              str/function. Function to initialize W_i, W_f, W_c, W_o (input to hidden transformation). See Initializations.
    inner_init        str/function. Function to initialize U_i, U_f, U_c, U_o (hidden to hidden transformation). See Initializations.
    forget_bias_init  str/function. Initialization function for the bias of the forget gate. Jozefowicz et al. recommend initializing with ones.
    activation        str/function. Activation function applied on the output. See Activations.
    inner_activation  str/function. Activation function applied on the inner cells. See Activations.
    dropout_W         float between 0 and 1. Fraction of the input units to drop for input gates.
    dropout_U         float between 0 and 1. Fraction of the units to drop for recurrent connections.
    return_sequences  Boolean. See Recurrent.__init__.
    go_backwards      Boolean. See Recurrent.__init__.
    stateful          Boolean. See Recurrent.__init__.
    unroll            Boolean. See Recurrent.__init__.
    kwargs            See Layer.__init__.
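The effect of return_sequences and go_backwards on the output shape can be sketched by unrolling the recurrence over a batch of sequences. This is a self-contained NumPy illustration under the same gate conventions as the shapes listed above; it is not the Keraflow code path.

```python
import numpy as np

def hard_sigmoid(x):
    # Piecewise-linear sigmoid approximation (default inner_activation).
    return np.clip(0.2 * x + 0.5, 0.0, 1.0)

def run_lstm(X, W, U, b, return_sequences=False, go_backwards=False):
    # X: (nb_samples, sequence_length, input_dim)
    # W[g]: (input_dim, output_dim), U[g]: (output_dim, output_dim),
    # b[g]: (output_dim,) for g in 'ifco'
    nb_samples, seq_len, _ = X.shape
    output_dim = W['i'].shape[1]
    h = np.zeros((nb_samples, output_dim))
    c = np.zeros((nb_samples, output_dim))
    order = reversed(range(seq_len)) if go_backwards else range(seq_len)
    hs = []
    for t in order:
        x_t = X[:, t]
        i = hard_sigmoid(x_t @ W['i'] + h @ U['i'] + b['i'])
        f = hard_sigmoid(x_t @ W['f'] + h @ U['f'] + b['f'])
        o = hard_sigmoid(x_t @ W['o'] + h @ U['o'] + b['o'])
        c = f * c + i * np.tanh(x_t @ W['c'] + h @ U['c'] + b['c'])
        h = o * np.tanh(c)
        hs.append(h)
    if return_sequences:
        # 3D output: one hidden state per timestep.
        return np.stack(hs, axis=1)   # (nb_samples, sequence_length, output_dim)
    # 2D output: hidden state of the last processed timestep only.
    return h                          # (nb_samples, output_dim)
```

With stateful=True, the final h and c would be carried over to the next batch instead of being reset to zeros; unroll=True affects only how the backend compiles this loop, not its result.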