Keraflow
Deep Learning for Python.
keraflow.layers.recurrent.SimpleRNN

Fully-connected RNN where the output is to be fed back to input.
Public Member Functions
    def __init__

Public Member Functions inherited from keraflow.layers.recurrent.Recurrent
    def __init__
Input dimensions: 3D, (nb_samples, sequence_length, input_dim)

Output dimensions:
    return_sequences=True:  3D, (nb_samples, sequence_length, output_dim)
    return_sequences=False: 2D, (nb_samples, output_dim)

Weights:
    W (input to hidden):  (input_dim, output_dim)
    U (hidden to hidden): (output_dim, output_dim)
    b (bias):             (output_dim,)

def keraflow.layers.recurrent.SimpleRNN.__init__(
    self,
    output_dim,
    init = 'glorot_uniform',
    inner_init = 'orthogonal',
    activation = 'tanh',
    dropout_W = 0.,
    dropout_U = 0.,
    return_sequences = False,
    go_backwards = False,
    stateful = False,
    unroll = False,
    **kwargs
)
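To make the shapes above concrete, the following is a minimal NumPy sketch (not Keraflow code) of the recurrence a fully-connected simple RNN computes, h_t = activation(x_t . W + h_{t-1} . U + b), where the output is fed back to the input of the next step. The function name simple_rnn_forward, the zero initial hidden state, and the tanh default are illustrative assumptions.

    import numpy as np

    def simple_rnn_forward(x, W, U, b, activation=np.tanh, return_sequences=False):
        # x: (nb_samples, sequence_length, input_dim)
        # W: (input_dim, output_dim)   input-to-hidden weights
        # U: (output_dim, output_dim)  hidden-to-hidden weights
        # b: (output_dim,)             bias
        nb_samples, sequence_length, _ = x.shape
        output_dim = W.shape[1]
        h = np.zeros((nb_samples, output_dim))   # initial hidden state (assumed zero)
        outputs = []
        for t in range(sequence_length):
            # The previous output is fed back together with the current input.
            h = activation(x[:, t, :] @ W + h @ U + b)
            outputs.append(h)
        if return_sequences:
            return np.stack(outputs, axis=1)     # 3D: (nb_samples, sequence_length, output_dim)
        return h                                 # 2D: (nb_samples, output_dim)

    x = np.random.rand(2, 5, 3)                  # nb_samples=2, sequence_length=5, input_dim=3
    W, U, b = np.random.rand(3, 4), np.random.rand(4, 4), np.zeros(4)
    print(simple_rnn_forward(x, W, U, b, return_sequences=True).shape)   # (2, 5, 4)
    print(simple_rnn_forward(x, W, U, b).shape)                          # (2, 4)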
Parameters:

| output_dim       | int. The output dimension of the layer. |
| init             | str/function. Function to initialize W (input to hidden transformation). See Initializations. |
| inner_init       | str/function. Function to initialize U (hidden to hidden transformation). See Initializations. |
| activation       | str/function. Activation function applied on the output. See Activations. |
| dropout_W        | float between 0 and 1. Fraction of the input units to drop for input gates. |
| dropout_U        | float between 0 and 1. Fraction of the input units to drop for recurrent connections. |
| return_sequences | Boolean. See Recurrent.__init__. |
| go_backwards     | Boolean. See Recurrent.__init__. |
| stateful         | Boolean. See Recurrent.__init__. |
| unroll           | Boolean. See Recurrent.__init__. |
| kwargs           | See Layer.__init__. |
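A hedged usage sketch of the constructor documented above: the import path mirrors the module name keraflow.layers.recurrent shown in the signature (whether SimpleRNN is also re-exported from a shorter path is not verified here), and the argument values are arbitrary examples.

    from keraflow.layers.recurrent import SimpleRNN

    # 64 output units, returning the full output sequence; the initializers and
    # activation are the documented defaults, written out explicitly.
    rnn = SimpleRNN(64,
                    init='glorot_uniform',
                    inner_init='orthogonal',
                    activation='tanh',
                    dropout_W=0.2,        # drop 20% of input units for the input gates
                    dropout_U=0.2,        # drop 20% of input units for the recurrent connections
                    return_sequences=True)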