Keraflow
Deep Learning for Python.
keraflow.layers.recurrent.GRU

Public Member Functions

    def __init__
Gated Recurrent Unit - Cho et al. 2014.

Input shape
    3D tensor with shape (nb_samples, sequence_length, input_dim).

Output shape
    If return_sequences=True: 3D tensor with shape (nb_samples, sequence_length, output_dim).
    If return_sequences=False: 2D tensor with shape (nb_samples, output_dim).

Weight shapes
    W_z, W_r, W_h: (input_dim, output_dim) each (input to hidden).
    U_z, U_r, U_h: (output_dim, output_dim) each (hidden to hidden).
    b_z, b_r, b_h: (output_dim,) each (biases).
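For illustration, a minimal construction sketch based on the constructor documented below; the shape comments restate the input/output shapes above, and output_dim=64 is an arbitrary choice:

    from keraflow.layers.recurrent import GRU

    # Layer input: (nb_samples, sequence_length, input_dim)
    gru_seq = GRU(output_dim=64, return_sequences=True)    # -> (nb_samples, sequence_length, 64)
    gru_last = GRU(output_dim=64, return_sequences=False)  # -> (nb_samples, 64)

    # Dropout on the input-to-hidden and hidden-to-hidden connections
    gru_reg = GRU(output_dim=64, dropout_W=0.2, dropout_U=0.2)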
def keraflow.layers.recurrent.GRU.__init__(
    self,
    output_dim,
    init='glorot_uniform',
    inner_init='orthogonal',
    activation='tanh',
    inner_activation='hard_sigmoid',
    dropout_W=0.,
    dropout_U=0.,
    return_sequences=False,
    go_backwards=False,
    stateful=False,
    unroll=False,
    **kwargs
)
Parameters
    output_dim        int. The output dimension of the layer.
    init              str/function. Function to initialize W_z, W_r, W_h (input to hidden transformation). See Initializations.
    inner_init        str/function. Function to initialize U_z, U_r, U_h (hidden to hidden transformation). See Initializations.
    activation        str/function. Activation function applied on the output. See Activations.
    inner_activation  str/function. Activation function applied on the inner cells. See Activations.
    dropout_W         float between 0 and 1. Fraction of the input units to drop for input gates.
    dropout_U         float between 0 and 1. Fraction of the input units to drop for recurrent connections.
    return_sequences  Boolean. See Recurrent.__init__.
    go_backwards      Boolean. See Recurrent.__init__.
    stateful          Boolean. See Recurrent.__init__.
    unroll            Boolean. See Recurrent.__init__.
    kwargs            See Layer.__init__.
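To make the roles of the weights and activations concrete, here is a hedged NumPy sketch of a single GRU step under the default activations (inner_activation='hard_sigmoid' on the update and reset gates, activation='tanh' on the candidate state). The gating convention shown (h = z * h_prev + (1 - z) * h_candidate) is the one used by Keras-style implementations; all names here are illustrative, not Keraflow API:

    import numpy as np

    def hard_sigmoid(x):
        # Piecewise-linear approximation of the sigmoid (Keras-style).
        return np.clip(0.2 * x + 0.5, 0.0, 1.0)

    def gru_step(x_t, h_prev, W, U, b):
        # Shapes per gate g in {'z', 'r', 'h'}, matching the weight shapes above:
        # W[g]: (input_dim, output_dim), U[g]: (output_dim, output_dim), b[g]: (output_dim,)
        z = hard_sigmoid(x_t @ W['z'] + h_prev @ U['z'] + b['z'])         # update gate
        r = hard_sigmoid(x_t @ W['r'] + h_prev @ U['r'] + b['r'])         # reset gate
        h_tilde = np.tanh(x_t @ W['h'] + (r * h_prev) @ U['h'] + b['h'])  # candidate state
        return z * h_prev + (1.0 - z) * h_tilde                           # new hidden state

    # Tiny smoke test with random weights.
    rng = np.random.default_rng(0)
    input_dim, output_dim = 8, 4
    W = {g: rng.standard_normal((input_dim, output_dim)) for g in 'zrh'}
    U = {g: rng.standard_normal((output_dim, output_dim)) for g in 'zrh'}
    b = {g: np.zeros(output_dim) for g in 'zrh'}
    h = gru_step(rng.standard_normal(input_dim), np.zeros(output_dim), W, U, b)
    print(h.shape)  # (4,)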