Keraflow
Deep Learning for Python.
keraflow.layers.recurrent.Recurrent Class Reference

Base class for recurrent layers. More...

Inheritance diagram for keraflow.layers.recurrent.Recurrent (derived classes):
keraflow.layers.recurrent.GRU, keraflow.layers.recurrent.LSTM, keraflow.layers.recurrent.SimpleRNN

Public Member Functions

def __init__
 

Detailed Description

Base class for recurrent layers.

Do not use this layer in your code.

Constructor & Destructor Documentation

def keraflow.layers.recurrent.Recurrent.__init__(self, num_states, output_dim, return_sequences=False, go_backwards=False, stateful=False, unroll=False, **kwargs)
Parameters
  num_states        int. Number of states of the layer.
  output_dim        int. The output dimension of the layer.
  return_sequences  Boolean. Whether to return only the last output of the output sequence or the full sequence.
  go_backwards      Boolean. If True, process the input sequence backwards.
  stateful          Boolean. See note 1 below.
  unroll            Boolean. If True, the network will be unrolled; otherwise a symbolic loop will be used. When using TensorFlow, the network is always unrolled, so this argument has no effect. Unrolling can speed up an RNN, although it tends to be more memory-intensive, and is only suitable for short sequences.
  kwargs            See Layer.__init__.
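In practice this constructor is reached through a subclass (SimpleRNN, GRU, or LSTM), which supplies num_states itself, so the first argument you pass is output_dim. A minimal usage sketch, not verified against the library: the import paths for Input and Model, the (timesteps, input_dim) input shape, and the Keras-1-style functional call syntax are assumptions; the keyword arguments are the ones documented in the table above.

    from keraflow.layers import Input                # assumed import path
    from keraflow.layers.recurrent import LSTM
    from keraflow.models import Model                # assumed import path

    # 10 timesteps, each an 8-dimensional vector (shape convention assumed).
    x = Input((10, 8))

    # output_dim=32. return_sequences=True yields one 32-d output per
    # timestep instead of only the last one; unroll=True trades memory
    # for speed on short sequences (no effect under TensorFlow).
    h = LSTM(32, return_sequences=True, unroll=True)(x)

    model = Model(x, h)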
Note
  1. You can make RNN layers stateful by setting stateful=True, which means that the states computed for the samples in one batch will be reused as the initial states for the samples in the next batch. This assumes a fixed batch size (specified by Input) and a one-to-one mapping between samples in successive batches. See the first sketch after this note.
  2. All recurrent layers support masking for input data with a variable number of timesteps. To introduce masks to your data, specify mask_value of Input. You will usually stack an Input layer, an Embedding layer, and then a recurrent layer, as in the second sketch after this note.
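For note 1, the batch size must be fixed on the Input layer so that sample i of one batch can hand its states to sample i of the next. A hedged sketch: the batch_size keyword on Input is an assumption suggested by the note above, and the import paths are assumed as before.

    from keraflow.layers import Input
    from keraflow.layers.recurrent import SimpleRNN
    from keraflow.models import Model

    # States computed for sample i of batch k become the initial states
    # for sample i of batch k+1, so the batch size is fixed up front.
    x = Input((10, 8), batch_size=32)    # batch_size keyword is assumed
    h = SimpleRNN(16, stateful=True)(x)
    model = Model(x, h)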
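For note 2, masking is switched on through mask_value of Input, typically in the Input, Embedding, recurrent stack the note describes. A sketch under the same caveats: the dtype keyword and the Embedding(input_dim, output_dim) signature are assumptions, while mask_value itself is documented above.

    from keraflow.layers import Input, Embedding     # assumed import paths
    from keraflow.layers.recurrent import GRU
    from keraflow.models import Model

    # Sequences of 20 word indices padded with 0. Timesteps equal to
    # mask_value are masked so the GRU skips the padding.
    words = Input((20,), dtype='int32', mask_value=0)
    vectors = Embedding(10000, 64)(words)   # 10000-word vocab, 64-d vectors
    h = GRU(32)(vectors)                    # returns only the last output
    model = Model(words, h)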