Keraflow
Deep Learning for Python.
**Merge layers**
- Concatenate multiple input tensors
- Reduce multiple input tensors by element-wise multiplication
- Reduce multiple input tensors by element-wise summation
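The merge behaviors above amount to concatenation and element-wise reductions over a list of same-shaped inputs. A minimal NumPy sketch of the semantics (illustrative only, not Keraflow's implementation):

```python
import functools

import numpy as np

a, b, c = (np.random.rand(2, 3) for _ in range(3))

concat = np.concatenate([a, b, c], axis=-1)         # (2, 9): join along the last axis
product = functools.reduce(np.multiply, [a, b, c])  # (2, 3): element-wise product
total = functools.reduce(np.add, [a, b, c])         # (2, 3): element-wise sum
```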
**Callbacks**
- Abstract base class used to build new callbacks
- Stop training when a monitored quantity has stopped improving (sketched below)
- Learning rate scheduler
- Save the model after every epoch
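These entries describe a Keras-style hook API: the training loop calls back into user objects at fixed points. A plain-Python sketch of the pattern, centered on early stopping (hook names such as `on_epoch_end` are assumptions based on the Keras convention, not verified Keraflow signatures):

```python
class Callback:
    """Abstract base: subclasses override only the hooks they need."""
    def on_epoch_end(self, epoch, logs=None):
        pass

class EarlyStopping(Callback):
    """Stop training once a monitored quantity stops improving."""
    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("inf")
        self.wait = 0
        self.stop_training = False

    def on_epoch_end(self, epoch, logs=None):
        loss = (logs or {}).get("val_loss", float("inf"))
        if loss < self.best:
            self.best, self.wait = loss, 0   # improvement: reset the counter
        else:
            self.wait += 1                   # no improvement this epoch
            if self.wait >= self.patience:
                self.stop_training = True    # signal the training loop to halt
```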
**Weight constraints**
- Constrain the weights along an axis pattern to have unit norm
- Constrain the weights to be non-negative
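Both constraints are simple projections applied to a weight tensor after each update. A NumPy sketch of the math (the axis choice here is illustrative, not Keraflow's default):

```python
import numpy as np

W = np.random.randn(4, 3)

# Unit norm: rescale so each column has L2 norm 1 along the chosen axis.
unit_norm = W / (np.linalg.norm(W, axis=0, keepdims=True) + 1e-7)

# Non-negativity: clip negative entries to zero.
non_neg = np.maximum(W, 0.0)
```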
**Core: tensors, layers, and models**
- Wrapper for a backend tensor
- The entry point of each model
- Building block of a model
- Base class for layers accepting multiple inputs
- Class for making Sequential a Layer
- Wrapper for applying a layer to every temporal slice of an input
- A model is a directed Kensor graph
- Model with single input and single output
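Tying these together: a Kensor wraps a backend tensor, layers transform Kensors, and a model is the resulting directed graph. A sketch of assembling a single-input, single-output Sequential model, assuming Keraflow follows the Keras-1-style API its docs describe (the exact signatures of `Input`, `Dense`, and `compile` here are assumptions):

```python
from keraflow.models import Sequential
from keraflow.layers import Input, Dense, Activation

# Input is the entry point; each layer transforms the Kensor flowing through.
# NOTE: layer arguments and compile() signature are assumptions, not verified.
model = Sequential([Input(64), Dense(10), Activation('softmax')])
model.compile(optimizer='sgd', loss='categorical_crossentropy')
```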
**Optimizers**
- Abstract optimizer base class
- Adadelta optimizer
- Adagrad optimizer
- Adam optimizer
- Adamax optimizer from Section 7 of the Adam paper
- RMSProp optimizer
- Stochastic gradient descent, with support for momentum, learning rate decay, and Nesterov momentum (sketched below)
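As a concrete example of what these update rules do, here is the SGD variant from the last entry in plain NumPy, a textbook sketch of momentum and Nesterov momentum rather than Keraflow's code:

```python
import numpy as np

def sgd_step(w, grad, velocity, lr=0.01, momentum=0.9, nesterov=False):
    """One SGD update with momentum; optional Nesterov look-ahead."""
    velocity = momentum * velocity - lr * grad
    if nesterov:
        # Nesterov: evaluate the step as if the momentum move were already taken.
        w = w + momentum * velocity - lr * grad
    else:
        w = w + velocity
    return w, velocity

w, v = np.ones(3), np.zeros(3)
w, v = sgd_step(w, grad=np.array([0.5, -0.2, 0.1]), velocity=v)
```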
**Regularizers**
- L1 weight regularization penalty, also known as LASSO
- L1-L2 weight regularization penalty, also known as ElasticNet
- L2 weight regularization penalty, also known as weight decay or Ridge
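These penalties are scalar terms added to the training loss. The math, sketched in NumPy (coefficient values are illustrative):

```python
import numpy as np

W = np.random.randn(4, 3)
l1, l2 = 0.01, 0.01

l1_penalty = l1 * np.abs(W).sum()        # LASSO: sum of absolute weights
l2_penalty = l2 * np.square(W).sum()     # weight decay / Ridge: sum of squares
elastic_net = l1_penalty + l2_penalty    # L1-L2 combination
```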
**Utilities**
- Utility class for flexibly assigning user inputs: optimizers, regularizers, NumPy inputs, etc.
- Not implemented yet
**Convolution layers**
- Base layer for convolution layers
- Convolution layer for convolving (sequence_length, input_dim) inputs (sketched below)
- Convolution layer for convolving (input_depth, input_row, input_col) inputs
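For the 1D case, convolving a (sequence_length, input_dim) input means sliding a kernel along the time axis while consuming all input channels at once. A NumPy sketch of a single output channel with "valid" borders (shapes illustrative; this is the cross-correlation deep-learning libraries call convolution):

```python
import numpy as np

x = np.random.rand(10, 4)        # (sequence_length, input_dim)
kernel = np.random.rand(3, 4)    # (filter_length, input_dim)

# Valid convolution: one output per window position along the time axis.
out = np.array([np.sum(x[t:t + 3] * kernel) for t in range(10 - 3 + 1)])
print(out.shape)                 # (8,)
```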
**Pooling layers**
- Base layer for pooling layers
- Pooling layer for sub-sampling (sequence_length, input_dim) inputs (sketched below)
- Pooling layer for sub-sampling (input_depth, input_row, input_col) inputs
- Pooling layer for sub-sampling (input_depth, input_x, input_y, input_z) inputs
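Sub-sampling here means aggregating non-overlapping windows, for example by taking the maximum. A 1D NumPy sketch over a (sequence_length, input_dim) input with pool length 2 (illustrative only):

```python
import numpy as np

x = np.random.rand(8, 4)                    # (sequence_length, input_dim)
pooled = x.reshape(4, 2, 4).max(axis=1)     # max over each window of 2 time steps
print(pooled.shape)                         # (4, 4): sequence length halved
```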
**Upsampling layers**
- Base layer for upsampling layers
- Repeat each temporal step `length` times along the time axis (sketched below)
- Upsampling layer for (input_depth, input_row, input_col) inputs
- Upsampling layer for (input_depth, input_x, input_y, input_z) inputs
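The 1D description is literal: each temporal step is repeated `length` times along the time axis, which `np.repeat` shows directly (a semantics sketch, not Keraflow code):

```python
import numpy as np

x = np.arange(6).reshape(3, 2)   # (sequence_length=3, input_dim=2)
up = np.repeat(x, 2, axis=0)     # each step repeated length=2 times -> (6, 2)
```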
**Zero-padding layers**
- Base layer for zero-padding layers
- Zero-padding layer for (sequence_length, input_dim) inputs
- Zero-padding layer for (input_depth, input_row, input_col) inputs
- Zero-padding layer for (input_depth, input_x, input_y, input_z) inputs
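Zero padding adds zeros along the spatial axes while leaving the channel axis untouched. A NumPy sketch for the 1D and 2D shapes above (pad sizes are illustrative):

```python
import numpy as np

seq = np.random.rand(5, 3)                       # (sequence_length, input_dim)
seq_padded = np.pad(seq, ((1, 1), (0, 0)),
                    mode="constant")             # pad time axis only -> (7, 3)

img = np.random.rand(2, 4, 4)                    # (input_depth, input_row, input_col)
img_padded = np.pad(img, ((0, 0), (1, 1), (1, 1)),
                    mode="constant")             # pad rows and cols -> (2, 6, 6)
```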
**Core layers**
- Applies an activation function to an output
- Fully connected layer
- Applies Dropout to the input
- Expands the dimensions of the input tensor
- Flattens the input tensor into 1D
- Densely connected highway network
- Wrapper for implementing simple inline layers
- Permutes the dimensions of the input tensor according to a given pattern
- Repeats the input tensor n times along a given axis
- Reshapes the input tensor according to a given pattern
- Layer for looking up vocabulary (row) vectors (sketched below)
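The last entry, the embedding layer, is just a row lookup into a weight matrix: integer token indices select vocabulary vectors. In NumPy (shapes illustrative):

```python
import numpy as np

vocab_size, dim = 100, 8
W = np.random.randn(vocab_size, dim)   # one row vector per vocabulary entry

tokens = np.array([3, 17, 42])         # a sequence of token indices
vectors = W[tokens]                    # (3, 8): the looked-up row vectors
```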
**Recurrent layers**
- Base class for recurrent layers
- Gated Recurrent Unit - Cho et al., 2014
- Long Short-Term Memory unit - Hochreiter & Schmidhuber, 1997
- Fully-connected RNN where the output is fed back to the input (sketched below)
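The last entry describes the classic recurrence: the hidden state produced at each step is fed back as input to the next. A NumPy sketch of one simple-RNN forward pass, h_t = tanh(x_t W + h_{t-1} U + b), with all names illustrative:

```python
import numpy as np

seq_len, input_dim, units = 5, 3, 4
X = np.random.rand(seq_len, input_dim)
W = np.random.randn(input_dim, units)   # input-to-hidden weights
U = np.random.randn(units, units)       # hidden-to-hidden (feedback) weights
b = np.zeros(units)

h = np.zeros(units)
for x_t in X:                           # the output at each step is fed back
    h = np.tanh(x_t @ W + h @ U + b)
```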