Keraflow
Deep Learning for Python.
**Built-in activation functions**
**Callbacks**

- Abstract base class used to build new callbacks
- Save the model after every epoch
- Stop training when a monitored quantity has stopped improving
- Learning rate scheduler
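Checkpointing and early stopping are typically combined when fitting. The sketch below uses Keras-style identifiers (`ModelCheckpoint`, `EarlyStopping`, a `callbacks` argument to `fit`); these names, import paths, and signatures are assumptions, not confirmed Keraflow API:

```python
import numpy as np
# Import paths and class names below are assumed to mirror Keras;
# consult the Keraflow API docs for the authoritative identifiers.
from keraflow.models import Sequential
from keraflow.layers import Input, Dense
from keraflow.callbacks import ModelCheckpoint, EarlyStopping

model = Sequential([Input(20), Dense(1, activation='sigmoid')])
model.compile(optimizer='sgd', loss='binary_crossentropy')

X = np.random.rand(100, 20)
y = np.random.randint(0, 2, size=(100, 1))
model.fit(X, y, validation_split=0.2, callbacks=[
    ModelCheckpoint('best.h5', monitor='val_loss'),  # save the model after every epoch
    EarlyStopping(monitor='val_loss', patience=3),   # stop once val_loss stalls for 3 epochs
])
```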
**Built-in constraints**

- Constrain the weights along an axis pattern to have at most a given maximum norm
- Constrain the weights to be non-negative
- Constrain the weights along an axis pattern to have unit norm
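Constraints are normally attached to a layer's weights at construction time. A minimal sketch, assuming Keras 1.x-style class names and keyword arguments (`MaxNorm`, `NonNeg`, `W_constraint`), which may not match Keraflow's actual interface:

```python
# Class and keyword names below are assumptions borrowed from Keras 1.x.
from keraflow.layers import Dense
from keraflow.constraints import MaxNorm, NonNeg

# Cap the norm of each unit's incoming weight vector at 2.
capped = Dense(64, W_constraint=MaxNorm(2.0))

# Force all weights non-negative, e.g. for an interpretable linear read-out.
nonneg = Dense(1, W_constraint=NonNeg())
```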
**Built-in initialization functions**
**Layers: base**

- Wrapper for a backend tensor (a Kensor)
- Building block of a model
- Base class for layers accepting multiple inputs
- The entry point of each model
- Class that lets a Sequential model be used as a Layer
**Layers: convolution, pooling, padding, upsampling**

- Base layer for convolution layers
- Convolution layer for convolving (sequence_length, input_dim) inputs
- Convolution layer for convolving (input_depth, input_row, input_col) inputs
- 3D convolution layer (not implemented yet)
- Base layer for pooling layers
- Pooling layer for sub-sampling (sequence_length, input_dim) inputs
- Pooling layer for sub-sampling (input_depth, input_row, input_col) inputs
- Pooling layer for sub-sampling (input_depth, input_x, input_y, input_z) inputs
- Base layer for zero-padding layers
- Zero-padding layer for (sequence_length, input_dim) inputs
- Zero-padding layer for (input_depth, input_row, input_col) inputs
- Zero-padding layer for (input_depth, input_x, input_y, input_z) inputs
- Base layer for upsampling layers
- Repeat each temporal step `length` times along the time axis
- Upsampling layer for (input_depth, input_row, input_col) inputs
- Upsampling layer for (input_depth, input_x, input_y, input_z) inputs
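Together these cover the usual padding / convolution / pooling pipeline for image-shaped inputs. A sketch of a small 2D stack; the layer names (`Convolution2D`, `Pooling2D`, `ZeroPadding2D`) and constructor arguments are inferred from the descriptions above and should be treated as assumptions:

```python
# Layer names and constructor signatures are assumptions inferred from
# the descriptions above; Keraflow's actual API may differ.
from keraflow.models import Sequential
from keraflow.layers import (Input, ZeroPadding2D, Convolution2D,
                             Pooling2D, Flatten, Dense)

model = Sequential([
    Input((1, 28, 28)),             # (input_depth, input_row, input_col)
    ZeroPadding2D(padding=(1, 1)),  # pad rows and columns with zeros
    Convolution2D(8, 3, 3),         # 8 filters, each 3x3
    Pooling2D(pool_size=(2, 2)),    # sub-sample each 2x2 window
    Flatten(),                      # flatten to 1D for the classifier head
    Dense(10, activation='softmax'),
])
```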
**Layers: core**

- Expand the dimensions of the input tensor
- Permute the dimensions of the input tensor according to a given pattern
- Reshape the input tensor according to a given pattern
- Flatten the input tensor into 1D
- Repeat the input tensor n times along a given axis
- Concatenate multiple input tensors
- Wrapper for implementing a simple inline layer
- Apply an activation function to the input
- Reduce multiple input tensors by element-wise summation
- Reduce multiple input tensors by element-wise multiplication
- Apply dropout to the input
- Fully connected layer
- Densely connected highway network
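Most core layers are one-liners in practice; the inline-layer wrapper in particular removes the need to subclass for trivial transformations. A sketch using the Keras-like names suggested by the descriptions (`Lambda`, `Activation`, `Dropout`, `Dense`), all of which are assumptions here:

```python
# Names assumed from the descriptions above; verify against Keraflow's docs.
from keraflow.models import Sequential
from keraflow.layers import Input, Dense, Activation, Dropout, Lambda

model = Sequential([
    Input(100),
    Dense(64),                  # fully connected layer
    Activation('relu'),         # apply an activation function to the output
    Dropout(0.5),               # randomly drop half of the units during training
    Lambda(lambda x: x * 2.0),  # simple inline layer: no subclassing required
    Dense(10, activation='softmax'),
])
```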
**Layers: embeddings**

- Layer for looking up vocabulary (row) vectors
**Layers: recurrent**

- Base class for recurrent layers
- Fully-connected RNN whose output is fed back into its input
- Gated Recurrent Unit (Cho et al., 2014)
- Long Short-Term Memory unit (Hochreiter & Schmidhuber, 1997)
**Layers: wrappers**

- Wrapper for applying a layer to every temporal slice of an input
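The embedding, recurrent, and wrapper layers combine naturally for sequence models such as per-token tagging. In this sketch, `Embedding`, `LSTM`, `TimeDistributed`, and their arguments follow Keras conventions and are assumptions rather than documented Keraflow API:

```python
# Keras-style names and arguments; treat every identifier as an assumption.
from keraflow.models import Sequential
from keraflow.layers import Input, Embedding, LSTM, TimeDistributed, Dense

vocab_size, embed_dim, seq_len, n_tags = 10000, 128, 40, 5

model = Sequential([
    Input(seq_len, dtype='int32'),     # a sequence of word indices
    Embedding(vocab_size, embed_dim),  # look up one row vector per index
    LSTM(64, return_sequences=True),   # emit an output at every time step
    TimeDistributed(Dense(n_tags, activation='softmax')),  # Dense at each step
])
```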
**Models**

- A model is a directed graph of Kensors
- Model with a single input and a single output
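"Directed graph of Kensors" suggests a functional style in which calling a layer on a Kensor yields a new Kensor, while Sequential covers the single-input, single-output case. The graph-building idiom below is extrapolated from Keras's functional API; whether Keraflow's `Model` and layer calls behave exactly like this is an assumption:

```python
# Whether Input/layer calls and the Model constructor work exactly like
# this is an assumption extrapolated from Keras's functional API.
from keraflow.models import Model
from keraflow.layers import Input, Dense, Concatenate

a = Input(32)                    # each call is assumed to produce a Kensor
b = Input(32)
merged = Concatenate()([a, b])   # multi-input layer joining two branches
out = Dense(1, activation='sigmoid')(merged)

model = Model([a, b], out)       # the directed Kensor graph, as a model
```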
**Built-in objective functions**
**Optimizers**

- Abstract optimizer base class
- Stochastic gradient descent, with support for momentum, learning rate decay, and Nesterov momentum
- RMSProp optimizer
- Adagrad optimizer
- Adadelta optimizer
- Adam optimizer
- Adamax optimizer, from Section 7 of the Adam paper
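All of these plug into `compile`, either as a shortcut string or as a configured instance (`model` here is any model built as in the sketches above). The `SGD` constructor arguments shown (`lr`, `momentum`, `decay`, `nesterov`) are common conventions and assumed rather than confirmed:

```python
# Constructor arguments are assumptions based on common SGD implementations.
from keraflow.optimizers import SGD

sgd = SGD(lr=0.01, momentum=0.9, decay=1e-6, nesterov=True)
model.compile(optimizer=sgd, loss='categorical_crossentropy')

# The adaptive optimizers usually work well with their defaults, e.g.:
# model.compile(optimizer='adam', loss='categorical_crossentropy')
```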
**Built-in regularizers**

- L1 weight regularization penalty, also known as LASSO
- L2 weight regularization penalty, also known as weight decay or ridge
- Combined L1 and L2 weight regularization penalty, also known as ElasticNet
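Regularizers add a penalty term to the training loss; L2, for example, contributes `l2 * sum(W ** 2)`. A sketch of attaching them per weight, using Keras 1.x-style class names and keywords (`L1`, `L2`, `W_regularizer`, `b_regularizer`) as an assumed interface:

```python
# Class names and keyword arguments are assumptions in the Keras 1.x style.
from keraflow.layers import Dense
from keraflow.regularizers import L1, L2

dense = Dense(
    64,
    W_regularizer=L2(0.01),  # adds 0.01 * sum(W ** 2) to the loss (ridge)
    b_regularizer=L1(0.01),  # adds 0.01 * sum(|b|) to the loss (LASSO)
)
```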
**Utilities**

- Generic utilities for Keraflow
- Utility class that flexibly accepts user input for optimizers, regularizers, numpy inputs, etc.
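In practice, this flexibility is what lets most arguments accept either a shortcut string or a configured object interchangeably. The equivalence sketched below (including the `RMSprop` spelling) is assumed rather than documented here:

```python
# The two calls below are assumed to be equivalent: a shortcut string
# versus an explicitly configured object ('RMSprop' spelling assumed).
model.compile(optimizer='rmsprop', loss='mse')

from keraflow.optimizers import RMSprop
model.compile(optimizer=RMSprop(lr=0.001), loss='mse')
```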