Input Data
tflearn.layers.core.input_data (shape=None, placeholder=None, dtype=tf.float32, data_preprocessing=None, data_augmentation=None, name='InputData')
This layer is used for inputting (i.e., feeding) data to a network. If a TensorFlow placeholder is supplied, it will be used; otherwise, a new placeholder will be created with the given shape.
Either a shape or placeholder must be provided, otherwise an exception will be raised.
Furthermore, the placeholder is added to the TensorFlow collections, so it can be retrieved with tf.get_collection(tf.GraphKeys.INPUTS) as well as tf.GraphKeys.LAYER_TENSOR + '/' + name. The data preprocessing and augmentation objects are similarly stored in collections under tf.GraphKeys.DATA_PREP and tf.GraphKeys.DATA_AUG. This allows other parts of TFLearn to easily retrieve and use these objects by referencing these graph keys.
Input
List of int (shape), to create a new placeholder, or a Tensor (placeholder), to use an existing placeholder.
Output
Placeholder Tensor with given shape.
Arguments
- shape: list of int. An array or tuple representing the input data shape. Required if no placeholder is provided. The first element should be 'None' (representing the batch size); if it is not provided, it will be added automatically.
- placeholder: A Placeholder to use for feeding this layer (optional). If not specified, a placeholder will be automatically created. You can retrieve that placeholder through the graph key 'INPUTS', or the 'placeholder' attribute of this function's returned tensor.
- dtype: tf.type. Placeholder data type (optional). Default: float32.
- data_preprocessing: A DataPreprocessing subclass object to manage real-time data pre-processing when training and predicting (such as zero centering, std normalization...).
- data_augmentation: A DataAugmentation subclass object to manage real-time data augmentation while training (such as random image crop, random image flip, random sequence reverse...).
- name: str. A name for this layer (optional). Default: 'InputData'.
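For illustration, a minimal usage sketch; the 784-dimensional shape is an arbitrary example (e.g. flattened 28x28 images), not something this API requires.
import tflearn

# Creates a float32 placeholder of shape [None, 784]; the batch
# dimension (None) would be prepended automatically if omitted.
net = tflearn.input_data(shape=[None, 784], name='InputData')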
Fully Connected
tflearn.layers.core.fully_connected (incoming, n_units, activation='linear', bias=True, weights_init='truncated_normal', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='FullyConnected')
A fully connected layer.
Input
(2+)-D Tensor [samples, input dim]. If not 2D, input will be flattened.
Output
2D Tensor [samples, n_units].
Arguments
- incoming: Tensor. Incoming (2+)D Tensor.
- n_units: int. Number of units for this layer.
- activation: str (name) or function (returning a Tensor). Activation applied to this layer (see tflearn.activations). Default: 'linear'.
- bias: bool. If True, a bias is used.
- weights_init: str (name) or Tensor. Weights initialization (see tflearn.initializations). Default: 'truncated_normal'.
- bias_init: str (name) or Tensor. Bias initialization (see tflearn.initializations). Default: 'zeros'.
- regularizer: str (name) or Tensor. Add a regularizer to this layer's weights (see tflearn.regularizers). Default: None.
- weight_decay: float. Regularizer decay parameter. Default: 0.001.
- trainable: bool. If True, weights will be trainable.
- restore: bool. If True, this layer's weights will be restored when loading a model.
- reuse: bool. If True and 'scope' is provided, this layer's variables will be reused (shared).
- scope: str. Define this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
- name: A name for this layer (optional). Default: 'FullyConnected'.
Attributes
- scope: Scope. This layer's scope.
- W: Tensor. Variable representing the units' weights.
- b: Tensor. Variable representing the biases.
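A brief usage sketch; the layer widths, activations, and regularizer settings below are arbitrary example choices.
import tflearn

net = tflearn.input_data(shape=[None, 784])
# 64-unit hidden layer with ReLU activation
net = tflearn.fully_connected(net, 64, activation='relu')
# Output layer with L2 weight regularization (default decay 0.001)
net = tflearn.fully_connected(net, 10, activation='softmax',
                              regularizer='L2', weight_decay=0.001)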
Dropout
tflearn.layers.core.dropout (incoming, keep_prob, noise_shape=None, name='Dropout')
With probability keep_prob, outputs the input element scaled up by 1 / keep_prob, otherwise outputs 0. The scaling is so that the expected sum is unchanged.
By default, each element is kept or dropped independently. If noise_shape is specified, it must be broadcastable to the shape of x, and only dimensions with noise_shape[i] == shape(x)[i] will make independent decisions. For example, if shape(x) = [k, l, m, n] and noise_shape = [k, 1, 1, n], each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Arguments
- incoming: A Tensor. The incoming tensor.
- keep_prob: A float representing the probability that each element is kept.
- noise_shape: A 1-D Tensor of type int32, representing the shape for randomly generated keep/drop flags.
- name: A name for this layer (optional).
References
Dropout: A Simple Way to Prevent Neural Networks from Overfitting. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever & R. Salakhutdinov, (2014), Journal of Machine Learning Research, 15(Jun), 1929-1958.
Links
https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
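For illustration, a minimal sketch placing dropout between two fully connected layers; the keep probability of 0.8 and the layer sizes are arbitrary choices.
import tflearn

net = tflearn.input_data(shape=[None, 784])
net = tflearn.fully_connected(net, 128, activation='relu')
# During training, each activation is kept with probability 0.8;
# kept values are scaled up by 1 / 0.8
net = tflearn.dropout(net, 0.8)
net = tflearn.fully_connected(net, 10, activation='softmax')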
Custom Layer
tflearn.layers.core.custom_layer (incoming, custom_fn, **kwargs)
A custom layer that can apply any operations to the incoming Tensor or list of Tensor. The custom function is passed as a parameter, along with the parameters it needs.
Arguments
- incoming: A Tensor or list of Tensor. Incoming tensor.
- custom_fn: A custom function, to apply some ops on the incoming tensor.
- **kwargs: Some custom parameters that the custom function might need.
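A sketch under stated assumptions: my_clip is a hypothetical helper defined here for the example, not part of TFLearn.
import tensorflow as tf
import tflearn

# Hypothetical custom op: clip activations to a fixed range
def my_clip(incoming, low=0.0, high=6.0):
    return tf.clip_by_value(incoming, low, high)

net = tflearn.input_data(shape=[None, 64])
# Extra keyword arguments are forwarded to the custom function
net = tflearn.custom_layer(net, my_clip, low=0.0, high=6.0)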
Reshape
tflearn.layers.core.reshape (incoming, new_shape, name='Reshape')
A layer that reshapes the incoming layer's tensor output to the desired shape.
Arguments
- incoming: A Tensor. The incoming tensor.
- new_shape: A list of int. The desired shape.
- name: A name for this layer (optional).
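For illustration, a sketch assuming the input holds flattened 28x28 grayscale images (an arbitrary example):
import tflearn

net = tflearn.input_data(shape=[None, 784])
# Recover a [batch, height, width, channels] layout; -1 lets
# TensorFlow infer the batch dimension
net = tflearn.reshape(net, [-1, 28, 28, 1])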
Flatten
tflearn.layers.core.flatten (incoming, name='Flatten')
Flatten the incoming Tensor.
Input
(2+)-D Tensor.
Output
2-D Tensor [batch, flatten_dims].
Arguments
- incoming: Tensor. The incoming tensor.
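A usage sketch showing the typical placement between a convolutional layer and a fully connected one; the conv_2d parameters are arbitrary choices.
import tflearn

net = tflearn.input_data(shape=[None, 28, 28, 1])
net = tflearn.conv_2d(net, 32, 3, activation='relu')
# Collapse [batch, h, w, channels] into [batch, h * w * channels]
net = tflearn.flatten(net)
net = tflearn.fully_connected(net, 10, activation='softmax')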
Activation
tflearn.layers.core.activation (incoming, activation='linear', name='activation')
Apply the given activation to the incoming tensor.
Arguments
- incoming: A Tensor. The incoming tensor.
- activation: str (name) or function (returning a Tensor). Activation applied to this layer (see tflearn.activations). Default: 'linear'.
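A minimal sketch separating the linear transformation from its non-linearity; the sizes are arbitrary, and the result is equivalent to passing activation='relu' to fully_connected directly.
import tflearn
from tflearn.layers.core import activation

net = tflearn.input_data(shape=[None, 64])
net = tflearn.fully_connected(net, 32, activation='linear')
# Apply the non-linearity as a separate layer
net = activation(net, activation='relu')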
Single Unit
tflearn.layers.core.single_unit (incoming, activation='linear', bias=True, trainable=True, restore=True, reuse=False, scope=None, name='Linear')
A single unit (Linear) Layer.
Input
1-D Tensor [samples]. If not 1-D, the input will be flattened.
Output
1-D Tensor [samples].
Arguments
- incoming: Tensor. Incoming Tensor.
- activation: str (name) or function. Activation applied to this layer (see tflearn.activations). Default: 'linear'.
- bias: bool. If True, a bias is used.
- trainable: bool. If True, weights will be trainable.
- restore: bool. If True, this layer's weights will be restored when loading a model.
- reuse: bool. If True and 'scope' is provided, this layer's variables will be reused (shared).
- scope: str. Define this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
- name: A name for this layer (optional). Default: 'Linear'.
Attributes
- W: Tensor. Variable representing the weight.
- b: Tensor. Variable representing the bias.
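For illustration, a sketch of the classic linear-regression use of this layer; the optimizer and loss settings are arbitrary examples.
import tflearn
from tflearn.layers.core import single_unit

# One scalar input per sample, a single trainable weight and bias
net = tflearn.input_data(shape=[None])
net = single_unit(net)
net = tflearn.regression(net, optimizer='sgd', loss='mean_square',
                         learning_rate=0.01)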
Fully Connected Highway
tflearn.layers.core.highway (incoming, n_units, activation='linear', transform_dropout=None, weights_init='truncated_normal', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='FullyConnectedHighway')
A fully connected highway network layer, with some inspiration from https://github.com/fomorians/highway-fcn.
Input
(2+)-D Tensor [samples, input dim]. If not 2D, input will be flattened.
Output
2D Tensor [samples, n_units].
Arguments
- incoming: Tensor. Incoming (2+)D Tensor.
- n_units: int. Number of units for this layer.
- activation: str (name) or function (returning a Tensor). Activation applied to this layer (see tflearn.activations). Default: 'linear'.
- transform_dropout: float. Keep probability on the highway transform gate.
- weights_init: str (name) or Tensor. Weights initialization (see tflearn.initializations). Default: 'truncated_normal'.
- bias_init: str (name) or Tensor. Bias initialization (see tflearn.initializations). Default: 'zeros'.
- regularizer: str (name) or Tensor. Add a regularizer to this layer's weights (see tflearn.regularizers). Default: None.
- weight_decay: float. Regularizer decay parameter. Default: 0.001.
- trainable: bool. If True, weights will be trainable.
- restore: bool. If True, this layer's weights will be restored when loading a model.
- reuse: bool. If True and 'scope' is provided, this layer's variables will be reused (shared).
- scope: str. Define this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
- name: A name for this layer (optional). Default: 'FullyConnectedHighway'.
Attributes
- scope: Scope. This layer's scope.
- W: Tensor. Variable representing the units' weights.
- W_t: Tensor. Variable representing the units' weights for the transform gate.
- b: Tensor. Variable representing the biases.
- b_t: Tensor. Variable representing the biases for the transform gate.
Links
https://arxiv.org/abs/1505.00387
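A sketch stacking a few highway layers; the input width is first matched to n_units with a fully connected layer, since the carry gate blends the input with the transform output. Depth, width, and activations are arbitrary choices.
import tflearn

net = tflearn.input_data(shape=[None, 784])
net = tflearn.fully_connected(net, 64, activation='elu')
# Three highway blocks of matching width
for i in range(3):
    net = tflearn.highway(net, 64, activation='elu',
                          transform_dropout=0.8)
net = tflearn.fully_connected(net, 10, activation='softmax')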
One Hot Encoding
tflearn.layers.core.one_hot_encoding (target, n_classes, on_value=1.0, off_value=0.0, name='OneHotEncoding')
Transform numeric labels into a binary vector.
Input
The Labels Placeholder.
Output
2-D Tensor, The encoded labels.
Arguments
- target: Placeholder. The labels placeholder.
- n_classes: int. Total number of classes.
- on_value: scalar. A scalar defining the on-value.
- off_value: scalar. A scalar defining the off-value.
- name: A name for this layer (optional). Default: 'OneHotEncoding'.
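For illustration, a minimal sketch encoding integer class labels inside the graph; the 10-class setup is an arbitrary example.
import tensorflow as tf
import tflearn
from tflearn.layers.core import one_hot_encoding

# Integer labels fed at training time, e.g. [3, 1, 4]
labels = tflearn.input_data(shape=[None], dtype=tf.int32)
# Each label becomes a 10-dimensional binary vector
labels_ohe = one_hot_encoding(labels, 10)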
Time Distributed
tflearn.layers.core.time_distributed (incoming, fn, args=None, scope=None)
This layer applies a function to every timestep of the input tensor. The custom function's first argument must be the input tensor at every timestep. Additional parameters for the custom function may be specified in the 'args' argument (as a list).
Examples
# Applying a fully_connected layer at every timestep
x = time_distributed(input_tensor, fully_connected, [64])
# Using a conv layer at every timestep with a scope
x = time_distributed(input_tensor, conv_2d, [64, 3], scope='tconv')
Input
(3+)-D Tensor [samples, timestep, input_dim].
Output
(3+)-D Tensor [samples, timestep, output_dim].
Arguments
- incoming: Tensor. The incoming tensor.
- fn: function. A function to apply at every timestep. This function's first parameter must be the input tensor per timestep. Additional parameters may be specified in the 'args' argument.
- args: list. A list of parameters to use with the provided function.
- scope: str. A scope to give to each timestep tensor. Useful when sharing weights. Each timestep tensor's scope will be generated as 'scope'-'i', where i represents the timestep id. Note that your custom function will be required to have a 'scope' parameter.
Returns
A Tensor.
Multi Target Data
tflearn.layers.core.multi_target_data (name_list, shape, dtype=tf.float32)
Create and concatenate multiple placeholders. To be used when a regression layer uses targets from different sources.
Arguments
- name_list: list of str. The names of the target placeholders.
- shape: list of int. The shape of the placeholders.
- dtype: tf.type. Placeholder data type (optional). Default: float32.
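A sketch, assuming two target sources that share the same shape; the placeholder names are arbitrary examples.
from tflearn.layers.core import multi_target_data

# One placeholder is created per name, then all are concatenated
# into a single target tensor for use by a regression layer
targets = multi_target_data(['target_a', 'target_b'], shape=[None, 10])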