Convolution 2D
tflearn.layers.conv.conv_2d (incoming, nb_filter, filter_size, strides=1, padding='same', activation='linear', bias=True, weights_init='uniform_scaling', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='Conv2D')
Input
4-D Tensor [batch, height, width, in_channels].
Output
4-D Tensor [batch, new height, new width, nb_filter].
Arguments
- incoming: `Tensor`. Incoming 4-D Tensor.
- nb_filter: `int`. The number of convolutional filters.
- filter_size: `int` or `list of int`. Size of filters.
- strides: `int` or `list of int`. Strides of conv operation. Default: [1 1 1 1].
- padding: `str` from "same", "valid". Padding algorithm to use. Default: 'same'.
- activation: `str` (name) or `function` (returning a `Tensor`) or None. Activation applied to this layer (see tflearn.activations). Default: 'linear'.
- bias: `bool`. If True, a bias is used.
- weights_init: `str` (name) or `Tensor`. Weights initialization (see tflearn.initializations). Default: 'uniform_scaling'.
- bias_init: `str` (name) or `Tensor`. Bias initialization (see tflearn.initializations). Default: 'zeros'.
- regularizer: `str` (name) or `Tensor`. Add a regularizer to this layer's weights (see tflearn.regularizers). Default: None.
- weight_decay: `float`. Regularizer decay parameter. Default: 0.001.
- trainable: `bool`. If True, weights will be trainable.
- restore: `bool`. If True, this layer's weights will be restored when loading a model.
- reuse: `bool`. If True and 'scope' is provided, this layer's variables will be reused (shared).
- scope: `str`. Define this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
- name: A name for this layer (optional). Default: 'Conv2D'.
Attributes
- scope: `Scope`. This layer scope.
- W: `Variable`. Variable representing filter weights.
- b: `Variable`. Variable representing biases.
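Below is a minimal usage sketch (not part of the original reference), assuming `tflearn.input_data` from tflearn.layers.core; the 32x32x3 input shape is only illustrative:

```python
import tflearn
from tflearn.layers.conv import conv_2d

# 32x32 RGB images; None stands for the batch dimension.
net = tflearn.input_data(shape=[None, 32, 32, 3])
# 16 filters of size 3x3, stride 1; 'same' padding keeps height/width at 32.
net = conv_2d(net, nb_filter=16, filter_size=3, activation='relu', name='conv1')
# Output shape: [batch, 32, 32, 16]
```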
Convolution 2D Transpose
tflearn.layers.conv.conv_2d_transpose (incoming, nb_filter, filter_size, output_shape, strides=1, padding='same', activation='linear', bias=True, weights_init='uniform_scaling', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='Conv2DTranspose')
This operation is sometimes called "deconvolution" after [Deconvolutional Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is actually the transpose (gradient) of `conv_2d` rather than an actual deconvolution.
Input
4-D Tensor [batch, height, width, in_channels].
Output
4-D Tensor [batch, new height, new width, nb_filter].
Arguments
- incoming: `Tensor`. Incoming 4-D Tensor.
- nb_filter: `int`. The number of convolutional filters.
- filter_size: `int` or `list of int`. Size of filters.
- output_shape: `list of int`. Dimensions of the output tensor. Can optionally include the number of conv filters. [new height, new width, nb_filter] or [new height, new width].
- strides: `int` or `list of int`. Strides of conv operation. Default: [1 1 1 1].
- padding: `str` from "same", "valid". Padding algorithm to use. Default: 'same'.
- activation: `str` (name) or `function` (returning a `Tensor`). Activation applied to this layer (see tflearn.activations). Default: 'linear'.
- bias: `bool`. If True, a bias is used.
- weights_init: `str` (name) or `Tensor`. Weights initialization (see tflearn.initializations). Default: 'uniform_scaling'.
- bias_init: `str` (name) or `Tensor`. Bias initialization (see tflearn.initializations). Default: 'zeros'.
- regularizer: `str` (name) or `Tensor`. Add a regularizer to this layer's weights (see tflearn.regularizers). Default: None.
- weight_decay: `float`. Regularizer decay parameter. Default: 0.001.
- trainable: `bool`. If True, weights will be trainable.
- restore: `bool`. If True, this layer's weights will be restored when loading a model.
- reuse: `bool`. If True and 'scope' is provided, this layer's variables will be reused (shared).
- scope: `str`. Define this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
- name: A name for this layer (optional). Default: 'Conv2DTranspose'.
Attributes
- scope: `Scope`. This layer scope.
- W: `Variable`. Variable representing filter weights.
- b: `Variable`. Variable representing biases.
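A hedged example of pairing `conv_2d` with `conv_2d_transpose` to recover the input resolution; shapes and filter counts are illustrative, not prescribed by the docs:

```python
import tflearn
from tflearn.layers.conv import conv_2d, conv_2d_transpose

net = tflearn.input_data(shape=[None, 32, 32, 3])
net = conv_2d(net, 16, 3, strides=2, activation='relu')   # -> [batch, 16, 16, 16]
# Upsample back to 32x32; output_shape gives the target spatial size.
net = conv_2d_transpose(net, 3, 3, output_shape=[32, 32], strides=2, activation='relu')
# Output shape: [batch, 32, 32, 3]
```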
Atrous Convolution 2D
tflearn.layers.conv.atrous_conv_2d (incoming, nb_filter, filter_size, rate=1, padding='same', activation='linear', bias=True, weights_init='uniform_scaling', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='AtrousConv2D')
(a.k.a. convolution with holes or dilated convolution).
Computes a 2-D atrous convolution, also known as convolution with holes or dilated convolution, given 4-D value and filters tensors. If the rate parameter is equal to one, it performs regular 2-D convolution. If the rate parameter is greater than one, it performs convolution with holes, sampling the input values every rate pixels in the height and width dimensions. This is equivalent to convolving the input with a set of upsampled filters, produced by inserting rate - 1 zeros between two consecutive values of the filters along the height and width dimensions, hence the name atrous convolution or convolution with holes (the French word trous means holes in English).
More specifically
output[b, i, j, k] = sum_{di, dj, q} filters[di, dj, q, k] *
value[b, i + rate * di, j + rate * dj, q]
Atrous convolution allows us to explicitly control how densely to compute feature responses in fully convolutional networks. Used in conjunction with bilinear interpolation, it offers an alternative to conv2d_transpose in dense prediction tasks such as semantic image segmentation, optical flow computation, or depth estimation. It also allows us to effectively enlarge the field of view of filters without increasing the number of parameters or the amount of computation.
Input
4-D Tensor [batch, height, width, in_channels].
Output
4-D Tensor [batch, new height, new width, nb_filter].
Arguments
- incoming: `Tensor`. Incoming 4-D Tensor.
- nb_filter: `int`. The number of convolutional filters.
- filter_size: `int` or `list of int`. Size of filters.
- rate: `int`. A positive int32. The stride with which we sample input values across the height and width dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the height and width dimensions. In the literature, the same parameter is sometimes called input `stride` or `dilation`.
- padding: `str` from "same", "valid". Padding algorithm to use. Default: 'same'.
- activation: `str` (name) or `function` (returning a `Tensor`) or None. Activation applied to this layer (see tflearn.activations). Default: 'linear'.
- bias: `bool`. If True, a bias is used.
- weights_init: `str` (name) or `Tensor`. Weights initialization (see tflearn.initializations). Default: 'uniform_scaling'.
- bias_init: `str` (name) or `Tensor`. Bias initialization (see tflearn.initializations). Default: 'zeros'.
- regularizer: `str` (name) or `Tensor`. Add a regularizer to this layer's weights (see tflearn.regularizers). Default: None.
- weight_decay: `float`. Regularizer decay parameter. Default: 0.001.
- trainable: `bool`. If True, weights will be trainable.
- restore: `bool`. If True, this layer's weights will be restored when loading a model.
- reuse: `bool`. If True and 'scope' is provided, this layer's variables will be reused (shared).
- scope: `str`. Define this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
- name: A name for this layer (optional). Default: 'AtrousConv2D'.
Attributes
- scope: `Scope`. This layer scope.
- W: `Variable`. Variable representing filter weights.
- b: `Variable`. Variable representing biases.
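A short sketch showing how `rate` enlarges the receptive field without adding parameters; input shape and filter counts are illustrative:

```python
import tflearn
from tflearn.layers.conv import atrous_conv_2d

net = tflearn.input_data(shape=[None, 64, 64, 3])
# rate=2 samples the input every 2 pixels in height and width; with 'same'
# padding the spatial size is preserved.
net = atrous_conv_2d(net, nb_filter=32, filter_size=3, rate=2, activation='relu')
# Output shape: [batch, 64, 64, 32]
```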
Grouped Convolution 2D
tflearn.layers.conv.grouped_conv_2d (incoming, channel_multiplier, filter_size, strides=1, padding='same', activation='linear', bias=False, weights_init='uniform_scaling', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='GroupedConv2D')
a.k.a DepthWise Convolution 2D.
Given a 4D input tensor ('NHWC' or 'NCHW' data formats), a kernel_size and a channel_multiplier, grouped_conv_2d applies a different filter to each input channel (expanding from 1 channel to channel_multiplier channels for each), then concatenates the results together. The output has in_channels * channel_multiplier channels.
In detail,
output[b, i, j, k * channel_multiplier + q] = sum_{di, dj}
filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di, strides[2] * j + rate[1] * dj, k]
Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1]. If any value in rate is greater than 1, we perform atrous depthwise convolution, in which case all values in the strides tensor must be equal to 1.
Input
4-D Tensor [batch, height, width, in_channels].
Output
4-D Tensor [batch, new height, new width, in_channels * channel_multiplier].
Arguments
- incoming: `Tensor`. Incoming 4-D Tensor.
- channel_multiplier: `int`. The number of channels to expand to.
- filter_size: `int` or `list of int`. Size of filters.
- strides: `int` or `list of int`. Strides of conv operation. Default: [1 1 1 1].
- padding: `str` from "same", "valid". Padding algorithm to use. Default: 'same'.
- activation: `str` (name) or `function` (returning a `Tensor`) or None. Activation applied to this layer (see tflearn.activations). Default: 'linear'.
- bias: `bool`. If True, a bias is used.
- weights_init: `str` (name) or `Tensor`. Weights initialization (see tflearn.initializations). Default: 'uniform_scaling'.
- bias_init: `str` (name) or `Tensor`. Bias initialization (see tflearn.initializations). Default: 'zeros'.
- regularizer: `str` (name) or `Tensor`. Add a regularizer to this layer's weights (see tflearn.regularizers). Default: None.
- weight_decay: `float`. Regularizer decay parameter. Default: 0.001.
- trainable: `bool`. If True, weights will be trainable.
- restore: `bool`. If True, this layer's weights will be restored when loading a model.
- reuse: `bool`. If True and 'scope' is provided, this layer's variables will be reused (shared).
- scope: `str`. Define this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
- name: A name for this layer (optional). Default: 'GroupedConv2D'.
Attributes
- scope: `Scope`. This layer scope.
- W: `Variable`. Variable representing filter weights.
- b: `Variable`. Variable representing biases.
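An illustrative sketch of the channel expansion described above (8 input channels x channel_multiplier 4 = 32 output channels); the input shape is an assumption:

```python
import tflearn
from tflearn.layers.conv import grouped_conv_2d

net = tflearn.input_data(shape=[None, 28, 28, 8])
# Each of the 8 input channels gets its own set of 3x3 filters, expanded by
# channel_multiplier=4, so the output has 8 * 4 = 32 channels.
net = grouped_conv_2d(net, channel_multiplier=4, filter_size=3, activation='relu')
# Output shape: [batch, 28, 28, 32]
```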
Max Pooling 2D
tflearn.layers.conv.max_pool_2d (incoming, kernel_size, strides=None, padding='same', name='MaxPool2D')
Input
4-D Tensor [batch, height, width, in_channels].
Output
4-D Tensor [batch, pooled height, pooled width, in_channels].
Arguments
- incoming: `Tensor`. Incoming 4-D Layer.
- kernel_size: `int` or `list of int`. Pooling kernel size.
- strides: `int` or `list of int`. Strides of conv operation. Default: same as kernel_size.
- padding: `str` from "same", "valid". Padding algorithm to use. Default: 'same'.
- name: A name for this layer (optional). Default: 'MaxPool2D'.
Attributes
- scope: `Scope`. This layer scope.
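A small example of the usual conv + pool pattern, assuming the defaults listed above (strides default to the kernel size); shapes are illustrative:

```python
import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d

net = tflearn.input_data(shape=[None, 32, 32, 3])
net = conv_2d(net, 16, 3, activation='relu')   # -> [batch, 32, 32, 16]
# 2x2 pooling; strides default to the kernel size, halving height and width.
net = max_pool_2d(net, kernel_size=2)          # -> [batch, 16, 16, 16]
```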
Average Pooling 2D
tflearn.layers.conv.avg_pool_2d (incoming, kernel_size, strides=None, padding='same', name='AvgPool2D')
Input
4-D Tensor [batch, height, width, in_channels].
Output
4-D Tensor [batch, pooled height, pooled width, in_channels].
Arguments
- incoming: `Tensor`. Incoming 4-D Layer.
- kernel_size: `int` or `list of int`. Pooling kernel size.
- strides: `int` or `list of int`. Strides of conv operation. Default: same as kernel_size.
- padding: `str` from "same", "valid". Padding algorithm to use. Default: 'same'.
- name: A name for this layer (optional). Default: 'AvgPool2D'.
Attributes
- scope: `Scope`. This layer scope.
UpSample 2D
tflearn.layers.conv.upsample_2d (incoming, kernel_size, name='UpSample2D')
Input
4-D Tensor [batch, height, width, in_channels].
Output
4-D Tensor [batch, upsampled height, upsampled width, in_channels].
Arguments
- incoming: `Tensor`. Incoming 4-D Layer to upsample.
- kernel_size: `int` or `list of int`. Upsampling kernel size.
- name: A name for this layer (optional). Default: 'UpSample2D'.
Attributes
- scope: `Scope`. This layer scope.
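A minimal sketch; the factor-of-2 upsampling follows from kernel_size=2, and the input shape is illustrative:

```python
import tflearn
from tflearn.layers.conv import upsample_2d

net = tflearn.input_data(shape=[None, 16, 16, 8])
# Upsample height and width by a factor of 2.
net = upsample_2d(net, kernel_size=2)
# Output shape: [batch, 32, 32, 8]
```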
Upscore
tflearn.layers.conv.upscore_layer (incoming, num_classes, shape=None, kernel_size=4, strides=2, trainable=True, restore=True, reuse=False, scope=None, name='Upscore')
This implements the upscore layer as used in [Fully Convolutional Networks](http://arxiv.org/abs/1411.4038). The upscore layer is initialized as a bilinear upsampling filter.
Input
4-D Tensor [batch, height, width, in_channels].
Output
4-D Tensor [batch, upsampled height, upsampled width, in_channels].
Arguments
- incoming: `Tensor`. Incoming 4-D Layer to upsample.
- num_classes: `int`. Number of output feature maps.
- shape: `list of int`. Dimension of the output map [batch_size, new height, new width]. For convenience, four values are allowed [batch_size, new height, new width, X], where X is ignored.
- kernel_size: `int` or `list of int`. Upsampling kernel size.
- strides: `int` or `list of int`. Strides of conv operation. Default: [1 2 2 1].
- trainable: `bool`. If True, weights will be trainable.
- restore: `bool`. If True, this layer's weights will be restored when loading a model.
- reuse: `bool`. If True and 'scope' is provided, this layer's variables will be reused (shared).
- scope: `str`. Define this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
- name: A name for this layer (optional). Default: 'Upscore'.
Attributes
- scope: `Scope`. This layer scope.
Links
[Fully Convolutional Networks](http://arxiv.org/abs/1411.4038)
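An illustrative FCN-style sketch (21 classes is an assumption borrowed from PASCAL VOC, not from this reference); shapes are illustrative:

```python
import tflearn
from tflearn.layers.conv import conv_2d, upscore_layer

# Coarse per-class score maps, then learned bilinear-initialized upsampling.
net = tflearn.input_data(shape=[None, 16, 16, 256])
net = conv_2d(net, 21, 1, activation='linear')            # per-class score maps
# kernel_size=4, strides=2 doubles the spatial resolution.
net = upscore_layer(net, num_classes=21, kernel_size=4, strides=2)
```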
Convolution 1D
tflearn.layers.conv.conv_1d (incoming, nb_filter, filter_size, strides=1, padding='same', activation='linear', bias=True, weights_init='uniform_scaling', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='Conv1D')
Input
3-D Tensor [batch, steps, in_channels].
Output
3-D Tensor [batch, new steps, nb_filters].
Arguments
- incoming: `Tensor`. Incoming 3-D Tensor.
- nb_filter: `int`. The number of convolutional filters.
- filter_size: `int` or `list of int`. Size of filters.
- strides: `int` or `list of int`. Strides of conv operation. Default: [1 1 1 1].
- padding: `str` from "same", "valid". Padding algorithm to use. Default: 'same'.
- activation: `str` (name) or `function` (returning a `Tensor`). Activation applied to this layer (see tflearn.activations). Default: 'linear'.
- bias: `bool`. If True, a bias is used.
- weights_init: `str` (name) or `Tensor`. Weights initialization (see tflearn.initializations). Default: 'uniform_scaling'.
- bias_init: `str` (name) or `Tensor`. Bias initialization (see tflearn.initializations). Default: 'zeros'.
- regularizer: `str` (name) or `Tensor`. Add a regularizer to this layer's weights (see tflearn.regularizers). Default: None.
- weight_decay: `float`. Regularizer decay parameter. Default: 0.001.
- trainable: `bool`. If True, weights will be trainable.
- restore: `bool`. If True, this layer's weights will be restored when loading a model.
- reuse: `bool`. If True and 'scope' is provided, this layer's variables will be reused (shared).
- scope: `str`. Define this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
- name: A name for this layer (optional). Default: 'Conv1D'.
Attributes
- scope: `Scope`. This layer scope.
- W: `Variable`. Variable representing filter weights.
- b: `Variable`. Variable representing biases.
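A minimal sequence-model sketch; 100 steps x 16 channels is illustrative:

```python
import tflearn
from tflearn.layers.conv import conv_1d, max_pool_1d

# 100 time steps with 16 channels each.
net = tflearn.input_data(shape=[None, 100, 16])
net = conv_1d(net, nb_filter=32, filter_size=5, activation='relu')  # -> [batch, 100, 32]
net = max_pool_1d(net, kernel_size=2)                               # -> [batch, 50, 32]
```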
Max Pooling 1D
tflearn.layers.conv.max_pool_1d (incoming, kernel_size, strides=None, padding='same', name='MaxPool1D')
Input
3-D Tensor [batch, steps, in_channels].
Output
3-D Tensor [batch, pooled steps, in_channels].
Arguments
- incoming: `Tensor`. Incoming 3-D Layer.
- kernel_size: `int` or `list of int`. Pooling kernel size.
- strides: `int` or `list of int`. Strides of conv operation. Default: same as kernel_size.
- padding: `str` from "same", "valid". Padding algorithm to use. Default: 'same'.
- name: A name for this layer (optional). Default: 'MaxPool1D'.
Attributes
- scope: `Scope`. This layer scope.
Average Pooling 1D
tflearn.layers.conv.avg_pool_1d (incoming, kernel_size, strides=None, padding='same', name='AvgPool1D')
Input
3-D Tensor [batch, steps, in_channels].
Output
3-D Tensor [batch, pooled steps, in_channels].
Arguments
- incoming: `Tensor`. Incoming 3-D Layer.
- kernel_size: `int` or `list of int`. Pooling kernel size.
- strides: `int` or `list of int`. Strides of conv operation. Default: same as kernel_size.
- padding: `str` from "same", "valid". Padding algorithm to use. Default: 'same'.
- name: A name for this layer (optional). Default: 'AvgPool1D'.
Attributes
- scope: `Scope`. This layer scope.
Convolution 3D
tflearn.layers.conv.conv_3d (incoming, nb_filter, filter_size, strides=1, padding='same', activation='linear', bias=True, weights_init='uniform_scaling', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='Conv3D')
Input
5-D Tensor [batch, in_depth, in_height, in_width, in_channels].
Output
5-D Tensor [batch, new depth, new height, new width, nb_filter].
Arguments
- incoming: `Tensor`. Incoming 5-D Tensor.
- nb_filter: `int`. The number of convolutional filters.
- filter_size: `int` or `list of int`. Size of filters.
- strides: `int` or `list of int`. Strides of conv operation. Default: [1 1 1 1 1]. Must have strides[0] = strides[4] = 1.
- padding: `str` from "same", "valid". Padding algorithm to use. Default: 'same'.
- activation: `str` (name) or `function` (returning a `Tensor`). Activation applied to this layer (see tflearn.activations). Default: 'linear'.
- bias: `bool`. If True, a bias is used.
- weights_init: `str` (name) or `Tensor`. Weights initialization (see tflearn.initializations). Default: 'uniform_scaling'.
- bias_init: `str` (name) or `Tensor`. Bias initialization (see tflearn.initializations). Default: 'zeros'.
- regularizer: `str` (name) or `Tensor`. Add a regularizer to this layer's weights (see tflearn.regularizers). Default: None.
- weight_decay: `float`. Regularizer decay parameter. Default: 0.001.
- trainable: `bool`. If True, weights will be trainable.
- restore: `bool`. If True, this layer's weights will be restored when loading a model.
- reuse: `bool`. If True and 'scope' is provided, this layer's variables will be reused (shared).
- scope: `str`. Define this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
- name: A name for this layer (optional). Default: 'Conv3D'.
Attributes
- scope: `Scope`. This layer scope.
- W: `Variable`. Variable representing filter weights.
- b: `Variable`. Variable representing biases.
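A volumetric sketch with illustrative shapes:

```python
import tflearn
from tflearn.layers.conv import conv_3d, max_pool_3d

# Volumetric input: depth x height x width x channels.
net = tflearn.input_data(shape=[None, 16, 32, 32, 1])
net = conv_3d(net, nb_filter=8, filter_size=3, activation='relu')  # -> [batch, 16, 32, 32, 8]
# Pool depth, height and width by 2; batch and channel dims stay at 1.
net = max_pool_3d(net, kernel_size=[1, 2, 2, 2, 1], strides=[1, 2, 2, 2, 1])
```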
Convolution 3D Transpose
tflearn.layers.conv.conv_3d_transpose (incoming, nb_filter, filter_size, output_shape, strides=1, padding='same', activation='linear', bias=True, weights_init='uniform_scaling', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='Conv3DTranspose')
This operation is sometimes called "deconvolution" after [Deconvolutional Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is actually the transpose (gradient) of `conv_3d` rather than an actual deconvolution.
Input
5-D Tensor [batch, depth, height, width, in_channels].
Output
5-D Tensor [batch, new depth, new height, new width, nb_filter].
Arguments
- incoming: `Tensor`. Incoming 5-D Tensor.
- nb_filter: `int`. The number of convolutional filters.
- filter_size: `int` or `list of int`. Size of filters.
- output_shape: `list of int`. Dimensions of the output tensor. Can optionally include the number of conv filters. [new depth, new height, new width, nb_filter] or [new depth, new height, new width].
- strides: `int` or `list of int`. Strides of conv operation. Default: [1 1 1 1 1].
- padding: `str` from "same", "valid". Padding algorithm to use. Default: 'same'.
- activation: `str` (name) or `function` (returning a `Tensor`). Activation applied to this layer (see tflearn.activations). Default: 'linear'.
- bias: `bool`. If True, a bias is used.
- weights_init: `str` (name) or `Tensor`. Weights initialization (see tflearn.initializations). Default: 'uniform_scaling'.
- bias_init: `str` (name) or `Tensor`. Bias initialization (see tflearn.initializations). Default: 'zeros'.
- regularizer: `str` (name) or `Tensor`. Add a regularizer to this layer's weights (see tflearn.regularizers). Default: None.
- weight_decay: `float`. Regularizer decay parameter. Default: 0.001.
- trainable: `bool`. If True, weights will be trainable.
- restore: `bool`. If True, this layer's weights will be restored when loading a model.
- reuse: `bool`. If True and 'scope' is provided, this layer's variables will be reused (shared).
- scope: `str`. Define this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
- name: A name for this layer (optional). Default: 'Conv3DTranspose'.
Attributes
- scope: `Scope`. This layer scope.
- W: `Variable`. Variable representing filter weights.
- b: `Variable`. Variable representing biases.
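As with the 2-D case, a hedged sketch pairing `conv_3d` with `conv_3d_transpose`; shapes and filter counts are illustrative:

```python
import tflearn
from tflearn.layers.conv import conv_3d, conv_3d_transpose

net = tflearn.input_data(shape=[None, 8, 16, 16, 1])
net = conv_3d(net, 8, 3, strides=2, activation='relu')       # -> [batch, 4, 8, 8, 8]
# output_shape gives the target [new depth, new height, new width].
net = conv_3d_transpose(net, 1, 3, output_shape=[8, 16, 16], strides=2)
```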
Max Pooling 3D
tflearn.layers.conv.max_pool_3d (incoming, kernel_size, strides=1, padding='same', name='MaxPool3D')
Input
5-D Tensor [batch, depth, rows, cols, channels].
Output
5-D Tensor [batch, pooled depth, pooled rows, pooled cols, in_channels].
Arguments
- incoming: `Tensor`. Incoming 5-D Layer.
- kernel_size: `int` or `list of int`. Pooling kernel size. Must have kernel_size[0] = kernel_size[4] = 1.
- strides: `int` or `list of int`. Strides of conv operation. Must have strides[0] = strides[4] = 1. Default: [1 1 1 1 1].
- padding: `str` from "same", "valid". Padding algorithm to use. Default: 'same'.
- name: A name for this layer (optional). Default: 'MaxPool3D'.
Attributes
- scope: `Scope`. This layer scope.
Average Pooling 3D
tflearn.layers.conv.avg_pool_3d (incoming, kernel_size, strides=1, padding='same', name='AvgPool3D')
Input
5-D Tensor [batch, depth, rows, cols, channels].
Output
5-D Tensor [batch, pooled depth, pooled rows, pooled cols, in_channels].
Arguments
- incoming: `Tensor`. Incoming 5-D Layer.
- kernel_size: `int` or `list of int`. Pooling kernel size. Must have kernel_size[0] = kernel_size[4] = 1.
- strides: `int` or `list of int`. Strides of conv operation. Must have strides[0] = strides[4] = 1. Default: [1 1 1 1 1].
- padding: `str` from "same", "valid". Padding algorithm to use. Default: 'same'.
- name: A name for this layer (optional). Default: 'AvgPool3D'.
Attributes
- scope: `Scope`. This layer scope.
Global Max Pooling
tflearn.layers.conv.global_max_pool (incoming, name='GlobalMaxPool')
Input
4-D Tensor [batch, height, width, in_channels].
Output
2-D Tensor [batch, pooled dim]
Arguments
- incoming: `Tensor`. Incoming 4-D Tensor.
- name: A name for this layer (optional). Default: 'GlobalMaxPool'.
Global Average Pooling
tflearn.layers.conv.global_avg_pool (incoming, name='GlobalAvgPool')
Input
4-D Tensor [batch, height, width, in_channels].
Output
2-D Tensor [batch, pooled dim]
Arguments
- incoming: `Tensor`. Incoming 4-D Tensor.
- name: A name for this layer (optional). Default: 'GlobalAvgPool'.
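A sketch of the common use of global pooling to replace a flatten + dense stack before the classifier (`global_max_pool` is used the same way); the `fully_connected` import and all shapes are illustrative assumptions:

```python
import tflearn
from tflearn.layers.conv import conv_2d, global_avg_pool
from tflearn.layers.core import fully_connected

net = tflearn.input_data(shape=[None, 32, 32, 3])
net = conv_2d(net, 64, 3, activation='relu')   # -> [batch, 32, 32, 64]
# Average over the spatial dimensions, leaving one value per channel.
net = global_avg_pool(net)                     # -> [batch, 64]
net = fully_connected(net, 10, activation='softmax')
```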
Residual Block
tflearn.layers.conv.residual_block (incoming, nb_blocks, out_channels, downsample=False, downsample_strides=2, activation='relu', batch_norm=True, bias=True, weights_init='variance_scaling', bias_init='zeros', regularizer='L2', weight_decay=0.0001, trainable=True, restore=True, reuse=False, scope=None, name='ResidualBlock')
A residual block as described in MSRA's Deep Residual Network paper. Full pre-activation architecture is used here.
Input
4-D Tensor [batch, height, width, in_channels].
Output
4-D Tensor [batch, new height, new width, nb_filter].
Arguments
- incoming: `Tensor`. Incoming 4-D Layer.
- nb_blocks: `int`. Number of layer blocks.
- out_channels: `int`. The number of convolutional filters of the convolution layers.
- downsample: `bool`. If True, apply downsampling using 'downsample_strides' for strides.
- downsample_strides: `int`. The strides to use when downsampling.
- activation: `str` (name) or `function` (returning a `Tensor`). Activation applied to this layer (see tflearn.activations). Default: 'relu'.
- batch_norm: `bool`. If True, apply batch normalization.
- bias: `bool`. If True, a bias is used.
- weights_init: `str` (name) or `Tensor`. Weights initialization (see tflearn.initializations). Default: 'variance_scaling'.
- bias_init: `str` (name) or `tf.Tensor`. Bias initialization (see tflearn.initializations). Default: 'zeros'.
- regularizer: `str` (name) or `Tensor`. Add a regularizer to this layer's weights (see tflearn.regularizers). Default: 'L2'.
- weight_decay: `float`. Regularizer decay parameter. Default: 0.0001.
- trainable: `bool`. If True, weights will be trainable.
- restore: `bool`. If True, this layer's weights will be restored when loading a model.
- reuse: `bool`. If True and 'scope' is provided, this layer's variables will be reused (shared).
- scope: `str`. Define this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
- name: A name for this layer (optional). Default: 'ResidualBlock'.
References
- Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 2015.
- Identity Mappings in Deep Residual Networks. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 2015.
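An illustrative ResNet-style stack; block counts and channel widths are assumptions, not values from the papers above:

```python
import tflearn
from tflearn.layers.conv import conv_2d, residual_block, global_avg_pool
from tflearn.layers.core import fully_connected

net = tflearn.input_data(shape=[None, 32, 32, 3])
net = conv_2d(net, 16, 3, activation='relu')
net = residual_block(net, nb_blocks=2, out_channels=16)                   # keeps 32x32
net = residual_block(net, nb_blocks=2, out_channels=32, downsample=True)  # -> 16x16
net = global_avg_pool(net)
net = fully_connected(net, 10, activation='softmax')
```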
Residual Bottleneck
tflearn.layers.conv.residual_bottleneck (incoming, nb_blocks, bottleneck_size, out_channels, downsample=False, downsample_strides=2, activation='relu', batch_norm=True, bias=True, weights_init='variance_scaling', bias_init='zeros', regularizer='L2', weight_decay=0.0001, trainable=True, restore=True, reuse=False, scope=None, name='ResidualBottleneck')
A residual bottleneck block as described in MSRA's Deep Residual Network paper. Full pre-activation architecture is used here.
Input
4-D Tensor [batch, height, width, in_channels].
Output
4-D Tensor [batch, new height, new width, nb_filter].
Arguments
- incoming: `Tensor`. Incoming 4-D Layer.
- nb_blocks: `int`. Number of layer blocks.
- bottleneck_size: `int`. The number of convolutional filters of the bottleneck convolutional layer.
- out_channels: `int`. The number of convolutional filters of the layers surrounding the bottleneck layer.
- downsample: `bool`. If True, apply downsampling using 'downsample_strides' for strides.
- downsample_strides: `int`. The strides to use when downsampling.
- activation: `str` (name) or `function` (returning a `Tensor`). Activation applied to this layer (see tflearn.activations). Default: 'relu'.
- batch_norm: `bool`. If True, apply batch normalization.
- bias: `bool`. If True, a bias is used.
- weights_init: `str` (name) or `Tensor`. Weights initialization (see tflearn.initializations). Default: 'variance_scaling'.
- bias_init: `str` (name) or `tf.Tensor`. Bias initialization (see tflearn.initializations). Default: 'zeros'.
- regularizer: `str` (name) or `Tensor`. Add a regularizer to this layer's weights (see tflearn.regularizers). Default: 'L2'.
- weight_decay: `float`. Regularizer decay parameter. Default: 0.0001.
- trainable: `bool`. If True, weights will be trainable.
- restore: `bool`. If True, this layer's weights will be restored when loading a model.
- reuse: `bool`. If True and 'scope' is provided, this layer's variables will be reused (shared).
- scope: `str`. Define this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
- name: A name for this layer (optional). Default: 'ResidualBottleneck'.
References
- Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 2015.
- Identity Mappings in Deep Residual Networks. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 2015.
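A hedged sketch, assuming the standard 1x1 reduce / 3x3 / 1x1 expand bottleneck structure from the ResNet paper; all values are illustrative:

```python
import tflearn
from tflearn.layers.conv import conv_2d, residual_bottleneck

net = tflearn.input_data(shape=[None, 32, 32, 3])
net = conv_2d(net, 64, 3, activation='relu')
# Each block narrows to bottleneck_size channels and expands back to out_channels.
net = residual_bottleneck(net, nb_blocks=3, bottleneck_size=16, out_channels=64)
```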
ResNeXt Block
tflearn.layers.conv.resnext_block (incoming, nb_blocks, out_channels, cardinality, downsample=False, downsample_strides=2, activation='relu', batch_norm=True, weights_init='variance_scaling', regularizer='L2', weight_decay=0.0001, bias=True, bias_init='zeros', trainable=True, restore=True, reuse=False, scope=None, name='ResNeXtBlock')
A ResNeXt block as described in ResNeXt paper (Figure 2, c).
Input
4-D Tensor [batch, height, width, in_channels].
Output
4-D Tensor [batch, new height, new width, out_channels].
Arguments
- incoming: `Tensor`. Incoming 4-D Layer.
- nb_blocks: `int`. Number of layer blocks.
- out_channels: `int`. The number of convolutional filters of the layers surrounding the bottleneck layer.
- cardinality: `int`. Number of aggregated residual transformations.
- downsample: `bool`. If True, apply downsampling using 'downsample_strides' for strides.
- downsample_strides: `int`. The strides to use when downsampling.
- activation: `str` (name) or `function` (returning a `Tensor`). Activation applied to this layer (see tflearn.activations). Default: 'relu'.
- batch_norm: `bool`. If True, apply batch normalization.
- bias: `bool`. If True, a bias is used.
- weights_init: `str` (name) or `Tensor`. Weights initialization (see tflearn.initializations). Default: 'variance_scaling'.
- bias_init: `str` (name) or `tf.Tensor`. Bias initialization (see tflearn.initializations). Default: 'zeros'.
- regularizer: `str` (name) or `Tensor`. Add a regularizer to this layer's weights (see tflearn.regularizers). Default: 'L2'.
- weight_decay: `float`. Regularizer decay parameter. Default: 0.0001.
- trainable: `bool`. If True, weights will be trainable.
- restore: `bool`. If True, this layer's weights will be restored when loading a model.
- reuse: `bool`. If True and 'scope' is provided, this layer's variables will be reused (shared).
- scope: `str`. Define this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
- name: A name for this layer (optional). Default: 'ResNeXtBlock'.
References
- Aggregated Residual Transformations for Deep Neural Networks. Saining Xie, Ross Girshick, Piotr Dollar, Zhuowen Tu, Kaiming He. 2016.
Links
https://arxiv.org/pdf/1611.05431.pdf
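A minimal sketch; cardinality=32 follows the paper's reference setting, while the other values are illustrative assumptions:

```python
import tflearn
from tflearn.layers.conv import conv_2d, resnext_block

net = tflearn.input_data(shape=[None, 32, 32, 3])
net = conv_2d(net, 64, 3, activation='relu')
# Each block aggregates 32 parallel residual transformation paths.
net = resnext_block(net, nb_blocks=3, out_channels=64, cardinality=32)
```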
Highway Convolution 2D
tflearn.layers.conv.highway_conv_2d (incoming, nb_filter, filter_size, strides=1, padding='same', activation='linear', weights_init='uniform_scaling', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='HighwayConv2D')
Input
4-D Tensor [batch, height, width, in_channels].
Output
4-D Tensor [batch, new height, new width, nb_filter].
Arguments
- incoming: `Tensor`. Incoming 4-D Tensor.
- nb_filter: `int`. The number of convolutional filters.
- filter_size: `int` or `list of int`. Size of filters.
- strides: `int` or `list of int`. Strides of conv operation. Default: [1 1 1 1].
- padding: `str` from "same", "valid". Padding algorithm to use. Default: 'same'.
- activation: `str` (name) or `function` (returning a `Tensor`). Activation applied to this layer (see tflearn.activations). Default: 'linear'.
- weights_init: `str` (name) or `Tensor`. Weights initialization (see tflearn.initializations). Default: 'uniform_scaling'.
- bias_init: `str` (name) or `Tensor`. Bias initialization (see tflearn.initializations). Default: 'zeros'.
- regularizer: `str` (name) or `Tensor`. Add a regularizer to this layer's weights (see tflearn.regularizers). Default: None.
- weight_decay: `float`. Regularizer decay parameter. Default: 0.001.
- trainable: `bool`. If True, weights will be trainable.
- restore: `bool`. If True, this layer's weights will be restored when loading a model.
- reuse: `bool`. If True and 'scope' is provided, this layer's variables will be reused (shared).
- scope: `str`. Define this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
- name: A name for this layer (optional). Default: 'HighwayConv2D'.
Attributes
- scope: `Scope`. This layer scope.
- W: `Variable`. Variable representing filter weights.
- W_T: `Variable`. Variable representing gate weights.
- b: `Variable`. Variable representing biases.
- b_T: `Variable`. Variable representing gate biases.
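A sketch modeled on the common TFLearn highway-MNIST pattern; the 'elu' activation and shapes are illustrative:

```python
import tflearn
from tflearn.layers.conv import highway_conv_2d, max_pool_2d

net = tflearn.input_data(shape=[None, 28, 28, 1])
# Convolutional highway layer: a transform gate (W_T, b_T) mixes the
# convolution output with the layer input.
net = highway_conv_2d(net, nb_filter=16, filter_size=3, activation='elu')
net = max_pool_2d(net, 2)
```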
Highway Convolution 1D
tflearn.layers.conv.highway_conv_1d (incoming, nb_filter, filter_size, strides=1, padding='same', activation='linear', weights_init='uniform_scaling', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='HighwayConv1D')
Input
3-D Tensor [batch, steps, in_channels].
Output
3-D Tensor [batch, new steps, nb_filters].
Arguments
- incoming: `Tensor`. Incoming 3-D Tensor.
- nb_filter: `int`. The number of convolutional filters.
- filter_size: `int` or `list of int`. Size of filters.
- strides: `int` or `list of int`. Strides of conv operation. Default: [1 1 1 1].
- padding: `str` from "same", "valid". Padding algorithm to use. Default: 'same'.
- activation: `str` (name) or `function` (returning a `Tensor`). Activation applied to this layer (see tflearn.activations). Default: 'linear'.
- weights_init: `str` (name) or `Tensor`. Weights initialization (see tflearn.initializations). Default: 'uniform_scaling'.
- bias_init: `str` (name) or `Tensor`. Bias initialization (see tflearn.initializations). Default: 'zeros'.
- regularizer: `str` (name) or `Tensor`. Add a regularizer to this layer's weights (see tflearn.regularizers). Default: None.
- weight_decay: `float`. Regularizer decay parameter. Default: 0.001.
- trainable: `bool`. If True, weights will be trainable.
- restore: `bool`. If True, this layer's weights will be restored when loading a model.
- reuse: `bool`. If True and 'scope' is provided, this layer's variables will be reused (shared).
- scope: `str`. Define this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
- name: A name for this layer (optional). Default: 'HighwayConv1D'.
Attributes
- scope: `Scope`. This layer scope.
- W: `Variable`. Variable representing filter weights.
- W_T: `Variable`. Variable representing gate weights.
- b: `Variable`. Variable representing biases.
- b_T: `Variable`. Variable representing gate biases.