Convolution 2D

tflearn.layers.conv.conv_2d (incoming, nb_filter, filter_size, strides=1, padding='same', activation='linear', bias=True, weights_init='uniform_scaling', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='Conv2D')

Input

4-D Tensor [batch, height, width, in_channels].

Output

4-D Tensor [batch, new height, new width, nb_filter].

Arguments

  • incoming: Tensor. Incoming 4-D Tensor.
  • nb_filter: int. The number of convolutional filters.
  • filter_size: int or list of int. Size of filters.
  • strides: int or list of int. Strides of conv operation. Default: [1 1 1 1].
  • padding: str from "same", "valid". Padding algo to use. Default: 'same'.
  • activation: str (name) or function (returning a Tensor) or None. Activation applied to this layer (see tflearn.activations). Default: 'linear'.
  • bias: bool. If True, a bias is used.
  • weights_init: str (name) or Tensor. Weights initialization. (see tflearn.initializations) Default: 'uniform_scaling'.
  • bias_init: str (name) or Tensor. Bias initialization. (see tflearn.initializations) Default: 'zeros'.
  • regularizer: str (name) or Tensor. Add a regularizer to this layer weights (see tflearn.regularizers). Default: None.
  • weight_decay: float. Regularizer decay parameter. Default: 0.001.
  • trainable: bool. If True, weights will be trainable.
  • restore: bool. If True, this layer weights will be restored when loading a model.
  • reuse: bool. If True and 'scope' is provided, this layer variables will be reused (shared).
  • scope: str. Define this layer scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
  • name: A name for this layer (optional). Default: 'Conv2D'.

Attributes

  • scope: Scope. This layer scope.
  • W: Variable. Variable representing filter weights.
  • b: Variable. Variable representing biases.
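
For reference, a minimal usage sketch (assuming TFLearn on its TensorFlow backend; the input shape and filter count are illustrative, not part of this API):

    import tflearn

    # 28x28 grayscale images (e.g. an MNIST-like input)
    net = tflearn.input_data(shape=[None, 28, 28, 1])
    # 32 filters of size 3x3; stride 1 with 'same' padding keeps height and width
    net = tflearn.conv_2d(net, nb_filter=32, filter_size=3, activation='relu')
    print(net.get_shape().as_list())  # [None, 28, 28, 32]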

Convolution 2D Transpose

tflearn.layers.conv.conv_2d_transpose (incoming, nb_filter, filter_size, output_shape, strides=1, padding='same', activation='linear', bias=True, weights_init='uniform_scaling', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='Conv2DTranspose')

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is actually the transpose (gradient) of conv_2d rather than an actual deconvolution.

Input

4-D Tensor [batch, height, width, in_channels].

Output

4-D Tensor [batch, new height, new width, nb_filter].

Arguments

  • incoming: Tensor. Incoming 4-D Tensor.
  • nb_filter: int. The number of convolutional filters.
  • filter_size: int or list of int. Size of filters.
  • output_shape: list of int. Dimensions of the output tensor. Can optionally include the number of conv filters. [new height, new width, nb_filter] or [new height, new width].
  • strides: int or list of int. Strides of conv operation. Default: [1 1 1 1].
  • padding: str from "same", "valid". Padding algo to use. Default: 'same'.
  • activation: str (name) or function (returning a Tensor). Activation applied to this layer (see tflearn.activations). Default: 'linear'.
  • bias: bool. If True, a bias is used.
  • weights_init: str (name) or Tensor. Weights initialization. (see tflearn.initializations) Default: 'uniform_scaling'.
  • bias_init: str (name) or Tensor. Bias initialization. (see tflearn.initializations) Default: 'zeros'.
  • regularizer: str (name) or Tensor. Add a regularizer to this layer weights (see tflearn.regularizers). Default: None.
  • weight_decay: float. Regularizer decay parameter. Default: 0.001.
  • trainable: bool. If True, weights will be trainable.
  • restore: bool. If True, this layer weights will be restored when loading a model.
  • reuse: bool. If True and 'scope' is provided, this layer variables will be reused (shared).
  • scope: str. Define this layer scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
  • name: A name for this layer (optional). Default: 'Conv2DTranspose'.

Attributes

  • scope: Scope. This layer scope.
  • W: Variable. Variable representing filter weights.
  • b: Variable. Variable representing biases.
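
A short sketch of upsampling an encoded feature map with this layer (shapes chosen purely for illustration):

    import tflearn

    # A 14x14 feature map with 32 channels, e.g. produced by an encoder
    net = tflearn.input_data(shape=[None, 14, 14, 32])
    # Upsample back to 28x28 with 16 filters; output_shape may omit nb_filter
    net = tflearn.conv_2d_transpose(net, nb_filter=16, filter_size=3,
                                    output_shape=[28, 28, 16], strides=2)
    print(net.get_shape().as_list())  # [None, 28, 28, 16]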

Max Pooling 2D

tflearn.layers.conv.max_pool_2d (incoming, kernel_size, strides=None, padding='same', name='MaxPool2D')

Input

4-D Tensor [batch, height, width, in_channels].

Output

4-D Tensor [batch, pooled height, pooled width, in_channels].

Arguments

  • incoming: Tensor. Incoming 4-D Layer.
  • kernel_size: int or list of int. Pooling kernel size.
  • strides: int or list of int. Strides of pooling operation. Default: same as kernel_size.
  • padding: str from "same", "valid". Padding algo to use. Default: 'same'.
  • name: A name for this layer (optional). Default: 'MaxPool2D'.

Attributes

  • scope: Scope. This layer scope.
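
A minimal sketch; since strides defaults to the kernel size, a 2x2 kernel halves each spatial dimension (shapes are illustrative):

    import tflearn

    net = tflearn.input_data(shape=[None, 28, 28, 32])
    # 2x2 pooling window; strides default to 2
    net = tflearn.max_pool_2d(net, kernel_size=2)
    print(net.get_shape().as_list())  # [None, 14, 14, 32]

avg_pool_2d (next) accepts the same arguments.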

Average Pooling 2D

tflearn.layers.conv.avg_pool_2d (incoming, kernel_size, strides=None, padding='same', name='AvgPool2D')

Input

4-D Tensor [batch, height, width, in_channels].

Output

4-D Tensor [batch, pooled height, pooled width, in_channels].

Arguments

  • incoming: Tensor. Incoming 4-D Layer.
  • kernel_size: int or list of int. Pooling kernel size.
  • strides: int or list of int. Strides of pooling operation. Default: same as kernel_size.
  • padding: str from "same", "valid". Padding algo to use. Default: 'same'.
  • name: A name for this layer (optional). Default: 'AvgPool2D'.

Attributes

  • scope: Scope. This layer scope.

UpSample 2D

tflearn.layers.conv.upsample_2d (incoming, kernel_size, name='UpSample2D')

Input

4-D Tensor [batch, height, width, in_channels].

Output

4-D Tensor [batch, upsampled height, upsampled width, in_channels].

Arguments

  • incoming: Tensor. Incoming 4-D Layer to upsample.
  • kernel_size: int or list of int. Upsampling kernel size.
  • name: A name for this layer (optional). Default: 'UpSample2D'.

Attributes

  • scope: Scope. This layer scope.
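
A quick sketch of doubling the spatial resolution of a feature map (the factor and shapes are illustrative):

    import tflearn
    from tflearn.layers.conv import upsample_2d

    net = tflearn.input_data(shape=[None, 14, 14, 32])
    # Upsample height and width by a factor of 2; channels are unchanged
    net = upsample_2d(net, kernel_size=2)
    print(net.get_shape().as_list())  # [None, 28, 28, 32]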

Upscore

tflearn.layers.conv.upscore_layer (incoming, num_classes, shape=None, kernel_size=4, strides=2, trainable=True, restore=True, reuse=False, scope=None, name='Upscore')

This implements the upscore layer as used in [Fully Convolutional Networks](http://arxiv.org/abs/1411.4038). The upscore layer is initialized as a bilinear upsampling filter.

Input

4-D Tensor [batch, height, width, in_channels].

Output

4-D Tensor [batch, new height, new width, num_classes].

Arguments

  • incoming: Tensor. Incoming 4-D Layer to upsample.
  • num_classes: int. Number of output feature maps.
  • shape: list of int. Dimension of the output map [batch_size, new height, new width]. For convenience, four values are allowed: [batch_size, new height, new width, X], where X is ignored.
  • kernel_size: int or list of int. Upsampling kernel size.
  • strides: int or list of int. Strides of conv operation. Default: [1 2 2 1].
  • trainable: bool. If True, weights will be trainable.
  • restore: bool. If True, this layer weights will be restored when loading a model.
  • reuse: bool. If True and 'scope' is provided, this layer variables will be reused (shared).
  • scope: str. Define this layer scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
  • name: A name for this layer (optional). Default: 'Upscore'.

Attributes

  • scope: Scope. This layer scope.
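
A hedged sketch of calling the documented signature; the fixed batch size, class count and map sizes are chosen purely for illustration:

    import tflearn
    from tflearn.layers.conv import upscore_layer

    # A fixed batch of 8 coarse 16x16 score maps for 21 classes
    net = tflearn.input_data(shape=[8, 16, 16, 21])
    # Bilinear-initialized upscoring to 32x32; shape is [batch_size, new height, new width]
    net = upscore_layer(net, num_classes=21, shape=[8, 32, 32],
                        kernel_size=4, strides=2)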

Links

[Fully Convolutional Networks](http://arxiv.org/abs/1411.4038)


Convolution 1D

tflearn.layers.conv.conv_1d (incoming, nb_filter, filter_size, strides=1, padding='same', activation='linear', bias=True, weights_init='uniform_scaling', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='Conv1D')

Input

3-D Tensor [batch, steps, in_channels].

Output

3-D Tensor [batch, new steps, nb_filters].

Arguments

  • incoming: Tensor. Incoming 3-D Tensor.
  • nb_filter: int. The number of convolutional filters.
  • filter_size: int or list of int. Size of filters.
  • strides: int or list of int. Strides of conv operation. Default: [1 1 1 1].
  • padding: str from "same", "valid". Padding algo to use. Default: 'same'.
  • activation: str (name) or function (returning a Tensor). Activation applied to this layer (see tflearn.activations). Default: 'linear'.
  • bias: bool. If True, a bias is used.
  • weights_init: str (name) or Tensor. Weights initialization. (see tflearn.initializations) Default: 'uniform_scaling'.
  • bias_init: str (name) or Tensor. Bias initialization. (see tflearn.initializations) Default: 'zeros'.
  • regularizer: str (name) or Tensor. Add a regularizer to this layer weights (see tflearn.regularizers). Default: None.
  • weight_decay: float. Regularizer decay parameter. Default: 0.001.
  • trainable: bool. If True, weights will be trainable.
  • restore: bool. If True, this layer weights will be restored when loading a model.
  • reuse: bool. If True and 'scope' is provided, this layer variables will be reused (shared).
  • scope: str. Define this layer scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
  • name: A name for this layer (optional). Default: 'Conv1D'.

Attributes

  • scope: Scope. This layer scope.
  • W: Variable. Variable representing filter weights.
  • b: Variable. Variable representing biases.
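
A minimal sequence-model sketch combining conv_1d with max_pool_1d (documented below); the sequence length and channel counts are illustrative:

    import tflearn

    # 100-step sequences with 64 channels (e.g. embedded tokens)
    net = tflearn.input_data(shape=[None, 100, 64])
    net = tflearn.conv_1d(net, nb_filter=128, filter_size=3, activation='relu')
    net = tflearn.max_pool_1d(net, kernel_size=2)
    print(net.get_shape().as_list())  # [None, 50, 128]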

Max Pooling 1D

tflearn.layers.conv.max_pool_1d (incoming, kernel_size, strides=None, padding='same', name='MaxPool1D')

Input

3-D Tensor [batch, steps, in_channels].

Output

3-D Tensor [batch, pooled steps, in_channels].

Arguments

  • incoming: Tensor. Incoming 3-D Layer.
  • kernel_size: int or list of int. Pooling kernel size.
  • strides: int or list of int. Strides of pooling operation. Default: same as kernel_size.
  • padding: str from "same", "valid". Padding algo to use. Default: 'same'.
  • name: A name for this layer (optional). Default: 'MaxPool1D'.

Attributes

  • scope: Scope. This layer scope.

Average Pooling 1D

tflearn.layers.conv.avg_pool_1d (incoming, kernel_size, strides=None, padding='same', name='AvgPool1D')

Input

3-D Tensor [batch, steps, in_channels].

Output

3-D Tensor [batch, pooled steps, in_channels].

Arguments

  • incoming: Tensor. Incoming 3-D Layer.
  • kernel_size: int or list of int. Pooling kernel size.
  • strides: int or list of int. Strides of pooling operation. Default: same as kernel_size.
  • padding: str from "same", "valid". Padding algo to use. Default: 'same'.
  • name: A name for this layer (optional). Default: 'AvgPool1D'.

Attributes

  • scope: Scope. This layer scope.

Convolution 3D

tflearn.layers.conv.conv_3d (incoming, nb_filter, filter_size, strides=1, padding='same', activation='linear', bias=True, weights_init='uniform_scaling', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='Conv3D')

Input

5-D Tensor [batch, in_depth, in_height, in_width, in_channels].

Output

5-D Tensor [batch, new depth, new height, new width, nb_filter].

Arguments

  • incoming: Tensor. Incoming 5-D Tensor.
  • nb_filter: int. The number of convolutional filters.
  • filter_size: int or list of int. Size of filters.
  • strides: int or list of int. Strides of conv operation. Default: [1 1 1 1 1]. Must have strides[0] = strides[4] = 1.
  • padding: str from "same", "valid". Padding algo to use. Default: 'same'.
  • activation: str (name) or function (returning a Tensor). Activation applied to this layer (see tflearn.activations). Default: 'linear'.
  • bias: bool. If True, a bias is used.
  • weights_init: str (name) or Tensor. Weights initialization. (see tflearn.initializations) Default: 'uniform_scaling'.
  • bias_init: str (name) or Tensor. Bias initialization. (see tflearn.initializations) Default: 'zeros'.
  • regularizer: str (name) or Tensor. Add a regularizer to this layer weights (see tflearn.regularizers). Default: None.
  • weight_decay: float. Regularizer decay parameter. Default: 0.001.
  • trainable: bool. If True, weights will be trainable.
  • restore: bool. If True, this layer weights will be restored when loading a model.
  • reuse: bool. If True and 'scope' is provided, this layer variables will be reused (shared).
  • scope: str. Define this layer scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
  • name: A name for this layer (optional). Default: 'Conv3D'.

Attributes

  • scope: Scope. This layer scope.
  • W: Variable. Variable representing filter weights.
  • b: Variable. Variable representing biases.
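
A short volumetric sketch (frame count, spatial size and filter number are illustrative):

    import tflearn

    # 16-frame volumes of 32x32 single-channel data
    net = tflearn.input_data(shape=[None, 16, 32, 32, 1])
    net = tflearn.conv_3d(net, nb_filter=8, filter_size=3, activation='relu')
    print(net.get_shape().as_list())  # [None, 16, 32, 32, 8]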

Convolution 3D Transpose

tflearn.layers.conv.conv_3d_transpose (incoming, nb_filter, filter_size, output_shape, strides=1, padding='same', activation='linear', bias=True, weights_init='uniform_scaling', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='Conv3DTranspose')

This operation is sometimes called "deconvolution" after [Deconvolutional Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is actually the transpose (gradient) of conv_3d rather than an actual deconvolution.

Input

5-D Tensor [batch, depth, height, width, in_channels].

Output

5-D Tensor [batch, new depth, new height, new width, nb_filter].

Arguments

  • incoming: Tensor. Incoming 5-D Tensor.
  • nb_filter: int. The number of convolutional filters.
  • filter_size: int or list of int. Size of filters.
  • output_shape: list of int. Dimensions of the output tensor. Can optionally include the number of conv filters. [new depth, new height, new width, nb_filter] or [new depth, new height, new width].
  • strides: int or list of int. Strides of conv operation. Default: [1 1 1 1 1].
  • padding: str from "same", "valid". Padding algo to use. Default: 'same'.
  • activation: str (name) or function (returning a Tensor). Activation applied to this layer (see tflearn.activations). Default: 'linear'.
  • bias: bool. If True, a bias is used.
  • weights_init: str (name) or Tensor. Weights initialization. (see tflearn.initializations) Default: 'uniform_scaling'.
  • bias_init: str (name) or Tensor. Bias initialization. (see tflearn.initializations) Default: 'zeros'.
  • regularizer: str (name) or Tensor. Add a regularizer to this layer weights (see tflearn.regularizers). Default: None.
  • weight_decay: float. Regularizer decay parameter. Default: 0.001.
  • trainable: bool. If True, weights will be trainable.
  • restore: bool. If True, this layer weights will be restored when loading a model.
  • reuse: bool. If True and 'scope' is provided, this layer variables will be reused (shared).
  • scope: str. Define this layer scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
  • name: A name for this layer (optional). Default: 'Conv3DTranspose'.

Attributes

  • scope: Scope. This layer scope.
  • W: Variable. Variable representing filter weights.
  • b: Variable. Variable representing biases.
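
A sketch mirroring the 2-D transpose example above, with illustrative shapes:

    import tflearn
    from tflearn.layers.conv import conv_3d_transpose

    net = tflearn.input_data(shape=[None, 8, 16, 16, 8])
    # Double depth, height and width; output_shape may omit nb_filter
    net = conv_3d_transpose(net, nb_filter=4, filter_size=3,
                            output_shape=[16, 32, 32, 4], strides=2)
    print(net.get_shape().as_list())  # [None, 16, 32, 32, 4]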

Max Pooling 3D

tflearn.layers.conv.max_pool_3d (incoming, kernel_size, strides=1, padding='same', name='MaxPool3D')

Input

5-D Tensor [batch, depth, rows, cols, channels].

Output

5-D Tensor [batch, pooled depth, pooled rows, pooled cols, in_channels].

Arguments

  • incoming: Tensor. Incoming 5-D Layer.
  • kernel_size: int or list of int. Pooling kernel size. Must have kernel_size[0] = kernel_size[4] = 1.
  • strides: int or list of int. Strides of pooling operation. Must have strides[0] = strides[4] = 1. Default: [1 1 1 1 1].
  • padding: str from "same", "valid". Padding algo to use. Default: 'same'.
  • name: A name for this layer (optional). Default: 'MaxPool3D'.

Attributes

  • scope: Scope. This layer scope.

Average Pooling 3D

tflearn.layers.conv.avg_pool_3d (incoming, kernel_size, strides=None, padding='same', name='AvgPool3D')

Input

5-D Tensor [batch, depth, rows, cols, channels].

Output

5-D Tensor [batch, pooled depth, pooled rows, pooled cols, in_channels].

Arguments

  • incoming: Tensor. Incoming 5-D Layer.
  • kernel_size: int or list of int. Pooling kernel size. Must have kernel_size[0] = kernel_size[4] = 1.
  • strides: int or list of int. Strides of pooling operation. Must have strides[0] = strides[4] = 1. Default: [1 1 1 1 1].
  • padding: str from "same", "valid". Padding algo to use. Default: 'same'.
  • name: A name for this layer (optional). Default: 'AvgPool3D'.

Attributes

  • scope: Scope. This layer scope.

Global Max Pooling

tflearn.layers.conv.global_max_pool (incoming, name='GlobalMaxPool')

Input

4-D Tensor [batch, height, width, in_channels].

Output

2-D Tensor [batch, pooled dim]

Arguments

  • incoming: Tensor. Incoming 4-D Tensor.
  • name: A name for this layer (optional). Default: 'GlobalMaxPool'.

Global Average Pooling

tflearn.layers.conv.global_avg_pool (incoming, name='GlobalAvgPool')

Input

4-D Tensor [batch, height, width, in_channels].

Output

2-D Tensor [batch, pooled dim]

Arguments

  • incoming: Tensor. Incoming 4-D Tensor.
  • name: A name for this layer (optional). Default: 'GlobalAvgPool'.
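
A brief sketch of collapsing feature maps before a classifier; global_max_pool is called the same way (the feature-map size is illustrative):

    import tflearn

    # Final convolutional feature maps, e.g. 7x7 with 512 channels
    net = tflearn.input_data(shape=[None, 7, 7, 512])
    # One value per channel: average over the 7x7 spatial grid
    net = tflearn.global_avg_pool(net)
    print(net.get_shape().as_list())  # [None, 512]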

Residual Block

tflearn.layers.conv.residual_block (incoming, nb_blocks, out_channels, downsample=False, downsample_strides=2, activation='relu', batch_norm=True, bias=True, weights_init='variance_scaling', bias_init='zeros', regularizer='L2', weight_decay=0.0001, trainable=True, restore=True, reuse=False, scope=None, name='ResidualBlock')

A residual block as described in MSRA's Deep Residual Network paper. Full pre-activation architecture is used here.

Input

4-D Tensor [batch, height, width, in_channels].

Output

4-D Tensor [batch, new height, new width, nb_filter].

Arguments

  • incoming: Tensor. Incoming 4-D Layer.
  • nb_blocks: int. Number of layer blocks.
  • out_channels: int. The number of convolutional filters of the convolution layers.
  • downsample: bool. If True, apply downsampling using 'downsample_strides' for strides.
  • downsample_strides: int. The strides to use when downsampling.
  • activation: str (name) or function (returning a Tensor). Activation applied to this layer (see tflearn.activations). Default: 'relu'.
  • batch_norm: bool. If True, apply batch normalization.
  • bias: bool. If True, a bias is used.
  • weights_init: str (name) or Tensor. Weights initialization. (see tflearn.initializations) Default: 'variance_scaling'.
  • bias_init: str (name) or tf.Tensor. Bias initialization. (see tflearn.initializations) Default: 'zeros'.
  • regularizer: str (name) or Tensor. Add a regularizer to this layer weights (see tflearn.regularizers). Default: 'L2'.
  • weight_decay: float. Regularizer decay parameter. Default: 0.0001.
  • trainable: bool. If True, weights will be trainable.
  • restore: bool. If True, this layer weights will be restored when loading a model.
  • reuse: bool. If True and 'scope' is provided, this layer variables will be reused (shared).
  • scope: str. Define this layer scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
  • name: A name for this layer (optional). Default: 'ResidualBlock'.

References

  • Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 2015.
  • Identity Mappings in Deep Residual Networks. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 2016.
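
A sketch of stacking residual blocks, loosely following the CIFAR-10 style residual network examples; depths and channel counts are illustrative:

    import tflearn

    net = tflearn.input_data(shape=[None, 32, 32, 3])
    net = tflearn.conv_2d(net, 16, 3, regularizer='L2', weight_decay=0.0001)
    # Two blocks at 16 channels, then two more that downsample to 32 channels
    net = tflearn.residual_block(net, 2, 16)
    net = tflearn.residual_block(net, 2, 32, downsample=True)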

Residual Bottleneck

tflearn.layers.conv.residual_bottleneck (incoming, nb_blocks, bottleneck_size, out_channels, downsample=False, downsample_strides=2, activation='relu', batch_norm=True, bias=True, weights_init='variance_scaling', bias_init='zeros', regularizer='L2', weight_decay=0.0001, trainable=True, restore=True, reuse=False, scope=None, name='ResidualBottleneck')

A residual bottleneck block as described in MSRA's Deep Residual Network paper. Full pre-activation architecture is used here.

Input

4-D Tensor [batch, height, width, in_channels].

Output

4-D Tensor [batch, new height, new width, nb_filter].

Arguments

  • incoming: Tensor. Incoming 4-D Layer.
  • nb_blocks: int. Number of layer blocks.
  • bottleneck_size: int. The number of convolutional filter of the bottleneck convolutional layer.
  • out_channels: int. The number of convolutional filters of the layers surrounding the bottleneck layer.
  • downsample: bool. If True, apply downsampling using 'downsample_strides' for strides.
  • downsample_strides: int. The strides to use when downsampling.
  • activation: str (name) or function (returning a Tensor). Activation applied to this layer (see tflearn.activations). Default: 'relu'.
  • batch_norm: bool. If True, apply batch normalization.
  • bias: bool. If True, a bias is used.
  • weights_init: str (name) or Tensor. Weights initialization. (see tflearn.initializations) Default: 'variance_scaling'.
  • bias_init: str (name) or tf.Tensor. Bias initialization. (see tflearn.initializations) Default: 'zeros'.
  • regularizer: str (name) or Tensor. Add a regularizer to this layer weights (see tflearn.regularizers). Default: 'L2'.
  • weight_decay: float. Regularizer decay parameter. Default: 0.0001.
  • trainable: bool. If True, weights will be trainable.
  • restore: bool. If True, this layer weights will be restored when loading a model.
  • reuse: bool. If True and 'scope' is provided, this layer variables will be reused (shared).
  • scope: str. Define this layer scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
  • name: A name for this layer (optional). Default: 'ResidualBottleneck'.

References

  • Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 2015.
  • Identity Mappings in Deep Residual Networks. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 2016.
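
A sketch of stacking bottleneck blocks; the bottleneck_size and out_channels values are illustrative:

    import tflearn

    net = tflearn.input_data(shape=[None, 28, 28, 1])
    net = tflearn.conv_2d(net, 64, 3, bias=False)
    # Each bottleneck squeezes to 16 filters, then expands back to 64 output channels
    net = tflearn.residual_bottleneck(net, 3, 16, 64)
    net = tflearn.residual_bottleneck(net, 1, 32, 128, downsample=True)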

Highway Convolution 2D

tflearn.layers.conv.highway_conv_2d (incoming, nb_filter, filter_size, strides=1, padding='same', activation='linear', weights_init='uniform_scaling', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='HighwayConv2D')

Input

4-D Tensor [batch, height, width, in_channels].

Output

4-D Tensor [batch, new height, new width, nb_filter].

Arguments

  • incoming: Tensor. Incoming 4-D Tensor.
  • nb_filter: int. The number of convolutional filters.
  • filter_size: int or list of int. Size of filters.
  • strides: int or list of int. Strides of conv operation. Default: [1 1 1 1].
  • padding: str from "same", "valid". Padding algo to use. Default: 'same'.
  • activation: str (name) or function (returning a Tensor). Activation applied to this layer (see tflearn.activations). Default: 'linear'.
  • weights_init: str (name) or Tensor. Weights initialization. (see tflearn.initializations) Default: 'uniform_scaling'.
  • bias_init: str (name) or Tensor. Bias initialization. (see tflearn.initializations) Default: 'zeros'.
  • regularizer: str (name) or Tensor. Add a regularizer to this layer weights (see tflearn.regularizers). Default: None.
  • weight_decay: float. Regularizer decay parameter. Default: 0.001.
  • trainable: bool. If True, weights will be trainable.
  • restore: bool. If True, this layer weights will be restored when loading a model.
  • reuse: bool. If True and 'scope' is provided, this layer variables will be reused (shared).
  • scope: str. Define this layer scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
  • name: A name for this layer (optional). Default: 'HighwayConv2D'.

Attributes

  • scope: Scope. This layer scope.
  • W: Variable. Variable representing filter weights.
  • W_T: Variable. Variable representing gate weights.
  • b: Variable. Variable representing biases.
  • b_T: Variable. Variable representing gate biases.
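
A minimal sketch; the filter count and activation are illustrative:

    import tflearn

    net = tflearn.input_data(shape=[None, 28, 28, 1])
    # Gated convolution: the transform gate (W_T, b_T) mixes the input with the conv output
    net = tflearn.highway_conv_2d(net, nb_filter=16, filter_size=3, activation='elu')
    net = tflearn.max_pool_2d(net, 2)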

Highway Convolution 1D

tflearn.layers.conv.highway_conv_1d (incoming, nb_filter, filter_size, strides=1, padding='same', activation='linear', weights_init='uniform_scaling', bias_init='zeros', regularizer=None, weight_decay=0.001, trainable=True, restore=True, reuse=False, scope=None, name='HighwayConv1D')

Input

3-D Tensor [batch, steps, in_channels].

Output

3-D Tensor [batch, new steps, nb_filters].

Arguments

  • incoming: Tensor. Incoming 3-D Tensor.
  • nb_filter: int. The number of convolutional filters.
  • filter_size: int or list of int. Size of filters.
  • strides: int or list of int. Strides of conv operation. Default: [1 1 1 1].
  • padding: str from "same", "valid". Padding algo to use. Default: 'same'.
  • activation: str (name) or function (returning a Tensor). Activation applied to this layer (see tflearn.activations). Default: 'linear'.
  • weights_init: str (name) or Tensor. Weights initialization. (see tflearn.initializations) Default: 'uniform_scaling'.
  • bias_init: str (name) or Tensor. Bias initialization. (see tflearn.initializations) Default: 'zeros'.
  • regularizer: str (name) or Tensor. Add a regularizer to this layer weights (see tflearn.regularizers). Default: None.
  • weight_decay: float. Regularizer decay parameter. Default: 0.001.
  • trainable: bool. If True, weights will be trainable.
  • restore: bool. If True, this layer weights will be restored when loading a model.
  • reuse: bool. If True and 'scope' is provided, this layer variables will be reused (shared).
  • scope: str. Define this layer scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
  • name: A name for this layer (optional). Default: 'HighwayConv1D'.

Attributes

  • scope: Scope. This layer scope.
  • W: Variable. Variable representing filter weights.
  • W_T: Variable. Variable representing gate weights.
  • b: Variable. Variable representing biases.
  • b_T: Variable. Variable representing gate biases.