Batch Normalization
tflearn.layers.normalization.batch_normalization (incoming, beta=0.0, gamma=1.0, epsilon=1e-05, decay=0.9, stddev=0.002, trainable=True, restore=True, reuse=False, scope=None, name='BatchNormalization')
Normalize activations of the previous layer at each batch.
Arguments
- incoming: `Tensor`. Incoming Tensor.
- beta: `float`. Default: 0.0.
- gamma: `float`. Default: 1.0.
- epsilon: `float`. Default: 1e-5.
- decay: `float`. Default: 0.9.
- stddev: `float`. Standard deviation for weights initialization.
- trainable: `bool`. If True, weights will be trainable.
- restore: `bool`. If True, this layer's weights will be restored when loading a model.
- reuse: `bool`. If True and 'scope' is provided, this layer's variables will be reused (shared).
- scope: `str`. Define this layer's scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
- name: `str`. A name for this layer (optional).
References
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. Sergey Ioffe, Christian Szegedy. 2015.
Links
http://arxiv.org/pdf/1502.03167v3.pdf
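The training-time computation behind this layer can be sketched with NumPy. This is a minimal illustration of the normalization math only, not the TFLearn implementation: it uses the batch's own statistics, whereas at inference the layer substitutes the moving averages tracked via `decay`.

```python
import numpy as np

def batch_norm_train(x, beta=0.0, gamma=1.0, epsilon=1e-5):
    # Normalize each feature using the statistics of the current batch
    # (axis 0), then apply the learned scale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + epsilon)
    return gamma * x_hat + beta

x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
y = batch_norm_train(x)
# each column of y now has approximately zero mean and unit variance
```

With the defaults `beta=0.0` and `gamma=1.0`, the layer initially passes through the plainly normalized activations; both parameters are then learned during training.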
Local Response Normalization
tflearn.layers.normalization.local_response_normalization (incoming, depth_radius=5, bias=1.0, alpha=0.0001, beta=0.75, name='LocalResponseNormalization')
Input
4-D Tensor Layer.
Output
4-D Tensor Layer. (Same dimension as input).
Arguments
- incoming: `Tensor`. Incoming Tensor.
- depth_radius: `int`. 0-D. Half-width of the 1-D normalization window. Defaults to 5.
- bias: `float`. An offset (usually positive to avoid dividing by 0). Defaults to 1.0.
- alpha: `float`. A scale factor, usually positive. Defaults to 0.0001.
- beta: `float`. An exponent. Defaults to 0.75.
- name: `str`. A name for this layer (optional).
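A NumPy sketch of what the layer computes may help: each activation is divided by a power of the sum of squared activations in a window of `2 * depth_radius + 1` channels centered on it. This mirrors the standard LRN formula and is an illustration, not the TFLearn source.

```python
import numpy as np

def lrn(x, depth_radius=5, bias=1.0, alpha=1e-4, beta=0.75):
    # x: 4-D tensor (batch, height, width, depth); normalize over depth.
    out = np.empty_like(x)
    depth = x.shape[-1]
    for d in range(depth):
        # Window of channels centered on d, clipped at the edges.
        lo, hi = max(0, d - depth_radius), min(depth, d + depth_radius + 1)
        sqr_sum = np.sum(x[..., lo:hi] ** 2, axis=-1)
        out[..., d] = x[..., d] / (bias + alpha * sqr_sum) ** beta
    return out

x = np.random.randn(1, 2, 2, 8)
y = lrn(x)  # same shape as x
```

Setting `alpha=0` makes the denominator `bias ** beta = 1.0` with the defaults, so the input passes through unchanged, which is a quick sanity check on the formula.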
L2 Normalization
tflearn.layers.normalization.l2_normalize (incoming, dim, epsilon=1e-12, name='l2_normalize')
Normalizes along dimension `dim` using an L2 norm.

For a 1-D tensor with `dim = 0`, computes

output = x / sqrt(max(sum(x**2), epsilon))

For `x` with more dimensions, independently normalizes each 1-D slice along dimension `dim`.
Arguments
- incoming: `Tensor`. Incoming Tensor.
- dim: `int`. Dimension along which to normalize.
- epsilon: `float`. A lower bound value for the norm. Will use `sqrt(epsilon)` as the divisor if `norm < sqrt(epsilon)`.
- name: `str`. A name for this layer (optional).
Returns
A `Tensor` with the same shape as `x`.
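The formula above is a one-liner in NumPy; the sketch below shows it on a 1-D vector, including the `epsilon` floor that guards against division by zero. It illustrates the math only, not the TFLearn implementation.

```python
import numpy as np

def l2_normalize(x, dim, epsilon=1e-12):
    # Divide each 1-D slice along `dim` by its L2 norm,
    # clamped below by sqrt(epsilon) to avoid dividing by zero.
    sq_sum = np.sum(np.square(x), axis=dim, keepdims=True)
    return x / np.sqrt(np.maximum(sq_sum, epsilon))

v = np.array([3.0, 4.0])        # L2 norm is 5.0
u = l2_normalize(v, dim=0)      # → array([0.6, 0.8])
```

A zero vector stays zero rather than producing NaNs, because the divisor bottoms out at `sqrt(epsilon)`.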