Linear
tflearn.activations.linear (x)
f(x) = x
Arguments
- x: A Tensor with type float, double, int32, int64, uint8, int16, or int8.
Returns
The incoming Tensor (without changes).
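Since linear is just the identity, a minimal sketch (assuming graph-mode TensorFlow 1.x, which TFLearn targets) only confirms that the input passes through unchanged:

```python
import tensorflow as tf
import tflearn

x = tf.constant([-1.5, 0.0, 2.0])
y = tflearn.activations.linear(x)  # identity: the incoming tensor is returned unchanged

with tf.Session() as sess:
    print(sess.run(y))  # [-1.5, 0.0, 2.0]
```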
Tanh
tflearn.activations.tanh (x)
Computes hyperbolic tangent of x element-wise.
Arguments
- x: A Tensor with type float, double, int32, complex64, int64, or qint32.
Returns
A Tensor with the same type as x if x.dtype != qint32, otherwise the return type is quint8.
Sigmoid
tflearn.activations.sigmoid (x)
Computes sigmoid of x element-wise.
Specifically, y = 1 / (1 + exp(-x)).
Arguments
- x: A Tensor with type float, double, int32, complex64, int64, or qint32.
Returns
A Tensor with the same type as x if x.dtype != qint32, otherwise the return type is quint8.
Softmax
tflearn.activations.softmax (x)
Computes softmax activations.
For each batch i and class j, we have:
softmax[i, j] = exp(logits[i, j]) / sum(exp(logits[i]))
Arguments
- x: A Tensor. Must be one of the following types: float32, float64. 2-D with shape [batch_size, num_classes].
Returns
A Tensor. Has the same type and shape as x.
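A sketch (graph-mode TensorFlow 1.x assumed) showing that each row of the 2-D input is turned into a probability distribution:

```python
import tensorflow as tf
import tflearn

logits = tf.constant([[1.0, 2.0, 3.0],
                      [1.0, 1.0, 1.0]])  # shape [batch_size=2, num_classes=3]
probs = tflearn.activations.softmax(logits)

with tf.Session() as sess:
    out = sess.run(probs)
    print(out)              # first row approximately [0.09, 0.24, 0.67]
    print(out.sum(axis=1))  # each row sums to 1
```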
Softplus
tflearn.activations.softplus (x)
Computes softplus: log(exp(features) + 1).
Arguments
- x: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16.
Returns
A Tensor. Has the same type as x.
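As a sanity check against the formula (TensorFlow 1.x session assumed):

```python
import numpy as np
import tensorflow as tf
import tflearn

values = np.array([-1.0, 0.0, 1.0], dtype=np.float32)
y = tflearn.activations.softplus(tf.constant(values))

with tf.Session() as sess:
    print(sess.run(y))                   # approximately [0.313, 0.693, 1.313]
    print(np.log(np.exp(values) + 1.0))  # matches log(exp(features) + 1)
```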
Softsign
tflearn.activations.softsign (x)
Computes softsign: features / (abs(features) + 1).
Arguments
- x: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16.
Returns
A Tensor. Has the same type as x.
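A small numerical check of the formula (TensorFlow 1.x session assumed):

```python
import numpy as np
import tensorflow as tf
import tflearn

values = np.array([-1.0, 0.0, 4.0], dtype=np.float32)
y = tflearn.activations.softsign(tf.constant(values))

with tf.Session() as sess:
    print(sess.run(y))                      # [-0.5, 0.0, 0.8]
    print(values / (np.abs(values) + 1.0))  # matches features / (abs(features) + 1)
```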
ReLU
tflearn.activations.relu (x)
Computes rectified linear: max(features, 0).
Arguments
- x: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16.
Returns
A Tensor. Has the same type as x.
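Negative inputs are zeroed, positive inputs pass through; a minimal sketch (TensorFlow 1.x session assumed):

```python
import tensorflow as tf
import tflearn

x = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])
y = tflearn.activations.relu(x)  # max(features, 0), element-wise

with tf.Session() as sess:
    print(sess.run(y))  # [0.0, 0.0, 0.0, 0.5, 2.0]
```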
ReLU6
tflearn.activations.relu6 (x)
Computes Rectified Linear 6: min(max(features, 0), 6).
Arguments
- x: A Tensor with type float, double, int32, int64, uint8, int16, or int8.
Returns
A Tensor with the same type as x.
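The same idea as ReLU, with the output additionally clipped at 6 (TensorFlow 1.x session assumed):

```python
import tensorflow as tf
import tflearn

x = tf.constant([-3.0, 4.0, 9.0])
y = tflearn.activations.relu6(x)  # min(max(features, 0), 6): output lies in [0, 6]

with tf.Session() as sess:
    print(sess.run(y))  # [0.0, 4.0, 6.0]
```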
LeakyReLU
tflearn.activations.leaky_relu (x, alpha=0.1, name='LeakyReLU')
Modified version of ReLU, introducing a nonzero gradient for negative input.
Arguments
- x: A Tensor with type float, double, int32, int64, uint8, int16, or int8.
- alpha: float. Slope of the activation for negative inputs. Default: 0.1.
- name: A name for this activation op (optional).
Returns
A Tensor with the same type as x.
References
Rectifier Nonlinearities Improve Neural Network Acoustic Models, Maas et al. (2013).
Links
http://web.stanford.edu/~awni/papers/relu_hybrid_icml2013_final.pdf
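A sketch showing the effect of alpha on negative inputs (TensorFlow 1.x session assumed):

```python
import tensorflow as tf
import tflearn

x = tf.constant([-2.0, 0.0, 2.0])
y = tflearn.activations.leaky_relu(x, alpha=0.1)

with tf.Session() as sess:
    print(sess.run(y))  # approximately [-0.2, 0.0, 2.0]: negative inputs keep a slope of alpha
```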
PReLU
tflearn.activations.prelu (x, channel_shared=False, weights_init='zeros', trainable=True, restore=True, reuse=False, scope=None, name='PReLU')
Parametric Rectified Linear Unit.
Arguments
- x: A Tensor with type float, double, int32, int64, uint8, int16, or int8.
- channel_shared: bool. If True, a single weight (alpha) is shared by all channels.
- weights_init: str. Weights initialization. Default: 'zeros'.
- trainable: bool. If True, weights will be trainable.
- restore: bool. Whether or not to restore alphas when loading a model.
- reuse: bool. If True and 'scope' is provided, this layer's variables will be reused (shared).
- name: A name for this activation op (optional).
Attributes
- scope: str. This op's scope.
- alphas: Variable. PReLU alphas.
Returns
A Tensor with the same type as x.
References
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. He et al., 2015.
Links
http://arxiv.org/pdf/1502.01852v1.pdf
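Unlike the fixed-slope activations above, PReLU creates trainable alpha variables, so it is normally applied inside a network graph. A hypothetical usage sketch, assuming the standard tflearn.input_data and tflearn.fully_connected layers (not part of this module):

```python
import tflearn

# Toy network: apply PReLU to the linear output of a hidden layer so that
# per-channel alphas are created under the given name and trained with the model.
net = tflearn.input_data(shape=[None, 784])
net = tflearn.fully_connected(net, 64, activation='linear')
net = tflearn.activations.prelu(net, channel_shared=False, weights_init='zeros',
                                trainable=True, name='PReLU_1')
net = tflearn.fully_connected(net, 10, activation='softmax')
```

With restore=True (the default), the learned alphas are saved and restored together with the rest of the model's weights.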
ELU
tflearn.activations.elu (x)
Exponential Linear Unit.
Arguments
- x: A Tensor with type float, double, int32, int64, uint8, int16, or int8.
Returns
A Tensor with the same type as x.
References
Fast and Accurate Deep Network Learning by Exponential Linear Units, Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter. 2015.
Links
http://arxiv.org/abs/1511.07289
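A short evaluation sketch (TensorFlow 1.x session assumed); for negative inputs the output follows exp(x) - 1:

```python
import tensorflow as tf
import tflearn

x = tf.constant([-1.0, 0.0, 1.0])
y = tflearn.activations.elu(x)  # exp(x) - 1 for x < 0, identity for x >= 0

with tf.Session() as sess:
    print(sess.run(y))  # approximately [-0.632, 0.0, 1.0]
```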
CReLU
tflearn.activations.crelu (x)
Computes Concatenated ReLU.
Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the negative part of the activation. Note that as a result this non-linearity doubles the depth of the activations.
Arguments
- x: A Tensor with type float, double, int32, int64, uint8, int16, or int8.
Returns
A Tensor with the same type as x.
Links
https://arxiv.org/abs/1603.05201
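Because CReLU concatenates the positive and negative parts, the last (channel) dimension doubles; a sketch under the usual TensorFlow 1.x session assumption:

```python
import tensorflow as tf
import tflearn

x = tf.constant([[-2.0, 1.0, 3.0]])  # shape [1, 3]
y = tflearn.activations.crelu(x)     # concatenation of relu(x) and relu(-x)

with tf.Session() as sess:
    print(sess.run(y))  # [[0.0, 1.0, 3.0, 2.0, 0.0, 0.0]] -- shape [1, 6], depth doubled
```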
SELU
tflearn.activations.selu (x)
Scaled Exponential Linear Unit.
Arguments
- x: A Tensor with type float, double, int32, int64, uint8, int16, or int8.
References
Self-Normalizing Neural Networks, Klambauer et al., 2017.
Links
https://arxiv.org/abs/1706.02515
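SELU is ELU rescaled by the fixed constants from the referenced paper (lambda ≈ 1.0507, alpha ≈ 1.6733); a brief evaluation sketch (TensorFlow 1.x session assumed):

```python
import tensorflow as tf
import tflearn

x = tf.constant([-1.0, 0.0, 1.0])
y = tflearn.activations.selu(x)  # lambda * x for x >= 0, lambda * alpha * (exp(x) - 1) for x < 0

with tf.Session() as sess:
    print(sess.run(y))  # approximately [-1.111, 0.0, 1.051]
```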