Softmax Categorical Crossentropy
tflearn.objectives.softmax_categorical_crossentropy (y_pred, y_true)
Computes softmax cross entropy between y_pred (logits) and y_true (labels).
Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.
WARNING: This op expects unscaled logits, since it performs a softmax on `y_pred` internally for efficiency. Do not call this op with the output of a softmax, as it will produce incorrect results.
`y_pred` and `y_true` must have the same shape `[batch_size, num_classes]` and the same dtype (either `float32` or `float64`). It is also required that `y_true` (labels) is a binary array (for example, class 2 out of a total of 5 classes is encoded as `[0., 1., 0., 0., 0.]`).
Arguments
- y_pred: `Tensor`. Predicted values.
- y_true: `Tensor`. Targets (labels), a probability distribution.
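For illustration, a minimal sketch of using this objective through `tflearn.regression` by its registered name (the input shape, layer sizes, and class count are arbitrary assumptions, not from this doc). The final layer keeps a linear activation, since the op expects raw logits:

```python
import tflearn

# Toy classifier; softmax_categorical_crossentropy applies the softmax
# itself, so the output layer must emit raw logits (linear activation).
net = tflearn.input_data(shape=[None, 64])
net = tflearn.fully_connected(net, 128, activation='relu')
net = tflearn.fully_connected(net, 10, activation='linear')  # raw logits
net = tflearn.regression(net, loss='softmax_categorical_crossentropy')

model = tflearn.DNN(net)
# model.fit(X, Y)  # Y: one-hot labels, shape [batch_size, 10]
```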
Categorical Crossentropy
tflearn.objectives.categorical_crossentropy (y_pred, y_true)
Computes cross entropy between y_pred and y_true (labels). Unlike `softmax_categorical_crossentropy`, this op does not apply a softmax internally: `y_pred` should already be a valid probability distribution over classes (for example, the output of a softmax activation).
Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.
`y_pred` and `y_true` must have the same shape `[batch_size, num_classes]` and the same dtype (either `float32` or `float64`). It is also required that `y_true` (labels) is a binary array (for example, class 2 out of a total of 5 classes is encoded as `[0., 1., 0., 0., 0.]`).
Arguments
- y_pred: `Tensor`. Predicted values.
- y_true: `Tensor`. Targets (labels), a probability distribution.
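By contrast with the softmax variant above, a sketch of the usual TFLearn pairing for this objective, where the final layer applies the softmax so `y_pred` is already a probability distribution (shapes and sizes are illustrative assumptions):

```python
import tflearn

# The softmax activation here produces the probability distribution
# that categorical_crossentropy expects as y_pred.
net = tflearn.input_data(shape=[None, 64])
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, loss='categorical_crossentropy')

model = tflearn.DNN(net)
```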
Binary Crossentropy
tflearn.objectives.binary_crossentropy (y_pred, y_true)
Computes sigmoid cross entropy between y_pred (logits) and y_true (labels).
Measures the probability error in discrete classification tasks in which each class is independent and not mutually exclusive. For instance, one could perform multilabel classification where a picture can contain both an elephant and a dog at the same time.
For brevity, let `x = logits`, `z = targets`. The logistic loss is
x - x * z + log(1 + exp(-x))
To ensure stability and avoid overflow, the implementation uses
max(x, 0) - x * z + log(1 + exp(-abs(x)))
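To see why the rewritten form matters, a quick NumPy check (not part of TFLearn) comparing the naive and stable formulas on extreme logits:

```python
import numpy as np

x = np.array([-1000.0, -1.0, 0.0, 1.0])  # logits, incl. an extreme value
z = np.array([1.0, 0.0, 1.0, 1.0])       # binary targets

# Naive form: exp(-x) overflows float64 for x = -1000.
naive = x - x * z + np.log1p(np.exp(-x))

# Stable form: exp(-abs(x)) is always <= 1, so it cannot overflow.
stable = np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

print(naive)   # inf at x = -1000, with an overflow warning
print(stable)  # finite everywhere; matches naive wherever naive is finite
```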
`y_pred` and `y_true` must have the same type and shape.
Arguments
- y_pred: `Tensor` of `float` type. Predicted values.
- y_true: `Tensor` of `float` type. Targets (labels).
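A sketch of a multilabel setup using this objective (dimensions are illustrative assumptions); as with the softmax variant, the output layer emits raw logits because the sigmoid is applied internally:

```python
import tflearn

# Multilabel head: 5 independent yes/no classes, raw logits out.
net = tflearn.input_data(shape=[None, 32])
net = tflearn.fully_connected(net, 5, activation='linear')
net = tflearn.regression(net, loss='binary_crossentropy')

model = tflearn.DNN(net)
# model.fit(X, Y)  # Y rows like [1., 0., 1., 0., 0.] (elephant AND dog)
```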
Weighted Crossentropy
tflearn.objectives.weighted_crossentropy (y_pred, y_true, weight)
Computes weighted sigmoid cross entropy between y_pred (logits) and y_true (labels).
Computes a weighted cross entropy.
This is like `sigmoid_cross_entropy_with_logits()`, except that `pos_weight` allows one to trade off recall and precision by up- or down-weighting the cost of a positive error relative to a negative error.
The usual cross-entropy cost is defined as:
targets * -log(sigmoid(logits)) + (1 - targets) * -log(1 - sigmoid(logits))
The argument `weight` is used as the multiplier `pos_weight` for the positive targets:
targets * -log(sigmoid(logits)) * pos_weight + (1 - targets) * -log(1 - sigmoid(logits))
For brevity, let x = logits, z = targets, q = pos_weight. The loss is:
qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
= qz * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
= qz * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
= qz * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
= (1 - z) * x + (qz + 1 - z) * log(1 + exp(-x))
= (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x))
Setting l = (1 + (q - 1) * z), to ensure stability and avoid overflow, the implementation uses
(1 - z) * x + l * (log(1 + exp(-abs(x))) + max(-x, 0))
logits and targets must have the same type and shape.
Arguments
- y_pred: `Tensor` of `float` type. Predicted values.
- y_true: `Tensor` of `float` type. Targets (labels).
- weight: A coefficient to use on the positive examples.
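Because of the extra `weight` argument, this objective is most naturally called directly on tensors rather than referenced by name; a sketch under a TF 1.x-style graph (the shapes and the weight value 4.0 are illustrative assumptions):

```python
import tensorflow as tf
import tflearn

y_pred = tf.placeholder(tf.float32, [None, 5])  # raw logits
y_true = tf.placeholder(tf.float32, [None, 5])  # 0/1 targets
# weight > 1 penalizes false negatives more, trading precision for recall.
loss = tflearn.objectives.weighted_crossentropy(y_pred, y_true, weight=4.0)
```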
Mean Square Loss
tflearn.objectives.mean_square (y_pred, y_true)
Computes the mean squared error between `y_pred` (predicted values) and `y_true` (targets).
Arguments
- y_pred: `Tensor` of `float` type. Predicted values.
- y_true: `Tensor` of `float` type. Targets (labels).
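A minimal regression sketch using this loss by name (the input width and layer size are arbitrary assumptions):

```python
import tflearn

# Single linear output unit for a scalar regression target.
net = tflearn.input_data(shape=[None, 13])
net = tflearn.fully_connected(net, 1, activation='linear')
net = tflearn.regression(net, loss='mean_square')

model = tflearn.DNN(net)
```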
Hinge Loss
tflearn.objectives.hinge_loss (y_pred, y_true)
Computes the hinge (max-margin) loss between `y_pred` (raw prediction scores) and `y_true` (targets, conventionally encoded as -1 or 1).
Arguments
- y_pred: `Tensor` of `float` type. Predicted values.
- y_true: `Tensor` of `float` type. Targets (labels).
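A sketch of calling this objective directly on tensors (TF 1.x-style graph; shapes are illustrative assumptions), with targets in the -1/+1 convention:

```python
import tensorflow as tf
import tflearn

y_pred = tf.placeholder(tf.float32, [None])  # raw scores
y_true = tf.placeholder(tf.float32, [None])  # -1. or 1. targets
loss = tflearn.objectives.hinge_loss(y_pred, y_true)
```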
ROC AUC Score
tflearn.objectives.roc_auc_score (y_pred, y_true)
Approximates the Area Under Curve (AUC) score, using an approximation based on the Wilcoxon-Mann-Whitney U statistic.
Yan, L., Dodier, R., Mozer, M. C., & Wolniewicz, R. (2003). Optimizing Classifier Performance via an Approximation to the Wilcoxon-Mann-Whitney Statistic.
Measures overall performance for a full range of threshold levels.
Arguments
- y_pred: `Tensor`. Predicted values.
- y_true: `Tensor`. Targets (labels), a probability distribution.
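A sketch of training a binary classifier directly on this AUC surrogate (layer sizes are illustrative assumptions; the sigmoid keeps predictions in [0, 1]):

```python
import tflearn

# Binary classifier optimized for ranking quality across thresholds
# rather than per-example crossentropy.
net = tflearn.input_data(shape=[None, 30])
net = tflearn.fully_connected(net, 1, activation='sigmoid')
net = tflearn.regression(net, loss='roc_auc_score')

model = tflearn.DNN(net)
```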
Weak Crossentropy 2d
tflearn.objectives.weak_cross_entropy_2d (y_pred, y_true, num_classes=None, epsilon=0.0001, head=None)
Calculates a weak softmax cross entropy loss for semantic segmentation.
Given the prediction `y_pred` shaped as a 2d image and the corresponding `y_true`, this calculates the widely used semantic segmentation loss.
Using `tf.nn.softmax_cross_entropy_with_logits` is currently not supported. See https://github.com/tensorflow/tensorflow/issues/2327#issuecomment-224491229
Arguments
- y_pred: `Tensor`, `float` - [batch_size, width, height, num_classes].
- y_true: Labels `Tensor`, `int32` - [batch_size, width, height, num_classes]. The ground truth of your data.
- num_classes: `int`. Number of classes.
- epsilon: `float`. Small number to add to `y_pred`.
- head: `numpy array` - [num_classes]. Weighting the loss of each class.
Returns
Loss tensor of type float.
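A direct-call sketch with the shapes spelled out (TF 1.x-style graph; the 64x64 image size and 5 classes are arbitrary assumptions):

```python
import tensorflow as tf
import tflearn

y_pred = tf.placeholder(tf.float32, [None, 64, 64, 5])  # per-pixel logits
y_true = tf.placeholder(tf.int32, [None, 64, 64, 5])    # one-hot ground truth
loss = tflearn.objectives.weak_cross_entropy_2d(y_pred, y_true, num_classes=5)
```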