TFSimilarity.losses.MetricLoss

Wraps a loss function in the Loss class.

TFSimilarity.losses.MetricLoss(
    fn: Callable,
    reduction: Callable = tf.keras.losses.Reduction.AUTO,
    name: Optional[str] = None,
    **kwargs
)

Args

fn The loss function to wrap, with signature `fn(y_true, y_pred, **kwargs)`.
reduction (Optional) Type of tf.keras.losses.Reduction to apply to the loss. Defaults to AUTO.
name (Optional) Name for the loss.
**kwargs The keyword arguments that are passed on to fn.
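As a usage illustration, here is a minimal sketch of wrapping a toy per-sample loss with MetricLoss. It assumes the library is importable as tensorflow_similarity; margin_norm_loss and its margin keyword are hypothetical, shown only to demonstrate how extra keyword arguments are forwarded to fn.

```python
# Minimal sketch: wrapping a toy per-sample loss in MetricLoss.
# `margin_norm_loss` and its `margin` kwarg are hypothetical illustrations.
import tensorflow as tf
from tensorflow_similarity.losses import MetricLoss

def margin_norm_loss(y_true, y_pred, margin=1.0):
    # Per-sample loss: penalize embedding norms that exceed `margin`.
    return tf.maximum(tf.norm(y_pred, axis=-1) - margin, 0.0)

# Extra keyword arguments (here, margin=0.5) are stored and forwarded to fn
# on every call, matching the **kwargs behavior described above.
loss = MetricLoss(margin_norm_loss, name="margin_norm_loss", margin=0.5)
```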

Methods

from_config

@classmethod

from_config(
    config
)

Instantiates a Loss from its config (output of get_config()).

Args
config Output of get_config().
Returns
A Loss instance.

get_config

View source

get_config() -> Dict[str, Any]

Returns the loss configuration.

Returns
A Python dict containing the configuration of the loss.
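A hedged round-trip sketch follows. It uses TFSimilarity.losses.TripletLoss as an assumed concrete MetricLoss subclass and assumes that the subclass constructor accepts every key that get_config() emits; the base MetricLoss itself may not round-trip this way, since its config does not appear to include the wrapped fn.

```python
# Hedged sketch of a get_config()/from_config() round trip, assuming
# TripletLoss (a concrete MetricLoss subclass) is importable from
# tensorflow_similarity.losses and serializes its constructor arguments.
from tensorflow_similarity.losses import TripletLoss

loss = TripletLoss(distance="cosine", margin=0.1)

config = loss.get_config()                  # plain Python dict
restored = TripletLoss.from_config(config)  # equivalent Loss instance
```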

__call__

__call__(
    y_true, y_pred, sample_weight=None
)

Invokes the Loss instance.

Args
y_true Ground truth values, with shape [batch_size, d0, .. dN], except for sparse loss functions such as sparse categorical crossentropy, which expect shape [batch_size, d0, .. dN-1].
y_pred The predicted values, with shape [batch_size, d0, .. dN].
sample_weight Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcast to this shape), each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.)
Returns
Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.)
Raises
ValueError If the shape of sample_weight is invalid.
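A minimal calling sketch follows. The embedding_norm_loss function below is hypothetical and ignores y_true; it only illustrates the shapes involved and how sample_weight and the default AUTO reduction behave.

```python
# Minimal sketch of invoking a MetricLoss instance directly.
# `embedding_norm_loss` is a hypothetical per-sample loss used for illustration.
import tensorflow as tf
from tensorflow_similarity.losses import MetricLoss

def embedding_norm_loss(y_true, y_pred):
    # Per-sample loss: L2 norm of each embedding -> shape [batch_size].
    return tf.norm(y_pred, axis=-1)

loss = MetricLoss(embedding_norm_loss, name="embedding_norm_loss")

labels = tf.constant([0, 0, 1, 1])            # y_true: class labels
embeddings = tf.random.normal((4, 8))         # y_pred: embeddings
weights = tf.constant([1.0, 1.0, 0.5, 0.5])   # optional per-sample weights

# With the default AUTO reduction this yields a scalar weighted-loss tensor;
# with Reduction.NONE it would instead keep shape [batch_size].
value = loss(labels, embeddings, sample_weight=weights)
```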