Computes the PN loss in an online fashion.
Inherits From: MetricLoss
```python
TFSimilarity.losses.PNLoss(
    positive_mining_strategy: str = 'hard',
    negative_mining_strategy: str = 'semi-hard',
    soft_margin: bool = False,
    margin: float = 1.0,
    name: str = 'PNLoss',
    **kwargs
)
```
This loss encourages the positive distance between a pair of embeddings with the same label to be smaller than the minimum negative distance between pairs of embeddings with different labels. Additionally, both the anchor and the positive embeddings are encouraged to be far from the negative embeddings. This is accomplished by taking min(pos_neg_dist, anchor_neg_dist) and using that as the negative distance in the triplet loss.
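As an illustrative sketch (plain NumPy, not the library's implementation), the per-triplet computation described above could look like:

```python
import numpy as np

def pn_triplet_loss(anchor, positive, negative, margin=1.0):
    """Toy per-triplet PN loss on embeddings (illustrative only)."""
    d_ap = np.linalg.norm(anchor - positive)    # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative)    # anchor-negative distance
    d_pn = np.linalg.norm(positive - negative)  # positive-negative distance
    # PN loss uses the smaller of the two negative distances in the triplet term.
    d_neg = min(d_an, d_pn)
    return max(d_ap - d_neg + margin, 0.0)

a = np.array([1.0, 0.0])
p = np.array([0.8, 0.6])   # same class as the anchor
n = np.array([0.0, 1.0])   # different class
loss = pn_triplet_loss(a, p, n)
```

The real loss additionally applies the configured mining strategies over the whole batch to pick the positive and negative examples; this sketch only shows the distance term for one fixed triplet.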
Szubert, B., Cole, J.E., Monaco, C. et al. Structure-preserving visualisation of high dimensional single-cell datasets. Sci Rep 9, 8914 (2019). https://doi.org/10.1038/s41598-019-45301-0
y_true must be a 1-D integer Tensor of shape (batch_size,). Its values represent the classes associated with the examples as integer values.
y_pred must be a 2-D float Tensor of L2-normalized embedding vectors. You can use the layer tensorflow_similarity.layers.L2Embedding() as the last layer of your model to ensure your model output is properly normalized.
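For example, the expected input shapes can be sketched in NumPy (the label values and the embedding dimension of 8 are arbitrary):

```python
import numpy as np

# y_true: 1-D integer class labels, shape (batch_size,).
y_true = np.array([0, 0, 1, 1], dtype=np.int64)

# y_pred: 2-D float embeddings, one row per example, each row L2-normalized.
raw = np.random.RandomState(0).randn(4, 8)
y_pred = raw / np.linalg.norm(raw, axis=1, keepdims=True)

# Every row now has unit length, as the loss expects.
row_norms = np.linalg.norm(y_pred, axis=1)
```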
Args | |
---|---|
distance | Which distance function to use to compute the pairwise distances between embeddings. Defaults to 'cosine'. |
positive_mining_strategy | Which mining strategy to use to select an embedding from the same class. Available: 'easy', 'hard'. Defaults to 'hard'. |
negative_mining_strategy | Which mining strategy to use to select an embedding from a different class. Available: 'easy', 'hard', 'semi-hard'. Defaults to 'semi-hard'. |
soft_margin | Use a soft margin instead of an explicit one. Defaults to False. |
margin | Use an explicit value for the margin term. Defaults to 1.0. |
name | Loss name. Defaults to 'PNLoss'. |
Raises | |
---|---|
ValueError | Invalid positive mining strategy. |
ValueError | Invalid negative mining strategy. |
ValueError | Margin value is not used when soft_margin is set to True. |
```python
@classmethod
from_config(
    config
)
```
Instantiates a Loss from its config (output of get_config()).
Args | |
---|---|
config | Output of get_config(). |
Returns | |
---|---|
A Loss instance. |
```python
get_config() -> Dict[str, Any]
```
Contains the loss configuration.
Returns | |
---|---|
A Python dict containing the configuration of the loss. |
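The get_config()/from_config() contract can be illustrated with a minimal stand-in class (this is not the real PNLoss; the attribute names here merely mirror two of the constructor arguments above):

```python
from typing import Any, Dict

class ToyLoss:
    """Minimal stand-in illustrating the get_config()/from_config() round trip."""

    def __init__(self, margin: float = 1.0, name: str = 'PNLoss'):
        self.margin = margin
        self.name = name

    def get_config(self) -> Dict[str, Any]:
        # Serialize everything needed to rebuild an equivalent instance.
        return {'margin': self.margin, 'name': self.name}

    @classmethod
    def from_config(cls, config: Dict[str, Any]) -> 'ToyLoss':
        # Rebuild the instance from its config dict.
        return cls(**config)

loss = ToyLoss(margin=0.5)
clone = ToyLoss.from_config(loss.get_config())
```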
```python
__call__(
    y_true, y_pred, sample_weight=None
)
```
Invokes the Loss instance.
Args | |
---|---|
y_true | Ground truth values. shape = [batch_size, d0, .. dN], except sparse loss functions such as sparse categorical crossentropy where shape = [batch_size, d0, .. dN-1] |
y_pred | The predicted values. shape = [batch_size, d0, .. dN] |
sample_weight | Optional sample_weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If sample_weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the sample_weight vector. If the shape of sample_weight is [batch_size, d0, .. dN-1] (or can be broadcast to this shape), then each loss element of y_pred is scaled by the corresponding value of sample_weight. (Note on dN-1: all loss functions reduce by 1 dimension, usually axis=-1.) |
Returns | |
---|---|
Weighted loss float Tensor. If reduction is NONE, this has shape [batch_size, d0, .. dN-1]; otherwise, it is scalar. (Note dN-1 because all loss functions reduce by 1 dimension, usually axis=-1.) |
Raises | |
---|---|
ValueError | If the shape of sample_weight is invalid. |
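The sample_weight scaling described above can be sketched in NumPy (the per-sample losses are hypothetical values, and a simple sum reduction is used for clarity rather than Keras's default reduction):

```python
import numpy as np

# Hypothetical per-sample losses for a batch of 4 examples.
per_sample = np.array([0.2, 0.8, 0.5, 1.0])

# Scalar sample_weight: the total loss is simply scaled by the value.
scalar_weighted = per_sample.sum() * 2.0

# Vector sample_weight of shape [batch_size]: each sample's loss is
# rescaled by the corresponding weight before reduction.
weights = np.array([1.0, 0.0, 2.0, 1.0])
vector_weighted = (per_sample * weights).sum()
```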