Hi
I trained an RGB panoptic model (Panoptic-DeepLab) by converting the PASTIS data to RGB (keeping only 3 channels), and we used your evaluation code to compute PQ, SQ, and RQ. The problem is that when we run the test code, the resulting PQ, SQ, and RQ are inconsistent: PQ is not the product SQ × RQ. In your paper the values do match. Is there a bug in the code?
Your metrics.py code returns SQ.mean(), RQ.mean(), and PQ.mean(), i.e. the means over all classes, and PQ.mean() is not equal to SQ.mean() × RQ.mean(). How do you compute the metrics reported in your paper?
I'm not sure I understood everything, but what I can say is that we follow the metric formulation of the Panoptic Segmentation paper. In our code all the metrics are computed per class and then averaged. That means PQ = SQ × RQ holds for each individual class, but not for the averaged metrics.
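To make this concrete, here is a minimal numeric sketch (the per-class values are made up for illustration, not taken from the paper):

```python
# Hypothetical per-class values (illustration only, not from the paper).
sq = [0.90, 0.70]   # segmentation quality per class
rq = [0.50, 0.80]   # recognition quality per class

# Per class, PQ = SQ * RQ holds exactly.
pq = [s * r for s, r in zip(sq, rq)]   # [0.45, 0.56]

mean = lambda xs: sum(xs) / len(xs)
print(mean(pq))             # 0.505 -- class-averaged PQ
print(mean(sq) * mean(rq))  # 0.80 * 0.65 = 0.52 -- product of the averages
```

Averaging does not commute with the product, so the class-averaged PQ generally differs from the product of the class-averaged SQ and RQ.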
Hello Friend
Thanks for your reply. I understand that the metrics are computed per class, but I am a bit confused about the scores shown in your paper. There, PQ (43.8) = SQ (81.5) × RQ (53.2); how do you calculate these values?
Again, this equality holds for the per-class scores, but not for the class-averaged metrics. In fact, if you do the math, 0.815 × 0.532 = 0.4336, i.e. 43.4, not 43.8.
We follow the metric definition of the Panoptic Segmentation paper (Kirillov et al., 2019), taking only the "things" classes, and you can see in the code how we implement it.
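For reference, the per-class definition from Kirillov et al. (2019) is the following, where predicted and ground-truth segments are matched as true positives when their IoU exceeds 0.5:

```math
PQ \;=\; \underbrace{\frac{\sum_{(p,g)\in TP}\mathrm{IoU}(p,g)}{|TP|}}_{SQ}\;\times\;\underbrace{\frac{|TP|}{|TP|+\tfrac{1}{2}|FP|+\tfrac{1}{2}|FN|}}_{RQ}
```

This factorization is exact for a single class; the identity is lost once SQ, RQ, and PQ are each averaged over classes separately.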
Please come up with a more specific question.