
Performance Discrepancy Between SAM Model Demo and GitHub Code #762

Open

i-am-invincible opened this issue Jun 18, 2024 · 0 comments

i-am-invincible commented Jun 18, 2024

Hi, I am working on segmenting car bodies in images using the Meta SAM model. I am seeing a significant difference in performance between the UI demo on the official website and the code provided in the GitHub repository. The UI demo performs remarkably well with just 1-2 clicks; however, when I run the code, the results are much worse. Despite providing multiple points, the output does not come close to the demo.

SAM model version: `vit_h`
Predictor example used: `notebooks/predictor_example.ipynb`
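
For reference, my call pattern roughly follows the predictor notebook. The sketch below is illustrative only: the checkpoint path, image path, and point coordinates are placeholders, not the exact values I used.

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load the ViT-H checkpoint (path is a placeholder).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
sam.to("cuda")
predictor = SamPredictor(sam)

# Load the image as RGB and set it on the predictor.
image = cv2.cvtColor(cv2.imread("image3.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Same clicks as in the UI demo: label 1 = foreground, label 0 = background.
# Coordinates here are placeholders, not the exact pixels I clicked.
point_coords = np.array([
    [750, 400], [900, 420], [620, 380], [830, 500],   # foreground (car body)
    [100, 100], [1200, 150], [300, 700],              # background
])
point_labels = np.array([1, 1, 1, 1, 0, 0, 0])

masks, scores, logits = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,
)

# Keep the highest-scoring mask.
best_mask = masks[np.argmax(scores)]
```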

Examples:
Image 1:
Original image: [image3]

UI demo segmentation (4 foreground points, 3 background points): performed well. [resized_sam_ui_3]

My code segmentation (same point placement): poor result. [code_output_3]

Image 2:
Original image: [image2]

UI demo segmentation (4 foreground points, 4 background points): good result. [resized_sam_ui_2]

My code segmentation (same point placement): poor result. [code_output_2]

I would appreciate any insights into why this discrepancy is happening.
Could it be related to hidden hyperparameter settings, optimizers, or learning rates used in the UI demo that aren't included in the GitHub code?
If so, could you provide some guidance?
