
The split of the Stanford Cars dataset. #21

Open
ppanzx opened this issue Feb 15, 2024 · 0 comments

Comments


ppanzx commented Feb 15, 2024

Thank you for your commendable work. I have a question about the split of the Stanford Cars dataset, which comprises 16,185 images of 196 car models.

In most metric-learning literature, the dataset split is described as follows: "The first 98 classes (8,054 images) are used for training, and the remaining 98 classes (8,131 images) are held out for testing."

However, the Torchvision documentation describes a different split: "The data is split into 8,144 training images and 8,041 testing images, with an approximately 50-50 split for each class." This train/test partition differs from the one used in the metric-learning community.
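For clarity, the metric-learning convention can be sketched as follows. This is only an illustration, not the repository's actual loading code: `metric_learning_split` is a hypothetical helper, and `samples` is assumed to be a list of `(image_path, class_id)` pairs with 0-indexed class ids in `[0, 195]`.

```python
# Sketch of the metric-learning split convention for Stanford Cars:
# the *classes* (not the images) are disjoint between train and test.
# NOTE: illustrative only; the real dataset has uneven class sizes
# (8,054 train / 8,131 test images over 98 + 98 classes).

def metric_learning_split(samples, num_train_classes=98):
    """Split (path, class_id) pairs by class id, not per-image."""
    train = [(p, c) for p, c in samples if c < num_train_classes]
    test = [(p, c) for p, c in samples if c >= num_train_classes]
    return train, test

# Dummy example: two images per class, classes 0-195.
samples = [(f"img_{i}.jpg", i % 196) for i in range(392)]
train, test = metric_learning_split(samples)
assert all(c < 98 for _, c in train)   # first 98 classes train
assert all(c >= 98 for _, c in test)   # remaining 98 classes test
```

By contrast, the Torchvision split keeps all 196 classes in both subsets and divides the images of each class roughly 50-50, so models trained on it see every test-time class during training, which is unsuitable for zero-shot metric-learning evaluation.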

Unfortunately, the official website is currently inaccessible, leaving me uncertain about the specific split used in this implementation.

Could you kindly provide me with a detailed split list (rather than the raw images) used in your implementation of the Stanford Cars dataset?

Thank you for your attention to this matter.
