
How to collect my own dataset? #1

Open
ShengtongZhu opened this issue Apr 27, 2019 · 2 comments

Comments

@ShengtongZhu

What an awesome project!! Thanks a lot~
As a beginner, I learned a lot from your advice.
I'd really like to know how to collect my own stop-sign and right-curve dataset. I mean, how many photos should I take per sign?
I can never thank you enough for your help.

@ShengtongZhu
Author

Excuse me~

@jawilk
Owner

jawilk commented May 4, 2019

Apologies for the late reply, I hope it's still helpful.

You can see the final dataset I used for training here:
https://github.com/jawilk/Self-Driving-RC-Car-Payment/tree/master/program/traffic_sign_detection/data
The quantities are as follows:

  • Background: 4001 images
  • 50 km/h: 1529 images
  • Priority: 1678 images
  • Stop: 1333 images
  • Dangerous Curve Right: 362 images
  • No passing: 935 images

Unfortunately, this task is very time-intensive. My process was:
(i) I started with the GTSRB dataset (http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset) and checked every image for whether it was similar to my own setting.
(ii) I took around 50 images of each class with my phone and cropped them on my PC.
(iii) I augmented the images I already had (i.e. rotate/skew/mirror etc.).
(iv) Once I had a roughly working network, I put 4 signs of the same type in a row and let the car drive along collecting images, then extracted region proposals and ran predictions on them. After that, I added the crops to the matching label folder. This saved a lot of time compared to taking pictures with my phone and cropping by hand, and it also ensures the training images are almost the same as the real images the network has to predict later on. (I didn't pay much attention to overfitting here, since the detection was only supposed to work in that specific environment (yet).)

Let me know if something is unclear or you have further questions.
