Why is the result zoomed in? #11888
Comments
@itsjustrafli hi there! The zoomed-in result you are experiencing during live webcam inference might be due to the default resolution settings. By default, YOLOv5 resizes images to 640x640 pixels for better performance during training and inference. However, this might not be appropriate for all use cases. To adjust the resolution and zoom out the result during live detection, you can modify the resolution settings in the `yolo.py` script. Please note that altering the resolution might affect detection accuracy and performance. It is recommended to experiment with different settings and find the right balance based on your specific requirements. Best regards.
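As a minimal sketch of how inference resolution can be changed, here is the documented `torch.hub` API for YOLOv5 (the `yolov5s` weights and the sample image URL are just examples):

```python
import torch

# Load a pretrained YOLOv5 model from the hub (yolov5s used as an example)
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Run inference at a custom resolution; `size` controls the letterboxed
# input size, which also affects how the annotated preview is rendered
results = model('https://ultralytics.com/images/zidane.jpg', size=320)
results.print()
```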
Thank you for the answer, @glenn-jocher! But I couldn't find a Python script named `yolo.py` anywhere in the yolov5 repo folder. Can you tell me the exact location of this script? That would be helpful. Thank you in advance.
@itsjustrafli you're welcome! I apologize for the confusion; the `yolo.py` script I mentioned does not exist in the YOLOv5 repository. To adjust the resolution and zoom level during live webcam inference, you can modify the image-size (`--img-size`) and `view_img` parameters of `detect.py`. Please note that disabling `view_img` will turn off the live detection visualization on your screen. However, you can still access the results saved in the output directory specified during inference. If you have any further questions or need additional assistance, please let me know.
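A minimal sketch of those parameters in use, assuming a recent clone of the yolov5 repo (the weights file name is an example; `run()` is the entry point that `detect.py` exposes in recent versions):

```python
# Run from inside a yolov5 repo clone so `detect` is importable
from detect import run

# source='0' selects the default webcam; imgsz sets the letterboxed
# inference size; view_img=True shows the live annotated preview
run(weights='yolov5s.pt', source='0', imgsz=(640, 640), view_img=True)
```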
Hi, @glenn-jocher! I just read a similar issue about resizing the inference preview here: #877. In that issue you suggested editing the `dataset.py` script, but apparently I can't find a script named `dataset.py` in the `yolov5/utils` folder. I wonder what this script is named now. Has it changed? Once again, thanks in advance!
@itsjustrafli hi there! I apologize for any confusion caused; it seems there was some outdated information in my previous response. The `datasets.py` script referenced in #877 has since been renamed: its functionality now lives in `utils/dataloaders.py`. To adjust the resizing and zoom level of the inference preview, you can modify the image size used by the stream loader (the `letterbox` resize) in `utils/dataloaders.py`. Please note that modifying this parameter might affect the detection accuracy and performance, so it's important to experiment with different values to find the right balance for your specific use case. If you have any further questions or need additional assistance, please let me know, and thank you for bringing this issue to my attention.
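A minimal sketch of the `letterbox` resize that the stream loader applies to each webcam frame (assuming a yolov5 repo clone; in recent versions `letterbox` is defined in `utils/augmentations.py`):

```python
import cv2
from utils.augmentations import letterbox  # run from a yolov5 clone

# Grab one frame from the default webcam
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    # Resize with aspect ratio preserved and gray padding; new_shape is
    # the knob that controls how "zoomed" the letterboxed preview looks
    resized, ratio, (dw, dh) = letterbox(frame, new_shape=640, stride=32, auto=True)
    print(frame.shape, '->', resized.shape)
```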
Hi, @glenn-jocher! It's me (again). I have another question, about inference time. Long story short, I am working on a vehicle counting project that outputs a counting recap as CSV files, and I want to add a TIME_STAMP column containing the start time along with the end time of inference. How do I check what time (system time) the inference starts and stops? Is there a log or something, or a variable I can refer to? Once again, thanks in advance!
Hi @itsjustrafli! Great to see you again! Regarding your question, you can check the start and stop time of the inference by utilizing Python's `datetime` module. At the beginning of your inference process, you can add the following code snippet to get the start time:

```python
import datetime

start_time = datetime.datetime.now()
```

And at the end of the inference process, you can add this code snippet to get the end time:

```python
end_time = datetime.datetime.now()
```

To calculate the total inference time, you can subtract the start time from the end time:

```python
total_time = end_time - start_time
```

Now you can add the start and end times to the TIME_STAMP column of your CSV output. I hope this helps! Let me know if you have any further questions or need any additional assistance.
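As a concrete illustration of the TIME_STAMP idea, here is a minimal sketch that wraps an inference call and appends the start/end times to a CSV recap (the file name `recap.csv`, the `run_inference` placeholder, and the column layout are hypothetical):

```python
import csv
import datetime

def run_inference():
    # Placeholder for the actual YOLOv5 inference / counting loop
    pass

start_time = datetime.datetime.now()
run_inference()
end_time = datetime.datetime.now()

# Append a row with ISO-formatted timestamps and the elapsed seconds;
# 'recap.csv' and the column order here are illustrative only
with open('recap.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    writer.writerow([start_time.isoformat(), end_time.isoformat(),
                     (end_time - start_time).total_seconds()])
```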
Thanks for the answer. I have another question: how do I make the inference result video not sped up (time-lapse)? And what makes it sped up? Is it the model, or are there settings that stop it from being sped up? Enlighten me, please. Thanks for the assistance, @glenn-jocher. Maybe I will ask you again and this will become a long thread :) watch_v_NcaGFp76BTY.mp4
@itsjustrafli the speed of the inference result video is not determined by the model itself, but rather by the frame rate (FPS) at which the video is recorded or processed. It seems like your input video has a higher FPS compared to the output video, which results in a time-lapse effect. To make the inference result video play at normal speed, you can try passing a frame-rate flag to `detect.py` so the saved video's FPS matches the rate at which frames are actually captured and processed. I hope this helps! Feel free to ask any further questions, and I'll be glad to assist you.
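To see whether the input and saved output really do differ in frame rate, a quick check with OpenCV works (the file paths here are placeholders):

```python
import cv2

# Compare the reported FPS of the source video and the saved result;
# 'input.mp4' and 'output.mp4' are placeholder paths
for path in ('input.mp4', 'output.mp4'):
    cap = cv2.VideoCapture(path)
    print(path, 'FPS:', cap.get(cv2.CAP_PROP_FPS))
    cap.release()
```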
Thanks for the response, @glenn-jocher! But apparently there is no such flag in `detect.py`.
@itsjustrafli hi there, apologies for any confusion caused. You're correct, the flag I suggested does not exist in `detect.py`. To control the output frame rate of the inference result video, you can modify the `fps` value used when `detect.py` creates its `cv2.VideoWriter` for saving results. Please note that the saved FPS should match the rate at which frames are actually processed; otherwise the video will still play back sped up or slowed down. Let me know if you have any further questions or need any more assistance.
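A minimal sketch of the underlying fix, assuming you save frames yourself: measure the real processing rate and pass that as the FPS of the `cv2.VideoWriter`, so playback matches wall-clock time (the output path and the 5-second capture window are illustrative):

```python
import time
import cv2

cap = cv2.VideoCapture(0)          # default webcam
frames, t0 = [], time.time()

# Collect a few seconds of (possibly slowly) processed frames
while time.time() - t0 < 5:
    ok, frame = cap.read()
    if not ok:
        break
    # ... run inference / draw boxes on `frame` here ...
    frames.append(frame)
cap.release()

if frames:
    # Use the measured processing rate, not the camera's nominal FPS
    real_fps = len(frames) / (time.time() - t0)
    h, w = frames[0].shape[:2]
    out = cv2.VideoWriter('result.mp4', cv2.VideoWriter_fourcc(*'mp4v'),
                          max(real_fps, 1.0), (w, h))
    for f in frames:
        out.write(f)
    out.release()
```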
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
Question
I was just doing a running test on my project. All this time I have been running inference on a video source and it works fine, but when I run inference live with a webcam, the inference preview is zoomed in. How do I zoom out the result for live detection?
Here is the comparison between file and live webcam inference.
Additional
No response