Why is the result zoomed in? #11888

Closed
1 task done
itsjustrafli opened this issue Jul 22, 2023 · 12 comments
Labels: question (Further information is requested), Stale

Comments

@itsjustrafli

Search before asking

Question

I was just running tests on my project. All this time I have been running inference on a video source and it works just fine, but when I run live inference with a webcam, the inference preview is zoomed in. How do I zoom out the result for live detection?

Here is a comparison between file and live webcam inference.

2023-07-22 (3)
2023-07-22

Additional

No response

itsjustrafli added the question label on Jul 22, 2023
@glenn-jocher
Member

@itsjustrafli hi there,

The zoomed-in result you are experiencing during live webcam inference might be due to the default resolution settings. By default, YOLOv5 resizes images to 640x640 pixels for better performance during training and inference. However, this might not be appropriate for all use cases.

To adjust the resolution and zoom out the result during live detection, you can modify the hyp['rect'] parameter in the yolo.py file. This parameter controls the aspect ratio of the output bounding box. By decreasing its value, you can decrease the zoom level.

Please note that altering the resolution might affect the detection accuracy and performance. It is recommended to experiment with different settings and find the right balance based on your specific requirements.

Best regards.

@itsjustrafli
Author

Thank you for the answer, @glenn-jocher! But I couldn't find a Python script named yolo.py anywhere in the yolov5 repo folder. Can you tell me the exact location of this script? That would be helpful. Thank you in advance.

@glenn-jocher
Member

@itsjustrafli you're welcome! I apologize for the confusion. The yolo.py file I mentioned earlier does not exist in the YOLOv5 repository. I apologize for any inconvenience caused.

To adjust the resolution and zoom level during live webcam inference, you can modify the view_img parameter in the detect.py script. By default, view_img is set to True which displays the inference results in a window. You can set it to False if you don't want the zoomed-in visualization.

Please note that modifying this parameter will disable the live detection visualization on your screen. However, you can still access the results saved in the output directory specified during inference.

If you have any further questions or need additional assistance, please let me know.

@itsjustrafli
Author

Hi, @glenn-jocher! I just read a similar issue about resizing the inference preview here: #877.

In that issue you suggested editing the dataset.py script, but I couldn't find a script named dataset.py in the yolov5/utils folder. What is this script named now? Has it been changed?

Once again, thanks in advance!

@glenn-jocher
Member

@itsjustrafli hi there!

I apologize for any confusion caused. It seems there was some outdated information in my previous response. The dataset.py script you mentioned does not exist in the current structure of the YOLOv5 repository. I apologize for any inconvenience this may have caused.

To adjust the resizing and zoom level of the inference preview, you can modify the imgsz parameter in the detect.py script instead. By default, imgsz is set to 640 which determines the input image size during inference. You can try decreasing this value to achieve a zoomed-out effect.
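
For reference, a minimal command-line sketch (assuming the current detect.py exposes the image size as --imgsz and the default webcam as --source 0; the 320 value is only an illustration):

python detect.py --weights yolov5s.pt --source 0 --imgsz 320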

Please note that modifying this parameter might affect the detection accuracy and performance, so it's important to experiment with different values to find the right balance for your specific use case.

If you have any further questions or need additional assistance, please let me know. Sorry for any confusion caused, and thank you for bringing this issue to my attention.

@itsjustrafli
Author

Hi, @glenn-jocher! It's me (again). I have another question, about inference time.

Long story short, I am working on a vehicle counting project that outputs a counting recap as CSV files, and I want to add a TIME_STAMP column containing the start time and end time of the inference.

How do I check what system time the inference starts and stops? Is there a log, or a variable I can refer to?

Once again, thanks in advance!

@glenn-jocher
Member

Hi @itsjustrafli!

Great to see you again! Regarding your question, you can check the start and stop time of the inference by utilizing the datetime module in Python.

At the beginning of your inference process, you can add the following code snippet to get the start time:

import datetime
start_time = datetime.datetime.now()

And at the end of the inference process, you can add this code snippet to get the end time:

end_time = datetime.datetime.now()

To calculate the total inference time, you can subtract the start time from the end time:

total_time = end_time - start_time

Now, you can add the total_time to your CSV file as the TIME_STAMP column.
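
For example, a minimal sketch of writing the time stamps into the recap file (the counting_recap.csv name and column order are hypothetical; only the standard csv and datetime modules are used):

import csv
import datetime

start_time = datetime.datetime.now()
# ... run inference and vehicle counting here ...
end_time = datetime.datetime.now()

# Append one row with the start/end time stamps and the elapsed time
with open('counting_recap.csv', 'a', newline='') as f:
    csv.writer(f).writerow([start_time.isoformat(), end_time.isoformat(), str(end_time - start_time)])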

I hope this helps! Let me know if you have any further questions or need any additional assistance.

@itsjustrafli
Author

Thanks for the answer. I have another question: how do I make the inference result video not sped up (like a time-lapse)? And what makes it sped up? Is it the model, or is there a setting that keeps it at normal speed? Enlighten me, please.

Thanks for the assistance, @glenn-jocher. Maybe I will ask you again and this will be a long thread :)

watch_v_NcaGFp76BTY.mp4

@glenn-jocher
Member

@itsjustrafli the speed of the inference result video is not determined by the model itself, but rather by the frame rate (FPS) at which the video is recorded or processed. It seems like your input video has a higher FPS compared to the output video, which results in a time-lapse effect.

To make the inference result video play at a normal speed, you can try using the --fps argument when running the detect.py script. This argument allows you to specify the FPS of the output video. For example, if you want the output video to match the FPS of the input video, you can set --fps to the FPS of your input video.

Here's an example command:

python detect.py --weights yolov5s.pt --source input.mp4 --fps 30

In this example, input.mp4 is the input video, and --fps 30 sets the output video to have a frame rate of 30 FPS. Adjust the value according to the frame rate of your input video.

I hope this helps! Feel free to ask any further questions, and I'll be glad to assist you.

@itsjustrafli
Author

itsjustrafli commented Aug 2, 2023

Thanks for the response, @glenn-jocher! But apparently there is no --fps flag in YOLOv5's detect.py.

@glenn-jocher
Member

@itsjustrafli hi there,

Apologies for any confusion caused. You're correct, the --fps flag is not available in the detect.py script of YOLOv5.

To control the output frame rate of the inference result video, you can modify the webcam.py script. Within this script, you can adjust the frames_write parameter to control the interval at which frames are saved to the output video. By default, frames_write is set to 10, which creates a time-lapse effect. Increasing this value will reduce the speed of the output video, creating a slower playback.

Please note that the webcam.py script is used specifically for live webcam inference. If you are performing inference on a video file rather than a live feed, you can use the detect.py script and use video editing software to adjust the playback speed of the resulting video, as the --fps flag is not available in the script itself.
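
If you prefer to script that adjustment instead of using a video editor, here is a minimal OpenCV sketch (the file paths and the 30 FPS target are hypothetical; it assumes opencv-python is installed) that rewrites the saved result video with a different playback frame rate:

import cv2

cap = cv2.VideoCapture('runs/detect/exp/result.mp4')  # hypothetical path to the saved result
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
target_fps = 30.0  # e.g. the frame rate of the original input video

out = cv2.VideoWriter('result_normal_speed.mp4', cv2.VideoWriter_fourcc(*'mp4v'), target_fps, (width, height))
while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)  # frames are copied unchanged; only the container frame rate differs

cap.release()
out.release()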

Let me know if you have any further questions or need any more assistance.

@github-actions
Contributor

github-actions bot commented Sep 2, 2023

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

github-actions bot added the Stale label on Sep 2, 2023
github-actions bot closed this as not planned on Sep 12, 2023