RuntimeError: The size of tensor a (24) must match the size of tensor b (20) at non-singleton dimension 2 #12946
Comments
Hey there! 😊 It sounds like you're encountering a shape mismatch during concurrent image processing. This often happens when images processed in parallel do not have consistent dimensions, or when batch processing expects uniform input sizes. When sending multiple images concurrently for detection, ensure all images are preprocessed to the same dimensions before they are fed into the model. If you're using batching, this step is crucial. Here's a quick tip on preprocessing your images to match dimensions:

```python
import cv2

# Example resizing function
def resize_image(image_path, size=(640, 640)):
    image = cv2.imread(image_path)
    return cv2.resize(image, size)  # Resizes image to the specified size

# Preprocess your images
preprocessed_images = [resize_image(img) for img in images_to_detect]
```

After preprocessing, you can proceed with your concurrent detection. Remember, consistency in input dimensions is key when processing multiple images. If the issue persists, ensure that the concurrent processing logic isn't altering the input tensors in a way that causes dimension mismatches. Hope this helps! Let us know if you have any more questions.
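To illustrate why uniform preprocessing matters, here is a minimal, self-contained sketch of concurrent detection. `FakeModel`, `resize`, and the sample images are hypothetical stand-ins (not the YOLOv5 API): the fake model simply rejects any input whose dimensions differ from what it was configured for, which mirrors the tensor size-mismatch RuntimeError above.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a detector: it rejects any input whose
# dimensions differ from the size it expects, mimicking the tensor
# size-mismatch RuntimeError.
class FakeModel:
    def __init__(self, size=(640, 640)):
        self.size = size

    def __call__(self, image):
        h, w = len(image), len(image[0])
        if (h, w) != self.size:
            raise RuntimeError(f"size mismatch: got {(h, w)}, expected {self.size}")
        return f"detections for {h}x{w} image"

def resize(image, size=(640, 640)):
    # Naive nearest-neighbour resize (use cv2.resize in real code)
    h, w = size
    src_h, src_w = len(image), len(image[0])
    return [[image[i * src_h // h][j * src_w // w] for j in range(w)]
            for i in range(h)]

# Two "images" of different sizes, as nested lists of pixel values
images = [[[0] * 480 for _ in range(360)],
          [[0] * 800 for _ in range(600)]]

model = FakeModel()
with ThreadPoolExecutor(max_workers=4) as pool:
    # Resizing every image to a common size before inference avoids
    # the mismatch; dropping resize() here reproduces the error.
    results = list(pool.map(model, (resize(img) for img in images)))
```

With the `resize` step removed, the two differently sized inputs would trigger the same kind of size-mismatch error the thread describes.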
I'm not using batch processing, but sending images simultaneously for detection. I did resize the rectangular images to 640 inside the function. Sometimes it runs without error. Besides, I realized that when detecting concurrently, inference takes longer compared to sequential processing. Why?
Hi again! 😊 When concurrently processing images without batching, it's crucial to maintain consistent image dimensions across all threads. Since you've resized your images to 640, ensure that aspect ratios are preserved to avoid any unexpected shape mismatches; rectangular images may still cause issues if the model expects square input. Regarding the longer inference times during concurrent processing: this can happen due to resource contention, where multiple threads compete for GPU/CPU resources, leading to inefficiencies. Depending on how you implement concurrency, there may also be overhead from thread management that affects performance. For a smoother experience with concurrent detections, consider:

- Limiting the number of worker threads so they don't oversubscribe the GPU/CPU.
- Batching images into a single forward pass instead of one model call per thread.
- Serializing access to a shared model instance (e.g. with a lock).

Keep in mind, hardware limitations can also play a role in how effectively you can run detections concurrently. Hope this helps clarify things! If there's anything else, feel free to reach out.
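The aspect-ratio point above can be made concrete with a letterbox (pad-to-square) sketch in pure Python. The function name and the padding value 114 follow common YOLOv5 convention, but this is an illustrative reimplementation, not the library's own `letterbox` utility; in practice you would use cv2 for the resize step.

```python
def letterbox(image, size=640, pad_value=114):
    """Scale the longer side to `size`, then pad the shorter side so the
    output is always size x size, preserving aspect ratio (a sketch of
    the letterboxing a detector may perform internally)."""
    src_h, src_w = len(image), len(image[0])
    scale = size / max(src_h, src_w)
    new_h, new_w = round(src_h * scale), round(src_w * scale)
    # Naive nearest-neighbour resize (use cv2.resize in real code)
    resized = [[image[min(int(i / scale), src_h - 1)][min(int(j / scale), src_w - 1)]
                for j in range(new_w)] for i in range(new_h)]
    # Pad right and bottom with the fill value to reach size x size
    for row in resized:
        row.extend([pad_value] * (size - new_w))
    resized.extend([[pad_value] * size for _ in range(size - new_h)])
    return resized
```

Because the aspect ratio is preserved before padding, every output has identical dimensions regardless of the input shape, which is what prevents the tensor-size mismatch.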
Hi again! Is it suitable to use thread-local storage or thread-safe inference to solve the unexpected shape mismatch issue?
@KnightInsight hi there! 😊 Absolutely, using thread-local storage (TLS) or ensuring thread-safe inference can be a practical approach to addressing shape mismatches in concurrent processing scenarios. This way, each thread maintains its own instance of the necessary data, avoiding interference between threads. For TLS in Python, you might consider using the `threading.local()` class:

```python
import threading

thread_local_data = threading.local()

def process_image(image):
    if not hasattr(thread_local_data, "model"):
        # Load the model per thread, ensuring it's isolated
        thread_local_data.model = your_model_loading_function()
    # Your image processing logic here
```

This ensures that each thread has its own version of the model (or any other data), reducing the risk of clashes and mismatches. Just remember, while TLS can help with managing data per thread, it's also important to ensure that all images are correctly preprocessed and consistent before being fed into the model to avoid shape mismatches. Hope that helps! Let us know if you need further assistance.
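Following the thread-local pattern above, here's a runnable sketch using `ThreadPoolExecutor`. `load_model` is a hypothetical stand-in for your real loader (e.g. `torch.hub.load`); the counter just demonstrates that each worker thread loads at most one model instance, no matter how many images it processes.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

thread_local = threading.local()
load_count = 0
load_lock = threading.Lock()

def load_model():
    # Hypothetical stand-in for torch.hub.load(...); counts how many
    # model instances actually get created across all threads.
    global load_count
    with load_lock:
        load_count += 1
    return object()  # placeholder "model"

def detect(image):
    # Lazily create one model per worker thread
    if not hasattr(thread_local, "model"):
        thread_local.model = load_model()
    return id(thread_local.model)  # which model instance served this image

with ThreadPoolExecutor(max_workers=2) as pool:
    model_ids = list(pool.map(detect, range(8)))
```

With 2 workers processing 8 images, at most 2 models are ever loaded; a naive per-image load would create 8, and an unguarded shared model could see interleaved calls from both threads.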
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help. For additional resources and information, please see the links below:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
Question
Hi, I sent multiple images concurrently for detection. I use torch.hub.load to load the same model and run detection in a thread function. However, I get the error "RuntimeError: The size of tensor a (24) must match the size of tensor b (20) at non-singleton dimension 2." I have used the latest yolov5 master version, but the issue still exists. May I know how to solve this issue? Thanks
Additional
No response