
RuntimeError: The size of tensor a (24) must match the size of tensor b (20) at non-singleton dimension 2 #12946

Closed
KnightInsight opened this issue Apr 21, 2024 · 6 comments
Labels: question (Further information is requested), Stale

Comments

@KnightInsight

Question

Hi, I send multiple images concurrently for detection. I use torch.hub.load to load the same model inside a thread function for detection. However, I get an error: "RuntimeError: The size of tensor a (24) must match the size of tensor b (20) at non-singleton dimension 2." I have used the latest yolov5 master version, but the issue still exists. May I know how to solve this? Thanks

@KnightInsight added the question label Apr 21, 2024
@glenn-jocher
Member

Hey there! 😊 It sounds like you're encountering a shape mismatch during concurrent image processing. This often happens when images processed in parallel do not have consistent dimensions, or when a batch is expected to contain uniformly sized inputs.

For sending multiple images concurrently for detection, ensure all images are preprocessed to have the same dimensions before they are fed into the model. If you're using batching, this step is crucial. Here's a quick tip on preprocessing your images to match dimensions:

import cv2

# Example resizing function: reads an image from disk and resizes it to a fixed
# square size (note that a plain cv2.resize does not preserve the aspect ratio)
def resize_image(image_path, size=(640, 640)):
    image = cv2.imread(image_path)
    return cv2.resize(image, size)  # Resizes the image to the specified (width, height)

# Preprocess your images (images_to_detect is your list of image paths)
preprocessed_images = [resize_image(path) for path in images_to_detect]

After preprocessing, you can proceed with your concurrent detection. Remember, consistency in input dimensions is key when processing multiple images. If the issue persists, ensure that the concurrent processing logic isn't altering the input tensors in a way that would cause dimension mismatches.
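
As a minimal sketch of the detection step after preprocessing (assuming the model was loaded once with torch.hub.load, as in your setup, and that preprocessed_images comes from the snippet above):

import torch

# Load the YOLOv5 model once; the returned AutoShape wrapper accepts a list of images
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# cv2 loads images as BGR, while the hub model expects RGB, hence the channel flip
results = model([im[..., ::-1] for im in preprocessed_images], size=640)
results.print()  # per-image detection summary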

Hope this helps! Let us know if you have any more questions.

@KnightInsight
Author

I'm not using batch processing; I send the images simultaneously for detection. I do resize to 640 inside the function, but the images are rectangular. Sometimes it runs without error. I also noticed that when detecting concurrently, inference takes longer than running sequentially. Why is that?

@glenn-jocher
Member

Hi again! 😊 When concurrently processing images without batching, it's crucial to maintain consistent image dimensions across all threads. Since you've resized your images to 640, ensure that aspect ratios are preserved to avoid any unexpected shape mismatches. Rectangular images may still cause issues if the model expects square input.

Regarding the longer inference times during concurrent processing: this can happen due to resource contention, where multiple threads compete for GPU/CPU resources, leading to inefficiencies. Also, depending on how you're implementing concurrency, there might be overhead from thread management that affects performance.

For a smoother experience with concurrent detections, consider:

  • Ensuring all preprocessing, including aspect ratio preservation, is consistently applied (see the letterbox sketch after this list).
  • Using a fixed input size that matches the model's expectations if not already doing so.
  • Exploring Python's concurrent.futures or multiprocessing to efficiently manage threads or processes, if you're not already.
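
A minimal letterbox-style sketch (an illustrative assumption about the preprocessing, not your actual code) that pads a rectangular image onto a square 640x640 canvas while preserving aspect ratio:

import cv2
import numpy as np

def letterbox_resize(image, size=640, pad_value=114):
    # Scale the longer side down to `size`, keeping the aspect ratio intact
    h, w = image.shape[:2]
    scale = size / max(h, w)
    resized = cv2.resize(image, (int(round(w * scale)), int(round(h * scale))))
    # Pad the remainder with a constant gray value so every output is size x size
    canvas = np.full((size, size, 3), pad_value, dtype=np.uint8)
    rh, rw = resized.shape[:2]
    top, left = (size - rh) // 2, (size - rw) // 2
    canvas[top:top + rh, left:left + rw] = resized
    return canvas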

Keep in mind, hardware limitations can also play a role in how effectively you can run detections concurrently.

Hope this helps clarify things! If there's anything else, feel free to reach out.

@KnightInsight
Author

Hi again! Is it suitable to use thread-local storage or thread-safe inference to solve the unexpected shape mismatch issue?

@glenn-jocher
Member

@KnightInsight hi there! 😊

Absolutely, using thread local storage (TLS) or ensuring thread-safe inference can be a practical approach to addressing shape mismatches in concurrent processing scenarios. This way, each thread can maintain its own instance of necessary data, avoiding interference between threads.

For TLS in Python, you might consider using the threading module's local storage to keep each thread's data isolated:

import threading

# A single threading.local() instance; each thread sees only its own attributes
thread_local_data = threading.local()

def process_image(image):
    if not hasattr(thread_local_data, "model"):
        # Load the model once per thread, ensuring it's isolated from other threads
        # (your_model_loading_function is a placeholder for however you load the model,
        # e.g. your torch.hub.load call)
        thread_local_data.model = your_model_loading_function()
    # Your image processing logic here, e.g.:
    return thread_local_data.model(image)

This ensures that each thread has its own version of the model (or any other data), reducing the risk of clashes and mismatches.

Just remember, while TLS can help with managing data per thread, it's also important to ensure that all images are correctly preprocessed and consistent before being fed into the model to avoid shape mismatches.
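
Putting it together, here is a minimal sketch (an illustration that assumes torch.hub.load for loading and concurrent.futures for the threads, not your exact code):

import threading
from concurrent.futures import ThreadPoolExecutor

import torch

thread_local_data = threading.local()

def detect(image_path):
    # Lazily load one YOLOv5 model per thread so threads never share mutable state
    if not hasattr(thread_local_data, "model"):
        thread_local_data.model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
    # The AutoShape wrapper handles resizing and letterboxing internally
    return thread_local_data.model(image_path, size=640)

# image_paths is assumed to be your list of image file paths
with ThreadPoolExecutor(max_workers=4) as executor:
    all_results = list(executor.map(detect, image_paths))

Note that loading a separate model per thread multiplies memory use, so a single shared model guarded by a lock, or simple batching, is often the more efficient choice in practice.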

Hope that helps! Let us know if you need further assistance.

@github-actions
Contributor

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

@github-actions bot added the Stale label May 28, 2024
@github-actions bot closed this as not planned (stale) Jun 7, 2024