
Add GradCAM integration - Make YOLOv5 Interpretable #10649

Closed · wants to merge 494 commits

Conversation

pourmand1376
Contributor

@pourmand1376 pourmand1376 commented Jan 2, 2023

Why this PR?

This PR adapts the GradCAM library to YOLOv5. This is needed because black-box models are not always acceptable: we need to know why a certain prediction was made. This is completely different from the feature visualization that is already implemented; this explains the model's results on a per-image basis. For example, we may want to know why the model detected this person, and which pixels are most responsible for that prediction. The result is a heatmap like the ones below.

EigenCAM layer -2:
[heatmap image]

EigenCAM layer -3:
[heatmap image]
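As background on the method itself (this is not code from the PR): EigenCAM collapses a layer's activation tensor into a 2D saliency map by projecting each spatial position's channel vector onto the activations' first principal component, so it needs no gradients or class scores. A minimal NumPy sketch of that computation on a dummy activation map:

```python
import numpy as np

def eigen_cam(activations: np.ndarray) -> np.ndarray:
    """Collapse a (C, H, W) activation tensor to an (H, W) saliency map
    by projecting each spatial location's channel vector onto the first
    principal component of the activations (the EigenCAM idea)."""
    c, h, w = activations.shape
    flat = activations.reshape(c, h * w).T          # (H*W, C) feature matrix
    flat = flat - flat.mean(axis=0)                 # center the features
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    cam = flat @ vt[0]                              # project onto 1st principal component
    cam = np.maximum(cam, 0).reshape(h, w)          # keep the positive part
    if cam.max() > 0:
        cam = cam / cam.max()                       # normalize to [0, 1]
    return cam

rng = np.random.default_rng(0)
cam = eigen_cam(rng.standard_normal((8, 20, 20)))
print(cam.shape)
```

In the actual PR, the activations would come from a hooked YOLOv5 backbone layer (e.g. layer -2 or -3 as in the images above) and the map would be upsampled and blended over the input image.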

Current State

Currently, I've implemented EigenCAM and it works well. I still have to write documentation explaining how it works.

Related Issues and Links

This is a long-requested feature.
YOLOv5 Related issues:

* [Visualize Features in Yolov5 #8717](https://github.com/ultralytics/yolov5/issues/8717)
* [Grad-Cam for yolov5-5.0 #5863](https://github.com/ultralytics/yolov5/issues/5863)
* [How would I call individual layers of the network? yolov5 #4575](https://github.com/ultralytics/yolov5/issues/4575)
* [Interpreting model YoloV5 by Grad-cam #2065](https://github.com/ultralytics/yolov5/issues/2065)

Related Issues in other repositories:

* [Possibly inverted heatmaps for Score-CAM for YOLOv5 jacobgil/pytorch-grad-cam#364](https://github.com/jacobgil/pytorch-grad-cam/issues/364)
* [[question] Support Grad-CAM in MMYOLO's yolov5 jacobgil/pytorch-grad-cam#359](https://github.com/jacobgil/pytorch-grad-cam/issues/359)
* [YOLOv5 and ScoreCAM jacobgil/pytorch-grad-cam#242](https://github.com/jacobgil/pytorch-grad-cam/issues/242)

Useful Links:

* https://github.com/pooya-mohammadi/yolov5-gradcam: This one is actually fine, but it is too old, and it implements YOLO from scratch rather than adding the functionality in a way that works with later versions.
* [Tutorial: Class Activation Maps for Object Detection with Faster RCNN — Advanced AI explainability with pytorch-gradcam](https://jacobgil.github.io/pytorch-gradcam-book/Class%20Activation%20Maps%20for%20Object%20Detection%20With%20Faster%20RCNN.html)
* [EigenCAM for YOLO5 — Advanced AI explainability with pytorch-gradcam](https://jacobgil.github.io/pytorch-gradcam-book/EigenCAM%20for%20YOLO5.html)

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

WARNING ⚠️ this PR is very large, summary may not cover all changes.

🌟 Summary

This PR introduces new Makefile options and Jupyter notebook improvements for interpretability demos.

📊 Key Changes

  • Added run_interpretability and run_interpretability_old commands to Makefile for running interpretability scripts.
  • Included a demo Jupyter notebook with detailed code execution steps and outputs for interpretability methods.
  • Updated .pre-commit-config.yaml to exclude the demo notebook from the codespell check.
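For illustration only (the PR diff is not shown here), the two Makefile targets the summary names might look like the sketch below. The `explainer/explainer.py` path matches the script path that appears in a traceback later in this thread; the flag names are assumptions, not the PR's actual code.

```make
# Hypothetical sketch - flag names are assumptions, not the PR's code.
run_interpretability:
	python explainer/explainer.py --source data/images/zidane.jpg --method EigenCAM

run_interpretability_old:
	python explainer/explainer.py --source data/images/zidane.jpg --method GradCAM
```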

🎯 Purpose & Impact

  • Provides users with Makefile shortcuts for executing interpretability code, streamlining the process.
  • Offers a clear, demonstrable notebook that guides users through using GradCAM and other interpretability tools in a practical setting.
  • Enhances code quality by bypassing irrelevant spell checks on a demonstration notebook without affecting the rest of the codebase.

@hlmhlr

hlmhlr commented Jun 23, 2023

Hi @pourmand1376, your efforts toward making YOLOv5 interpretable are much appreciated!
Could you please share the code you used to implement EigenCAM with YOLOv5, which worked perfectly?
Many thanks in advance!


@glenn-jocher
Member

Hi @hlmhlr,

Thank you for your efforts in implementing EigenCAM with YOLOv5. We appreciate your dedication to making YOLOv5 interpretable.

The community is looking forward to seeing the code you used to implement EigenCAM and its perfect functioning. It would be great if you could share the code with us.

Your contribution will add value to YOLOv5 and help users understand the reasoning behind model predictions on a per-image basis. The heatmap examples you shared look promising and will greatly enhance interpretability.

We understand that you are still working on documenting the implementation. It would be helpful if you can complete the documentation to provide a better understanding of how EigenCAM works with YOLOv5.

Thank you once again for your efforts, and we look forward to your code contribution.

@pourmand1376
Contributor Author

Hi @hlmhlr,
Just use this link and change GradCAM to EigenCAM. It works well.

@glenn-jocher
Member

Hi @pourmand1376,

You can use this link and simply change the GradCAM implementation to EigenCAM. This should allow you to use EigenCAM with YOLOv5 and achieve the desired results.

Please let me know if you have any further questions or need any additional assistance.

Best,

@hlmhlr

hlmhlr commented Jun 25, 2023

Hi @glenn-jocher and @pourmand1376, I went through the code at the link you shared, and it's working fine. Many thanks for your input.

Further, I have slightly modified the code of @pourmand1376's explainer.py from the link to make the CAM output visualization more robust, taking the idea from this link. Mainly, the following changes were made:

  • For object detection, removing heatmap data outside the bounding boxes and rescaling the heatmap within each box.
  • Drawing the bounding boxes only for specific class(es) if given via an argument, otherwise for all classes.
  • Showing or hiding the class labels. For example, in an aerial image with a large number of objects, visualizing the labels would make it hard to analyze the heatmaps, so hiding them may be preferable in that case.
  • Concatenating the images together (original image, CAM image, CAM image with boxes).
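The first change above (masking the heatmap outside the detection boxes and rescaling it within each box) can be sketched in NumPy roughly as follows. `renormalize_cam_in_boxes` is an illustrative helper, not the actual code from the linked explainer.py:

```python
import numpy as np

def renormalize_cam_in_boxes(cam: np.ndarray, boxes) -> np.ndarray:
    """Zero out heatmap values outside the detection boxes and rescale
    the heatmap independently inside each (x1, y1, x2, y2) box, so every
    detection gets the full contrast range."""
    out = np.zeros_like(cam, dtype=np.float32)
    for x1, y1, x2, y2 in boxes:
        patch = cam[y1:y2, x1:x2].astype(np.float32)
        lo, hi = patch.min(), patch.max()
        if hi > lo:
            patch = (patch - lo) / (hi - lo)        # rescale inside this box
        # overlapping boxes keep the stronger activation
        out[y1:y2, x1:x2] = np.maximum(out[y1:y2, x1:x2], patch)
    return out

cam = np.arange(36, dtype=np.float32).reshape(6, 6) / 35.0
result = renormalize_cam_in_boxes(cam, [(1, 1, 4, 4)])
print(result[0, 0], result[3, 3])  # outside box -> 0.0, box maximum -> 1.0
```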

I am attaching the snapshots below:

  • With single class bus:
  • With single class person:
  • Both classes together

I believe the changes are significant and beneficial to the YOLOv5 project as a whole, and they should be integrated along with the pull request of @pourmand1376. @glenn-jocher and @pourmand1376, please suggest how to proceed.

Many thanks and kind regards,

@pourmand1376
Contributor Author

pourmand1376 commented Jul 6, 2023

Hi, the changes you made are interesting. I am flattered that someone has continued what I've done.

I think it may be better to port these changes to YOLOv7, as I assume YOLOv5 is no longer actively maintained.

@hlmhlr

hlmhlr commented Jul 6, 2023

> Hi, the changes you made are interesting. I am flattered that someone has continued what I've done.
>
> I think it may be better to port these changes to YOLOv7, as I assume YOLOv5 is no longer actively maintained.

Dear @pourmand1376,

I have created the pull request here along with your contribution. You can check the details.

Sure, we can do it for YOLOv7; however, whether YOLOv5 is still maintained and whether the pull request can be merged there should be confirmed by @glenn-jocher.

@nyj-ocean

@pourmand1376
I'm using your code: https://github.com/pourmand1376/yolov5/tree/add_gradcam
When method == 'EigenCAM', it runs without problems, but when method == 'GradCAM' or method == 'GradCAM++', I hit the following error:

Adding AutoShape...
unsupported operand type(s) for ** or pow(): 'NoneType' and 'int'
Traceback (most recent call last):
  File "explainer/explainer.py", line 491, in <module>
    main(opt)
  File "explainer/explainer.py", line 486, in main
    run(**vars(opt))
  File "explainer/explainer.py", line 432, in run
    cv2.imwrite(save_path, cam_image)
TypeError: Expected Ptr<cv::UMat> for argument 'img'

@glenn-jocher
Member

@nyj-ocean the traceback suggests the CAM computation itself failed for GradCAM and GradCAM++ (note the initial "unsupported operand type(s) for ** or pow(): 'NoneType' and 'int'" message), so cam_image was likely None by the time cv2.imwrite was called, which then raised the TypeError.

To investigate further, could you please provide more details about the input data and the specific steps you took when running the code? This will help us identify the root cause of the issue and provide a suitable solution.

In the meantime, please ensure that you have the necessary dependencies installed and that your input data is in the correct format for the GradCAM or GradCAM++ method.

Looking forward to your response so that we can assist you further.
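One defensive pattern for this failure mode (illustrative only, not code from the PR): validate the CAM result before handing it to cv2.imwrite, so a None result from the CAM backend fails with a clear message instead of the opaque `Expected Ptr<cv::UMat>` error. The helper below is hypothetical and uses only NumPy:

```python
import numpy as np

def ensure_writable_image(img, name="cam_image"):
    """Fail early with a clear message instead of letting cv2.imwrite
    raise 'Expected Ptr<cv::UMat>' on a None or float-typed result."""
    if img is None:
        raise ValueError(f"{name} is None; the CAM backend likely failed upstream")
    arr = np.asarray(img)
    if arr.dtype != np.uint8:
        # scale float heatmaps in [0, 1] to the 0-255 range cv2 expects
        arr = np.clip(arr * 255.0 if arr.max() <= 1.0 else arr, 0, 255).astype(np.uint8)
    return arr

ok = ensure_writable_image(np.ones((4, 4), dtype=np.float32) * 0.5)
print(ok.dtype, int(ok[0, 0]))
```

Calling it just before the `cv2.imwrite(save_path, cam_image)` line in explainer.py would surface the real upstream failure rather than the OpenCV type error.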

@github-actions
Contributor

👋 Hello there! We wanted to let you know that we've decided to close this pull request due to inactivity. We appreciate the effort you put into contributing to our project, but unfortunately, not all contributions are suitable or aligned with our product roadmap.

We hope you understand our decision, and please don't let it discourage you from contributing to open source projects in the future. We value all of our community members and their contributions, and we encourage you to keep exploring new projects and ways to get involved.

For additional resources and information, please see the links below:

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

@github-actions github-actions bot added the Stale label Oct 17, 2023
@github-actions github-actions bot closed this Nov 16, 2023
7 participants