
Dear author, can you provide a visualization scheme for YOLOv5 feature maps during detect.py? Thank you! #2259

Closed
113HQ opened this issue Feb 21, 2021 · 16 comments · Fixed by #3804
Labels
enhancement New feature or request Stale

Comments

113HQ commented Feb 21, 2021


@113HQ 113HQ added the enhancement New feature or request label Feb 21, 2021

github-actions bot commented Feb 21, 2021

👋 Hello @113HQ, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher

@113HQ well, what features do you want to visualize exactly? The models typically have hundreds of layers and each layer has hundreds of feature maps.


113HQ commented Feb 21, 2021 via email


glenn-jocher commented Feb 22, 2021

@113HQ yes, but this is what I mean: within each of these 3 stages (17, 20 and 23) there are very many feature maps. If we take yolov5l.yaml for example, layer 17 has 256 feature maps, layer 20 has 512 and layer 23 has 1024. So there are almost 1800 feature maps that you can look at in just the YOLOv5l output layers (for a single input image).

https://github.com/ultralytics/yolov5/blob/master/models/yolov5l.yaml
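As a quick sanity check on those numbers (pure arithmetic using the channel counts quoted above, not an actual model inspection):

```python
# Feature maps per YOLOv5l output layer, as quoted above (layer index -> channels)
channels = {17: 256, 20: 512, 23: 1024}

total = sum(channels.values())
print(total)  # 1792 -- "almost 1800" feature maps for a single input image
```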


113HQ commented Feb 22, 2021 via email


glenn-jocher commented Feb 22, 2021

@113HQ well, yes, it's certainly possible to build a visualizer. You could make a 4x3 matplotlib grid, put the original image in the top-left subplot, then in each of the rows put the first 3 feature maps from the output layers. You could do this by updating the model forward method to capture the outputs at the layers you're interested in; then it's just a matter of plotting and displaying them nicely. The place you'd capture the feature maps is here. I don't have a lot of free time to work on this, but if you want to get started and submit a PR that would be great!

yolov5/models/yolo.py

Lines 120 to 136 in 095d2c1

```python
def forward_once(self, x, profile=False):
    y, dt = [], []  # outputs
    for m in self.model:
        if m.f != -1:  # if not from previous layer
            x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f]  # from earlier layers

        if profile:
            o = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 if thop else 0  # FLOPS
            t = time_synchronized()
            for _ in range(10):
                _ = m(x)
            dt.append((time_synchronized() - t) * 100)
            print('%10.1f%10.0f%10.1fms %-40s' % (o, m.np, dt[-1], m.type))

        x = m(x)  # run
        y.append(x if m.i in self.save else None)  # save output
```
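One way to capture intermediate outputs without editing forward_once at all is a PyTorch forward hook. A minimal sketch on a toy model (the layers and indices here are illustrative stand-ins; for YOLOv5 you would hook model.model[17], model.model[20] and model.model[23]):

```python
import torch
import torch.nn as nn

# Toy stand-in for a detection backbone.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
)

features = {}  # layer index -> captured output tensor


def make_hook(i):
    def hook(module, inputs, output):
        features[i] = output.detach()
    return hook


# Register forward hooks on the layers of interest.
for i in (0, 2):
    model[i].register_forward_hook(make_hook(i))

x = torch.zeros(1, 3, 64, 64)
with torch.no_grad():
    model(x)

for i, f in features.items():
    print(i, tuple(f.shape))  # captured feature-map shapes, ready for plotting
```

Once the tensors are captured, building the 4x3 matplotlib grid described above is just indexing the first few channels of each tensor and calling imshow on them.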

@glenn-jocher

@113HQ BTW, a single feature map is, in my opinion, a shallow set of information: you are looking at a 2D spatial slice, but you are not observing the relationships across the feature space (as the convolutions do).

I guess an analogy is that you would be viewing the R, G, B layers of a color image by themselves, when it helps to view them together to get the complete picture.


113HQ commented Feb 22, 2021 via email

@JiaLim98

Hi @glenn-jocher,

Do you have any image for the entire architecture of YOLOv5? For example:
(example architecture diagram attached, obtained from this link)

If you do not have a nicely plotted one, a hand-sketched version will do as well; I can redraw it cleanly myself. I have looked through all the YOLOv5 publications, but none of them includes an image of the architecture. Getting one from you would be the most straightforward route; otherwise, if I make one myself, I risk misunderstanding the design and drawing it wrongly. Hope you have one :D.

Many thanks for the great work on YOLOv5!


zhiqwang commented Feb 23, 2021

Hi @JiaLim98 ,

Maybe you could check #280 and this (yolov5s release 3.1 specifically).

@JiaLim98

Hi @zhiqwang,

Thank you so much! Do you have one for v4.0, since my work uses only v4.0? To my knowledge, there are major architectural changes between v3.1 and v4.0, right?

@JiaLim98

I found one following the blog you attached, here. Is this correct?

Just to double confirm, @glenn-jocher, do you agree with the diagrams plotted? Do those directly reflect what YOLOv5 is doing?


glenn-jocher commented Feb 23, 2021

@JiaLim98 yes, there were architectural changes between v3.1 and v4.0: the C3() modules replaced CSPBottleneck() modules, and SiLU() replaced HardSwish(). We actually haven't plotted the model ourselves; we typically just use the model yaml files as a first-order approximation of the structure, use Netron to view the block diagram (it sometimes looks better with ONNX-exported models than with the PyTorch ones), and sometimes the interactive TF graph view is useful too (when it works):

yolov5/train.py

Line 324 in 7a6870b

# tb_writer.add_graph(model, imgs) # add model to tensorboard

The linked models are good too, they seem correct.

@github-actions

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@WANGCHAO1996

Ok, I'll have a try. Thanks.


Hello, have you realized the visualization of feature map? Thank you

@glenn-jocher

glenn-jocher commented Jun 28, 2021

@WANGCHAO1996 @zhiqwang @113HQ @JiaLim98 good news 😃! Feature map visualization was added ✅ in PR #3804 by @Zigars today. This allows for visualizing feature maps from any part of the model from any function (i.e. detect.py, train.py, test.py). Feature maps are saved as *.png files in runs/features/exp directory. To turn on feature visualization set feature_vis=True in the model forward method and define the layer you want to visualize (default is SPP layer).

yolov5/models/yolo.py

Lines 158 to 160 in 20d45aa

```python
if feature_vis and m.type == 'models.common.SPP':
    feature_visualization(x, m.type, m.i)
```
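For readers who want a standalone picture of what such a helper does, here is a rough sketch of a feature-map dump (the function name, grid layout and save path below are illustrative assumptions, not the actual implementation from PR #3804):

```python
import math
from pathlib import Path

import matplotlib
matplotlib.use('Agg')  # headless backend so files can be saved without a display
import matplotlib.pyplot as plt
import torch


def feature_visualization_sketch(x, stage, save_dir='runs/features/exp', n=16):
    """Save the first n channels of feature tensor x (B, C, H, W) as a PNG grid."""
    save_dir = Path(save_dir)
    save_dir.mkdir(parents=True, exist_ok=True)
    batch, channels, height, width = x.shape
    n = min(n, channels)
    cols = math.ceil(math.sqrt(n))
    rows = math.ceil(n / cols)
    fig, axes = plt.subplots(rows, cols, figsize=(cols * 2, rows * 2))
    for i, ax in enumerate(axes.flat):
        ax.axis('off')
        if i < n:
            ax.imshow(x[0, i].cpu().numpy(), cmap='gray')  # one channel per subplot
    out = save_dir / f'{stage}_features.png'
    fig.savefig(out, dpi=100, bbox_inches='tight')
    plt.close(fig)
    return out


# Example: dump 16 channels of a random (1, 32, 20, 20) feature tensor.
path = feature_visualization_sketch(torch.rand(1, 32, 20, 20), 'layer_8_SPP')
print(path)
```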

To receive this update:

  • Git – git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – view the updated notebooks (Colab, Kaggle)
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

(attached image: layer_8_SPP_features)

@glenn-jocher glenn-jocher linked a pull request Jun 28, 2021 that will close this issue