
How to configure channel size #1739

Closed
thhart opened this issue Dec 19, 2020 · 12 comments · Fixed by #1741
Labels
question Further information is requested

Comments

@thhart

thhart commented Dec 19, 2020

I checked the training tutorial but could not find a central configuration setting to specify the input channel size for training. Is this possible within the yaml files, or is it necessary to change something in yolo.py, for instance?
I tried putting ch into the yaml, but it does not appear to be parsed.

@thhart thhart added the question Further information is requested label Dec 19, 2020
@glenn-jocher
Member

glenn-jocher commented Dec 19, 2020

@thhart you can create a YOLOv5 model with a non-default channel size via PyTorch Hub. See the PyTorch Hub tutorial:
https://docs.ultralytics.com/yolov5

The training dataloader defaults to 3-channel images; you'd have to modify it manually for your needs:

class LoadImagesAndLabels(Dataset): # for training/testing
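As a hedged sketch of what such a modification involves (the helper below is hypothetical, not part of the YOLOv5 source): the dataloader reads images as HxWxC arrays (or HxW for grayscale) and must emit CxHxW arrays matching the model's channel count.

```python
import numpy as np

# Hypothetical helper (not from the YOLOv5 source): normalize a loaded image
# to C x H x W with the channel count the model expects.
def to_chw(img: np.ndarray, ch: int = 3) -> np.ndarray:
    if img.ndim == 2:            # grayscale H x W -> H x W x 1
        img = img[..., None]
    if img.shape[2] != ch:
        raise ValueError(f"expected {ch} channels, got {img.shape[2]}")
    return np.ascontiguousarray(img.transpose(2, 0, 1))  # HWC -> CHW

print(to_chw(np.zeros((640, 640), dtype=np.uint8), ch=1).shape)  # (1, 640, 640)
```

The analogous change in LoadImagesAndLabels would be reading with a grayscale flag and adjusting the HWC-to-CHW conversion accordingly.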

@glenn-jocher
Member

@thhart by the way, yaml['ch'] fields are not currently used, but you could modify yolo.py directly to use them, similar to how yaml['nc'] is used or overridden.

yolov5/models/yolo.py

Lines 69 to 87 in ab0db8d

class Model(nn.Module):
    def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None):  # model, input channels, number of classes
        super(Model, self).__init__()
        if isinstance(cfg, dict):
            self.yaml = cfg  # model dict
        else:  # is *.yaml
            import yaml  # for torch hub
            self.yaml_file = Path(cfg).name
            with open(cfg) as f:
                self.yaml = yaml.load(f, Loader=yaml.FullLoader)  # model dict

        # Define model
        if nc and nc != self.yaml['nc']:
            logger.info('Overriding model.yaml nc=%g with nc=%g' % (self.yaml['nc'], nc))
            self.yaml['nc'] = nc  # override yaml value
        self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch])  # model, savelist, ch_out
        self.names = [str(i) for i in range(self.yaml['nc'])]  # default names
        # print([x.shape for x in self.forward(torch.zeros(1, ch, 64, 64))])

@glenn-jocher
Member

glenn-jocher commented Dec 19, 2020

@thhart I would perhaps update yolo.py L84 to the following, to allow you to use a ch field in the yaml.

        self.model, self.save = parse_model(deepcopy(self.yaml), ch=[self.yaml.get('ch', ch)])  # model, savelist

@glenn-jocher glenn-jocher linked a pull request Dec 19, 2020 that will close this issue
@glenn-jocher
Member

@thhart PR #1741 is merged now, adding support for optional input channel definition in model yaml files, i.e.

# parameters
nc: 80  # number of classes
ch: 10  # input channels  <------------------------
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple

Tested with 1, 3 and 10 channel models.
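The override itself is simple. As a minimal sketch of the behavior (a toy function operating on a plain dict, not the real Model or parse_model), an optional ch key in the model yaml takes precedence over the constructor default:

```python
# Minimal sketch of the override behavior (toy function, not the real code):
# an optional 'ch' key in the model yaml wins over the default of 3.
def input_channels(model_yaml: dict, default: int = 3) -> int:
    return model_yaml.get('ch', default)

print(input_channels({'nc': 80}))            # 3  (no 'ch' key -> default)
print(input_channels({'nc': 80, 'ch': 10}))  # 10 ('ch' key overrides)
```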

@wangfurong123

Hello, I used the YOLOv5 model to train on single-channel images, but it was unsuccessful. How can I modify the dataloader part of the datasets.py file to train on single-channel images? I sincerely hope you can answer, thank you.

@glenn-jocher
Member

@wangfurong123 dataloaders are in datasets.py:

yolov5/utils/datasets.py

Lines 377 to 378 in 7a39803

class LoadImagesAndLabels(Dataset):
    # YOLOv5 train_loader/val_loader, loads images and labels for training and validation

@xiaoche-24

xiaoche-24 commented Sep 18, 2023

I have successfully obtained a six-channel input pt model. How can I convert it to an onnx model? What modifications need to be made to export.py? I ran export.py directly and got the following error:
Traceback (most recent call last):
  File "export.py", line 653, in <module>
    main(opt)
  File "export.py", line 648, in main
    run(**vars(opt))
  File "/home/lixiaojun/anaconda3/envs/yolov5/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "export.py", line 549, in run
    y = model(im)  # dry runs
  File "/home/lixiaojun/anaconda3/envs/yolov5/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/lixiaojun/workfile/yolov5-master/models/yolo.py", line 245, in forward
    return self._forward_once(x, profile, visualize)  # single-scale inference, train
  File "/home/lixiaojun/workfile/yolov5-master/models/yolo.py", line 121, in _forward_once
    x = m(x)  # run
  File "/home/lixiaojun/anaconda3/envs/yolov5/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/lixiaojun/workfile/yolov5-master/models/common.py", line 3013, in forward_fuse
    return self.act(self.conv(x))
  File "/home/lixiaojun/anaconda3/envs/yolov5/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/lixiaojun/anaconda3/envs/yolov5/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/lixiaojun/anaconda3/envs/yolov5/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [32, 3, 6, 6], expected input[1, 6, 640, 640] to have 3 channels, but got 6 channels instead

@glenn-jocher
Member

Hi @xiaoche-24! To convert a six-channel PyTorch model to an ONNX model, you need to make some modifications to the export.py script.

You received this error because the weights of one of the layers in the loaded model have shape [32, 3, 6, 6], meaning that layer expects a 3-channel input, while your input has 6 channels.

To resolve this, modify the export.py script: update the line where the dummy export input is defined so it matches your six-channel input. You can also modify the parse_model function in models/yolo.py to accept a custom number of input channels.

Make sure to change the default value of ch to 6 (or the number of channels in your input) and re-run the export script.

Let me know if you have any further questions or need additional assistance!
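A hedged sketch of the kind of change described above (the helper below is hypothetical, not the actual export.py code): derive the dummy input's channel count from the model's first convolution instead of hard-coding 3, so the export dry run no longer fails on a 6-channel model.

```python
import torch
import torch.nn as nn

# Hypothetical helper (not from export.py): build the export dummy input with
# the channel count of the model's first Conv2d instead of a hard-coded 3.
def make_dummy_input(model: nn.Module, imgsz=(640, 640), batch=1) -> torch.Tensor:
    first_conv = next(m for m in model.modules() if isinstance(m, nn.Conv2d))
    return torch.zeros(batch, first_conv.in_channels, *imgsz)

# Toy 6-channel stand-in for a real model, to show the shape logic.
six_ch = nn.Sequential(nn.Conv2d(6, 32, 6, 2, 2), nn.SiLU())
im = make_dummy_input(six_ch)
print(im.shape)  # torch.Size([1, 6, 640, 640])
six_ch(im)       # dry run succeeds with the matching channel count
```

The same tensor would then be passed to torch.onnx.export as the example input.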

@wq247726404


Hi, can I message you on Twitter to ask some questions?

@xiaoche-24


Sorry, I don't have Twitter. You can ask your question directly here.

@wq247726404


Do you have WeChat, brother?

@glenn-jocher
Member

@wq247726404 I'm here to help with any questions you may have regarding YOLOv5! Feel free to ask here and I'll do my best to assist you.
