
changing yolo input dimensions using coco dataset #569

Closed
yosefyehoshua opened this issue Jul 30, 2020 · 7 comments
Labels: question (Further information is requested), Stale

@yosefyehoshua

Hi, how can I change YOLO to take input of size WxHx6xN instead of WxHx3xN?

I want to modify YOLO to take 6-channel images instead of 3; modifying the first layer should be enough. Is there an easy way to do it? Thanks.
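For reference, here is a minimal sketch of what swapping the first layer could look like in plain PyTorch. It assumes a YOLOv5-style model whose stem convolution is reachable at model.model[0].conv; that attribute path and the weight-copy scheme are assumptions for illustration, not the official Ultralytics API:

    import torch
    import torch.nn as nn

    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

    old_conv = model.model[0].conv                 # original 3-channel stem conv (assumed path)
    new_conv = nn.Conv2d(6, old_conv.out_channels,
                         kernel_size=old_conv.kernel_size,
                         stride=old_conv.stride,
                         padding=old_conv.padding,
                         bias=old_conv.bias is not None)

    # Optionally reuse the pretrained 3-channel weights for both halves of the 6-channel input.
    with torch.no_grad():
        new_conv.weight[:, :3] = old_conv.weight
        new_conv.weight[:, 3:] = old_conv.weight

    model.model[0].conv = new_conv                 # the rest of the network is unchanged

(As the later comments show, the torch.hub loader also accepts a channels argument that handles this without manual surgery.)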

@yosefyehoshua yosefyehoshua added the question Further information is requested label Jul 30, 2020
@github-actions
Contributor

github-actions bot commented Jul 30, 2020

Hello @yosefyehoshua, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook (Open in Colab), Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom model or data training question, please note Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

  • Cloud-based AI systems operating on hundreds of HD video streams in realtime.
  • Edge AI integrated into custom iOS and Android apps for realtime 30 FPS video inference.
  • Custom data training, hyperparameter evolution, and model exportation to any destination.

For more information please visit https://www.ultralytics.com.

@glenn-jocher
Member

glenn-jocher commented Jul 30, 2020

@yosefyehoshua torch hub tutorial has some info on varying input channels: https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading

@yosefyehoshua
Author

yosefyehoshua commented Jul 31, 2020

Thanks for the answer. I'm now trying to train the model from torch.hub:
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=False, classes=10, channels=6)

but while training:

def train_m(model, criterion, optimizer, scheduler, dataloaders, datasets_size, num_epochs=25, phase='train'):
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model.to(device)  # keep the model on the same device as the inputs
    for epoch in range(num_epochs):
        model.train()
        i = 0
        for imgs, annotations in dataloaders['train']:
            print("imgs type: {}".format(type(imgs)))
            i += 1
            imgs = imgs.to(device, non_blocking=True).float() / 255.0
            annotations = [{k: v.to(device) for k, v in t.items()} for t in annotations]
            loss_dict = model(imgs, annotations)
            losses = sum(loss for loss in loss_dict.values())

            optimizer.zero_grad()
            losses.backward()
            optimizer.step()

            print(f'Iteration: {i}/{len(dataloaders["train"])}, Loss: {losses}')

my dataloader:

    train_ds = CustomCoco(root=train_coco_dir,
                          annotation=train_anno)
    val_ds = CustomCoco(root=val_coco_dir,
                        annotation=val_anno)

    def collate_fn(batch):
        return tuple(zip(*batch))

    train_batch_size = 4

    d_loader_train = torch.utils.data.DataLoader(train_ds,
                                            batch_size=train_batch_size,
                                            shuffle=True,
                                            num_workers=4,
                                            collate_fn=collate_fn)

    d_loader_val = torch.utils.data.DataLoader(val_ds,
                                            batch_size=train_batch_size,
                                            shuffle=True,
                                            num_workers=4,
                                            collate_fn=collate_fn)
    dataloaders = {'train': d_loader_train, 'val': d_loader_val}

    datasets_size = {'train': len(train_ds), 'val': len(val_ds)}

I'm getting an error because the imgs coming out of my dataloader are a 'tuple', not a tensor:

imgs = imgs.to(device, non_blocking=True).float() / 255.0
AttributeError: 'tuple' object has no attribute 'to'

what am I doing wrong?

@glenn-jocher
Member

@yosefyehoshua only torch tensors can be sent to a device. You've tried to do the same with a python tuple.
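For example, one way around it (assuming every item in the batch is already a tensor of the same shape) is to stack the tuple into a single batch tensor before moving it to the device:

    # imgs comes back from the custom collate_fn as a tuple of per-image tensors;
    # stack them into one [N, C, H, W] tensor so .to(device) works
    imgs = torch.stack(imgs, dim=0).to(device, non_blocking=True).float() / 255.0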

@yosefyehoshua
Author

@yosefyehoshua only torch tensors can be sent to a device. You've tried to do the same with a python tuple.

yes, because I need to use this function in my dataloader, or else I get an error from collate.py:
def collate_fn(batch): return tuple(zip(*batch))
Can you think of another solution?

I'm basically trying to train this:
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=False, classes=10, channels=6)
on a custom COCO dataset with 6-channel images at a fixed 300x300 size.
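Since every image is a fixed 300x300 size, one possible workaround (a sketch, assuming CustomCoco returns each image as a 6x300x300 tensor) is a collate_fn that stacks the images into one batch tensor and keeps the annotation dicts in a plain list:

    import torch

    def collate_fn(batch):
        # batch is a list of (image_tensor, annotation_dict) pairs from CustomCoco;
        # fixed-size images can be stacked into a single [N, 6, 300, 300] tensor,
        # while the variable-length annotations stay as a list of dicts
        imgs, annotations = zip(*batch)
        return torch.stack(imgs, dim=0), list(annotations)

With that, the training loop's imgs.to(device) call works unchanged and annotations can still be iterated per image.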

@github-actions
Contributor

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
