
Add nn.SiLU inplace in attempt_load() #1940

Merged 3 commits on Jan 15, 2021

Conversation

xlorne
Contributor

@xlorne xlorne commented Jan 14, 2021

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Enhancement of activation function compatibility in model loading.

📊 Key Changes

  • 🚀 Added nn.SiLU to the list of activation functions whose inplace attribute is set to True during model loading (see the sketch after this list).

🎯 Purpose & Impact

  • 💡 Ensures compatibility with PyTorch 1.7.0, allowing models with SiLU activation functions to be loaded without issues.
  • 🏃‍♂️ Potentially improves in-place operation performance, reducing memory overhead.
  • 👥 Users employing the SiLU activation will experience smoother integrations and upgrades.
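
For context, this change lands in the compatibility-update loop that attempt_load() applies to a freshly loaded checkpoint. Below is a minimal sketch of that pattern; the helper name and exact module list are illustrative assumptions, not the verbatim Ultralytics code.

    import torch.nn as nn

    def set_inplace_activations(model):  # hypothetical helper name
        # Walk every module of the loaded model and force inplace=True on
        # activations that support it, so checkpoints saved with older PyTorch
        # versions load cleanly on 1.7.0+ (nn.SiLU is the entry this PR adds).
        for m in model.modules():
            if isinstance(m, (nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU)):
                m.inplace = True
        return model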

Contributor

@github-actions github-actions bot left a comment

👋 Hello @1991wangliang, thank you for submitting a 🚀 PR! To allow your work to be integrated as seamlessly as possible, we advise you to:

  • ✅ Verify your PR is up-to-date with origin/master. If your PR is behind origin/master, update by running the following, replacing 'feature' with the name of your local branch:
git remote add upstream https://github.com/ultralytics/yolov5.git
git fetch upstream
git checkout feature  # <----- replace 'feature' with local branch name
git rebase upstream/master
git push -u origin -f
  • ✅ Verify all Continuous Integration (CI) checks are passing.
  • ✅ Reduce changes to the absolute minimum required for your bug fix or feature addition. "It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is." -Bruce Lee

@glenn-jocher
Member

@1991wangliang hi there. Could you explain the purpose behind the sleep call?

@xlorne
Contributor Author

xlorne commented Jan 15, 2021

Perhaps the CPU performance is too poor, or it gets blocked under multi-threading; training hangs while loading data.

    # DDP mode
    if cuda and rank != -1:
        model = DDP(model, device_ids=[opt.local_rank], output_device=opt.local_rank)

    print("sleep 3 sec to load data .")
    time.sleep(3)

    # Trainloader
    dataloader, dataset = create_dataloader(train_path, imgsz, batch_size, gs, opt,
                                            hyp=hyp, augment=True, cache=opt.cache_images, rect=opt.rect, rank=rank,
                                            world_size=opt.world_size, workers=opt.workers,
                                            image_weights=opt.image_weights, quad=opt.quad, prefix=colorstr('train: '))
    mlc = np.concatenate(dataset.labels, 0)[:, 0].max()  # max label class

Sleeping a few seconds before create_dataloader() allows training to proceed.

@glenn-jocher glenn-jocher changed the title from "sleep 3 sec to load data ." to "Add nn.SiLU inplace in attempt_load()" on Jan 15, 2021
@glenn-jocher
Member

@1991wangliang I'm not sure the sleep term brings any benefit. Your system may be low on resources; perhaps you could try training with fewer --workers if you are running out of threads.

I've removed the sleep term and added a SiLU inplace change to this PR in its place.
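
As an illustration of the --workers suggestion (dataset, weights, and worker count are placeholder values, not prescribed settings), lowering the dataloader worker count would look like:

    python train.py --data coco128.yaml --weights yolov5s.pt --workers 2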

@glenn-jocher glenn-jocher merged commit 03ebe6e into ultralytics:master Jan 15, 2021
@xlorne
Contributor Author

xlorne commented Jan 20, 2021

Thanks, I now know why it can lock up. Once train.py has been run, a labels.cache file is created in the training data path, and subsequent runs of train.py hang while loading data. After removing that cache file, training runs again.
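
In other words, deleting the stale labels.cache lets the dataloader rebuild it on the next run. A minimal sketch of that cleanup, with the dataset path as a placeholder assumption:

    from pathlib import Path

    # Illustrative workaround: remove cached label files so create_dataloader()
    # regenerates them instead of hanging on a stale cache. The path is hypothetical.
    for cache in Path('path/to/train/data').rglob('*.cache'):
        cache.unlink()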

KMint1819 pushed a commit to KMint1819/yolov5 that referenced this pull request May 12, 2021
* sleep 3 sec to load data .

* Update train.py

* Add nn.SiLU inplace in attempt_load()

Co-authored-by: wangliang <wangliang@codingapi.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
taicaile pushed a commit to taicaile/yolov5 that referenced this pull request Oct 12, 2021
* sleep 3 sec to load data .

* Update train.py

* Add nn.SiLU inplace in attempt_load()

Co-authored-by: wangliang <wangliang@codingapi.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
BjarneKuehl pushed a commit to fhkiel-mlaip/yolov5 that referenced this pull request Aug 26, 2022
* sleep 3 sec to load data .

* Update train.py

* Add nn.SiLU inplace in attempt_load()

Co-authored-by: wangliang <wangliang@codingapi.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>