Commit: Code Refactor ruff check --fix --extend-select I (#2222)
* Refactor code for speed and clarity

* Auto-format by https://ultralytics.com/actions

* Update README.md

* Update train.py

---------

Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
glenn-jocher and UltralyticsAssistant committed Jun 16, 2024
1 parent c9bd9e8 commit 7c031b8
Showing 7 changed files with 59 additions and 57 deletions.
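
The recurring Python change below matches what Ruff's autofixer produces for lambda assignments (lint rule E731, "do not assign a lambda expression, use a def"); `--fix` applies the fixes and `--extend-select I` additionally enables the import-sorting (`I`/isort) rules. A minimal sketch of the before/after pattern, with hypothetical values:

```python
epochs, lrf = 100, 0.01  # hypothetical example values

# Before: a lambda bound to a name -- works, but E731 flags it
lf = lambda x: (1 - x / epochs) * (1 - lrf) + lrf  # noqa: E731

# After: an equivalent def, which gets a real __name__, a place for a
# docstring, and cleaner tracebacks
def lf(x):
    return (1 - x / epochs) * (1 - lrf) + lrf
```
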
80 changes: 40 additions & 40 deletions README.zh-CN.md

Large diffs are not rendered by default.

4 changes: 3 additions & 1 deletion classify/train.py
@@ -180,7 +180,9 @@ def train(opt, device):
     # Scheduler
     lrf = 0.01  # final lr (fraction of lr0)
     # lf = lambda x: ((1 + math.cos(x * math.pi / epochs)) / 2) * (1 - lrf) + lrf  # cosine
-    lf = lambda x: (1 - x / epochs) * (1 - lrf) + lrf  # linear
+    def lf(x):
+        return (1 - x / epochs) * (1 - lrf) + lrf  # linear
+
     scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
     # scheduler = lr_scheduler.OneCycleLR(optimizer, max_lr=lr0, total_steps=epochs, pct_start=0.1,
     #                                     final_div_factor=1 / 25 / lrf)
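
As a usage sketch (values hypothetical, not from this repo), the named `lf` plugs into `LambdaLR` exactly as the lambda did — the scheduler multiplies the base learning rate by `lf(epoch)`:

```python
import torch
from torch.optim import SGD, lr_scheduler

epochs, lr0, lrf = 100, 0.01, 0.01  # hypothetical values
model = torch.nn.Linear(10, 1)
optimizer = SGD(model.parameters(), lr=lr0)

def lf(x):
    return (1 - x / epochs) * (1 - lrf) + lrf  # linear decay from 1.0 to lrf

scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
for epoch in range(epochs):
    optimizer.step()   # one (dummy) optimization step per epoch
    scheduler.step()   # lr becomes lr0 * lf(epoch + 1)

print(optimizer.param_groups[0]["lr"])  # ~= lr0 * lrf after the last epoch
```
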
5 changes: 4 additions & 1 deletion models/yolo.py
@@ -221,7 +221,10 @@ def __init__(self, cfg="yolov5s.yaml", ch=3, nc=None, anchors=None):  # model, i
         if isinstance(m, (Detect, Segment)):
             s = 256  # 2x min stride
             m.inplace = self.inplace
-            forward = lambda x: self.forward(x)[0] if isinstance(m, Segment) else self.forward(x)
+
+            def forward(x):
+                return self.forward(x)[0] if isinstance(m, Segment) else self.forward(x)
+
             m.stride = torch.tensor([s / x.shape[-2] for x in forward(torch.zeros(1, ch, s, s))])  # forward
             check_anchor_order(m)
             m.anchors /= m.stride.view(-1, 1, 1)
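
What this block computes: it pushes a dummy `s×s` image through the model and derives each detection layer's stride from the height of its output feature map. A self-contained sketch of the same idea (the toy model below is hypothetical, not YOLOv5):

```python
import torch
import torch.nn as nn

class ToyDetector(nn.Module):
    """Hypothetical two-scale model standing in for a detection head's inputs."""

    def __init__(self):
        super().__init__()
        self.p3 = nn.Conv2d(3, 8, 3, stride=8, padding=1)   # 1/8-resolution map
        self.p4 = nn.Conv2d(3, 8, 3, stride=16, padding=1)  # 1/16-resolution map

    def forward(self, x):
        return [self.p3(x), self.p4(x)]

s = 256
outs = ToyDetector()(torch.zeros(1, 3, s, s))
stride = torch.tensor([s / y.shape[-2] for y in outs])
print(stride)  # tensor([ 8., 16.]) -- one stride per output scale
```
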
5 changes: 4 additions & 1 deletion segment/train.py
@@ -212,7 +212,10 @@ def train(hyp, opt, device, callbacks):  # hyp is path/to/hyp.yaml or hyp dictio
     if opt.cos_lr:
         lf = one_cycle(1, hyp["lrf"], epochs)  # cosine 1->hyp['lrf']
     else:
-        lf = lambda x: (1 - x / epochs) * (1.0 - hyp["lrf"]) + hyp["lrf"]  # linear
+
+        def lf(x):
+            return (1 - x / epochs) * (1.0 - hyp["lrf"]) + hyp["lrf"]  # linear
+
     scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)  # plot_lr_scheduler(optimizer, scheduler, epochs)

     # EMA
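
`one_cycle` comes from the repo's utils and builds the cosine counterpart to the linear schedule. A sketch of its shape, with the signature assumed from the call `one_cycle(1, hyp["lrf"], epochs)` above:

```python
import math

def one_cycle(y1=0.0, y2=1.0, steps=100):
    # Cosine ramp from y1 at x=0 to y2 at x=steps (signature assumed
    # from the call site above, not copied from the repo).
    def f(x):
        return ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1
    return f

lf = one_cycle(1, 0.01, 100)   # cosine 1 -> lrf over 100 epochs
print(lf(0), lf(50), lf(100))  # 1.0, ~0.505, 0.01
```
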
5 changes: 4 additions & 1 deletion train.py
@@ -208,7 +208,10 @@ def train(hyp, opt, device, callbacks):  # hyp is path/to/hyp.yaml or hyp dictio
     if opt.cos_lr:
         lf = one_cycle(1, hyp["lrf"], epochs)  # cosine 1->hyp['lrf']
     else:
-        lf = lambda x: (1 - x / epochs) * (1.0 - hyp["lrf"]) + hyp["lrf"]  # linear
+
+        def lf(x):
+            return (1 - x / epochs) * (1.0 - hyp["lrf"]) + hyp["lrf"]  # linear
+
     scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)  # plot_lr_scheduler(optimizer, scheduler, epochs)

     # EMA
5 changes: 4 additions & 1 deletion utils/loggers/__init__.py
@@ -20,7 +20,10 @@
 try:
     from torch.utils.tensorboard import SummaryWriter
 except ImportError:
-    SummaryWriter = lambda *args: None  # None = SummaryWriter(str)
+
+    def SummaryWriter(*args):
+        return None  # None = SummaryWriter(str)
+

 try:
     import wandb
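
The fallback keeps TensorBoard optional: if the import fails, `SummaryWriter` becomes a stub that accepts the same call and returns `None`, so downstream code can truth-test the writer. A sketch of the pattern (log directory hypothetical):

```python
try:
    from torch.utils.tensorboard import SummaryWriter
except ImportError:

    def SummaryWriter(*args):
        return None  # stub: accepts the same call, yields no writer

tb = SummaryWriter("runs/exp")  # hypothetical log dir
if tb:  # truthy only when tensorboard is actually installed
    tb.add_scalar("train/loss", 0.5, 0)
    tb.close()
```
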
12 changes: 0 additions & 12 deletions utils/loggers/clearml/README.md
@@ -16,16 +16,10 @@

 πŸ”­ Turn your newly trained <b>YOLOv5 model into an API</b> with just a few commands using ClearML Serving
 
-<br />
 And so much more. It's up to you how many of these tools you want to use; you can stick to the experiment manager, or chain them all together into an impressive pipeline!
-<br />
-<br />
 
 ![ClearML scalars dashboard](https://github.com/thepycoder/clearml_screenshots/raw/main/experiment_manager_with_compare.gif)
 
-<br />
-<br />
-
 ## 🦾 Setting Things Up
 
 To keep track of your experiments and/or data, ClearML needs to communicate with a server. You have 2 options to get one:
@@ -46,8 +40,6 @@ Either sign up for free to the [ClearML Hosted Service](https://cutt.ly/yolov5-t

 That's it! You're done 😎
 
-<br />
-
 ## πŸš€ Training YOLOv5 With ClearML
 
 To enable ClearML experiment tracking, simply install the ClearML pip package.
@@ -89,8 +81,6 @@ That's a lot right? 🀯 Now, we can visualize all of this information in the Cl

 There's even more we can do with all of this information, like hyperparameter optimization and remote execution, so keep reading if you want to see how that works!
 
-<br />
-
 ## πŸ”— Dataset Version Management
 
 Versioning your data separately from your code is generally a good idea and makes it easy to acquire the latest version too. This repository supports supplying a dataset version ID, and it will make sure to get the data if it's not there yet. Next to that, this workflow also saves the used dataset ID as part of the task parameters, so you will always know for sure which data was used in which experiment!
@@ -157,8 +147,6 @@ Now that you have a ClearML dataset, you can very simply use it to train custom
 python train.py --img 640 --batch 16 --epochs 3 --data clearml://<your_dataset_id> --weights yolov5s.pt --cache
 ```
 
-<br />
-
 ## πŸ‘€ Hyperparameter Optimization
 
 Now that we have our experiments and data versioned, it's time to take a look at what we can build on top!
