Added default UOLO behaviour to main branch of repo #1

Merged · 118 commits · Apr 7, 2024

Commits
23d61c6
Remake #0007
manole-alexandru Mar 25, 2023
260bc2a
Fixed seg architecture
manole-alexandru Mar 26, 2023
ab57c46
Changed Seg Loss Reduction
manole-alexandru Mar 26, 2023
db3353f
Seg Loss update
manole-alexandru Mar 26, 2023
7af36d5
Loss concat error fix
manole-alexandru Mar 26, 2023
5326a38
Removed Logs
manole-alexandru Mar 26, 2023
355a4d3
Log for Nan error
manole-alexandru Mar 26, 2023
eb2bbf6
Extra tool for debug
manole-alexandru Mar 26, 2023
9859600
Disabled Scaler for Seg Optimizer
manole-alexandru Mar 26, 2023
2e74b05
Fixed #0007
manole-alexandru Mar 26, 2023
25b23f6
Weighted Loss #0009
manole-alexandru Mar 26, 2023
bf0183b
Seg loss afects whole network #0009
manole-alexandru Mar 26, 2023
7f24a83
Changed Loss to Focal Loss
manole-alexandru Mar 27, 2023
b6ccc61
Focal Weighted Loss
manole-alexandru Mar 27, 2023
1888d5c
Focal Weighted Loss 2
manole-alexandru Mar 28, 2023
72c425a
Disabled weighted focal loss
manole-alexandru Mar 28, 2023
2b5cc45
Changed Optimizer to Alpha #0010
manole-alexandru Mar 28, 2023
6956f3b
Learning Rate Update
manole-alexandru Mar 28, 2023
10071c7
Changed LR from str to float
manole-alexandru Mar 28, 2023
1eb0cd9
Print for log update
manole-alexandru Mar 28, 2023
1a859da
Initial changes for seg logs
manole-alexandru Mar 29, 2023
8ec799c
Val Image Log (I think and hope)
manole-alexandru Mar 29, 2023
38c6b86
Val Image Log
manole-alexandru Mar 29, 2023
d723404
Image shape fix
manole-alexandru Mar 29, 2023
b5b327f
Fixed Plot Issues
manole-alexandru Mar 29, 2023
11b443b
Log seg GT
manole-alexandru Mar 29, 2023
28f64f7
Fixed seg gt plot
manole-alexandru Mar 29, 2023
662eddf
Mask Filename fix
manole-alexandru Mar 29, 2023
b6b5008
#0011 - A new hope
manole-alexandru Mar 30, 2023
0eeaf7a
Logs for plot debug
manole-alexandru Apr 2, 2023
fe7a74c
Naive attempt at result plot fix
manole-alexandru Apr 2, 2023
4992267
Naive attempt at result plot fix 2
manole-alexandru Apr 2, 2023
b9ee178
Naive attempt at result plot fix 3
manole-alexandru Apr 2, 2023
a2a2b1f
Removed Plots
manole-alexandru Apr 3, 2023
cd1ffae
Removed prints
manole-alexandru Apr 8, 2023
dcd4227
CudnnBatchNormBackward potential fix #0012
manole-alexandru Apr 9, 2023
2975086
Anomaly Detection Reenabled
manole-alexandru Apr 10, 2023
ef08e87
CudnnBatchNormBackward nan temporary fix #0012
manole-alexandru Apr 10, 2023
00d6490
Added Dropout
manole-alexandru Apr 11, 2023
5ee40c3
Relaxed Dropout
manole-alexandru Apr 12, 2023
8ead728
Relaxed Dropout as new version #0014
manole-alexandru Apr 12, 2023
0ad7de0
No warm up + Moved dropout back #0015
manole-alexandru Apr 12, 2023
3a23002
Reduced Dropout Rate #0015
manole-alexandru Apr 13, 2023
3936dba
Yolo Train Metrics
manole-alexandru Apr 13, 2023
f30890f
Random Perspective Aug for Mask Attempt 1
manole-alexandru Apr 15, 2023
7eb9bd6
Aug logging for testing
manole-alexandru Apr 15, 2023
b165524
Mosaic Aug Attempt 1
manole-alexandru Apr 15, 2023
19b1bc7
Disabled mosaic aug for now
manole-alexandru Apr 15, 2023
9835fcc
Perspective Aug #0016
manole-alexandru Apr 15, 2023
a055cb5
Fixed code small error #0016
manole-alexandru Apr 15, 2023
e9e055f
Mask perspective border color change
manole-alexandru Apr 15, 2023
6fb3525
extra image log
manole-alexandru Apr 15, 2023
7747932
test only seg with perspective aug
manole-alexandru Apr 15, 2023
098de32
extra image logs 2
manole-alexandru Apr 15, 2023
73a8355
Fix image log
manole-alexandru Apr 15, 2023
1d8bcb9
extra image logs 3
manole-alexandru Apr 15, 2023
6e4706f
Attempt fix for perspective aug
manole-alexandru Apr 15, 2023
0e2d8be
Reactivated detection
manole-alexandru Apr 15, 2023
3101083
Perspective Aug Fix
manole-alexandru Apr 15, 2023
18e53b0
Droped Dropout
manole-alexandru Apr 16, 2023
58ae220
Semantic Segmentation with OG Img Resolution #0017
manole-alexandru Apr 16, 2023
bfbd2d7
Descending number of channel in decoding path #0018
manole-alexandru Apr 16, 2023
7bdb866
Reduced no. of channels in decoding path
manole-alexandru Apr 17, 2023
0741c06
Increased no of channels in decoding path
manole-alexandru Apr 17, 2023
b581bc4
Reverted to #0017
manole-alexandru Apr 17, 2023
c2bf575
Readded Mosaic aug
manole-alexandru Apr 17, 2023
e40d511
Attempt at Mosaic aug fix
manole-alexandru Apr 17, 2023
c0954d2
Experiment with mosaic aug only #0017.5
manole-alexandru Apr 18, 2023
aa314e9
Fix mosaic only attempt 1
manole-alexandru Apr 18, 2023
00bc83a
Fix mosaic only attempt 2
manole-alexandru Apr 18, 2023
fd56cb6
Fix mosaic only attempt 3
manole-alexandru Apr 18, 2023
aaea44c
Fix mosaic only attempt 4
manole-alexandru Apr 19, 2023
80b5ff6
Revert "Fix mosaic only attempt 4"
manole-alexandru Apr 19, 2023
b13d6df
Revert "Fix mosaic only attempt 3"
manole-alexandru Apr 19, 2023
4f3c30c
Revert "Fix mosaic only attempt 2"
manole-alexandru Apr 19, 2023
13280d4
Revert "Fix mosaic only attempt 1"
manole-alexandru Apr 19, 2023
79a441c
Revert "Experiment with mosaic aug only #0017.5"
manole-alexandru Apr 19, 2023
9616cb4
Disabled val scrip on train data
manole-alexandru Apr 19, 2023
4d49801
Best variant so far
manole-alexandru Apr 20, 2023
f6e14f9
Changed no. of channels #0018
manole-alexandru Apr 20, 2023
32b4908
Extra skip connect exp
manole-alexandru Apr 21, 2023
2aa857f
Fixes to new conection
manole-alexandru Apr 21, 2023
0d57b4d
More U-Net Like UOLO (#0019)
manole-alexandru Apr 21, 2023
84841ad
Extended skip connection further
manole-alexandru Apr 21, 2023
b914a96
Attempt at 96 replication
manole-alexandru May 4, 2023
0ce5429
96 bug fix
manole-alexandru May 4, 2023
18608f1
97
manole-alexandru May 5, 2023
a75c19f
97v2
manole-alexandru May 5, 2023
562bab1
Fixed 97 v2
manole-alexandru May 5, 2023
ecdc356
97exp v3
manole-alexandru May 5, 2023
4ce1cb9
97exp v4
manole-alexandru May 5, 2023
290501c
97 exp v5
manole-alexandru May 6, 2023
79293d3
Revert "97 exp v5"
manole-alexandru May 6, 2023
433f52f
Revert "97exp v4"
manole-alexandru May 6, 2023
a7efe66
Revert "97exp v3"
manole-alexandru May 6, 2023
9d77118
Revert "Fixed 97 v2"
manole-alexandru May 6, 2023
6d745de
Revert "97v2"
manole-alexandru May 6, 2023
e9d9adf
Revert "97"
manole-alexandru May 6, 2023
e1f3513
Fixed validation script
manole-alexandru Jun 2, 2023
0cfc71e
Train val and val val inconsistency
manole-alexandru Jun 2, 2023
63ecc52
General focal loss
manole-alexandru Jun 6, 2023
6041927
Consistency in Seg Path
manole-alexandru Jun 9, 2023
5f7a441
Updated yolox
manole-alexandru Jul 26, 2023
6d9eeac
Updated UOLOx
manole-alexandru Jul 26, 2023
41fd727
In chanels for UoloX
manole-alexandru Jul 26, 2023
3cb9924
In channels UoloX2
manole-alexandru Jul 26, 2023
1046e08
In channels UoloX3
manole-alexandru Jul 26, 2023
92d9cc2
disable print
manole-alexandru Jul 26, 2023
c33956f
Revert "disable print"
manole-alexandru Jul 26, 2023
3f43ccd
Revert "In channels UoloX3"
manole-alexandru Jul 26, 2023
f0b51fd
Revert "In channels UoloX2"
manole-alexandru Jul 26, 2023
e9a2b1d
Revert "In chanels for UoloX"
manole-alexandru Jul 26, 2023
459aa18
Revert "Updated UOLOx"
manole-alexandru Jul 26, 2023
be1d439
Revert "Updated yolox"
manole-alexandru Jul 26, 2023
379f309
Squash Reverts
manole-alexandru Jul 26, 2023
2bb2a17
Revert "Squash Reverts"
manole-alexandru Jul 26, 2023
20e27d9
Merge branch 'Nineteen' of https://github.com/ManoleAlexandru99/yolov…
manole-alexandru Jul 26, 2023
6951bcf
Merge branch 'master' into Nineteen
manole-alexandru Apr 7, 2024
data/hyps/hyp.Objects365.yaml · 5 changes: 3 additions & 2 deletions

@@ -7,11 +7,12 @@ lr0: 0.00258
 lrf: 0.17
 momentum: 0.779
 weight_decay: 0.00058
-warmup_epochs: 1.33
+warmup_epochs: 0
 warmup_momentum: 0.86
 warmup_bias_lr: 0.0711
 box: 0.0539
-seg: 0.1 # seg loss
+seg: 1 # Weight for segmentation loss
+det: 1 # Weights all detection losses in the same time (instead of having to change all 3 values)
 cls: 0.299
 cls_pw: 0.825
 obj: 0.632
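All six hyperparameter files in this PR get the same three edits: warmup is switched off (warmup_epochs: 0), the segmentation loss gain is raised from seg: 0.1 to seg: 1, and a new det gain is added so the three detection loss gains can be rescaled in one place. A minimal sketch of how the new keys appear to be consumed, mirroring the train.py hunk further down; the file path is illustrative, and the seg usage is an assumption since the ComputeLoss change is not part of this diff:

import yaml

with open('data/hyps/hyp.Objects365.yaml') as f:  # illustrative path
    hyp = yaml.safe_load(f)

# 'det' rescales all three detection gains at once, as the train.py hunk below does,
# instead of requiring separate edits to box, cls and obj.
for k in ('box', 'cls', 'obj'):
    hyp[k] *= hyp['det']

# Assumed: 'seg' multiplies the segmentation loss term inside ComputeLoss,
# e.g. loss_seg = hyp['seg'] * raw_seg_loss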
data/hyps/hyp.VOC.yaml · 5 changes: 3 additions & 2 deletions

@@ -13,11 +13,12 @@ lr0: 0.00334
 lrf: 0.15135
 momentum: 0.74832
 weight_decay: 0.00025
-warmup_epochs: 3.3835
+warmup_epochs: 0
 warmup_momentum: 0.59462
 warmup_bias_lr: 0.18657
 box: 0.02
-seg: 0.1 # seg loss
+seg: 1 # seg loss
+det: 1 # Weights all detection losses in the same time (instead of having to change all 3 values)
 cls: 0.21638
 cls_pw: 0.5
 obj: 0.51728
data/hyps/hyp.no-augmentation.yaml · 5 changes: 3 additions & 2 deletions

@@ -7,11 +7,12 @@ lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
 lrf: 0.1 # final OneCycleLR learning rate (lr0 * lrf)
 momentum: 0.937 # SGD momentum/Adam beta1
 weight_decay: 0.0005 # optimizer weight decay 5e-4
-warmup_epochs: 3.0 # warmup epochs (fractions ok)
+warmup_epochs: 0 # warmup epochs (fractions ok)
 warmup_momentum: 0.8 # warmup initial momentum
 warmup_bias_lr: 0.1 # warmup initial bias lr
 box: 0.05 # box loss gain
-seg: 0.1 # seg loss
+seg: 1 # seg loss
+det: 1 # Weights all detection losses in the same time (instead of having to change all 3 values)
 cls: 0.3 # cls loss gain
 cls_pw: 1.0 # cls BCELoss positive_weight
 obj: 0.7 # obj loss gain (scale with pixels)
data/hyps/hyp.scratch-high.yaml · 5 changes: 3 additions & 2 deletions

@@ -7,11 +7,12 @@ lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
 lrf: 0.1 # final OneCycleLR learning rate (lr0 * lrf)
 momentum: 0.937 # SGD momentum/Adam beta1
 weight_decay: 0.0005 # optimizer weight decay 5e-4
-warmup_epochs: 3.0 # warmup epochs (fractions ok)
+warmup_epochs: 0 # warmup epochs (fractions ok)
 warmup_momentum: 0.8 # warmup initial momentum
 warmup_bias_lr: 0.1 # warmup initial bias lr
 box: 0.05 # box loss gain
-seg: 0.1 # seg loss
+seg: 1 # seg loss
+det: 1 # Weights all detection losses in the same time (instead of having to change all 3 values)
 cls: 0.3 # cls loss gain
 cls_pw: 1.0 # cls BCELoss positive_weight
 obj: 0.7 # obj loss gain (scale with pixels)
data/hyps/hyp.scratch-low.yaml · 7 changes: 4 additions & 3 deletions

@@ -3,15 +3,16 @@
 # python train.py --batch 64 --cfg yolov5n6.yaml --weights '' --data coco.yaml --img 640 --epochs 300 --linear
 # See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials
 
-lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
+lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
 lrf: 0.01 # final OneCycleLR learning rate (lr0 * lrf)
 momentum: 0.937 # SGD momentum/Adam beta1
 weight_decay: 0.0005 # optimizer weight decay 5e-4
-warmup_epochs: 3.0 # warmup epochs (fractions ok)
+warmup_epochs: 0 # warmup epochs (fractions ok)
 warmup_momentum: 0.8 # warmup initial momentum
 warmup_bias_lr: 0.1 # warmup initial bias lr
 box: 0.05 # box loss gain
-seg: 0.1 # seg loss
+seg: 1 # seg loss
+det: 1 # Weights all detection losses in the same time (instead of having to change all 3 values)
 cls: 0.5 # cls loss gain
 cls_pw: 1.0 # cls BCELoss positive_weight
 obj: 1.0 # obj loss gain (scale with pixels)
data/hyps/hyp.scratch-med.yaml · 7 changes: 4 additions & 3 deletions

@@ -3,15 +3,16 @@
 # python train.py --batch 32 --cfg yolov5m6.yaml --weights '' --data coco.yaml --img 1280 --epochs 300
 # See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials
 
-lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
+lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
 lrf: 0.1 # final OneCycleLR learning rate (lr0 * lrf)
 momentum: 0.937 # SGD momentum/Adam beta1
 weight_decay: 0.0005 # optimizer weight decay 5e-4
-warmup_epochs: 3.0 # warmup epochs (fractions ok)
+warmup_epochs: 0 # warmup epochs (fractions ok)
 warmup_momentum: 0.8 # warmup initial momentum
 warmup_bias_lr: 0.1 # warmup initial bias lr
 box: 0.05 # box loss gain
-seg: 0.1 # seg loss
+seg: 1 # seg loss
+det: 1 # Weights all detection losses in the same time (instead of having to change all 3 values)
 cls: 0.3 # cls loss gain
 cls_pw: 1.0 # cls BCELoss positive_weight
 obj: 0.7 # obj loss gain (scale with pixels)
models/common.py · 33 changes: 21 additions & 12 deletions

@@ -850,26 +850,35 @@ def forward(self, x):
 class Seg(nn.Module):
 
     def __init__(self, in_channels):
 
         super().__init__()
-        self.cv1 = Conv(in_channels, 32, k=3)
+        # print('\nIN CHANNELS SEG:', in_channels, '\n')
+        self.cv1 = Conv(in_channels, 96, k=3)
+        # self.cv11 = Conv(96, 32, k=3)
+        # self.cv22 = Conv(48, 16, k=3)
+
         self.upsample = nn.Upsample(scale_factor=2, mode='nearest')
-        self.cv2 = Conv(32, 64, k=3)
-        self.cv3 = Conv(64, 1)
+        self.cv2 = Conv(192, 48, k=3)
+        self.cv3 = Conv(96, 16, k=3)
+        self.cv4 = Conv(16, 1, act=False)
         self.relu = nn.ReLU()
         # self.sigmoid = nn.Sigmoid()
         self.dropout_normal = nn.Dropout(0.5)
 
-    def forward(self, x):
-        # print('----entry shape', x.shape, '---\n')
-        x = self.upsample(x)
+    def forward(self, x, skipped_input):
+
         x = self.cv1(x)
         x = self.relu(x)
-        # print('----upsample shape', x.shape, '---\n')
         x = self.upsample(x)
+        # x2 = self.cv11(skipped_input[0])
+        x = torch.cat((x, skipped_input[0]), 1)  # Skip connection
+
         x = self.cv2(x)
         x = self.relu(x)
         x = self.upsample(x)
+        # x2 = self.cv22(skipped_input[1])
+        x = torch.cat((x, skipped_input[1]), 1)  # Skip connection
+
         x = self.cv3(x)
-        # print('----out shape', x.shape, '---\n')
-        # x = self.sigmoid(x)
+        x = self.upsample(x)
+        x = self.cv4(x)
         return x
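Read together with the models/yolov5m.yaml change below, the reworked Seg head is a U-Net-style decoder: cv1 compresses the P3 feature map, and each upsample is followed by concatenation with a shallow backbone feature (layers 2 and 0 of yolov5m, carrying 96 and 48 channels at strides 4 and 2). A standalone shape check, using plain Conv2d layers in place of the repo's Conv block (which also wraps BatchNorm and an activation); the tensor sizes assume yolov5m at a 640x640 input and are otherwise illustrative:

import torch
import torch.nn as nn

# Standalone shape check of the new Seg decoder; nn.Conv2d stands in for the
# repo's Conv block (conv + BN + activation), which is an assumption.
cv1 = nn.Conv2d(192, 96, 3, padding=1)
cv2 = nn.Conv2d(192, 48, 3, padding=1)   # 96 (upsampled) + 96 (skip from layer 2)
cv3 = nn.Conv2d(96, 16, 3, padding=1)    # 48 (upsampled) + 48 (skip from layer 0)
cv4 = nn.Conv2d(16, 1, 1)                # 1-channel mask logits, no activation
up = nn.Upsample(scale_factor=2, mode='nearest')
relu = nn.ReLU()

x = torch.randn(1, 192, 80, 80)          # P3 feature map, stride 8
skip = [torch.randn(1, 96, 160, 160),    # backbone layer 2, stride 4
        torch.randn(1, 48, 320, 320)]    # backbone layer 0, stride 2

x = up(relu(cv1(x)))                     # -> (1, 96, 160, 160)
x = torch.cat((x, skip[0]), 1)           # -> (1, 192, 160, 160)
x = up(relu(cv2(x)))                     # -> (1, 48, 320, 320)
x = torch.cat((x, skip[1]), 1)           # -> (1, 96, 320, 320)
x = up(cv3(x))                           # -> (1, 16, 640, 640)
print(cv4(x).shape)                      # torch.Size([1, 1, 640, 640])

Note that the final upsample brings the mask back to the original image resolution, which matches the "Semantic Segmentation with OG Img Resolution #0017" commit above.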
models/yolo.py · 5 changes: 4 additions & 1 deletion

@@ -117,7 +117,10 @@ def __init__(self, nc=80, anchors=(), ch=(), inplace=True):
         self.detect = Detect.forward
 
     def forward(self, x):
-        p = self.semantic_seg(x[0])
+        old_x = x[:3]
+        new_skip_connect_info = x[3:]
+        x = old_x
+        p = self.semantic_seg(x[0], new_skip_connect_info)
         x = self.detect(self, x)
         return (x, p) if self.training else (x[0], p) if self.export else (x[0], p, x[1])
 
models/yolov5m.yaml · 2 changes: 1 addition & 1 deletion

@@ -44,5 +44,5 @@ head:
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)
 
-  [[17, 20, 23], 1, SemanticSegment, [nc, anchors]],  # Detect(P3, P4, P5)
+  [[17, 20, 23, 2, 0], 1, SemanticSegment, [nc, anchors]],  # Detect(P3, P4, P5)
 ]
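With the extra indices 2 and 0, SemanticSegment now receives five tensors: the P3/P4/P5 detection features from layers 17/20/23, followed by the two shallow backbone features that Seg consumes as skips. A runnable sketch of the list split performed in the models/yolo.py hunk above; the tensor sizes assume yolov5m at 640x640 and are illustrative:

import torch

# x arrives ordered as [P3, P4, P5, layer2_feat, layer0_feat],
# as wired by [[17, 20, 23, 2, 0], 1, SemanticSegment, ...]
x = [torch.randn(1, 192, 80, 80),    # P3, stride 8
     torch.randn(1, 384, 40, 40),    # P4, stride 16
     torch.randn(1, 768, 20, 20),    # P5, stride 32
     torch.randn(1, 96, 160, 160),   # backbone layer 2, stride 4
     torch.randn(1, 48, 320, 320)]   # backbone layer 0, stride 2

old_x = x[:3]                    # detection features kept for Detect.forward
new_skip_connect_info = x[3:]    # skip tensors handed to Seg.forward
assert len(old_x) == 3 and len(new_skip_connect_info) == 2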
train.py · 67 changes: 54 additions & 13 deletions

@@ -56,7 +56,7 @@
 from utils.loggers import Loggers
 from utils.loggers.comet.comet_utils import check_comet_resume
 from utils.loss import ComputeLoss
-from utils.metrics import fitness
+from utils.metrics import fitness, seg_fitness
 from utils.plots import plot_evolve
 from utils.torch_utils import (EarlyStopping, ModelEMA, de_parallel, select_device, smart_DDP, smart_optimizer,
                                smart_resume, torch_distributed_zero_first)
@@ -76,7 +76,7 @@ def train(hyp, opt, device, callbacks):  # hyp is path/to/hyp.yaml or hyp dictionary
     # Directories
     w = save_dir / 'weights'  # weights dir
     (w.parent if evolve else w).mkdir(parents=True, exist_ok=True)  # make dir
-    last, best = w / 'last.pt', w / 'best.pt'
+    last, best, best_seg = w / 'last.pt', w / 'best.pt', w / 'best_seg.pt'
 
     # Hyperparameters
     if isinstance(hyp, str):
@@ -171,6 +171,7 @@ def train(hyp, opt, device, callbacks):
 
     # Resume
     best_fitness, start_epoch = 0.0, 0
+    best_fitness_seg = 0.0
     if pretrained:
         if resume:
             best_fitness, start_epoch, epochs = smart_resume(ckpt, optimizer, ema, weights, epochs, resume)
@@ -240,7 +241,9 @@ def train(hyp, opt, device, callbacks):
     hyp['cls'] *= nc / 80 * 3 / nl  # scale to classes and layers
     hyp['obj'] *= (imgsz / 640) ** 2 * 3 / nl  # scale to image size and layers
 
-    hyp['seg'] = 1
+    hyp['box'] *= hyp['det']
+    hyp['cls'] *= hyp['det']
+    hyp['obj'] *= hyp['det']
 
     hyp['label_smoothing'] = opt.label_smoothing
     model.nc = nc  # attach number of classes to model
@@ -255,18 +258,20 @@ def train(hyp, opt, device, callbacks):
     # nw = min(nw, (epochs - start_epoch) / 2 * nb)  # limit warmup to < 1/2 of training
     last_opt_step = -1
     maps = np.zeros(nc)  # mAP per class
-    results = (0, 0, 0, 0, 0, 0, 0)  # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls)
+    results = (0, 0, 0, 0, 0, 0, 0, 0, 0)  # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls), mIoU, railIoU
     scheduler.last_epoch = start_epoch - 1  # do not move
+    scheduler_seg.last_epoch = start_epoch - 1
     scaler = torch.cuda.amp.GradScaler(enabled=amp)
-    scaler_seg = torch.cuda.amp.GradScaler(enabled=amp)
+    scaler_seg = torch.cuda.amp.GradScaler(enabled=False)
     stopper, stop = EarlyStopping(patience=opt.patience), False
     compute_loss = ComputeLoss(model)  # init loss class
     callbacks.run('on_train_start')
     LOGGER.info(f'Image sizes {imgsz} train, {imgsz} val\n'
                 f'Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n'
                 f"Logging results to {colorstr('bold', save_dir)}\n"
                 f'Starting training for {epochs} epochs...')
 
+    torch.autograd.set_detect_anomaly(True)
     for epoch in range(start_epoch, epochs):  # epoch ------------------------------------------------------------------
         callbacks.run('on_train_epoch_start')
         model.train()
@@ -294,11 +299,12 @@ def train(hyp, opt, device, callbacks):
             callbacks.run('on_train_batch_start')
             ni = i + nb * epoch  # number integrated batches (since train start)
             imgs = imgs.to(device, non_blocking=True).float() / 255  # uint8 to float32, 0-255 to 0.0-1.0
-            if not torch.all(segs >= 0):
-                print('Pre 0-1 Normalization - SEG MASK NOT VALID')
+            # if not torch.all(segs >= 0):
+            #     print('Pre 0-1 Normalization - SEG MASK NOT VALID')
             segs = segs.to(device, non_blocking=True).float() / 255
-            if not torch.all(segs >= 0):
-                print('Post 0-1 Normalization - SEG MASK NOT VALID')
+            # if not torch.all(segs >= 0):
+            #     print('Post 0-1 Normalization - SEG MASK NOT VALID')
+
             # Warmup
             if ni <= nw:
                 xi = [0, nw]  # x interp
@@ -334,7 +340,10 @@ def train(hyp, opt, device, callbacks):
                     loss *= 4.
 
             # Backward
-            scaler.scale(loss).backward(retain_graph=True)
+            try:
+                scaler.scale(loss).backward(retain_graph=True)
+            except Exception as e:
+                print('\n-------- FOUND EXCEPTION: ', e, ' Life goes on.-------\n')
 
             scaler_seg.scale(loss_seg).backward()
 
@@ -371,6 +380,7 @@ def train(hyp, opt, device, callbacks):
 
         # Scheduler
         lr = [x['lr'] for x in optimizer.param_groups]  # for loggers
+        # print('\n------LR', lr)
         scheduler.step()
 
         scheduler_seg.step()
@@ -381,6 +391,20 @@ def train(hyp, opt, device, callbacks):
             ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'names', 'stride', 'class_weights'])
             final_epoch = (epoch + 1 == epochs) or stopper.possible_stop
             if not noval or final_epoch:  # Calculate mAP
+                '''
+                # Validation script on train data does not work for Mosaic Aug
+                results_train, maps_train, _ = validate.run(data_dict,
+                                                            batch_size=batch_size // WORLD_SIZE * 2,
+                                                            imgsz=imgsz,
+                                                            half=amp,
+                                                            model=ema.ema,
+                                                            single_cls=single_cls,
+                                                            dataloader=train_loader,
+                                                            save_dir=save_dir,
+                                                            plots=False,
+                                                            callbacks=callbacks,
+                                                            compute_loss=compute_loss)
+                '''
                 results, maps, _ = validate.run(data_dict,
                                                 batch_size=batch_size // WORLD_SIZE * 2,
                                                 imgsz=imgsz,
@@ -395,11 +419,14 @@ def train(hyp, opt, device, callbacks):
 
             # Update best mAP
             fi = fitness(np.array(results).reshape(1, -1))  # weighted combination of [P, R, mAP@.5, mAP@.5-.95]
+            fi_seg = seg_fitness(np.array(results).reshape(1, -1))
            stop = stopper(epoch=epoch, fitness=fi)  # early stop check
             if fi > best_fitness:
                 best_fitness = fi
+            if fi_seg > best_fitness_seg:
+                best_fitness_seg = fi_seg
             log_vals = list(mloss) + list(results) + lr
-            callbacks.run('on_fit_epoch_end', log_vals, epoch, best_fitness, fi)
+            callbacks.run('on_fit_epoch_end', log_vals, epoch, best_fitness, fi, best_fitness_seg, fi_seg)
 
             # Save model
             if (not nosave) or (final_epoch and not evolve):  # if save
@@ -414,13 +441,27 @@ def train(hyp, opt, device, callbacks):
                         'git': GIT_INFO,  # {remote, branch, commit} if a git repo
                         'date': datetime.now().isoformat()}
 
+                ckpt_seg = {
+                    'epoch': epoch,
+                    'best_fitness': best_fitness_seg,
+                    'model': deepcopy(de_parallel(model)).half(),
+                    'ema': deepcopy(ema.ema).half(),
+                    'updates': ema.updates,
+                    'optimizer': optimizer_seg.state_dict(),
+                    'opt': vars(opt),
+                    'git': GIT_INFO,  # {remote, branch, commit} if a git repo
+                    'date': datetime.now().isoformat()}
+
                 # Save last, best and delete
                 torch.save(ckpt, last)
                 if best_fitness == fi:
                     torch.save(ckpt, best)
+                if best_fitness_seg == fi_seg:
+                    torch.save(ckpt_seg, best_seg)
                 if opt.save_period > 0 and epoch % opt.save_period == 0:
                     torch.save(ckpt, w / f'epoch{epoch}.pt')
                 del ckpt
+                del ckpt_seg
                 callbacks.run('on_model_save', last, epoch, final_epoch, best_fitness, fi)
 
         # EarlyStopping
@@ -469,7 +510,7 @@ def parse_opt(known=False):
     parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='initial weights path')
     parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
     parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
-    parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch-low.yaml', help='hyperparameters path')
+    parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch-high.yaml', help='hyperparameters path')
     parser.add_argument('--epochs', type=int, default=100, help='total training epochs')
     parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs, -1 for autobatch')
     parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)')
@@ -511,7 +552,7 @@ def parse_opt(known=False):
 
 
 def main(opt, callbacks=Callbacks()):
-    print('\n---------- VERSION:', '#0007', '----------\n')
+    # print('\n---------- VERSION:', '#0019_96exp', '----------\n')
     # Checks
     if RANK in {-1, 0}:
         print_args(vars(opt))
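train.py now tracks a second "best" checkpoint, best_seg.pt, selected by seg_fitness over the two IoU columns appended to results. The seg_fitness implementation lives in utils/metrics.py and is not shown in this PR; a plausible sketch, assuming it mirrors the repo's fitness() but weights the new segmentation entries:

import numpy as np

# Hypothetical sketch of seg_fitness (utils/metrics.py is not part of this diff).
# results is assumed to be laid out as
# [P, R, mAP@.5, mAP@.5:.95, val box, val obj, val cls, mIoU, railIoU].
def seg_fitness(x):
    w = np.zeros(9)
    w[7], w[8] = 0.5, 0.5  # assumed: equal weight on mIoU and rail IoU
    return (x[:, :9] * w).sum(1)

Two other train.py changes are worth noting: the segmentation branch's GradScaler is now constructed with enabled=False, so the seg loss is backpropagated without AMP loss scaling (in line with the NaN-hunting commits above), and torch.autograd.set_detect_anomaly(True) is enabled to localize any remaining NaN in the backward pass.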