New issue appeared today during running train.py file #16

Open
abirsince92 opened this issue Feb 10, 2023 · 2 comments

abirsince92 commented Feb 10, 2023

I got the following error while running the train.py file:

Transferred 308/361 items from yolov5s.pt
Scaled weight_decay = 0.0005
optimizer: Adam with parameter groups 59 weight (no decay), 62 weight, 62 bias
albumentations: Blur(always_apply=False, p=0.01, blur_limit=(3, 7)), MedianBlur(always_apply=False, p=0.01, blur_limit=(3, 7)), ToGray(always_apply=False, p=0.01), CLAHE(always_apply=False, p=0.01, clip_limit=(1, 4.0), tile_grid_size=(8, 8))
train: Scanning '/content/data/train/labels' images and labels...11402 found, 0 missing, 0 empty, 0 corrupt: 100% 11402/11402 [00:22<00:00, 502.88it/s]
train: New cache created: /content/data/train/labels.cache
val: Scanning '/content/data/val/labels' images and labels...3801 found, 0 missing, 0 empty, 0 corrupt: 100% 3801/3801 [00:08<00:00, 472.10it/s]
val: New cache created: /content/data/val/labels.cache
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
Plotting labels to runs/train/exp/labels.jpg...

AutoAnchor: 4.46 anchors/target, 1.000 Best Possible Recall (BPR). Current anchors are a good fit to dataset ✅
Image sizes 640 train, 640 val
Using 4 dataloader workers
Logging results to runs/train/exp
Starting training for 30 epochs...
Traceback (most recent call last):
  File "train.py", line 745, in <module>
    main(opt)
  File "train.py", line 641, in main
    train(opt.hyp, opt, device, callbacks)
  File "train.py", line 336, in train
    sparseml_wrapper.initialize_loggers(loggers.logger, loggers.tb, loggers.wandb)
  File "/content/yolov5-deepsparse-blogpost/yolov5-train/utils/sparse.py", line 97, in initialize_loggers
    SparsificationGroupLogger(
  File "/usr/local/lib/python3.8/dist-packages/sparseml/pytorch/utils/logger.py", line 630, in __init__
    TensorBoardLogger(
  File "/usr/local/lib/python3.8/dist-packages/sparseml/pytorch/utils/logger.py", line 443, in __init__
    raise tensorboard_import_error
  File "/usr/local/lib/python3.8/dist-packages/sparseml/pytorch/utils/logger.py", line 30, in <module>
    from torch.utils.tensorboard import SummaryWriter
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/tensorboard/__init__.py", line 4, in <module>
    LooseVersion = distutils.version.LooseVersion
AttributeError: module 'distutils' has no attribute 'version'
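For anyone trying to pin this down, the failing import can be exercised on its own, outside train.py. A minimal diagnostic sketch, assuming the same Colab runtime as above (the version printout and the isolated import are illustrative additions, not part of the original report):

    # Diagnostic sketch: print the versions involved, then trigger the same
    # import that fails inside sparseml's TensorBoardLogger, in isolation.
    import setuptools
    import torch

    print("setuptools:", setuptools.__version__)
    print("torch:", torch.__version__)

    # On an affected environment this line raises:
    # AttributeError: module 'distutils' has no attribute 'version'
    from torch.utils.tensorboard import SummaryWriter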

dnth (Owner) commented Feb 10, 2023

@abirsince92 have you tried running it on Colab? How do I reproduce this error on my machine?

abirsince92 (Author) commented

Sir,
Today I got this error in a Colab notebook when running the train.py file; yesterday it was working fine.
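For readers hitting the same traceback: this AttributeError is commonly reported when a newer setuptools release no longer exposes distutils.version the way this torch build's tensorboard shim expects. Below is a workaround sketch, assuming a Colab runtime; pinning setuptools is an assumption on my part based on the error message, not a fix confirmed by the maintainer in this thread:

    # Workaround sketch (assumption, not confirmed in this thread): pin
    # setuptools to 59.5.0, the version commonly suggested for this error,
    # so distutils.version resolves again. Restart the Colab runtime
    # afterwards, then re-run train.py.
    !pip install setuptools==59.5.0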
