Fine Tuning YoloV5 with custom dataset #11470
@Benti98 hello! To fine-tune YOLOv5 for an additional object class, you can start from the YOLOv5s pre-trained weights and then train on a custom dataset containing the new classes; there is no need to retrain on the COCO dataset itself.
Make sure to update your dataset configuration file (the class count `nc` and the class names) so it covers the new classes.
If you run into any specific issues or errors during the training process, feel free to create a new issue and we will be happy to assist you.
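A command along these lines starts fine-tuning from the pre-trained checkpoint; the dataset file name `custom.yaml` and the hyperparameter values here are placeholders for illustration, not values from this thread:

```shell
# fine-tune from the YOLOv5s checkpoint on the custom dataset only
python train.py --img 640 --batch 16 --epochs 100 --data custom.yaml --weights yolov5s.pt
```

The `--weights yolov5s.pt` argument is what makes this fine-tuning rather than training from scratch.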
Hello, I have a similar question, just slightly different.
@FeliceSchena hello! To modify the loss so that it masks potentially unlabelled COCO classes, you can edit the loss computation (in `utils/loss.py`). You can check the class label associated with each anchor (tile) by accessing the `tcls` variable, and you can mask the loss for unlabelled classes by simply multiplying the corresponding cells of the classification loss tensor by zero.
Regarding the scheduler and job array, you can resume an interrupted run with the `--resume` flag of `train.py`. We hope this helps! If you have any further questions, please don't hesitate to ask.
Thanks for the quick reply.
@FeliceSchena I apologize for the confusion; you are correct. The loss computation is defined in `utils/loss.py`, in the `ComputeLoss` class. You can modify the block that calculates the binary cross-entropy classification loss so that it only considers the classes you're interested in, by creating a mask over the class targets.
Regarding your second question, you can resume training using the `--resume` flag. Hope this helps! Let me know if you have any further questions.
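The masking idea can be illustrated with a minimal self-contained sketch using synthetic tensors; the shapes and the class range 80–82 are assumptions for this example, not the actual YOLOv5 internals:

```python
import torch
import torch.nn as nn

# synthetic stand-ins: 6 matched anchors, 83 classes (80 COCO + 3 new)
n, nc = 6, 83
pcls = torch.randn(n, nc)                    # predicted class logits, one row per anchor
tcls = torch.tensor([3, 80, 81, 15, 82, 7])  # target class index per anchor

# one-hot targets, plus a mask that is True only where the target is a new class (80..82)
t = torch.zeros(n, nc)
t[range(n), tcls] = 1.0
mask = (tcls > 79) & (tcls < 83)

# per-element BCE, zeroed outside the masked anchors, averaged over the masked entries
bce = nn.functional.binary_cross_entropy_with_logits(pcls, t, reduction='none')
masked_loss = (bce * mask[:, None].float()).sum() / (mask.sum().float() * nc + 1e-16)
print(masked_loss.item())
```

Anchors whose target class is outside 80..82 contribute nothing to the loss, which is the masking behaviour discussed above.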
@glenn-jocher Thanks for your help. But `p` is a list of tensors, so it's impossible to iterate through it in this manner; the same goes for `cls_tgts`. This is my attempt:

cls_tgts, box_tgts, indices, anchors = self.build_targets(p, targets)  # targets per detection layer
loss = []
for i in range(self.nl):
    print(p[i].shape)
    # apply a mask to the class targets to only consider the classes of interest
    mask = torch.logical_and(cls_tgts[i] > 79, cls_tgts[i] < 83)  # boolean mask selecting classes 80..82
    cls_tgts_masked = torch.where(mask, cls_tgts[i] - 80, torch.zeros_like(cls_tgts[i]))  # shift class labels and set others to 0
    # compute loss using binary cross-entropy with logits and the mask we created
    loss_tmp = nn.functional.binary_cross_entropy_with_logits(
        p[i][self.nc:self.nc + 3], cls_tgts_masked[..., None].float(), reduction='none')  # only the classes of interest
    loss.append(loss_tmp * mask.float())
return mean(loss)

This code is wrong because the shapes don't match.
@FeliceSchena hello! It seems that there was a mistake in my previous reply. Since `p` is a list of tensors (one per detection layer), you need to loop over the list and accumulate the classification loss layer by layer rather than slicing `p` directly. This should calculate the loss across all tensors in `p`.
Thank you, but `cls_tgts` is a list too, so I can't use the operator `cls_tgts > 79` on it directly. Because of this, it's impossible to create a mask with the same dimensions as `p` via `mask = (cls_tgts > 79) & (cls_tgts < 83)`. In my case `cls_tgts[0]` has shape 850.
Hello @FeliceSchena, I apologize for my previous mistake and any confusion it may have caused. You are correct that `cls_tgts` is a list of tensors, one per detection layer, so the comparison has to be applied per layer. One way to create a boolean mask for the appropriate classes is to iterate over the classes and use the same masking criteria as before. Here's an example code snippet to get you started:

def compute_loss(self, p, targets):
    cls_tgts, box_tgts, indices, anchors = self.build_targets(p, targets)
    loss = 0
    for i in range(self.nl):
        # apply a mask to the class targets to only consider the classes of interest
        cls_mask = torch.zeros_like(cls_tgts[i], dtype=torch.bool)  # boolean mask tensor
        for cls_idx in range(80, 83):
            cls_mask = cls_mask | (cls_tgts[i] == cls_idx)  # True where the class is one of the valid classes
        # compute loss using binary cross-entropy with logits and the mask we created
        cls_loss_i = nn.functional.binary_cross_entropy_with_logits(
            p[i][..., self.nc:self.nc + 3], cls_tgts[i][..., None].float(), reduction='none')
        cls_loss_i = (cls_loss_i * cls_mask.float()[..., None]).sum() / (cls_mask.sum().float() + 1e-16)  # avoid division by zero
        loss += cls_loss_i
    return loss / self.nl

This should create a boolean mask for the classes of interest and apply it to the corresponding logits.
@glenn-jocher it may be a bit disrespectful, but your replies seem like ChatGPT-generated content to me 😅
The last answers were also incorrect. However, with a little patience they guided me to understand what the given tensor shapes represented. This is what I ended up with:

mask = torch.logical_and(tcls[i] > 79, tcls[i] < 83)
masked_p = mask.where(mask, 1.0)
masked_p = masked_p.where(~mask, self.cn)
t = torch.full_like(pcls, self.cn, device=self.device)  # targets
t = t.float()
t[range(n), tcls[i]] = self.cp * masked_p
# compute loss using binary cross-entropy with logits and the mask we created
lcls += self.BCEcls(pcls, t)

It's definitely not the height of elegance, but it seems to work. The line `masked_p = masked_p.where(~mask, self.cn)` should be redundant, but instead of 0 I decided to use the negative target value on the masked classes. Furthermore, `t = t.float()` let me increase the precision of the tensor and therefore multiply the targets by values other than 1 and 0.
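For reference, the approach above can be rewritten as a self-contained sketch with synthetic tensors; the shapes, the class range 80–82, and the `cp`/`cn` values (the positive/negative BCE targets, here without label smoothing) are assumptions for illustration, not the thread's actual training state:

```python
import torch

# synthetic stand-ins for the thread's variables
n, nc = 6, 83                                # matched anchors, total classes
cp, cn = 1.0, 0.0                            # positive / negative BCE target values
pcls = torch.randn(n, nc)                    # predicted class logits
tcls = torch.tensor([3, 80, 81, 15, 82, 7])  # target class index per anchor

# True only where the target is one of the new classes (80..82)
mask = torch.logical_and(tcls > 79, tcls < 83)
# cp for new-class anchors, cn for everything else
masked_p = mask.float() * cp + (~mask).float() * cn

# BCE target matrix: cn everywhere, cp only at the (anchor, new-class) positions
t = torch.full_like(pcls, cn)
t[range(n), tcls] = masked_p

lcls = torch.nn.BCEWithLogitsLoss()(pcls, t)
print(lcls.item())
```

Anchors whose target class is outside 80..82 end up with an all-`cn` target row, so the classification loss only pushes the model toward the new classes.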
@FeliceSchena hello, thank you for the update on your progress and for sharing your code. Different applications of a model often require slightly different modifications to the code, and it's great to see that you were able to find a solution that worked for your use case. If you have any further questions or need additional assistance, feel free to ask and the community will do its best to help. Best regards, YOLOv5 Team.
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
Question
Hi!
I am working with YOLOv5 and I have a question about how to do fine-tuning with this neural net.
What I would like is to take the weights of the YOLOv5s network and train it to recognize another object in addition to the 80 classes it can already recognize, so that it detects 81 classes.
In other words, I want to train the network only with the dataset of the new class, not with both datasets (COCO and custom).
Additional
No response