💡 Your Question

It seems there are several places in the code where image transforms are defined for Yolo-NAS:
1. Transforms defined in the `default_yolo_nas_coco_processing_params` function in `processing.py`: `StandardizeImage` and `DetectionCenterPadding`, but no RGB-to-BGR transform.
2. Training with `coco_detection_yolo_nas_dataset_params.yaml`: the validation transforms include `StandardizeImage`, `DetectionPadToSize` (which is center padding), and an RGB-to-BGR transform.
3. The popular notebook for fine-tuning Yolo-NAS on a custom dataset: https://github.com/roboflow/notebooks/blob/main/notebooks/train-yolo-nas-on-custom-dataset.ipynb. In this notebook, `coco_detection_yolo_format_train` is used for dataset and dataloader creation. Transforms are defined in `coco_detection_yolo_format_base_dataset_params.yaml`, and there is `DetectionPaddedRescale`, which uses bottom-right padding, but no `StandardizeImage` or RGB-to-BGR transforms.
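The two padding conventions above can be sketched in plain NumPy. This is an illustration only, not the actual super-gradients implementation (the real transforms also rescale the image and adjust bounding boxes); it just shows where the zero padding ends up in each mode:

```python
import numpy as np

def pad_image(img: np.ndarray, out_h: int, out_w: int, mode: str = "center") -> np.ndarray:
    """Zero-pad `img` (H, W, C) to (out_h, out_w, C).

    mode="center" places the image in the middle (the DetectionCenterPadding /
    DetectionPadToSize convention); mode="bottom_right" keeps the image in the
    top-left corner and pads only the bottom and right edges (the
    DetectionPaddedRescale convention).
    """
    h, w, c = img.shape
    out = np.zeros((out_h, out_w, c), dtype=img.dtype)
    if mode == "center":
        top, left = (out_h - h) // 2, (out_w - w) // 2
    else:  # "bottom_right"
        top, left = 0, 0
    out[top:top + h, left:left + w] = img
    return out

img = np.ones((4, 6, 3), dtype=np.uint8)
centered = pad_image(img, 8, 8, "center")        # ones band in the middle
corner = pad_image(img, 8, 8, "bottom_right")    # ones block in the top-left
```

The object ends up at different pixel coordinates under the two modes, which is why mixing them between fine-tuning and prediction would shift the content the model sees.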
So, my question is: which transforms should be used for training, validation, and prediction with Yolo-NAS?
Specifically, I'm interested in whether Yolo-NAS accepts RGB or BGR images, whether images should be normalized, and what the proper way of padding is (bottom-right or center).
I want to fine-tune the model with the same transforms that were used during training/validation to minimize differences in input data.
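The channel-order and normalization choices can be made concrete with a small sketch (plain NumPy, not the library's transforms). If fine-tuning uses one convention and prediction uses the other, the red and blue channels are effectively swapped, and the input scale differs by a factor of 255:

```python
import numpy as np

# A toy "red" pixel in RGB channel order.
rgb = np.array([[[255, 0, 0]]], dtype=np.uint8)

# RGB -> BGR is just a channel-axis flip; red moves to the last channel.
bgr = rgb[..., ::-1]

# StandardizeImage-style scaling maps uint8 [0, 255] to float [0, 1].
standardized = rgb.astype(np.float32) / 255.0
```

If the model was trained on standardized BGR input, feeding it raw RGB uint8 data changes both the channel semantics and the value range, which is exactly the mismatch I want to avoid.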
Versions
No response