
Make cache annotation optional #1332

Merged (15 commits) on Aug 1, 2023
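In short, this PR adds a cache_annotations flag next to the existing cache and ignore_empty_annotations keys in the detection dataset recipe params, so that pre-loading all annotations at dataset construction becomes optional, and it updates the test datasets' _load_image to take an image path instead of a dataset index. The snippet below is only a minimal sketch of what an optional annotation cache can look like; the class name and the get_sample_annotation accessor are hypothetical, not the actual SuperGradients implementation.

from typing import Dict, List, Optional

import numpy as np


class OptionalAnnotationCacheDataset:
    """Hypothetical sketch: a detection-style dataset where annotation caching can be switched off."""

    def __init__(self, n_samples: int, cache_annotations: bool = True):
        self.n_samples = n_samples
        self.cache_annotations = cache_annotations
        # With caching on, every annotation is read once up front (fast access, higher RAM);
        # with caching off, annotations are read lazily, one sample at a time.
        self._cached_annotations: Optional[List[Dict]] = (
            [self._load_annotation(i) for i in range(n_samples)] if cache_annotations else None
        )

    def _load_annotation(self, sample_id: int) -> Dict:
        raise NotImplementedError  # subclasses parse the real annotation files here

    def _load_image(self, image_path: str) -> np.ndarray:
        raise NotImplementedError  # path-based, so it does not depend on a cached annotation list

    def get_sample_annotation(self, index: int) -> Dict:
        if self._cached_annotations is not None:
            return self._cached_annotations[index]
        return self._load_annotation(index)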
src/super_gradients/recipes/coco2017_yolox.yaml (4 changes: 2 additions & 2 deletions)
@@ -51,8 +51,8 @@ training_hyperparams:

architecture: yolox_s

-multi_gpu: DDP
-num_gpus: 8
+multi_gpu: Off
+num_gpus: 1

experiment_suffix: res${dataset_params.train_dataset_params.input_dim}
experiment_name: ${architecture}_coco2017_${experiment_suffix}
(next file: a dataset_params recipe; filename not rendered in this view)
@@ -5,6 +5,8 @@ train_dataset_params:
  input_dim: [640, 640]
  cache_dir:
  cache: False
+  cache_annotations: True
+  ignore_empty_annotations: True
  transforms:
    - DetectionMosaic:
        input_dim: ${dataset_params.train_dataset_params.input_dim}
@@ -60,6 +62,8 @@ val_dataset_params:
  input_dim: [640, 640]
  cache_dir:
  cache: False
+  cache_annotations: True
+  ignore_empty_annotations: True
  transforms:
    - DetectionPaddedRescale:
        input_dim: ${dataset_params.val_dataset_params.input_dim}
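The same keys can be overridden at runtime when annotation pre-loading is too expensive (for example on very large datasets). A sketch, assuming the coco2017_train dataloader factory accepts dataset_params/dataloader_params override dicts as in recent SuperGradients releases; verify the signature of your installed version:

from super_gradients.training.dataloaders.dataloaders import coco2017_train

# Build a COCO train dataloader with annotation caching disabled (assumed override mechanism).
train_loader = coco2017_train(
    dataset_params={
        "cache_annotations": False,        # load annotations lazily instead of pre-loading them all
        "ignore_empty_annotations": True,  # keep skipping images without targets, as in the recipe
    },
    dataloader_params={"batch_size": 16, "num_workers": 4},
)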
(next file: a dataset_params recipe; filename not rendered in this view)
@@ -5,6 +5,8 @@ train_dataset_params:
  input_dim: # None, do not resize dataset on load
  cache_dir:
  cache: False
+  cache_annotations: True
+  ignore_empty_annotations: True
  transforms:
    - DetectionRandomAffine:
        degrees: 0  # rotation degrees, randomly sampled from [-degrees, degrees]
@@ -70,6 +72,8 @@ val_dataset_params:
  input_dim:
  cache_dir:
  cache: False
+  cache_annotations: True
+  ignore_empty_annotations: True
  transforms:
    - DetectionRescale:
        output_shape: [640, 640]
(next file: a dataset_params recipe; filename not rendered in this view)
@@ -8,6 +8,8 @@ train_dataset_params:
  input_dim: [320, 320]
  cache_dir:
  cache: False
+  cache_annotations: True
+  ignore_empty_annotations: True
  transforms:
    - DetectionRandomAffine:
        degrees: 0.  # rotation degrees, randomly sampled from [-degrees, degrees]
@@ -56,6 +58,8 @@ val_dataset_params:
  input_dim: [320, 320]
  cache_dir:
  cache: False
+  cache_annotations: True
+  ignore_empty_annotations: True
  transforms:
    - DetectionPaddedRescale:
        input_dim: ${dataset_params.val_dataset_params.input_dim}
(next file: a dataset_params recipe; filename not rendered in this view)
@@ -13,6 +13,8 @@ train_dataset_params:
  input_dim: [640, 640]
  cache_dir:
  cache: False
+  cache_annotations: True
+  ignore_empty_annotations: True
  transforms:
    - DetectionMosaic:
        input_dim: ${dataset_params.train_dataset_params.input_dim}
@@ -70,6 +72,8 @@ val_dataset_params:
  input_dim: [640, 640]
  cache_dir:
  cache: False
+  cache_annotations: True
+  ignore_empty_annotations: True
  transforms:
    - DetectionPaddedRescale:
        input_dim: ${dataset_params.val_dataset_params.input_dim}
(next file: a dataset_params recipe; filename not rendered in this view)
@@ -5,6 +5,8 @@ train_dataset_params:
  input_dim: [640, 640]
  cache_dir:
  cache: False
+  cache_annotations: True
+  ignore_empty_annotations: True
  transforms:
    - DetectionRandomAffine:
        degrees: 0  # rotation degrees, randomly sampled from [-degrees, degrees]
@@ -59,6 +61,8 @@ val_dataset_params:
  input_dim: [636, 636]
  cache_dir:
  cache: False
+  cache_annotations: True
+  ignore_empty_annotations: True
  transforms:
    - DetectionRGB2BGR:
        prob: 1
(next file: a dataset_params recipe; filename not rendered in this view)
@@ -9,6 +9,7 @@ train_dataset_params:
  input_dim: [640, 640]
  cache_dir:
  cache: False
+  cache_annotations: True
  ignore_empty_annotations: False
  transforms:
    - DetectionMosaic:
@@ -70,6 +71,7 @@ val_dataset_params:
  input_dim: [640, 640]
  cache_dir:
  cache: False
+  cache_annotations: True
  ignore_empty_annotations: False
  transforms:
    - DetectionPaddedRescale:

(One file's diff is omitted here: large diffs are not rendered by default.)

tests/unit_tests/detection_caching.py (4 changes: 2 additions & 2 deletions)
@@ -29,8 +29,8 @@ def _load_annotation(self, sample_id: int) -> dict:
return {"img_path": str(sample_id), "target": np.array([[0, 0, 10, 10, cls_id]]), "resized_img_shape": self.image_size, "seed": sample_id}

# We overwrite this to fake images
def _load_image(self, index: int) -> np.ndarray:
np.random.seed(self.annotations[index]["seed"]) # Make sure that the generated random tensor of a given index will be the same over the runs
def _load_image(self, image_path: str) -> np.ndarray:
np.random.seed(int(image_path))
return np.random.random((self.image_size[0], self.image_size[1], 3)) * 255


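The signature change above follows directly from making the annotation cache optional: _load_image can no longer look up self.annotations[index], since that list may not exist, so it now receives the image path. A rough sketch of the resulting call flow, using the hypothetical get_sample_annotation accessor from the earlier sketch (not the real DetectionDataset internals):

import numpy as np


def get_sample(dataset, index: int) -> dict:
    # The annotation (cached or loaded on the fly) carries the image path;
    # _load_image depends only on that path, never on an index into a cache.
    annotation = dataset.get_sample_annotation(index)
    image: np.ndarray = dataset._load_image(annotation["img_path"])
    return {"image": image, "target": annotation["target"]}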
tests/unit_tests/detection_sub_classing_test.py (2 changes: 1 addition & 1 deletion)
@@ -32,7 +32,7 @@ def _load_annotation(self, sample_id: int) -> dict:

    # DetectionDatasetV2 will call _load_image but since we don't have any image we patch this method with
    # tensor of image shape
-    def _load_image(self, index: int) -> np.ndarray:
+    def _load_image(self, image_path: str) -> np.ndarray:
        return np.random.random(self.image_size)


tests/unit_tests/detection_sub_sampling_test.py (2 changes: 1 addition & 1 deletion)
@@ -25,7 +25,7 @@ def _load_annotation(self, sample_id: int) -> dict:

    # DetectionDatasetV2 will call _load_image but since we don't have any image we patch this method with
    # tensor of image shape
-    def _load_image(self, index: int) -> np.ndarray:
+    def _load_image(self, image_path: str) -> np.ndarray:
        return np.random.random(self.image_size)

