
Releases: MadryLab/robustness

robustness 1.2.1.post2

01 Dec 06:11
  • Support for SqueezeNet architectures
  • Fix incompatibility with PyTorch 1.7 (#83)
  • Allow the user to specify a subset of device IDs for training via the dp_device_ids argument to train.train_model (see the sketch after this list)
  • Update requirements.txt
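
A minimal sketch of the new argument, using the library's standard training-as-a-library setup; the dataset path, architecture, and hyperparameters below are placeholders:

    from cox.utils import Parameters
    from robustness import datasets, defaults, model_utils, train

    ds = datasets.CIFAR('/path/to/cifar')  # placeholder path
    model, _ = model_utils.make_and_restore_model(arch='resnet18', dataset=ds)
    train_loader, val_loader = ds.make_loaders(workers=8, batch_size=128)

    # Fill in the remaining training defaults for this dataset
    args = Parameters({'out_dir': 'train_out', 'adv_train': 0, 'epochs': 10})
    args = defaults.check_and_fill_args(args, defaults.TRAINING_ARGS,
                                        datasets.CIFAR)

    # New in this release: run DataParallel on GPUs 0 and 1 only
    train.train_model(args, model, (train_loader, val_loader),
                      dp_device_ids=[0, 1])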

robustness 1.2.1.post1

12 Aug 16:25

Small fixes to the BREEDS dataset

robustness 1.2.1

06 Aug 15:42

Add the BREEDS dataset; minor bug fixes
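
A minimal sketch of setting up a BREEDS task, following the pattern in the library's BREEDS documentation; the living17 task and both directory paths are placeholder choices:

    from robustness import datasets
    from robustness.tools.breeds_helpers import make_living17

    info_dir = '/path/to/breeds/info'  # BREEDS hierarchy files (placeholder)
    data_dir = '/path/to/imagenet'     # ImageNet root (placeholder)

    # Build the living17 task: superclasses split into disjoint source
    # and target subpopulations
    superclasses, subclass_split, label_map = make_living17(info_dir,
                                                            split='rand')
    train_subclasses, test_subclasses = subclass_split

    # Wrap each subpopulation as a standard ImageNet-backed dataset
    dataset_source = datasets.CustomImageNet(data_dir, train_subclasses)
    dataset_target = datasets.CustomImageNet(data_dir, test_subclasses)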

robustness 1.2-post1

11 Jul 05:56 · 375d9ef
  • Restore ImageNetHierarchy class
  • Improve type checking for dataset arguments

robustness v1.2

05 Jul 05:19
  • Biggest new features:
    • New ImageNet models
    • Mixed-precision training
    • OpenImages and Places365 datasets added
    • Ability to specify a custom accuracy function (custom loss functions
      were already supported; this one is just for logging); see the sketch
      after this list
    • Improved resuming functionality
  • Changes to CLI-based training:
    • --custom-lr-schedule replaced by --custom-lr-multiplier (same format)
    • --eps-fadein-epochs replaced by the more general --custom-eps-multiplier
      (now the same format as the custom LR schedule)
    • --step-lr-gamma now available to change the size of learning rate
      drops (used to be fixed to 10x drops)
    • --lr-interpolation argument added (can choose between linear and step
      interpolation between learning rates in the schedule)
    • --weight_decay is now called --weight-decay, in keeping with
      convention
    • --resume-optimizer is a 0/1 argument for whether to resume the
      optimizer and LR schedule, or just the model itself
    • --mixed-precision is a 0/1 argument for whether to use mixed-precision
      training (requires PyTorch compiled with AMP support)
  • Model and data loading:
    • DataParallel is now off by default when loading models, even when
      resume_path is specified (previously it was off for new models, and on
      for resumed models by default)
    • New add_custom_forward argument for make_and_restore_model (see the
      docs for details, and the sketch after this list)
    • Can now pass a random seed for training data subsetting
  • Training:
    • See the new CLI features; most have training-as-a-library counterparts
      (sketch after this list)
    • Fixed a bug that prevented the optimizer and LR schedule from being
      resumed
    • Support for custom accuracy functions
    • Can now disable torch.no_grad for test-set evaluation (in case you
      have a custom accuracy function that needs gradients even on the val
      set)
  • PGD:
    • Better random start for l2 attacks
    • Added a RandomStep attacker step, useful for large-noise training with
      noise that varies over training (sketch after this list)
    • Fixed a minor bug in the with_image argument
  • Model saving:
    • Accuracies are now saved in the checkpoint files themselves (instead of
      just in the log stores)
    • Removed the redundant checkpoints table from the log store, as it
      duplicated the latest checkpoint file and wasted space
  • Cleanup:
    • Removed the redundant save_checkpoint function from the helpers file
    • Code flow improvements
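
A sketch of the training-as-a-library counterparts to the CLI changes above, including the custom accuracy hook; argument names mirror the flags with underscores, all values are placeholders, and the disable_no_grad keyword is assumed to be the library-side form of the no_grad toggle:

    from cox.utils import Parameters
    from robustness import datasets, defaults, model_utils, train

    ds = datasets.CIFAR('/path/to/cifar')  # placeholder path
    model, _ = model_utils.make_and_restore_model(arch='resnet50', dataset=ds)
    loaders = ds.make_loaders(workers=8, batch_size=128)

    def two_float_accuracy(out, targets):
        # Custom accuracy hook: takes model outputs and targets and must
        # return exactly two floats (filling the usual top1/top5 slots);
        # this toy version just reports top-1 twice.
        top1 = (out.argmax(dim=1) == targets).float().mean().item() * 100
        return top1, top1

    args = Parameters({
        'out_dir': 'train_out',        # placeholder
        'adv_train': 0,
        'epochs': 150,
        'step_lr': 50,                 # epochs between LR drops
        'step_lr_gamma': 0.1,          # --step-lr-gamma: size of each drop
        'lr_interpolation': 'step',    # --lr-interpolation: 'step' or 'linear'
        'weight_decay': 5e-4,          # renamed from --weight_decay
        'mixed_precision': 0,          # --mixed-precision (needs AMP support)
        'custom_accuracy': two_float_accuracy,
    })
    args = defaults.check_and_fill_args(args, defaults.TRAINING_ARGS,
                                        datasets.CIFAR)

    # Drop the torch.no_grad() wrapper around val-set evaluation, for
    # accuracy functions that need gradients
    train.train_model(args, model, loaders, disable_no_grad=True)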
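A sketch of add_custom_forward, on the assumption that it exists to wrap a model instance whose forward only accepts the input tensor (the docs are authoritative on the exact behavior):

    import torch.nn as nn
    from robustness import datasets, model_utils

    ds = datasets.CIFAR('/path/to/cifar')  # placeholder path

    # An arbitrary model instance whose forward() takes only the input,
    # without the with_latent/fake_relu/no_relu keyword arguments
    net = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
    )

    # add_custom_forward wraps the plain forward so the attacker machinery
    # can still call it with those extra keywords
    model, _ = model_utils.make_and_restore_model(
        arch=net, dataset=ds, add_custom_forward=True)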
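A sketch of the new RandomStep; the 'random_smooth' constraint key used to select it is an assumption based on the attacker's step registry, with eps playing the role of the noise magnitude:

    import torch
    from robustness import datasets, model_utils

    ds = datasets.CIFAR('/path/to/cifar')  # placeholder path
    model, _ = model_utils.make_and_restore_model(arch='resnet18', dataset=ds)

    batch = torch.rand(8, 3, 32, 32)
    labels = torch.randint(0, 10, (8,))
    # (Move the model and tensors to the same device as needed.)

    # One "attack" iteration that just adds random noise to the input;
    # step_size is unused by this step type but still required
    _, noised = model(batch, labels, make_adv=True,
                      constraint='random_smooth', eps=0.25,
                      step_size=0.0, iterations=1)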

v1.1

01 Nov 06:18
Release stuff

v1.0-post1

01 Nov 06:06 · e1c389d
Update README.rst