
Overhaul 2.1 Remove Dependencies / Add Full Timm Support #3

Merged: 6 commits from dependencies/cleanup into main on Feb 27, 2024

Conversation

@isaaccorley (Owner) commented on Jan 27, 2024

This PR seeks to do a few things:

  • Remove the dependencies on pretrainedmodels and efficientnet-pytorch, as they are no longer maintained.
  • Remove the mock and torchvision dependencies (torchvision isn't used anywhere anyway).
  • Replace any lost model support by fully using timm (torchseg.encoders.TimmEncoder); see the sketch after this list.
  • Add support for timm ViT encoders (torchseg.encoders.TimmViTEncoder).
  • Test all supported and unsupported timm encoders (defined in torchseg.encoders.supported). This includes ConvNeXt and Swin pretrained backbones.
  • Update tests to be more thorough.
  • Remove unnecessary forward/backward calls in tests (this speeds them up significantly).
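
A minimal sketch of what full timm support enables (the constructor keywords below follow the smp-style API torchseg inherits and are assumptions, not verbatim from this PR):

```python
import torchseg

# A U-Net whose backbone is created through timm via
# torchseg.encoders.TimmEncoder. Keyword names here are assumed from the
# smp-style API, not taken from this PR.
model = torchseg.Unet(
    "resnet50",            # any timm encoder name should work here
    encoder_weights=True,  # load timm's pretrained weights
    in_channels=3,
    classes=2,
)
```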

There's some other miscellaneous cleanup as well:

  • Remove the Activation class. Instead, the user chooses the head activation when creating a model; it still defaults to nn.Identity().
  • Allow passing additional encoder_params through to timm.create_model(**kwargs) in case users want to further customize the backbone (see the sketch below).
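
A rough sketch of the encoder_params passthrough (the encoder_params keyword and the other arguments are assumptions based on the description above):

```python
import torchseg

# Everything in encoder_params is forwarded verbatim to timm.create_model,
# so any backbone option timm exposes can be set from the model constructor.
# The keyword names here are assumptions based on the PR description.
model = torchseg.Unet(
    "swin_tiny_patch4_window7_224",
    encoder_weights=True,
    encoder_depth=4,                      # Swin exposes 4 feature stages
    decoder_channels=(256, 128, 64, 32),  # one decoder block per stage
    encoder_params={"img_size": 256},     # forwarded to timm.create_model
)
```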

@isaaccorley self-assigned this on Jan 27, 2024
@JulienMaille commented

Looking forward to testing this update!
Have you checked if you can use the timm convnext_nano encoder with this PR?

@isaaccorley (Owner, Author) commented

> Looking forward to testing this update!
> Have you checked if you can use the timm convnext_nano encoder with this PR?

Yes, this PR allows ConvNeXt models such as convnext_nano to be used as the encoder!
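
For reference, a minimal sketch with convnext_nano as the backbone (keyword names assumed from the smp-style API, not verified against this branch):

```python
import torch
import torchseg

# convnext_nano from timm as the encoder. ConvNeXt backbones expose
# 4 feature stages, so encoder_depth and decoder_channels are sized
# accordingly; these keyword names are assumptions, not from this PR.
model = torchseg.Unet(
    "convnext_nano",
    encoder_weights=True,
    encoder_depth=4,
    decoder_channels=(256, 128, 64, 32),
    in_channels=3,
    classes=2,
)

x = torch.randn(1, 3, 256, 256)
masks = model(x)  # per-pixel class logits
```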

@isaaccorley (Owner, Author) commented

@notprime Can you give this a review and share some thoughts?

@ogencoglu commented

Looking forward to this update...

@notprime left a comment

Everything seems fine to me. Maybe we can specify act_layer and norm_layer directly in params instead of adding them through params.update via kwargs, just to make it clearer (see the sketch below).
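
For illustration, a hypothetical before/after of the two styles being discussed (function and variable names are invented for the sketch, not taken from torchseg/encoders/timm.py):

```python
import timm
import torch.nn as nn

# Current style (as described): layer overrides arrive in **kwargs and are
# merged into the base params before calling timm.
def create_encoder_via_kwargs(name, **kwargs):
    params = {"features_only": True, "pretrained": False}
    params.update(kwargs)  # act_layer / norm_layer hide inside kwargs
    return timm.create_model(name, **params)

# Suggested style: name the layer arguments explicitly so the signature
# documents what can be customized. Note that not every timm architecture
# accepts act_layer / norm_layer.
def create_encoder_explicit(name, act_layer=nn.ReLU, norm_layer=nn.BatchNorm2d):
    params = {
        "features_only": True,
        "pretrained": False,
        "act_layer": act_layer,
        "norm_layer": norm_layer,
    }
    return timm.create_model(name, **params)
```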

torchseg/encoders/timm.py (review thread resolved)
@isaaccorley merged commit 3401b84 into main on Feb 27, 2024. 14 checks passed.
@isaaccorley deleted the dependencies/cleanup branch on February 27, 2024.
@isaaccorley (Owner, Author) commented

@JulienMaille @ogencoglu @notprime These features are now merged, and you can install them with:

`pip install --pre torchseg` or `pip install 'torchseg==0.0.1a2'`
