Activations #708

Merged: 14 commits into master from activations, Sep 1, 2023
Conversation

EvgeniyS99 (Contributor)

No description provided.

@SergeyTsimfer (Member) left a comment:

Don't mix different entities in code: each class/module should be responsible for its own thing. Rethink the architecture of the proposed changes and their location in our code. A few examples:

  • getting the names/layers of a TorchModel/Network can be a part of the model files, maybe even as a method of it. In that case, it should be automatic and not hardcoded;
  • a utility function to reduce the number of channels can be placed near the model. Extra thought: you can do PCA on GPU; in that case, it could be integrated into the model's outputs parameter smoothly and would probably look beautiful (see the sketch after this list);
  • any batch-related things look absolutely random here and do not belong. Do we even need them?
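
A minimal sketch of the GPU-PCA idea from the second bullet (illustrative only: the function name, the (C, H, W) layout, and the use of torch.pca_lowrank are assumptions, not code from this PR):

```python
import torch

def reduce_channels_gpu(activations, n_components=3):
    """Project a (C, H, W) activation tensor onto its first principal components,
    treating every spatial location as one sample. Stays on the tensor's device."""
    c, h, w = activations.shape
    flat = activations.reshape(c, -1).T                  # (H*W, C)
    flat = flat - flat.mean(dim=0, keepdim=True)         # center the features
    # Low-rank PCA directly on the (possibly GPU) tensor; V holds the principal directions
    _, _, v = torch.pca_lowrank(flat, q=n_components, center=False)
    projected = flat @ v[:, :n_components]               # (H*W, n_components)
    return projected.T.reshape(n_components, h, w)
```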

@@ -118,3 +120,48 @@ def center_crop(inputs, target_shape, dims):
crops_ = [slice(crop_lefts[i], crop_lefts[i] + target_shape[i]) for i in range(dims)]
crops = [slice(dim) for dim in no_crop_shape] + crops_
return inputs[crops]


def get_blocks_and_activations(model):
Member:

Too much hardcoding here: if this code becomes a part of TorchModel, it should use an understanding of the classes involved. As of now, this code works only for a selected subclass of models.

As the model (class Network) consists of pre-defined modules (EncoderModule, DecoderModule, DefaultModule, etc.), you can check the instance of each submodule and extract the necessary names.
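
A hedged sketch of that approach, selecting blocks by class rather than by hardcoded names (the helper name and return format are illustrative; EncoderModule, DecoderModule and DefaultModule are assumed to be importable here, and Network is assumed to be a torch.nn.Module):

```python
def get_activation_blocks(model):
    """Collect submodules to hook activations from, selected by their class
    instead of by hardcoded names. `model` is the built Network instance."""
    # Pre-defined module classes mentioned above; assumed to be in scope here
    target_types = (EncoderModule, DecoderModule, DefaultModule)
    blocks, names = [], []
    for name, module in model.named_modules():
        if isinstance(module, target_types):
            blocks.append(module)
            names.append(name)
    return blocks, names
```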


return blocks, activation_names

def compress_activations(batch, activation_names, **kwargs):
Member:

batch has no relation to TorchModel and will not be part of the model-related utils.


return batch, explained_variance

def reduce_channels(images, n_components=3, **kwargs):
Member:

I would like an option to apply a user-provided PCA instance (so that we can accumulate data in it). Also, normalization should be optional.
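
One possible shape for that interface, a sketch only: sklearn's PCA stands in here, and the parameter names (pca, normalize) as well as returning the fitted instance are assumptions, not the PR's actual signature.

```python
from sklearn.decomposition import PCA

def reduce_channels(images, n_components=3, pca=None, normalize=False):
    """Reduce the channel axis of a (C, H, W) array with PCA.

    If `pca` is given (e.g. an already-fitted PCA, or an IncrementalPCA that the
    caller keeps feeding via partial_fit), it is only applied here; otherwise a
    fresh PCA is fitted on this input. Scaling each component to [0, 1] is opt-in.
    """
    c, h, w = images.shape
    flat = images.reshape(c, -1).T                       # (H*W, C): spatial locations as samples
    if pca is None:
        pca = PCA(n_components=n_components)
        reduced = pca.fit_transform(flat)
    else:
        reduced = pca.transform(flat)
    reduced = reduced.T.reshape(-1, h, w)
    if normalize:
        mins = reduced.min(axis=(1, 2), keepdims=True)
        maxs = reduced.max(axis=(1, 2), keepdims=True)
        reduced = (reduced - mins) / (maxs - mins + 1e-6)
    return reduced, pca
```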

[Multiple outdated review comments on batchflow/models/torch/base.py, all resolved]
activation_blocks.extend(blocks)
activation_names += [f'{module}_{i}' for i in range(len(blocks))]

return activation_blocks, activation_names
Member:

Do you even need activation_names?

Contributor (Author):

> Do you even need activation_names?

I use them to assign activations to the batch, as well as to provide titles for plots.

Member:

Well, that is how you use them. Why should something related only to your code (the batch and the plots), which doesn't belong here, be placed there?

@SergeyTsimfer merged commit 0d1ccac into master on Sep 1, 2023
19 checks passed
@SergeyTsimfer deleted the activations branch on September 1, 2023, 14:27