Fetch argument None has invalid type <class 'NoneType'> #3

Open
ahof1704 opened this issue May 30, 2021 · 5 comments
@ahof1704

Hi,

When running DeepExplain as shown below, I run into the following error. Any suggestions? Please let me know if you need any further information. Thanks!

import torch
from torch.utils import data  # DataLoader
import keras
from keras import backend as K

sess = K.get_session()
print('sess: ', sess)
from ConceptSaliencyMaps.deepexplain.tensorflow import DeepExplain
from ConceptSaliencyMaps.deepexplain.utils import preprocess

list_files = []
all_files = train_files + test_files
for file_name in files_max:
    for file_name2 in all_files:
        if file_name in file_name2:
            list_files.append(file_name2)
            
test_set2 = zfish_age(list_files, path_to_save = path_to_augmented, test=True, transform = True, new_channel=new_channel, new_size_frame=size_frame, 
                     verbose=False)
test_generator2 = data.DataLoader(test_set2,batch_size=1,
                                       shuffle=False,
                                       num_workers=20)

input_img = keras.Input(shape=(50, 128, 128)) 

with DeepExplain(session=sess, graph=sess.graph) as de:
    with torch.no_grad():
        for i, d in enumerate(test_generator2): 
            xis, _, _, labels_name = d
            print('labels_name: {}'.format(labels_name))
                
            input_tensor = input_img
            img_array = xis.reshape([1,50,128,128])
            ris, zis = model(xis.to(device))
            print('zis.shape: ',zis.shape) # torch.Size([1, 256])
            latents = reducer.transform(zis.cpu().detach())
            print('latents.shape: ',latents.shape) # (1, 2)
            method = 'guidedbp'

            concept_score = [K.sum(latents*i) for i in concept_vectors[attr]]
            attributions_guided = [de.explain(method, i, input_tensor, img_array) for i in concept_score]


Error: 
TypeError                                 Traceback (most recent call last)
<ipython-input-169-177871cfe4fc> in <module>
     73 
     74             concept_score = [K.sum(latents*i) for i in concept_vectors[attr]]
---> 75             attributions_guided = [de.explain(method, i, input_tensor, img_array) for i in concept_score]

<ipython-input-169-177871cfe4fc> in <listcomp>(.0)
     73 
     74             concept_score = [K.sum(latents*i) for i in concept_vectors[attr]]
---> 75             attributions_guided = [de.explain(method, i, input_tensor, img_array) for i in concept_score]

../ConceptSaliencyMaps/deepexplain/tensorflow/methods.py in explain(self, method, T, X, xs, **kwargs)
    733         _ENABLED_METHOD_CLASS = method_class
    734         method = _ENABLED_METHOD_CLASS(T, X, xs, self.session, self.keras_phase_placeholder, **kwargs)
--> 735         result = method.run()
    736         if issubclass(_ENABLED_METHOD_CLASS, GradientBasedMethod) and _GRAD_OVERRIDE_CHECKFLAG == 0:
    737             warnings.warn('DeepExplain detected you are trying to use an attribution method that requires '

../ConceptSaliencyMaps/deepexplain/tensorflow/methods.py in run(self)
    463         for alpha in list(np.linspace(1. / self.steps, 1.0, self.steps)):
    464             xs_mod = [xs * alpha for xs in self.xs] if self.has_multiple_inputs else self.xs * alpha
--> 465             _attr = self.session_run(attributions, xs_mod)
    466             if gradient is None: gradient = _attr
    467             else: gradient = [g + a for g, a in zip(gradient, _attr)]

../ConceptSaliencyMaps/deepexplain/tensorflow/methods.py in session_run(self, T, xs)
     94         if self.keras_learning_phase is not None:
     95             feed_dict[self.keras_learning_phase] = 0
---> 96         return self.session.run(T, feed_dict)
     97 
     98     def _set_check_baseline(self):

../lib/python3.7/site-packages/tensorflow_core/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    954     try:
    955       result = self._run(None, fetches, feed_dict, options_ptr,
--> 956                          run_metadata_ptr)
    957       if run_metadata:
    958         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

../lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1163     # Create a fetch handler to take care of the structure of fetches.
   1164     fetch_handler = _FetchHandler(
-> 1165         self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)
   1166 
   1167     # Run request and get response.

..lib/python3.7/site-packages/tensorflow_core/python/client/session.py in __init__(self, graph, fetches, feeds, feed_handles)
    472     """
    473     with graph.as_default():
--> 474       self._fetch_mapper = _FetchMapper.for_fetch(fetches)
    475     self._fetches = []
    476     self._targets = []

../lib/python3.7/site-packages/tensorflow_core/python/client/session.py in for_fetch(fetch)
    264     elif isinstance(fetch, (list, tuple)):
    265       # NOTE(touts): This is also the code path for namedtuples.
--> 266       return _ListFetchMapper(fetch)
    267     elif isinstance(fetch, collections_abc.Mapping):
    268       return _DictFetchMapper(fetch)

../lib/python3.7/site-packages/tensorflow_core/python/client/session.py in __init__(self, fetches)
    373     """
    374     self._fetch_type = type(fetches)
--> 375     self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
    376     self._unique_fetches, self._value_indices = _uniquify_fetches(self._mappers)
    377 

../lib/python3.7/site-packages/tensorflow_core/python/client/session.py in <listcomp>(.0)
    373     """
    374     self._fetch_type = type(fetches)
--> 375     self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
    376     self._unique_fetches, self._value_indices = _uniquify_fetches(self._mappers)
    377 

../lib/python3.7/site-packages/tensorflow_core/python/client/session.py in for_fetch(fetch)
    261     if fetch is None:
    262       raise TypeError('Fetch argument %r has invalid type %r' %
--> 263                       (fetch, type(fetch)))
    264     elif isinstance(fetch, (list, tuple)):
    265       # NOTE(touts): This is also the code path for namedtuples.

TypeError: Fetch argument None has invalid type <class 'NoneType'>
@lenbrocki
Owner

lenbrocki commented May 30, 2021 via email

@ahof1704
Author

I am using a contrastive learning model (SimCLR) with a ResNet backbone. There is nothing special about the SimCLR part except the loss it computes. The ResNet is just the standard PyTorch implementation and is defined as follows (although most of it doesn't matter for the contrastive learning):

from ResNets.model import (generate_model, load_pretrained_model, make_data_parallel,
                           get_fine_tuning_parameters)

class Args:
    model = 'resnet'
    model_depth = 18
    n_classes = 256
    n_input_channels = 1
    resnet_shortcut = 'A'
    sample_duration = 16
    conv1_t_size = 3
    conv1_t_stride = 2
    no_max_pool = True
    resnet_widen_factor = 1

opt = Args()
model = generate_model(opt).to(device)
print(model)

ResNet(
  (conv1): Conv3d(1, 64, kernel_size=(3, 7, 7), stride=(2, 2, 2), padding=(1, 3, 3), bias=False)
  (bn1): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool3d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): BasicBlock(
      (conv1): Conv3d(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
      (bn1): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv3d(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
      (bn2): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicBlock(
      (conv1): Conv3d(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
      (bn1): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv3d(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
      (bn2): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer2): Sequential(
    (0): BasicBlock(
      (conv1): Conv3d(64, 128, kernel_size=(3, 3, 3), stride=(2, 2, 2), padding=(1, 1, 1), bias=False)
      (bn1): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv3d(128, 128, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
      (bn2): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicBlock(
      (conv1): Conv3d(128, 128, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
      (bn1): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv3d(128, 128, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
      (bn2): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer3): Sequential(
    (0): BasicBlock(
      (conv1): Conv3d(128, 256, kernel_size=(3, 3, 3), stride=(2, 2, 2), padding=(1, 1, 1), bias=False)
      (bn1): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv3d(256, 256, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
      (bn2): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicBlock(
      (conv1): Conv3d(256, 256, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
      (bn1): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv3d(256, 256, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
      (bn2): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer4): Sequential(
    (0): BasicBlock(
      (conv1): Conv3d(256, 512, kernel_size=(3, 3, 3), stride=(2, 2, 2), padding=(1, 1, 1), bias=False)
      (bn1): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv3d(512, 512, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
      (bn2): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicBlock(
      (conv1): Conv3d(512, 512, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
      (bn1): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv3d(512, 512, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
      (bn2): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (avgpool): AdaptiveAvgPool3d(output_size=(1, 1, 1))
  (fc1): Linear(in_features=512, out_features=512, bias=True)
  (fc2): Linear(in_features=512, out_features=256, bias=True)
)

I also intend to contact the developer of DeepExplain. I was just wondering if this is something you have seen before in connection with your code. Any insight would be greatly appreciated :)

Thanks for the help!

@lenbrocki
Owner

DeepExplain only supports TensorFlow as far as I know.
You might want to have a look at https://github.com/pair-code/saliency, which is a Python package for obtaining saliency maps. It is framework-agnostic, so it should work with your PyTorch model.

@ahof1704
Author

Thank you very much for the suggestions!

I quickly scanned the GitHub page for the Saliency package you recommended, and apparently none of those options is meant for obtaining saliency from latent representations, right? Since I am interested in evaluating what the network has learned while embedding the samples, a framework like yours would be quite useful.

What if I convert my model from PyTorch to TensorFlow using something like pytorch2keras? Maybe that could be a workaround for the problem.
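
Something along these lines is what I have in mind (a rough, untested sketch, assuming the pytorch_to_keras entry point shown in the pytorch2keras README; whether it can handle the Conv3d layers is another question):

import numpy as np
import torch
from pytorch2keras import pytorch_to_keras  # assuming the README entry point

# dummy input matching the model's expected shape: (batch, channels, depth, H, W)
dummy_np = np.random.uniform(0, 1, (1, 1, 50, 128, 128)).astype(np.float32)
dummy_var = torch.from_numpy(dummy_np)

# second argument is an example input, third the input shape(s) without the batch dim
k_model = pytorch_to_keras(model, dummy_var, [(1, 50, 128, 128)], verbose=True)
k_model.summary()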

@lenbrocki
Owner

Yes, neither the Saliency package nor DeepExplain is meant for obtaining saliency from latent representations.

As explained in our paper https://arxiv.org/abs/1910.13140, however, one can take any of the saliency methods and replace the class score (i.e. the activation of the class you are targeting in the prediction vector) with what we call the concept score (e.g. the dot product of a concept vector and the latent vector).
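
For example, a rough sketch of that substitution in plain PyTorch (plain gradients rather than guided backprop, reusing the names from your snippet above; note that the score has to stay differentiable with respect to the input, so a non-differentiable reducer like UMAP cannot sit between them):

import torch

x = xis.to(device).requires_grad_(True)   # enable gradients w.r.t. the input volume
_, zis = model(x)                         # latent vector, e.g. shape (1, 256)

# one concept vector, assumed to live in the same latent space as zis
v = torch.as_tensor(concept_vectors[attr][0], dtype=zis.dtype, device=zis.device)

concept_score = (zis * v).sum()           # dot product = concept score (replaces the class score)

# gradient of the concept score w.r.t. the input is the (unnormalized) concept saliency map
saliency_map, = torch.autograd.grad(concept_score, x)
saliency_map = saliency_map.abs().squeeze(0)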

So if you want to stay closer to our code you could indeed try to convert the model; I have no experience with that, but I assume it should work.
The other option would be to use the Saliency package, find where the class score is defined, and replace it with the concept score.
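
A very rough, untested sketch of that second route, assuming the call_model_function interface and the INPUT_OUTPUT_GRADIENTS key used in the saliency package's examples (x_single here is a placeholder for one input volume as a numpy array):

import torch
import saliency.core as saliency  # pip install saliency

# concept vector in the latent space (placeholder: first vector for this attribute)
v = torch.as_tensor(concept_vectors[attr][0], dtype=torch.float32, device=device)

def call_model_function(x_batch, call_model_args=None, expected_keys=None):
    # x_batch arrives as a numpy array; run it through the PyTorch encoder
    x = torch.tensor(x_batch, dtype=torch.float32, device=device, requires_grad=True)
    _, zis = model(x)
    concept_score = (zis * call_model_args['concept_vector']).sum(dim=1)  # replaces the class score
    grads = torch.autograd.grad(concept_score, x,
                                grad_outputs=torch.ones_like(concept_score))[0]
    return {saliency.base.INPUT_OUTPUT_GRADIENTS: grads.detach().cpu().numpy()}

gradient_saliency = saliency.GradientSaliency()
mask = gradient_saliency.GetMask(x_single, call_model_function,
                                 call_model_args={'concept_vector': v})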

Also, if you publish your results I would be glad to read them, since so far I have not seen many people use our method :)
