Please run in eager mode or implement the compute_output_shape method on your layer (DenseVariational). #1505

Closed
harvetech opened this issue Jan 26, 2022 · 3 comments


@harvetech

Dear TFP team,

I am trying to use the DenseVariational layer with Keras's TimeDistributed layer, but I get the following error:

<ipython-input-228-ecfae02fb877> in <module>
     14 conv_model = layers.TimeDistributed(layers.Flatten())(conv_model)
     15 conv_model = layers.TimeDistributed(layers.Dropout(rate=0.1))(conv_model)
---> 16 conv_model = layers.TimeDistributed(tfp.layers.DenseVariational(64,make_posterior_fn=posterior,make_prior_fn=prior, activation='relu'))(conv_model)
     17 conv_model = layers.TimeDistributed(tfp.layers.DenseFlipout(1, activation='relu',kernel_divergence_fn=kl_divergence_function, name='Output'))(conv_model)
     18 

~/anaconda3/envs/tfp/lib/python3.8/site-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
     65     except Exception as e:  # pylint: disable=broad-except
     66       filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67       raise e.with_traceback(filtered_tb) from None
     68     finally:
     69       del filtered_tb

~/anaconda3/envs/tfp/lib/python3.8/site-packages/keras/engine/base_layer.py in compute_output_shape(self, input_shape)
    826               self.__class__.__name__) from e
    827       return tf.nest.map_structure(lambda t: t.shape, outputs)
--> 828     raise NotImplementedError(
    829         'Please run in eager mode or implement the `compute_output_shape` '
    830         'method on your layer (%s).' % self.__class__.__name__)

NotImplementedError: Exception encountered when calling layer "time_distributed_476" (type TimeDistributed).

Please run in eager mode or implement the `compute_output_shape` method on your layer (DenseVariational).

Call arguments received:
  • inputs=tf.Tensor(shape=(None, 5, 384), dtype=float32)
  • training=None
  • mask=None

I checked, and both the DenseVariational and TimeDistributed layers appear to have `compute_output_shape` implemented already. Can anyone please give me some leads?

I am out of ideas about what to try next. I don't get this error when I use the DenseFlipout layer.
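
For reference, my `posterior` and `prior` follow the usual mean-field pattern from the TFP regression tutorial, roughly along these lines (a sketch, not my exact definitions):

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Trainable mean-field Gaussian posterior over the kernel and bias.
def posterior(kernel_size, bias_size=0, dtype=None):
    n = kernel_size + bias_size
    c = np.log(np.expm1(1.))  # chosen so that softplus(c) == 1
    return tf.keras.Sequential([
        tfp.layers.VariableLayer(2 * n, dtype=dtype),
        tfp.layers.DistributionLambda(lambda t: tfd.Independent(
            tfd.Normal(loc=t[..., :n],
                       scale=1e-5 + tf.nn.softplus(c + t[..., n:])),
            reinterpreted_batch_ndims=1)),
    ])

# Prior with a trainable location and fixed unit scale.
def prior(kernel_size, bias_size=0, dtype=None):
    n = kernel_size + bias_size
    return tf.keras.Sequential([
        tfp.layers.VariableLayer(n, dtype=dtype),
        tfp.layers.DistributionLambda(lambda t: tfd.Independent(
            tfd.Normal(loc=t, scale=1.),
            reinterpreted_batch_ndims=1)),
    ])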

Thanks in advance.

@Frightera
Contributor

Actually, the DenseVariational layer you are using right now is the v2 implementation in dense_variational_v2.py, which does not define `compute_output_shape`, so the base tf.keras.layers.Layer fallback raises the NotImplementedError you see:

class DenseVariational(tf.keras.layers.Layer):

I think the lines you checked are from the older dense_variational.py (the base class of DenseFlipout, which is why that layer works for you):

def compute_output_shape(self, input_shape):
  """Computes the output shape of the layer.

  Args:
    input_shape: Shape tuple (tuple of integers) or list of shape tuples
      (one per output tensor of the layer). Shape tuples can include None
      for free dimensions, instead of an integer.

  Returns:
    output_shape: A tuple representing the output shape.

  Raises:
    ValueError: If innermost dimension of `input_shape` is not defined.
  """
  input_shape = tf.TensorShape(input_shape)
  input_shape = input_shape.with_rank_at_least(2)
  if tf.compat.dimension_value(input_shape[-1]) is None:
    raise ValueError(
        'The innermost dimension of `input_shape` must be defined, '
        'but saw: {}'.format(input_shape))
  return input_shape[:-1].concatenate(self.units)

It should work if you define a class like:

class DenseVariationalExtended(tfp.layers.DenseVariational):
    ...
    def compute_output_shape(self, input_shape):
        # Same rule as Dense: replace the innermost dimension with units.
        return tf.TensorShape(input_shape)[:-1].concatenate(self.units)

and use it as you would a regular DenseVariational layer. I am not sure whether that will break anything in the long term; I may take a look at this when I have time.
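
For example, hypothetical wiring that mirrors the snippet in your traceback (conv_model, posterior, and prior are your own objects):

import tensorflow as tf
import tensorflow_probability as tfp

# Wrap the subclass in TimeDistributed exactly as before; now
# TimeDistributed can query the output shape without calling the layer.
conv_model = tf.keras.layers.TimeDistributed(
    DenseVariationalExtended(
        64,
        make_posterior_fn=posterior,
        make_prior_fn=prior,
        activation='relu'))(conv_model)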

@harvetech
Author

@Frightera Thank you for your suggestion. I am going to give it a try.

@ColCarroll
Contributor

Thanks again, @Frightera!
