
UserWarning if doing predictive sampling with models containing Potentials #4419

Merged · 4 commits · Jan 16, 2021

Conversation

ricardoV94
Member

@ricardoV94 ricardoV94 commented Jan 15, 2021

As discussed in #3865

Do we want tests for the Warning?
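For context, the change this PR makes can be sketched in isolation. This is a minimal stand-alone illustration, not PyMC3's actual internals: `FakeModel` and the function body are invented here, and the warning text is only approximate.

```python
import warnings


class FakeModel:
    """Minimal stand-in for a PyMC3 model (illustration only)."""

    def __init__(self, potentials=()):
        # In PyMC3, model.potentials holds terms added via pm.Potential.
        self.potentials = list(potentials)


def sample_posterior_predictive(trace, model):
    """Forward-sample, warning first if the model contains Potentials."""
    if model.potentials:
        # Potentials only reshape the joint log-probability; forward
        # (predictive) sampling has no way to account for them, so warn.
        warnings.warn(
            "The model contains Potentials; these are ignored during "
            "predictive sampling.",
            UserWarning,
        )
    # ... forward sampling of the observed variables would go here ...


# Emits a UserWarning because the model has a Potential:
sample_posterior_predictive(trace=None, model=FakeModel(potentials=["constraint"]))
```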

@ricardoV94 ricardoV94 changed the title Raise warning if doing predictive sampling with models containing Potentials UserWarning if doing predictive sampling with models containing Potentials Jan 15, 2021
Member

@michaelosthege michaelosthege left a comment


Looks good! A simple test would be great.
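In the real test suite such a check would typically use `pytest.warns(UserWarning)`; the stdlib-only sketch below shows the same idea without assuming pytest. The helper name and the `noisy` function are invented for illustration.

```python
import warnings


def check_warns_user(func, *args, **kwargs):
    """Tiny stdlib analogue of pytest.warns(UserWarning): run func and
    assert that it emitted at least one UserWarning."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        func(*args, **kwargs)
    assert any(
        issubclass(w.category, UserWarning) for w in caught
    ), "expected a UserWarning"


# Hypothetical function under test that emits the warning:
def noisy():
    warnings.warn("Potentials are ignored here", UserWarning)


check_warns_user(noisy)  # passes silently
```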

Contributor

@AlexAndorra AlexAndorra left a comment


Nice, thanks @ricardoV94! I agree with @michaelosthege that a simple test would be nice.
Also, this should probably be added to fast_sample_posterior_predictive too: https://github.com/pymc-devs/pymc3/blob/1769258e459e8f40aa8a56e0ac911aa99e7f67de/pymc3/distributions/posterior_predictive.py#L156

@ricardoV94
Member Author

> Nice, thanks @ricardoV94! I agree with @michaelosthege that a simple test would be nice.
> Also, this should probably be added to fast_sample_posterior_predictive too:
> https://github.com/pymc-devs/pymc3/blob/1769258e459e8f40aa8a56e0ac911aa99e7f67de/pymc3/distributions/posterior_predictive.py#L156

Nice catch!

I added some tests; do you think they are overkill?

Member

@michaelosthege michaelosthege left a comment


The tests are overkill in the sense that they add a lot of unnecessary runtime.
Instead of calling pm.sample(), which is complete overkill in terms of compute, it would be better to use az.InferenceData.from_dict and manually create a "trace" using numpy.random.

@ricardoV94
Member Author

> The tests are overkill in the sense that they add a lot of unnecessary runtime.
> Instead of calling pm.sample(), which is complete overkill in terms of compute, it would be better to use az.InferenceData.from_dict and manually create a "trace" using numpy.random.

Agreed! By any chance, do you know if there is any test already doing something like this?

@michaelosthege
Member

> Agreed! By any chance, do you know if there is any test already doing something like this?

It doesn't look like that's the case so far. Many tests probably could.

You can inspect the existing trace to find out which variables you'll need. Something along the lines of:

for vname in trace.varnames:
    print(f"{vname} has shape {trace[vname].shape}")
    print(trace[vname])
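For illustration, a fake posterior along those lines can be built with numpy alone. The variable names and shapes below are made up; in practice they would be taken from a real model's trace.varnames as shown above.

```python
import numpy as np

# Hand-built "trace": 1 chain x 50 draws per free variable.
# Names and shapes are illustrative only.
rng = np.random.default_rng(42)
fake_posterior = {
    "mu": rng.normal(size=(1, 50)),
    "sigma_log__": rng.normal(size=(1, 50)),
}

for name, draws in fake_posterior.items():
    print(name, draws.shape)
```

In a test, this dict could then be wrapped with az.InferenceData.from_dict(posterior=fake_posterior) and passed to the predictive sampling function under test, avoiding any call to pm.sample().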

@AlexAndorra
Contributor

I guess you can also take some cues from the ArviZ test suite -- we do stuff like that IIRC, and we also have built-in datasets.

Contributor

@AlexAndorra AlexAndorra left a comment


All good, thanks @ricardoV94 🍾

@AlexAndorra AlexAndorra merged commit 4e2c099 into pymc-devs:master Jan 16, 2021
@AlexAndorra
Contributor

BTW @ricardoV94, do you have an email address I can contact you on please?

Successfully merging this pull request may close these issues.

Forward sampling functions should warn that they ignore Potential in a model
3 participants