
update model_comp examples and ppc_w to work with idata #4042

Merged (6 commits, Aug 13, 2020)

Conversation

aloctavodia (Member)

No description provided.

@review-notebook-app
Check out this pull request on ReviewNB.

codecov bot commented Aug 10, 2020

Codecov Report

Merging #4042 into master will increase coverage by 0.02%.
The diff coverage is 80.95%.


@@            Coverage Diff             @@
##           master    #4042      +/-   ##
==========================================
+ Coverage   86.79%   86.81%   +0.02%     
==========================================
  Files          88       88              
  Lines       14143    14150       +7     
==========================================
+ Hits        12276    12285       +9     
+ Misses       1867     1865       -2     
| Impacted Files | Coverage Δ |
| --- | --- |
| pymc3/sampling.py | 86.71% <80.95%> (+0.36%) ⬆️ |

@AlexAndorra (Contributor) left a comment

Thanks for this nice update @aloctavodia! I spotted some typos below. Also, you'll need to merge master into this branch to avoid conflicts in the release notes.

RELEASE-NOTES.md Outdated
Comment on lines 5 to 14
### Maintenance

### Documentation

### New features
- `sample_posterior_predictive_w` can now feed on `xarray.Dataset` - e.g. from `InferenceData.posterior`. (see [#4042](https://github.com/pymc-devs/pymc3/pull/4042))


## PyMC3 3.9.3 (11 August 2020)
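The release-note entry above is the core API change in this PR: `sample_posterior_predictive_w` accepting `xarray.Dataset` posteriors. A minimal sketch of how that could look, assuming two made-up toy models and weights taken from `az.compare`; none of this is code from the PR itself:

```python
import arviz as az
import numpy as np
import pymc3 as pm

y = np.random.normal(loc=0.0, scale=1.0, size=100)  # toy data, for illustration only

with pm.Model() as model_0:
    mu = pm.Normal("mu", 0.0, 10.0)
    pm.Normal("obs", mu=mu, sigma=1.0, observed=y)
    idata_0 = pm.sample(return_inferencedata=True)

with pm.Model() as model_1:
    mu = pm.Normal("mu", 0.0, 10.0)
    sigma = pm.HalfNormal("sigma", 5.0)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=y)
    idata_1 = pm.sample(return_inferencedata=True)

# Model weights from ArviZ model comparison (stacking weights, LOO by default).
cmp = az.compare({"model_0": idata_0, "model_1": idata_1})

# Per the release note, the posteriors can now be xarray Datasets
# (e.g. InferenceData.posterior) instead of MultiTrace objects.
ppc_w = pm.sample_posterior_predictive_w(
    traces=[idata_0.posterior, idata_1.posterior],
    models=[model_0, model_1],
    # az.compare sorts its rows by rank, so reindex to match the traces/models order.
    weights=cmp["weight"].loc[["model_0", "model_1"]].to_numpy(),
)
```

The weighted posterior predictive samples in `ppc_w["obs"]` can then be checked against the observed data as in a regular posterior predictive check.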

Contributor

I updated the release notes in an earlier PR: can you merge master into this branch and then add a line for this PR please?

Contributor

- ~~THis~~ This approximation is based on importance sampling
- By default ArviZ~~,~~ uses LOO, but WAIC is also available.

Reply via ReviewNB

Contributor

Shouldn't you call ArviZ directly in these last two cells? Especially since you're talking about ArviZ in the text


Reply via ReviewNB
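For reference, a minimal sketch of what calling ArviZ directly in those two cells might look like, reusing the hypothetical `idata_0`/`idata_1` objects from the sketch above (the actual notebook cells may differ):

```python
import arviz as az

# Model comparison straight from ArviZ: LOO by default, returned as a DataFrame.
cmp = az.compare({"model_0": idata_0, "model_1": idata_1})

# Plot the comparison; the best model's LOO estimate is marked with a vertical dashed line.
az.plot_compare(cmp)
```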

Contributor

The first of ~~this~~ these functions is compare, which ~~this one~~ computes LOO (or WAIC) ~~WAIC (or LOO)~~ from a set of traces and models and returns a DataFrame.


Reply via ReviewNB

Contributor

Maybe use LaTeX ordered list formatting for more beautiful display?

0) The index ~~is~~ are the names of the models taken from the keys of the dictionary passed to compare(.).

1) rank, the ranking ~~of~~ on the models starting from 0 (best model) to the number of models.

2) loo, the values of LOO (or WAIC). (needs a space after LOO)

5) weight, the weights assigned to each model. These weights can be loosely interpreted as the probability of each model being true (among the compared models) given the data.

9) ... Other options are deviance -- this is the log-score multiplied by -2 (this reverts the order: a lower ~~higher~~ LOO/WAIC will be better) -- and negative-log -- this is the log-score multiplied by -1 (as with the deviance scale, a lower value is better).


Reply via ReviewNB
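To make the scale options from the last item concrete, here is a small sketch, again with the hypothetical `idata_0`/`idata_1` from above: on the default log scale a higher value is better, while `deviance` (-2 times the log-score) and `negative_log` (-1 times the log-score) flip the ordering so that lower is better.

```python
import arviz as az

models = {"model_0": idata_0, "model_1": idata_1}

az.compare(models)                                   # default: LOO on the log scale, higher is better
az.compare(models, ic="loo", scale="deviance")       # -2 * log-score, lower is better
az.compare(models, ic="waic", scale="negative_log")  # -1 * log-score, lower is better
```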

Contributor

The value of the highest LOO (i.e the best estimated model) is also indicated ...


Reply via ReviewNB

Contributor

...there is little to choose between the models in this case, ~~giving~~ given that both models gives very similar values of the information criteria.


Reply via ReviewNB

@AlexAndorra (Contributor) left a comment

All good now, thanks @aloctavodia!
