
Add sllh computation back to petab_objective.simulate_petab #1548

Merged — 79 commits merged into develop on May 9, 2023

Conversation

@dilpath (Member) commented on Sep 13, 2021

The SLLH computation was removed in #1498 due to an erroneous implementation. This PR adds it back:

  • parameters are now scaled within simulate_petab
  • added FD checks, essentially copied from pypesto.objective.ObjectiveBase.check_grad{_multi_eps} [1]
  • added a test case: a PEtab problem based on the Lotka-Volterra example in yaml2sbml [2], extended to test more of the simulate_petab SLLH computation with
    • multiple experimental conditions
    • preequilibration

[1] https://github.com/ICB-DCM/pyPESTO/blob/32a5daf888dd6c49fa3ca3b8b39e055b6f15ca9f/pypesto/objective/base.py#L397-L620
[2] https://github.com/yaml2sbml-dev/yaml2sbml/tree/773f4c563d90689cd974da8c114b6c7eaf316802/doc/examples/Lotka_Volterra/Lotka_Volterra_python
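The FD checks mentioned above can be sketched as a central finite-difference comparison against the analytical gradient. A minimal illustration (not the actual pyPESTO `check_grad` code, which handles multiple step sizes and structured reporting):

```python
import numpy as np

def check_grad(fun, grad, x, eps=1e-5, rtol=1e-2, atol=1e-3):
    """Compare an analytical gradient against central finite differences.

    Minimal sketch of an FD check; the real pyPESTO implementation
    (ObjectiveBase.check_grad{_multi_eps}) is considerably more elaborate.
    """
    g = np.asarray(grad(x), dtype=float)
    fd = np.empty_like(g)
    for i in range(len(x)):
        step = np.zeros_like(x, dtype=float)
        step[i] = eps
        # central difference: (f(x + e_i) - f(x - e_i)) / (2 * eps)
        fd[i] = (fun(x + step) - fun(x - step)) / (2 * eps)
    return bool(np.allclose(g, fd, rtol=rtol, atol=atol)), fd

# Example: f(x) = sum(x**2) has gradient 2*x
ok, fd = check_grad(lambda x: float(np.sum(x**2)), lambda x: 2 * x,
                    np.array([1.0, -2.0, 3.0]))
```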

@codecov — codecov bot commented on Sep 13, 2021

Codecov Report

Merging #1548 (bc4c48c) into develop (255be19) will increase coverage by 1.16%.
The diff coverage is 95.83%.

@@             Coverage Diff             @@
##           develop    #1548      +/-   ##
===========================================
+ Coverage    53.73%   54.90%   +1.16%     
===========================================
  Files           29       30       +1     
  Lines         4548     4617      +69     
===========================================
+ Hits          2444     2535      +91     
+ Misses        2104     2082      -22     
Flag Coverage Δ
petab 54.90% <95.83%> (+1.16%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
python/sdist/amici/petab_objective.py 93.77% <93.47%> (+0.34%) ⬆️
python/tests/test_petab_objective.py 100.00% <100.00%> (ø)

... and 3 files with indirect coverage changes

@dilpath dilpath marked this pull request as ready for review November 5, 2021 09:12
@dweindl (Member) left a comment:

LGTM, but please wait for others' feedback.

Why was the extra test model required? Would keep things simpler if we could use benchmark collection model that's already used elsewhere.

@yannikschaelte (Member) left a comment:

If llh and sllh are correct (in particular, if the gradient check is happy), LGTM.

@@ -150,6 +160,15 @@ def simulate_petab(

# Compute total llh
llh = sum(rdata['llh'] for rdata in rdatas)
# Compute total sllh
Member commented:

I guess this was not the case before either, but maybe one could specify whether or not to calculate sensis? In the latter case, the simulation in runAmiciSimulations would be cheaper, and this mapping here not be performed.

@dilpath (Member, Author) replied:

The optional solver argument is expected to be provided and already set up to compute sensitivities, if sensitivities are desired. Could default to first-order forward? Either makes sense to me (no default at least ensures the user defines how sensitivities should be computed). Before review, it defaulted to None and was handled in aggregate_sllh. Now it defaults to None here, and aggregate_sllh is only called if sensitivities were computed (so an error is now raised there, as per your other suggestion).

Member replied:

> The optional solver argument is expected to be provided and already set up to compute sensitivities, if sensitivities are desired.

Yes!
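The sllh aggregation under discussion can be sketched roughly as follows. This `aggregate_sllh` stand-in is a simplification of the real function, which additionally resolves the condition-specific parameter mapping before summing:

```python
def aggregate_sllh(per_condition_sllh):
    """Sum per-condition parameter sensitivities into one total gradient.

    `per_condition_sllh` is assumed to be a list of dicts mapping PEtab
    parameter IDs to d(llh)/d(parameter) for one simulation condition.
    An entry of None indicates the solver was not configured to compute
    sensitivities, which the real code treats as an error.
    """
    total = {}
    for condition_sllh in per_condition_sllh:
        if condition_sllh is None:
            raise ValueError(
                "Sensitivities were not computed; "
                "configure the solver for (at least) first-order forward."
            )
        for par_id, value in condition_sllh.items():
            # Parameters shared across conditions accumulate their contributions.
            total[par_id] = total.get(par_id, 0.0) + value
    return total
```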

# SLLH is (un)scaled here with methods that were written for
# parameters.
# First unscale w.r.t. model scale.
partial_parameter_sllh = unscale_parameter(
Member commented:

I thought we also had a function rescale(val, scale_in, scale_out) somewhere, but never mind.
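The (un)scaling in the snippet above amounts to the chain rule applied per parameter scale. A hypothetical `rescale_gradient` helper (not an actual AMICI/PEtab function) illustrating the idea:

```python
import math

def rescale_gradient(grad_lin, par_lin, scale_out):
    """Convert d(llh)/dp on linear scale to d(llh)/dq on a scaled parameter q.

    Chain rule: for p = exp(q) (log scale), d llh/dq = d llh/dp * p;
    for p = 10**q (log10 scale), d llh/dq = d llh/dp * p * ln(10).
    Sketch only; AMICI/PEtab use their own (un)scaling helpers.
    """
    if scale_out == "lin":
        return grad_lin
    if scale_out == "log":
        return grad_lin * par_lin
    if scale_out == "log10":
        return grad_lin * par_lin * math.log(10)
    raise ValueError(f"unknown scale: {scale_out}")
```

For example, with llh(p) = p**2 at p = 3 the linear gradient is 6; on log10 scale the gradient becomes 6 * 3 * ln(10).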

@dilpath (Member, Author) commented on Nov 5, 2021

> LGTM, but please wait for others' feedback.
>
> Why was the extra test model required? Would keep things simpler if we could use a benchmark collection model that's already used elsewhere.

The extra test model is there because a suitable benchmark problem may not exist. I tested a few with pyPESTO and some errors were always high. I could look for more (though I may also have been using an optimized vector).

The extra test model might still be a good idea, since many/all PEtab features could be added to it that do not all exist in a single benchmark problem simultaneously.

dilpath and others added 2 commits November 6, 2021 21:07
Co-authored-by: Yannik Schälte <31767307+yannikschaelte@users.noreply.github.com>
Co-authored-by: Daniel Weindl <dweindl@users.noreply.github.com>
@FFroehlich (Member) left a comment:

Not sure whether the implementation here is correct. Might also explain why errors are high for petab test models. I agree with Daniel that we should ideally get this working on all of the examples from the benchmark collection as well.

dilpath and others added 3 commits November 7, 2021 11:49
@dilpath (Member, Author) commented on Nov 7, 2021

> Not sure whether the implementation here is correct. Might also explain why errors are high for petab test models. I agree with Daniel that we should ideally get this working on all of the examples from the benchmark collection as well.

Sure, then I'll compare the gradients for all benchmark problems between AMICI and pyPESTO. Might take a week or so due to other commitments.

@FFroehlich FFroehlich self-assigned this May 7, 2023
@FFroehlich FFroehlich marked this pull request as ready for review May 8, 2023 10:18
@FFroehlich FFroehlich requested a review from a team as a code owner May 8, 2023 10:18
@FFroehlich FFroehlich requested a review from dweindl May 8, 2023 10:19
@dweindl (Member) left a comment:

tbc

ATOL: float = 1e-3
RTOL: float = 1e-2

benchmark_path = Path(__file__).parent.parent.parent / "Benchmark-Models-PEtab" / "Benchmark-Models"
Member commented:

Could use the benchmark models package to simplify things, but fine as is.

Member replied:

true, should then also change upstream tests, keeping as is for now
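The ATOL/RTOL constants in the test snippet above suggest an elementwise combined-tolerance comparison between AMICI and finite-difference gradients. A sketch assuming NumPy's allclose-style criterion, |a − b| <= atol + rtol * |b|:

```python
import numpy as np

ATOL: float = 1e-3
RTOL: float = 1e-2

def gradients_match(amici_grad, fd_grad, atol=ATOL, rtol=RTOL):
    """Elementwise combined-tolerance check (NumPy allclose convention):
    |amici - fd| <= atol + rtol * |fd| for every component.

    Hypothetical helper; the actual benchmark test may compare differently.
    """
    amici_grad = np.asarray(amici_grad, dtype=float)
    fd_grad = np.asarray(fd_grad, dtype=float)
    return bool(np.all(np.abs(amici_grad - fd_grad)
                       <= atol + rtol * np.abs(fd_grad)))
```

The combined form lets the relative term dominate for large gradient entries while the absolute term covers entries near zero, where a pure relative tolerance would be too strict.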

@dweindl (Member) left a comment:

👍

@FFroehlich FFroehlich merged commit d2fb35e into develop May 9, 2023
@FFroehlich FFroehlich deleted the fix_simulate_petab_sllh branch May 9, 2023 07:15