
use multiple models for inference #117

Merged · 6 commits · Jun 23, 2022

Conversation

decarboxy (Collaborator) commented Jun 21, 2022

For the purposes of benchmarking and ensemble generation it is useful to compare the output of multiple models. Because feature generation is a substantial fraction of the total runtime of openfold, it would be particularly useful to generate features once and then run multiple models using those features.

This PR modifies run_pretrained_openfold.py by extracting the feature-generation code into generate_batch(), and changes the handling of --jax_param_path and --openfold_checkpoint_path so that each can optionally take a comma-separated list of model files.

For example, running with --openfold_checkpoint_path /mnt/openfold_params/101-80999.ckpt,/mnt/openfold_params/116-84749.ckpt,/mnt/openfold_params/94-79249.ckpt

will produce predictions from all three checkpoints.
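The amortization this PR describes, computing features once and reusing them across every checkpoint, can be sketched roughly as follows. Note that `generate_batch` and `run_model` here are hypothetical stand-ins rather than OpenFold's actual API; only the comma-separated handling of `--openfold_checkpoint_path` mirrors the PR:

```python
import argparse


def generate_batch(fasta_path):
    # Hypothetical stand-in for the extracted feature-generation step.
    # In the real script this wraps the expensive alignment/feature
    # pipeline, which is why it should run only once.
    return {"fasta": fasta_path, "features": [0.0, 1.0, 2.0]}


def run_model(checkpoint_path, batch):
    # Hypothetical stand-in for loading one checkpoint and running inference.
    return f"prediction from {checkpoint_path}"


def main(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("fasta_path")
    # As in the PR, the flag optionally accepts a comma-separated list.
    parser.add_argument("--openfold_checkpoint_path", required=True)
    args = parser.parse_args(argv)

    # Generate features once...
    batch = generate_batch(args.fasta_path)

    # ...then reuse them for every checkpoint in the list.
    outputs = {}
    for ckpt in args.openfold_checkpoint_path.split(","):
        outputs[ckpt] = run_model(ckpt, batch)
    return outputs
```

With three checkpoints passed, the loop yields one prediction per checkpoint while the feature pipeline runs a single time.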

@decarboxy decarboxy changed the title [DO NOT MERGE] use multiple models for inference use multiple models for inference Jun 21, 2022
@gahdritz gahdritz merged commit 89dee90 into aqlaboratory:main Jun 23, 2022