
Releases: sustainable-processes/summit

0.8.8

02 Dec 20:34
f49f6a6

What's Changed

New Contributors

Full Changelog: 0.8.7...0.8.8

0.8.7

08 Sep 13:18
1755f6a

What's Changed

Bug Fixes 🐛

  • Upgrade PyTorch to 1.11 and BoTorch to 0.7.0 (#199)

0.8.6

30 Jul 09:41

What's Changed

Bug Fixes 🐛

  • Fix bug in SnAr benchmark (#187) - thanks @Yujikaiya for the issue
  • Fix issue with sklearn imports (#188)

0.8.5

26 Apr 08:55
a067b27

What's Changed

Bug Fixes 🐛

  • Fix issue with MTBO rounding errors (#164)
  • Remove support for Python 3.10 until PyTorch supports it

0.8.3

18 Jul 08:17
Released version 0.8.3

0.8.1

29 Apr 09:26
Bump version

Denali

17 Apr 15:51
2a8dfba

Denali Mountain

This version comes with new optimization strategies as well as improvements to existing functionality. You can install it using pip:

pip install --upgrade summit

Below are some highlights!

Multitask Bayesian Optimization Strategy


Multitask models have been shown to improve predictive performance on tasks such as drug activity and site selectivity prediction. We extended this concept to accelerate reaction optimization in a paper published at the NeurIPS ML4Molecules workshop last year (see the code for the paper here). This functionality is encapsulated in the MTBO strategy, which takes data from one reaction optimization and uses it to speed up another.
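To make the transfer idea concrete, here is a deliberately tiny, library-free sketch. Summit's actual MTBO strategy uses multitask Gaussian process models; the function and variable names below are invented purely for illustration, and the distance-weighted average stands in for a real surrogate:

```python
# Toy sketch of the transfer idea behind multitask optimization:
# observations from a previously optimized "auxiliary" reaction inform
# which conditions to try next on a new "target" reaction. The real MTBO
# strategy uses multitask Gaussian processes; this is only an illustration.

def predict(x, target_data, aux_data, aux_weight=0.5):
    """Predict yield at condition x by distance-weighted averaging,
    down-weighting auxiliary-task points by aux_weight."""
    num, den = 0.0, 0.0
    for xi, yi, w in (
        [(xi, yi, 1.0) for xi, yi in target_data]
        + [(xi, yi, aux_weight) for xi, yi in aux_data]
    ):
        k = w / (1.0 + abs(x - xi))  # simple similarity kernel
        num += k * yi
        den += k
    return num / den if den else 0.0

def suggest_next(candidates, target_data, aux_data):
    """Greedy suggestion: the candidate with the highest predicted yield."""
    return max(candidates, key=lambda x: predict(x, target_data, aux_data))

aux = [(60, 40.0), (80, 75.0), (100, 60.0)]  # temperature -> yield, old reaction
target = [(60, 35.0)]                        # a single run on the new reaction
print(suggest_next([60, 70, 80, 90, 100], target, aux))  # -> 80
```

Even with one target-task observation, the auxiliary data pulls the suggestion toward the region that worked well for the related reaction.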

ENTMOOT Strategy

ENTMOOT is a technique that uses gradient-boosted tree models inside a Bayesian optimization loop. @jezsadler of Ruth Misener's research group kindly contributed a new strategy based on the original ENTMOOT code. It is currently an experimental feature.
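To give a flavor of the approach, here is a toy 1-D sketch, not the actual ENTMOOT implementation (which uses full gradient-boosted tree ensembles and can solve the acquisition problem exactly, optionally with Gurobi): fit a boosted-stump surrogate, then score candidates with the prediction plus a distance-based uncertainty bonus:

```python
# Toy sketch of the ENTMOOT idea: a boosted-tree surrogate plus a
# distance-based exploration bonus as the acquisition function.
# Everything here is a hand-rolled illustration, not ENTMOOT itself.

def fit_stumps(data, n_rounds=20, lr=0.3):
    """Gradient boosting with depth-1 trees (stumps) on 1-D data."""
    xs = sorted(x for x, _ in data)
    splits = [(a + b) / 2 for a, b in zip(xs, xs[1:])]
    mean = sum(y for _, y in data) / len(data)
    residual = {x: y - mean for x, y in data}
    stumps = []
    for _ in range(n_rounds):
        best = None
        for s in splits:  # pick the split that best fits current residuals
            left = [residual[x] for x, _ in data if x <= s]
            right = [residual[x] for x, _ in data if x > s]
            if not left or not right:
                continue
            lv, rv = sum(left) / len(left), sum(right) / len(right)
            loss = sum((residual[x] - (lv if x <= s else rv)) ** 2
                       for x, _ in data)
            if best is None or loss < best[0]:
                best = (loss, s, lv, rv)
        _, s, lv, rv = best
        stumps.append((s, lr * lv, lr * rv))
        for x, _ in data:  # shrink residuals by the learning rate
            residual[x] -= lr * lv if x <= s else lr * rv
    return mean, stumps

def predict(model, x):
    mean, stumps = model
    return mean + sum(lv if x <= s else rv for s, lv, rv in stumps)

def acquire(model, data, candidates, kappa=1.0):
    """Suggest the candidate maximizing prediction + exploration bonus."""
    def score(x):
        dist = min(abs(x - xi) for xi, _ in data)  # crude uncertainty proxy
        return predict(model, x) + kappa * dist
    return max(candidates, key=score)

data = [(0.0, 1.0), (0.5, 3.0), (1.0, 2.0)]
model = fit_stumps(data)
print(acquire(model, data, [0.1, 0.25, 0.5, 0.75, 0.9]))  # -> 0.75
```

The suggestion lands next to the best observed point rather than on it: the distance bonus rewards stepping into unexplored territory, which is the role the uncertainty term plays in the real strategy.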

Improvements to TSEMO

TSEMO is the best-performing strategy in Summit for multiobjective optimization, but it previously had issues with robustness. We switched the Gaussian process (GP) implementation from GPy to GPyTorch, which resolved this issue. Additionally, the TSEMO documentation was improved, and more metadata about the GP hyperparameters is now included in the return of suggest_experiments.
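At the heart of TSEMO is Thompson sampling: draw a random realisation of the surrogate model and suggest the conditions that optimize that draw. The real strategy samples GP posteriors per objective and optimizes them with an evolutionary algorithm; the single-objective sketch below, with a crude stand-in surrogate, only illustrates the sampling step:

```python
# Toy sketch of the Thompson sampling step used by TSEMO-style strategies:
# sample one plausible value per candidate from the surrogate and pick the
# argmax. The "posterior" here is a crude stand-in, not a Gaussian process.
import random

def posterior(x, data):
    """Crude surrogate: distance-weighted mean, distance-based std."""
    weights = [(1.0 / (1.0 + abs(x - xi)), yi) for xi, yi in data]
    mean = sum(w * y for w, y in weights) / sum(w for w, _ in weights)
    std = min(abs(x - xi) for xi, _ in data)  # more uncertain far from data
    return mean, std

def thompson_suggest(candidates, data, rng):
    """Draw one sample per candidate, suggest the highest draw."""
    return max(candidates, key=lambda x: rng.gauss(*posterior(x, data)))

rng = random.Random(0)
data = [(0.0, 1.0), (1.0, 2.0)]
print(thompson_suggest([0.0, 0.25, 0.5, 0.75, 1.0], data, rng))
```

Because the draws are random, repeated calls naturally balance exploitation (high mean) against exploration (high uncertainty) without an explicit acquisition function.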

Overhaul of the Experimental Emulator


The ExperimentalEmulator enables you to create new benchmarks based on experimental data. Under the hood, a machine learning model is trained to predict the outcomes of a reaction given the reaction conditions. The code for ExperimentalEmulator was simplified using Skorch, a scikit-learn-compatible wrapper for PyTorch. See this tutorial to learn how to create your own benchmark.
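The emulator idea itself can be sketched with a stand-in model. Summit trains a neural network via Skorch and PyTorch; in this hypothetical toy, an ordinary least-squares line plays the role of the trained model, and all class and variable names are invented:

```python
# Toy sketch of the emulator idea: fit a model to logged
# (conditions, outcome) pairs, then answer "run experiment" queries from
# the model instead of the lab. Summit uses a Skorch-trained neural
# network; a least-squares line stands in here.

def fit_line(data):
    """Ordinary least squares for y = a*x + b on one condition variable."""
    n = len(data)
    sx = sum(x for x, _ in data)
    sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

class ToyEmulator:
    """Benchmark stand-in: predicts outcomes instead of running reactions."""
    def __init__(self, logged_runs):
        self.a, self.b = fit_line(logged_runs)

    def run_experiment(self, temperature):
        return self.a * temperature + self.b

logged = [(60, 30.0), (80, 50.0), (100, 70.0)]  # temperature -> yield
emulator = ToyEmulator(logged)
print(emulator.run_experiment(90))  # -> 60.0
```

An optimization strategy can then query the emulator thousands of times at no experimental cost, which is exactly what makes data-derived benchmarks useful for comparing strategies.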

Deprecation of Gryffin

Gryffin is a strategy for optimizing mixed categorical-continuous domains, which enables tasks like selecting a catalyst when descriptors are not available. Unfortunately, there were repeated issues with installing Gryffin, so we removed it. Similar functionality can be achieved with the SOBO or MTBO strategies.

Other performance improvements and bug fixes

  • Some imports were inlined to improve Summit's startup performance
  • The dependency list was trimmed. We hope to improve this further by removing the need for GPy and GPyOpt and relying solely on GPyTorch and BoTorch.
  • and many more!

Denali (pre-release)

08 Mar 19:57
Pre-release
  • Replace GPy with GPyTorch (#94)
  • Improve documentation of TSEMO (#93) and the ExperimentalEmulator (#101)
  • Add the ability to use descriptors in the ExperimentalEmulator (#100 and #101)

Denali (pre-release)

19 Feb 00:41
70ba9a2
Pre-release

This is a pre-release of Denali, our newest update to Summit. Key features include:

  • New Multitask strategy as in Multi-task Bayesian Optimization of Chemical Reactions (see #80)
  • New ENTMOOT optimization strategy from this paper (#77)
  • A refactor of the ExperimentalEmulator to use skorch (see #89)
  • Deprecation of Gryffin (this is not final and might change before the full release)
  • Trimming down of dependencies and faster imports due to better dependency management (see #87)

The docs still need to be updated to include the two new strategies and properly explain the changes to ExperimentalEmulator.

Summit 0.7.0

26 Jan 10:58
fc9a509
Pre-release
Constraints only applied when ENTMOOT uses Gurobi (#86)

  • Add files via upload: first take on the ENTMOOT strategy, with the init and suggest_experiments functions implemented
  • Add files via upload: fixed some of the documentation
  • Update and rename emstrat.py to entmoot.py
  • Add files via upload
  • Adding ENTMOOT test
  • Turning off verbose logging
  • Update __init__.py
  • Update ci.yml
  • Update entmoot.py: changed the default optimizer type to Gurobi, and added an error if constraints are applied when the optimizer type is set to sampling
  • Update entmoot.py: set the default optimizer type back to sampling