Adding Bayesian optimization with BOTorch instead of TPE with Optuna #363
🚀 Feature
For research, I am going to compare the performance of Optuna's TPE with that of Bayesian optimization via BOTorch. I am asking for permission for a future pull request with all the functionality that I am going to build in a fork, and to find out whether you are interested in this.
Motivation
In machine learning, Bayesian optimization gives state-of-the-art results in hyper-parameter optimization, outperforming TPE.
Pitch
Keep Optuna, but offer Bayesian optimization as an alternative selected via command-line arguments, as sketched below.
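A minimal sketch of what the args-based selection could look like, mirroring the existing sampler dispatch in rl_zoo3/exp_manager.py (the names here are illustrative, and the "botorch" branch is the proposed addition, which does not exist yet):

```python
# Hypothetical sketch of the proposed sampler selection. "random" and "tpe"
# exist today in rl-baselines3-zoo; the "botorch" branch is the proposal.
import optuna
from optuna.integration import BoTorchSampler


def create_sampler(sampler_method: str, seed: int = 0) -> optuna.samplers.BaseSampler:
    if sampler_method == "random":
        return optuna.samplers.RandomSampler(seed=seed)
    if sampler_method == "tpe":
        return optuna.samplers.TPESampler(seed=seed)
    if sampler_method == "botorch":  # proposed: full BO with a GP surrogate
        return BoTorchSampler(n_startup_trials=10, seed=seed)
    raise ValueError(f"Unknown sampler: {sampler_method}")
```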
Comments
Antonin RAFFIN commented (Mar 7, 2023, 17:12):
Hello,
TPE is already kind of doing Bayesian optimization, no? (It predicts the outcome for a given set of parameters and provides an uncertainty estimate.)
GP is already available here:
https://github.com/DLR-RM/rl-baselines3-zoo/blob/master/rl_zoo3/exp_manager.py#L685-L691
Well, I would be happy to receive a PR if it is backed by experimental results for RL.
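(For context: the linked lines choose the Optuna sampler, and the GP route goes through the scikit-optimize integration mentioned in the reply below. A minimal sketch, assuming Optuna's `SkoptSampler`; the actual zoo code may differ:)

```python
# Sketch of requesting a GP surrogate through Optuna's scikit-optimize
# integration (the "GP is already available" route; exact zoo code may differ).
import optuna
from optuna.integration import SkoptSampler

sampler = SkoptSampler(skopt_kwargs={"base_estimator": "GP", "acq_func": "gp_hedge"})
study = optuna.create_study(sampler=sampler, direction="maximize")
```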
The issue author replied:
Skopt implements only the basics of BO; BOTorch models are more general and incorporate the latest advances in research. TPE is not exactly BO.
I am happy to implement BOTorch here, as I am going to run a benchmark comparison and further specialized BO methods for DRL, so I will open a PR if I get good results.
Will keep you informed,
Best,
Antonin RAFFIN commented (Mar 7, 2023, 17:20):
Just to be sure: you plan to use https://optuna.readthedocs.io/en/stable/reference/generated/optuna.integration.BoTorchSampler.html, right?
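(For reference, the sampler in question wraps BOTorch behind Optuna's interface. A hedged sketch of its use, with a toy objective standing in for an RL training run and Optuna's default GP model and acquisition function:)

```python
# Sketch of the BoTorchSampler route: Optuna drives the study, BOTorch
# supplies the GP surrogate and acquisition function behind the scenes.
import optuna
from optuna.integration import BoTorchSampler


def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -5.0, 5.0)
    y = trial.suggest_float("y", -5.0, 5.0)
    return -(x**2 + y**2)  # toy stand-in for the mean return of an RL run


study = optuna.create_study(sampler=BoTorchSampler(n_startup_trials=10), direction="maximize")
study.optimize(objective, n_trials=50)
```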
The issue author replied:
No, I will integrate BOTorch directly, so you will be able to modify the GP model, its kernel and hyperparameters, the acquisition function, and the remaining hyperparameters of the Bayesian optimization loop, to make full use of the latest advances of BO in DRL. A rough sketch of such a hand-rolled loop follows below.
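(A minimal sketch of that direct integration, under stated assumptions: a toy objective in place of an RL run, and BOTorch's default SingleTaskGP with analytic expected improvement. The model, kernel, and acquisition function are exactly the swap points mentioned above; this is not the eventual PR.)

```python
# Hand-rolled BO loop with BOTorch: fit a GP surrogate, maximize an
# acquisition function over the search space, evaluate, and repeat.
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

d = 2  # number of hyperparameters, scaled to the unit cube
bounds = torch.stack([torch.zeros(d), torch.ones(d)]).double()


def objective(x: torch.Tensor) -> torch.Tensor:
    return -((x - 0.5) ** 2).sum(dim=-1)  # toy stand-in for an RL run


train_x = torch.rand(8, d, dtype=torch.double)  # random warm-up designs
train_y = objective(train_x).unsqueeze(-1)

for _ in range(20):
    gp = SingleTaskGP(train_x, train_y)  # swap the model/kernel here
    fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))
    acq = ExpectedImprovement(gp, best_f=train_y.max())  # swap the acquisition here
    new_x, _ = optimize_acqf(acq, bounds=bounds, q=1, num_restarts=5, raw_samples=64)
    train_x = torch.cat([train_x, new_x])
    train_y = torch.cat([train_y, objective(new_x).unsqueeze(-1)])
```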