
[Enhancement] Multiple model iterations per Optuna trial and mean performance objective #204

Open
seawee1 opened this issue Jan 25, 2022 · 5 comments · May be fixed by #225

seawee1 commented Jan 25, 2022

I currently have the problem that the results the Optuna optimization produces are often not really optimal, due to the stochastic nature of RL training. For example, training 3 agents with the same set of hyperparameters can result in 3 completely different learning curves (at least for the environment I'm training on).
Might it make sense to implement the optimization code in such a way that, for each trial, multiple agents are trained and the mean or median performance is reported to Optuna instead?

Inside utils/exp_manager.py, in hyperparameter_optimization (line 713), I saw your comment "# TODO: eval each hyperparams several times to account for noisy evaluation". Is that maybe exactly what you mean there?

I already had a look at the code and thought a bit about how one might do that. If anybody is interested, I could implement it and open a pull request!
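
For concreteness, here is a rough sketch of what I have in mind (the environment, search space and budgets below are just placeholders, not the zoo's actual exp_manager code):

import numpy as np
import optuna
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

N_AGENTS = 3  # placeholder: number of agents trained per trial

def objective(trial: optuna.Trial) -> float:
    # placeholder search space
    learning_rate = trial.suggest_float("learning_rate", 1e-5, 1e-3, log=True)
    scores = []
    for _ in range(N_AGENTS):
        model = PPO("MlpPolicy", "CartPole-v1", learning_rate=learning_rate, verbose=0)
        model.learn(total_timesteps=20_000)
        mean_reward, _ = evaluate_policy(model, model.get_env(), n_eval_episodes=10)
        scores.append(mean_reward)
    # the trial objective is the mean over agents (median would also work)
    return float(np.mean(scores))

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=10)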

Miffyli added the "Maintainers on vacation", "more information needed" and "enhancement" labels and removed the "more information needed" label on Jan 25, 2022
seawee1 changed the title from "[feature request] Multiple model iterations per Optuna trial and mean performance objective" to "[Enhancement] Multiple model iterations per Optuna trial and mean performance objective" on Jan 25, 2022
araffin added the "duplicate" label on Jan 25, 2022
seawee1 (Author) commented Jan 25, 2022

Regarding the duplicate tag (you are probably referring to issue #151?): I can definitely see your point, but why not implement it and let the user decide via a configurable training script argument?

If implemented correctly, I also don't see why this would hinder the use of pruners. They could work based on the mean/median objective performance of the current and past trials.
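
For example (just a sketch of a standard Optuna setup, not code from this repo), the study could use a MedianPruner, which compares a trial's intermediate values against the median of previous trials at the same step, so it also works when each intermediate value is itself a mean/median over several runs:

import optuna

study = optuna.create_study(
    direction="maximize",
    pruner=optuna.pruners.MedianPruner(n_startup_trials=5, n_warmup_steps=0),
)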

araffin (Member) commented Mar 30, 2022

Hello,

Sorry for the late reply, I was on holiday...

Is that maybe exactly what you mention there?

Yes

Regarding the duplicate tag (you are probably referring to issue #151 ?)

Yes, and that comment:
#151 (comment)

I can definitely see your point, but why not implement it and let the user decide via a configurable training script argument?

I would be happy to have a draft PR ;)

You should also know that this exists: #114

If implemented correctly, I also don't see why this would hinder the use of pruners

How do you prune a trial before the end of a run if your objective is the mean/median of several runs?

araffin removed the "Maintainers on vacation" label on Mar 30, 2022
qgallouedec (Collaborator) commented

I can definitely see your point, but why not implement it and let the user decide via a configurable training script argument?

I agree with this.
Faced with the same problem, I've already implemented a script that roughly does this. If you open a PR, I would be happy to contribute.

How do you prune a trial before the end a run if your objective is the mean/median of several runs?

By training multiple models simultaneously. Something like

import numpy as np
import optuna
from stable_baselines3.common.evaluation import evaluate_policy

# ... (models, eval_env, trial, n and split_size are defined above)
for split in range(n):
    mean_rewards = []
    for model in models:
        # continue training each model for another `split_size` timesteps
        model.learn(split_size, reset_num_timesteps=False)
        mean_reward, _ = evaluate_policy(model, eval_env)
        mean_rewards.append(mean_reward)
    median_score = np.median(mean_rewards)
    # report the intermediate median score so a pruner can stop the trial early
    trial.report(median_score, split * split_size)
    if trial.should_prune():
        raise optuna.TrialPruned()

I wonder if you can run, say, 50 or so models simultaneously, without having memory problems or anything.

araffin (Member) commented Mar 30, 2022

If you open a PR, I would be happy to contribute.

Please do =)

By training multiple models simultaneously. Something like

I was afraid of that answer... Yes, it does work, but not for image-based environments, and it requires a beefy machine anyway (for instance, for DQN on Atari, a single model may require 40GB of RAM).
We also need to check whether model.learn(reset_num_timesteps=False) works well with schedules.
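
For reference, a schedule in SB3 is just a callable of the remaining training progress (the linear_schedule helper below is only a sketch, mirroring the one used in the zoo), and how that progress is computed across repeated learn() calls with reset_num_timesteps=False is exactly what needs checking:

from stable_baselines3 import PPO

def linear_schedule(initial_value: float):
    def schedule(progress_remaining: float) -> float:
        # progress_remaining goes from 1.0 at the start of training to 0.0 at the end
        return progress_remaining * initial_value
    return schedule

# placeholder env and initial value, just to illustrate passing a schedule
model = PPO("MlpPolicy", "CartPole-v1", learning_rate=linear_schedule(3e-4))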

50 or so models simultaneously, without having memory problems or anything.

I would run at most 3-5 models simultaneously, unless the env is very simple and the network small.

qgallouedec (Collaborator) commented

Please do =)

Let's open a draft PR and continue the discussion there.
