Autoresume and duration mismatch on reload #3357

Open
antoinebrl opened this issue Jun 4, 2024 · 12 comments
Labels
bug Something isn't working

Comments

@antoinebrl
Contributor

Description

While experimenting with the autoresume feature, I encountered issues related to duration and the scheduler. My error was providing the duration argument to the .fit() method instead of the max_duration argument to the Trainer constructor. Since the .fit() method can be called multiple times sequentially, each call increases the max_duration, as indicated in the code. Upon resumption, this offset causes an error in the scheduler because t_max becomes smaller than max_duration.

Using max_duration in the Trainer constructor avoids this problem, so I will adopt this approach. Should the scenario described above be detected, and should a warning or error be raised? Essentially, if autoresume=True, then max_duration should be specified in the __init__, and the .fit() method should only be called once in the script.
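A minimal sketch of the two patterns (model and train_dl stand in for a ComposerModel and a DataLoader; the run name and save folder are illustrative):

from composer import Trainer

# Problematic: no max_duration at construction, duration only passed to .fit().
# Every .fit() call extends max_duration, so after an autoresume restart the
# scheduler can end up with t_max smaller than max_duration.
trainer = Trainer(model=model, train_dataloader=train_dl, autoresume=True,
                  run_name='my-run', save_folder='checkpoints')
trainer.fit(duration='10ep')

# Recommended: fix max_duration up front and call .fit() exactly once.
trainer = Trainer(model=model, train_dataloader=train_dl, max_duration='10ep',
                  autoresume=True, run_name='my-run', save_folder='checkpoints')
trainer.fit()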

@antoinebrl antoinebrl added the bug Something isn't working label Jun 4, 2024
@mvpatel2000
Contributor

@antoinebrl what if you set t_max to 1dur? This would ensure t_max = max_duration always on resumption.
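That would look roughly like the following (the scheduler class is just an illustrative choice):

from composer.optim import CosineAnnealingScheduler

# '1dur' resolves to one full training duration, so t_max always matches
# whatever max_duration is when .fit() starts.
scheduler = CosineAnnealingScheduler(t_max='1dur')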

@antoinebrl
Contributor Author

Your approach might solve the scheduler error, which is how I spotted the problem in the first place. However, training still runs for longer than expected. For example, if I do Trainer(..., autoresume=True).fit(..., duration="10ep") and the training crashes after 3 epochs, then after a restart the fit runs for an extra 10 epochs, so the training effectively lasts 13 epochs. The more interruptions there are, or the closer they happen to the end, the worse it gets.

@mvpatel2000
Contributor

Yep... good point. I think it might also break on multiple resumptions. I will add a PR to at least require max_duration on init.

@antoinebrl
Contributor Author

antoinebrl commented Jun 4, 2024

Awesome! Thanks for tackling it 💪

For a quick fix on my side, I added a couple of exceptions with instructions to guide users towards the right path (a rough sketch follows the list):

  • in the __init__, when autoresume==True but max_duration==None
  • in the .fit, if autoresume was True and duration != None
  • in the .fit, if the method is called multiple times when autoresume==True
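A rough sketch of those guards, assuming a thin wrapper around Trainer (the class name and error messages are made up):

from composer import Trainer

class StrictAutoresumeTrainer(Trainer):
    # Illustrative wrapper enforcing the three checks listed above.
    def __init__(self, *args, autoresume=False, max_duration=None, **kwargs):
        if autoresume and max_duration is None:
            raise ValueError('With autoresume=True, pass max_duration to the constructor.')
        super().__init__(*args, autoresume=autoresume, max_duration=max_duration, **kwargs)
        self._autoresume = autoresume
        self._fit_calls = 0

    def fit(self, duration=None, **kwargs):
        if self._autoresume and duration is not None:
            raise ValueError('With autoresume=True, do not pass duration to .fit().')
        if self._autoresume and self._fit_calls >= 1:
            raise ValueError('With autoresume=True, call .fit() only once.')
        self._fit_calls += 1
        return super().fit(duration=duration, **kwargs)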

@mvpatel2000
Contributor

I've added enforcement for the above restrictions. We will look at enabling autoresume + multiple fits, but it's lower priority.

@antoinebrl
Contributor Author

Thanks for the quick turnaround on this one 💪

@mvpatel2000
Contributor

Reopening, as we need to revert the guardrails. There is a scenario in which this does work: the user tracks the number of fit calls in a callback's state_dict, skips over already completed fit calls, and never checkpoints inside a fit call.

I'll do a more careful follow-up after the release goes out.

@mvpatel2000 mvpatel2000 reopened this Jun 5, 2024
@mvpatel2000
Contributor

Ok, unfortunately, we cannot impose these restrictions in Composer because it breaks a few existing workflows. Here is a rough scenario.

You want to train a model on a series of 5 datasets in a row. You only checkpoint at FIT_END. In your code, when autoresume is set, you skip any fit calls already completed (say using a callback which tracks how many FIT_END events have passed). If we add the checks above, this use case fails without an easy solution.

Given this, I think we unfortunately have to leave it as is for now.

The longer-term solution is for state to track how many fit calls have completed and, on resumption, skip the completed fit calls gracefully. This is lower priority for us at this time (we would welcome community PRs!), but we will eventually add it.

@antoinebrl
Contributor Author

I see, thanks for the clarification. Would you mind sharing this callback?

@mvpatel2000
Contributor

It would be roughly:

from composer.core import Callback, State
from composer.loggers import Logger

class FitCounter(Callback):
    # Counts completed fit calls; persisted in checkpoints so a resumed run
    # knows how many fits already finished.
    def __init__(self):
        self.fit_count = 0
    def fit_end(self, state: State, logger: Logger) -> None:
        self.fit_count += 1
    def state_dict(self):
        return {'fit_count': self.fit_count}
    def load_state_dict(self, state_dict):
        self.fit_count = state_dict['fit_count']

And then each fit call would be wrapped in an if self.fit_count < X check. @bcui19 can elaborate.
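For the multi-dataset scenario above, that wrapping might look roughly like this (dataloaders is an assumed list of per-stage dataloaders, fit_counter is the FitCounter instance passed to the Trainer via callbacks=[fit_counter], and the per-stage duration is illustrative):

for i, dl in enumerate(dataloaders):
    if fit_counter.fit_count > i:
        continue  # this fit call already completed before the restart
    trainer.fit(train_dataloader=dl, duration='2ep')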

@antoinebrl
Contributor Author

Thanks! I was trying to come up with a way for the callback to skip the training from within.

@bcui19
Contributor

bcui19 commented Jun 7, 2024

What Mihir wrote seems like a reasonable way to count the number of fit calls; then you just need to do some of your own timekeeping, I believe.
