
continue training with checkpoint #609

Open

wiemrebhi opened this issue Nov 7, 2023 · 0 comments

Comments


wiemrebhi commented Nov 7, 2023

Hello, can someone help me with how to continue training DETR from the last epoch using a checkpoint?

This is the training code:

from pytorch_lightning.callbacks import EarlyStopping
from pytorch_lightning import Trainer

MAX_EPOCHS = 200

early_stopping_callback = EarlyStopping(
    monitor='training_loss',  # metric logged by the LightningModule
    min_delta=0.00,           # minimum change that counts as an improvement
    patience=3,               # number of epochs to wait for improvement before stopping
    mode='min'                # the monitored loss should decrease, so minimize it
)

trainer = Trainer(
    devices=1,
    accelerator="gpu",
    max_epochs=MAX_EPOCHS,
    gradient_clip_val=0.1,
    accumulate_grad_batches=8,
    log_every_n_steps=5,
    callbacks=[early_stopping_callback]
)

trainer.fit(model)
Should I add something here, or what should I do next?
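A minimal sketch of one way to resume, building on the snippet above and assuming a PyTorch Lightning version (>= 1.5) where Trainer.fit accepts a ckpt_path argument; the "checkpoints" directory and filenames are placeholders, and model is the same DETR LightningModule used above:

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# Save checkpoints during training so there is something to resume from.
# dirpath/filename are placeholders; adjust to your setup.
checkpoint_callback = ModelCheckpoint(
    dirpath="checkpoints",
    save_last=True,           # keep a 'last.ckpt' pointing at the most recent epoch
    save_top_k=1,
    monitor="training_loss",
    mode="min",
)

trainer = Trainer(
    devices=1,
    accelerator="gpu",
    max_epochs=MAX_EPOCHS,
    gradient_clip_val=0.1,
    accumulate_grad_batches=8,
    log_every_n_steps=5,
    callbacks=[early_stopping_callback, checkpoint_callback],
)

# Resuming: Lightning restores the model weights, optimizer state, and epoch
# counter from the checkpoint, then continues training up to max_epochs.
trainer.fit(model, ckpt_path="checkpoints/last.ckpt")

Note that max_epochs must be larger than the epoch stored in the checkpoint, otherwise training stops immediately. On older Lightning versions (before 1.5) the equivalent was the Trainer(resume_from_checkpoint=...) argument rather than ckpt_path.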
