Support log_every_n_steps with validate|test #18895

Conversation
```python
# this is for backwards compatibility
if add_epoch:  # only enabled for training metrics
    step -= 1
    scalar_metrics.setdefault("epoch", self.trainer.current_epoch)
```
IMO this bit of logic is not well designed, but keeping it to limit the PR's scope.
```diff
@@ -548,11 +548,11 @@ def test_step(self, batch, batch_idx):
     call(metrics={"train_loss": ANY, "epoch": 0}, step=0),
     call(metrics={"valid_loss_0_step": ANY, "valid_loss_2": ANY}, step=0),
     call(metrics={"valid_loss_0_step": ANY, "valid_loss_2": ANY}, step=1),
     call(metrics={"valid_loss_0_epoch": ANY, "valid_loss_1": ANY, "epoch": 0}, step=0),
```
The step value here corresponded to the training step. Now it's the validation step.
⛈️ Required checks status: Has failure 🔴

Groups summary:
- 🔴 pytorch_lightning: Tests workflow
- 🟢 pytorch_lightning: Azure GPU
- 🟢 pytorch_lightning: Benchmarks
- 🟢 pytorch_lightning: Docs
- 🟢 mypy
- 🟢 install

These checks are required after the changes. Thank you for your contribution! 💜
Hey @carmocca

```python
import os

import torch
from torch.utils.data import DataLoader, Dataset

from lightning import seed_everything
from lightning.pytorch import LightningModule, Trainer
from lightning.pytorch.loggers import TensorBoardLogger


class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


class BoringModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("train_loss", loss)
        return {"loss": loss}

    def validation_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("valid_loss", loss)

    def test_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("test_loss", loss)

    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=0.1)


def run():
    seed_everything(0)
    train_data = DataLoader(RandomDataset(32, 64), batch_size=2)
    val_data = DataLoader(RandomDataset(32, 64), batch_size=2)
    test_data = DataLoader(RandomDataset(32, 64), batch_size=2)
    logger = TensorBoardLogger("tb_logs", name="my_model")
    model = BoringModel()
    trainer = Trainer(
        default_root_dir=os.getcwd(),
        logger=logger,
        limit_train_batches=10,
        limit_val_batches=10,
        limit_test_batches=10,
        num_sanity_val_steps=4,
        max_epochs=3,
        enable_model_summary=False,
        log_every_n_steps=2,
    )
    trainer.fit(model, train_dataloaders=train_data, val_dataloaders=val_data)
    trainer.validate(model, val_data)
    trainer.test(model, dataloaders=test_data)


if __name__ == "__main__":
    run()
```

Running this: [TensorBoard comparison plots; red: master]

Issue 1: The step falling short now means there would be a gap when resuming. For this reason, I believe the previous behavior should be kept.

Issue 2: Here the test loss no longer gets logged at the global step; the step is local to the test stage.

Issue 3: Finally, a few things are going on for the validation: 1) the fit-validation values are no longer logged, because they don't fall into the logging interval (they are epoch-end); 2) without using the global step for fit-validation logging, you can no longer track the validation loss across your training. It all gets logged to the same step value.
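The step collision in Issue 3 can be sketched without Lightning at all. This is a simplified, hypothetical model of the step bookkeeping, using the same numbers as the repro (3 epochs, 10 training and 10 validation batches per epoch):

```python
# Sketch: why logging fit-validation metrics at a stage-local step collapses
# them onto the same x-values, while the global step keeps them distinct.
epochs, train_batches, val_batches = 3, 10, 10

# Stage-local step: the counter restarts at 0 for every validation run,
# so all three epochs write to steps 0-9 and overwrite each other.
local_steps = [b for _ in range(epochs) for b in range(val_batches)]

# Global step: each epoch's validation happens after 10 more optimizer
# steps, so its metrics land at a distinct x-value on the plot.
global_steps = [(e + 1) * train_batches for e in range(epochs) for _ in range(val_batches)]

print(len(set(local_steps)))   # 10 distinct x-values, reused every epoch
print(len(set(global_steps)))  # 3 distinct x-values, one per epoch
```

With local steps, a TensorBoard scalar chart shows only the last epoch's curve; with global steps, the per-epoch validation losses form a trackable series.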
I'll look at Issues 1 and 3. Thanks.
| GitGuardian id | Secret | Commit | Filename | |
|---|---|---|---|---|
| - | Generic High Entropy Secret | 78fa3af | tests/tests_app/utilities/test_login.py | View secret |
| - | Base64 Basic Authentication | 78fa3af | tests/tests_app/utilities/test_login.py | View secret |
🛠 Guidelines to remediate hardcoded secrets
- Understand the implications of revoking this secret by investigating where it is used in your code.
- Replace and store your secret safely. Learn the best practices here.
- Revoke and rotate this secret.
- If possible, rewrite git history. Rewriting git history is not a trivial act: you might completely break other contributing developers' workflows, and you risk accidentally deleting legitimate data.

To avoid such incidents in the future, consider:
- following these best practices for managing and storing secrets, including API keys and other credentials
- installing secret detection on pre-commit to catch secrets before they leave your machine and ease remediation.

🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.

Our GitHub checks need improvements? Share your feedback!
What does this PR do?
Fixes #10436
Logging per step during fit's validation, regular validation, testing, or predicting is not generally useful when the logged values depend on the optimization process.
However, sometimes you want to log values that do not depend on the optimization process. Examples of this are throughput-related metrics (as in #18848) or batch-specific metrics.
This PR adds support for tuning the logging interval under these circumstances.
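As a concrete illustration of an optimization-independent metric, here is a minimal sketch of measuring per-batch throughput. `batch_throughput` is a hypothetical helper (not part of Lightning); the returned value is the kind of per-step quantity one might `self.log(..., on_step=True)` during validate/test:

```python
import time


def batch_throughput(batch_size: int, work) -> float:
    """Hypothetical helper: time one batch of work and return items/second.

    Throughput does not depend on the optimization process, so logging it
    per step during validation or testing is meaningful (cf. #18848).
    """
    start = time.perf_counter()
    work()  # stand-in for the forward pass on one batch
    elapsed = time.perf_counter() - start
    return batch_size / elapsed


# Example: a dummy "batch" of 32 items.
rate = batch_throughput(32, lambda: sum(range(10_000)))
print(f"{rate:.0f} items/sec")
```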
This is only a breaking change if the user was calling `self.log(..., on_step=True)`. The previous behavior is equal to setting `log_every_n_steps=1` now; before, this value was ignored.

📚 Documentation preview 📚: https://pytorch-lightning--18895.org.readthedocs.build/en/18895/
cc @Borda @carmocca @justusschock
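The breaking change described above amounts to gating step-level logging during validate/test by the logging interval. A rough, hypothetical reimplementation of that gate (the exact condition inside Lightning may differ):

```python
def should_log_on_step(batch_idx: int, log_every_n_steps: int) -> bool:
    # Hypothetical sketch of the interval gate: a value logged with
    # self.log(..., on_step=True) is only emitted every N batches.
    return (batch_idx + 1) % log_every_n_steps == 0


# Previous behavior is equivalent to log_every_n_steps=1: every batch logs.
print([i for i in range(6) if should_log_on_step(i, 1)])  # [0, 1, 2, 3, 4, 5]

# With log_every_n_steps=2, only every other batch logs.
print([i for i in range(6) if should_log_on_step(i, 2)])  # [1, 3, 5]
```

So users who relied on `on_step=True` logging every validation/test batch can restore the old behavior by passing `log_every_n_steps=1` to the `Trainer`.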