
A warning that may come from legacy code #2024

Closed
DKandrew opened this issue May 31, 2020 · 4 comments
Assignees: Borda
Labels: bug (Something isn't working), help wanted (Open to be worked on), question (Further information is requested)
Milestone: 0.9.0

Comments

@DKandrew (Contributor)

🐛 Bug

I receive this warning when running a simple Lightning module. Is it related to the recent update? Maybe it is related to the old hyperparameters design?

[path to]/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:23: UserWarning: Did not find hyperparameters at model hparams. Saving checkpoint without hyperparameters.
  warnings.warn(*args, **kwargs)
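
For reference, this is the older hparams pattern I have in mind (a minimal sketch of my understanding of the 0.7.x design; HParamsNet and hidden_dim are just illustrative names): the checkpoint code looks for model.hparams, so a module that assigns that attribute does not seem to hit this warning.

from argparse import Namespace

import torch
from pytorch_lightning import LightningModule

class HParamsNet(LightningModule):
    def __init__(self, hparams: Namespace):
        super().__init__()
        # Storing the constructor arguments on self.hparams is what the
        # 0.7.x checkpointing looks for; without this attribute the
        # "Did not find hyperparameters at model hparams" warning appears.
        self.hparams = hparams
        self.l1 = torch.nn.Linear(28 * 28, hparams.hidden_dim)

    def forward(self, x):
        return torch.relu(self.l1(x.view(x.size(0), -1)))

model = HParamsNet(Namespace(hidden_dim=10, lr=1e-3))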

To Reproduce

Use the following code sample

import os
import torch
from torch.nn import functional as F
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader
from torchvision import transforms
from pytorch_lightning import LightningModule
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger
from torch.utils.tensorboard import SummaryWriter

class MyNet(LightningModule):
    def __init__(self):
        super(MyNet, self).__init__()
        self.l1 = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        return torch.relu(self.l1(x.view(x.size(0), -1)))

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.001)

    def train_dataloader(self):
        dataset = MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
        loader = DataLoader(dataset, batch_size=32, num_workers=4, shuffle=True)
        return loader

    def test_dataloader(self):
        dataset = MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
        loader = DataLoader(dataset, batch_size=32, num_workers=4, shuffle=False)
        return loader

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = F.cross_entropy(y_hat, y)
        tensorboard_logs = {'train_loss': loss}
        return {'loss': loss, 'log': tensorboard_logs}

    def test_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = F.cross_entropy(y_hat, y)
        tensorboard_logs = {'test_loss': loss}
        return {'loss': loss, 'log': tensorboard_logs}

    def test_epoch_end(self, output):
        with SummaryWriter(self.logger.log_dir) as w:
            for i in range(5):
                w.add_hparams({'lr': 0.1 * i, 'bsize': i}, {'hparam/accuracy': 10 * i, 'hparam/loss': 10 * i})
        return {}


dir_path = "."
tb_logger = TensorBoardLogger(dir_path, name='run2')
model = MyNet()
trainer = Trainer(gpus=1, max_epochs=1, logger=tb_logger)
trainer.fit(model)
trainer.test()

Environment

  • CUDA:
    • GPU:
      • Fastest GeForce In The Moon
    • available: True
    • version: 10.2
  • Packages:
    • numpy: 1.18.1
    • pyTorch_debug: False
    • pyTorch_version: 1.5.0
    • pytorch-lightning: 0.7.6
    • tensorboard: 2.2.1
    • tqdm: 4.46.0
  • System:
@DKandrew added the help wanted (Open to be worked on) label on May 31, 2020
@Lamsie commented May 31, 2020

Hi there!
It seems to me that the problem you are facing is caused by the version of PyTorch Lightning you're using. If I'm not mistaken, all the get-rid-of-hparams features were added after 0.7.6 was released. Try upgrading to the latest version of the library, although I'm not completely sure it's safe to use right now. After the upgrade, I get this mysterious warning the first time I run a cell with the code you wrote. I haven't figured out yet whether that's fine or not.
(screenshot of the warning)

@DKandrew (Contributor, Author)

Oh, I see. That is possible; maybe it is because the new features have not been integrated into the stable version yet.

@williamFalcon (Contributor)

It's safe. The current change is that the user has to call self.auto_collect_arguments() to save all args to the checkpoint automatically.

We need to decide if we keep it that way or go back to doing it automatically.

@Borda
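
A minimal sketch of that pattern, assuming the current master-branch API (the exact method name may still change before 0.8; hidden_dim and lr are just illustrative arguments):

import torch
from pytorch_lightning import LightningModule

class MyNet(LightningModule):
    def __init__(self, hidden_dim: int = 10, lr: float = 1e-3):
        super().__init__()
        # Collects the __init__ arguments (hidden_dim, lr) so they are written
        # into the checkpoint and the warning above is not emitted.
        self.auto_collect_arguments()
        self.lr = lr
        self.l1 = torch.nn.Linear(28 * 28, hidden_dim)

    def forward(self, x):
        return torch.relu(self.l1(x.view(x.size(0), -1)))

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)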

@Borda self-assigned this on Jun 2, 2020
@Borda added the bug (Something isn't working) and question (Further information is requested) labels on Jun 2, 2020
@Borda added this to the 0.8.0 milestone on Jun 2, 2020
@Borda (Member) commented Jun 2, 2020

If I'm not mistaken, all the get-rid-of-hparams features were added after 0.7.6 was released.

The parsing of init arguments is still an unreleased change, see #1896.

After the upgrade, I get this mysterious warning the first time I run a cell with the code you wrote. I haven't figured out yet whether that's fine or not.

The same was reported in #1976.

@Borda modified the milestone: 0.8.0 → 0.9.0 on Jun 9, 2020