no val_dataloader when lr_find #1648

Closed
jgsch opened this issue Apr 28, 2020 · 1 comment · Fixed by #1676
Labels
bug (Something isn't working), help wanted (Open to be worked on)

Comments

jgsch commented Apr 28, 2020

🐛 Bug

To Reproduce

If you pass the dataloaders as arguments to the trainer (so training_step and validation_step are defined on the model, but train_dataloader and val_dataloader are not) and then run the learning rate finder, it raises the following error: pytorch_lightning.utilities.exceptions.MisconfigurationException: You have defined 'validation_step()', but have not passed in a val_dataloader().

Code sample

import os

import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms

import pytorch_lightning as pl
from pytorch_lightning import Trainer


class LitModel(pl.LightningModule):

    def __init__(self):
        super().__init__()
        self.l1 = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        return torch.relu(self.l1(x.view(x.size(0), -1)))

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = F.cross_entropy(y_hat, y)
        tensorboard_logs = {'train_loss': loss}
        return {'loss': loss, 'log': tensorboard_logs}

    def validation_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        return {'val_loss': F.cross_entropy(y_hat, y)}

    def validation_epoch_end(self, outputs):
        avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
        tensorboard_logs = {'val_loss': avg_loss}
        return {'val_loss': avg_loss, 'log': tensorboard_logs}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.001)


train_dataset = MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
train_loader = DataLoader(train_dataset, batch_size=32, num_workers=4, shuffle=True)

val_dataset = MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor())
val_loader = DataLoader(val_dataset, batch_size=32, num_workers=4, shuffle=True)

model = LitModel()
trainer = Trainer(gpus=1)
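# This call raises the MisconfigurationException quoted above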
lr = trainer.lr_find(model, train_loader)

Expected behavior

The learning rate finder should simply determine the best learning rate, without requiring a val_dataloader method on the model.

Environment

* CUDA:
        - GPU:
                - GeForce RTX 2080 Ti
        - available:         True
        - version:           10.1.243
* Packages:
        - numpy:             1.17.4
        - pyTorch_debug:     False
        - pyTorch_version:   1.3.1
        - pytorch-lightning: 0.7.5
        - tensorboard:       2.0.2
        - tqdm:              4.35.0
* System:
        - OS:                Linux
        - architecture:
                - 64bit
                - ELF
        - processor:         x86_64
        - python:            3.6.9
        - version:           #1 SMP Tue Oct 29 08:30:10 EDT 2019
jgsch added the bug and help wanted labels on Apr 28, 2020
SkafteNicki (Member) commented
The quick fix is to make the validation set a part of your model, i.e. define the val_dataloader method in the model. However, you are probably right that it should be possible to run the lr_find method without this workaround.
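For reference, a minimal sketch of that interim workaround, reusing the imports, train_loader, and the rest of LitModel from the code sample above (only the new val_dataloader hook is shown):

class LitModel(pl.LightningModule):

    # ... __init__, forward, training_step, validation_step,
    # validation_epoch_end and configure_optimizers as in the code sample ...

    def val_dataloader(self):
        # Defining the dataloader on the model lets the trainer (and lr_find)
        # discover it, so the MisconfigurationException is not raised
        val_dataset = MNIST(os.getcwd(), train=False, download=True,
                            transform=transforms.ToTensor())
        return DataLoader(val_dataset, batch_size=32, num_workers=4)


model = LitModel()
trainer = Trainer(gpus=1)
lr = trainer.lr_find(model, train_loader)  # runs without the exception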
