Sweeps not initializing properly with PyTorch Lightning #1059
Comments
Hi there, could you try specifying the entity and project in wandb.init to match your sweep config? Could you share a link to a sweep where you're seeing this issue?
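For reference, a minimal sketch of that suggestion — the entity and project names below are placeholders, not values from this thread:

```python
# Hypothetical sketch: pass the same entity/project to wandb.init that the
# sweep was created under, so each trial run attaches to the right place.
def train():
    import wandb  # imported lazily so the sketch stays self-contained

    # "my-team" and "my-project" are placeholders for your own values
    run = wandb.init(entity="my-team", project="my-project")
    lr = run.config.get("lr", 1e-3)  # hyperparameters injected by the agent
    run.log({"loss": 0.0})           # actual training loop elided
    run.finish()
```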
Hi @cvphelps Apologies for the delay. That still hasn't worked. I've put together a minimal example here with a simple autoencoder.
In this case the model again initialises twice (i.e. two Run Page links are generated).
Thanks @braaannigan. I changed a few details (see my notebook below) but I can confirm the issue. Sweeps currently work with pytorch-lightning in scripts (see this example) but not in jupyter environments. @vanpelt I think this is due to the fact that we added Based on my understanding, this creates a new run in jupyter (and detaches the sweep run). Should we make a modification in pytorch-lightning to have Here is a notebook to reproduce the issue, based on code from @braaannigan.
FYI this happens in jupyter even without sweeps, just when you're trying to use WandbLogger and wandb.init together. I am retrieving my experiment like this (this avoids the call to wandb.init): Of course, the problem with this is that I can't pass the parameters I'm using to be logged into
Actually you can log hyper-parameters with this object through
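That pattern might look like the following sketch. The stand-in classes only mimic the relevant slice of the Run interface for illustration; with a real WandbLogger you would call `logger.experiment.config.update(...)` directly:

```python
# Record hyperparameters on the logger's underlying wandb Run, avoiding a
# separate wandb.init call: logger.experiment returns the active Run, and
# Run.config supports a dict-style update().
def record_hparams(logger, hparams):
    logger.experiment.config.update(hparams)

# Illustrative stand-ins (not wandb classes) mimicking that interface:
class FakeRun:
    def __init__(self):
        self.config = {}  # a plain dict stands in for wandb's Config

class FakeLogger:
    def __init__(self):
        self.experiment = FakeRun()

logger = FakeLogger()
record_hparams(logger, {"lr": 1e-3, "batch_size": 32})
```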
Is this happening from within Jupyter or when run via python?
Hi, these issues should now be solved. Here are some examples for running sweeps with pytorch-lightning:
Let me know if you still run into any issues.
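In outline, such an example might look like this sketch. The LightningModule name and parameter values are illustrative, not taken from the examples linked above:

```python
# Sketch of a W&B sweep driving PyTorch Lightning: the agent calls train()
# once per trial and injects that trial's hyperparameters via wandb.config.
sweep_config = {
    "method": "random",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "lr": {"values": [1e-4, 1e-3, 1e-2]},
        "batch_size": {"values": [32, 64]},
    },
}

def train():
    import wandb
    from pytorch_lightning import Trainer
    from pytorch_lightning.loggers import WandbLogger

    run = wandb.init()              # the agent supplies this trial's config
    logger = WandbLogger()          # attaches to the run started above
    model = MyLitModel(lr=run.config.lr)  # MyLitModel: your LightningModule
    trainer = Trainer(logger=logger, max_epochs=5)
    trainer.fit(model)
    run.finish()

# sweep_id = wandb.sweep(sweep_config, project="sweep-demo")  # placeholder name
# wandb.agent(sweep_id, function=train, count=10)
```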
Hey folks
Thank you for your help. I have a current experiment setup using LightningCLI which I enjoy, using yaml files for configuration, and everything seems to be working well. I was wondering: would it be out of scope to consider a wandb sweep experiment using LightningCLI? I have looked through the internet and I cannot find a single result anywhere of anyone having published anything regarding the use of LightningCLI to set up wandb's sweeps. My understanding of sweeps is very limited, but it would be great to see if the two can be used together (and how). Again, if I'm misinformed about the relevance of the use-case, please let me know. Thank you.
Edit: to clarify, the question is specifically about how to initialize agents with the right configuration, with the agents making use of LightningCLI. The agent needs to be aware of the configuration and sweep_id, using a yaml file.
Edit2: it seems it's not currently possible as of 1.7.7; however, this could change with 1.8 if I understand correctly. 1.8 seems to introduce a change from:
to
This makes it possible to use a default base config, which is then updated with parameters provided by the sweep controller to the agent.
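Under that assumption (a LightningCLI that accepts programmatic arguments, as in the 1.8-era API), one hypothetical way to wire an agent might be the following sketch; every name here is illustrative:

```python
# Hypothetical sketch only: convert a sweep trial's config into CLI-style
# overrides applied on top of a base yaml file.
def sweep_overrides(config):
    # e.g. {"model.lr": 0.001} -> ["--model.lr=0.001"]
    return [f"--{key}={value}" for key, value in config.items()]

def agent_fn():
    import wandb
    from pytorch_lightning.cli import LightningCLI  # 1.8-era import path

    run = wandb.init()  # the agent injects this trial's parameters
    args = ["fit", "--config", "base.yaml", *sweep_overrides(run.config)]
    LightningCLI(args=args)  # assumes the programmatic `args` parameter
    run.finish()

# wandb.agent(sweep_id, function=agent_fn) would then run one trial per call.
```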
wandb --version && python --version && uname
wandb, version 0.8.36
Python 3.7.6
Linux
Description
I'm trying to initialize a sweep using the WandB Logger for PyTorch Lightning. I'm following the keras example in 'Intro to Hyperparameter Sweeps with W&B.ipynb'. I'm running it in jupyter on my own machine.
Basic problem: nothing gets logged to wandb when I run the sweep.
Notable feature: when I start the sweep it initializes a new hyperparameter config and starts a new run. But then it initializes another run. Nothing gets logged to either of them.
Individual runs are fine.
What I Did
Then I specify the training function:
I then call the sweep
and get the following output at the start:
So it starts the run and gives the sweeps page, but then seems to initialise a new run.
There's no additional wandb code in the model, it's a standard PTL set-up.
Any suggestions?
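The pattern being described is roughly the following sketch, an illustrative reconstruction rather than the reporter's actual code:

```python
# Illustrative reconstruction of the symptomatic setup: the training function
# used by the sweep starts one run via wandb.init, and constructing a
# WandbLogger in jupyter then appears to start a second run, so two
# Run Page links show up and neither receives the metrics.
def train():
    import wandb
    from pytorch_lightning import Trainer
    from pytorch_lightning.loggers import WandbLogger

    wandb.init()             # run 1: started for the sweep trial
    logger = WandbLogger()   # run 2 appears here in jupyter sessions
    model = AutoEncoder()    # AutoEncoder: stand-in for the reporter's model
    Trainer(logger=logger).fit(model)
```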