Hi! I'm very happy to see an implementation of GPyTorch together with PyTorch Lightning :D Thanks for making it publicly available.
I was wondering whether you have had to deal with this error when turning on the logger and saving the model hyperparameters:
File "/cosma/home/dp004/dc-cues1/.local/lib/python3.7/site-packages/torch/utils/tensorboard/summary.py", line 192, in hparams
ssi.hparams[k].number_value = v
TypeError: array([-1.7629994 , 0.74281293, 0.3584844 , -0.03555403, -1.3293115 ,
1.0728952 , 0. has type numpy.ndarray, but expected one of: int, long, float
I guess this comes from storing the training data together with the model, but do you have any idea of how to solve it, or can you think of any way around it? I love callbacks, and without the logger, doing anything is quite annoying.
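For reference, here is a minimal sketch of the kind of setup that seems to trigger it, assuming a Lightning version with `save_hyperparameters()` (the module and variable names here are just illustrative, not your actual code):

```python
import numpy as np
import pytorch_lightning as pl


class GPRegressor(pl.LightningModule):
    def __init__(self, train_x, train_y, learning_rate=0.1):
        super().__init__()
        # save_hyperparameters() captures *every* init argument, so the
        # training arrays end up inside self.hparams alongside the scalars.
        self.save_hyperparameters()


model = GPRegressor(train_x=np.random.randn(100, 2),
                    train_y=np.random.randn(100))

# TensorBoard's hparams writer only accepts int/float/str/bool values, so
# logging the ndarray entries raises the TypeError from summary.py above.
logger = pl.loggers.TensorBoardLogger("logs/")
logger.log_hyperparams(dict(model.hparams))
```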
Glad you are finding this useful! I have been meaning to make a tutorial on the GPyTorch repo about this for a while, but just haven't got around to it.
I think I have seen this error before, and not just with gpytorch models. The issue arises because hparams can't store arrays. There is a PR about deprecating hparams and moving to another system (Lightning-AI/pytorch-lightning#1896); I'm not sure whether the documentation has been updated, but the workaround is probably just to keep the arrays out of hparams, along the lines of the sketch below. It has been a while, and my current code in this repo might not align with best PL practices. Sorry I can't help more up front. Let's keep this issue open for the time being, though.