Updated the VAE example with some enhancements #632

Open · wants to merge 1 commit into base: main
Conversation

@Coderx7 commented Sep 23, 2019

This PR adds several enhancements to the VAE example: explanatory comments, an alternative way of calculating the loss (i.e., using mean instead of sum, and MSE instead of BCE), and several routines for creating VAE visualizations of the kind that other frameworks, such as Keras, provide.
Here is an example of the output:
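The loss change described above (mean instead of sum, MSE as an alternative to BCE) could be sketched roughly as follows. The function and argument names here are my own, not necessarily the PR's; only the two switches come from the description:

```python
# Hypothetical sketch of the loss options this PR describes. The stock
# pytorch/examples VAE sums BCE over the batch; this variant can instead
# average per-sample losses and/or use MSE for the reconstruction term.
import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar, use_mse=False, reduction="mean"):
    """Reconstruction term + KL divergence, reduced over the batch."""
    x = x.view(x.size(0), -1)
    recon_x = recon_x.view(recon_x.size(0), -1)
    if use_mse:
        recon = F.mse_loss(recon_x, x, reduction="none").sum(dim=1)
    else:
        recon = F.binary_cross_entropy(recon_x, x, reduction="none").sum(dim=1)
    # KL(q(z|x) || N(0, I)), summed over latent dimensions per sample
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    per_sample = recon + kld
    return per_sample.mean() if reduction == "mean" else per_sample.sum()
```

Note that switching the reduction from sum to mean rescales the reported numbers, which is one reason logged loss values differ between implementations.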

Train Epoch: 50 [0/60000 (0%)]  Loss: 142.077072
Train Epoch: 50 [1280/60000 (2%)]       Loss: 151.412460
Train Epoch: 50 [2560/60000 (4%)]       Loss: 143.169495
Train Epoch: 50 [3840/60000 (6%)]       Loss: 139.854614
Train Epoch: 50 [5120/60000 (9%)]       Loss: 140.479279
Train Epoch: 50 [6400/60000 (11%)]      Loss: 141.113678
Train Epoch: 50 [7680/60000 (13%)]      Loss: 145.222717
Train Epoch: 50 [8960/60000 (15%)]      Loss: 143.298386
Train Epoch: 50 [10240/60000 (17%)]     Loss: 142.074677
Train Epoch: 50 [11520/60000 (19%)]     Loss: 142.046265
Train Epoch: 50 [12800/60000 (21%)]     Loss: 140.014236
Train Epoch: 50 [14080/60000 (23%)]     Loss: 144.403824
Train Epoch: 50 [15360/60000 (26%)]     Loss: 141.879059
Train Epoch: 50 [16640/60000 (28%)]     Loss: 139.471130
Train Epoch: 50 [17920/60000 (30%)]     Loss: 143.647278
Train Epoch: 50 [19200/60000 (32%)]     Loss: 153.391830
Train Epoch: 50 [20480/60000 (34%)]     Loss: 142.550720
Train Epoch: 50 [21760/60000 (36%)]     Loss: 147.450180
Train Epoch: 50 [23040/60000 (38%)]     Loss: 149.538467
Train Epoch: 50 [24320/60000 (41%)]     Loss: 143.521545
Train Epoch: 50 [25600/60000 (43%)]     Loss: 142.479950
Train Epoch: 50 [26880/60000 (45%)]     Loss: 150.655380
Train Epoch: 50 [28160/60000 (47%)]     Loss: 142.734924
Train Epoch: 50 [29440/60000 (49%)]     Loss: 145.209045
Train Epoch: 50 [30720/60000 (51%)]     Loss: 148.113403
Train Epoch: 50 [32000/60000 (53%)]     Loss: 150.475647
Train Epoch: 50 [33280/60000 (55%)]     Loss: 145.669861
Train Epoch: 50 [34560/60000 (58%)]     Loss: 147.789429
Train Epoch: 50 [35840/60000 (60%)]     Loss: 149.124664
Train Epoch: 50 [37120/60000 (62%)]     Loss: 141.129578
Train Epoch: 50 [38400/60000 (64%)]     Loss: 145.054382
Train Epoch: 50 [39680/60000 (66%)]     Loss: 145.409058
Train Epoch: 50 [40960/60000 (68%)]     Loss: 142.284454
Train Epoch: 50 [42240/60000 (70%)]     Loss: 148.351013
Train Epoch: 50 [43520/60000 (72%)]     Loss: 143.500214
Train Epoch: 50 [44800/60000 (75%)]     Loss: 151.315079
Train Epoch: 50 [46080/60000 (77%)]     Loss: 145.327087
Train Epoch: 50 [47360/60000 (79%)]     Loss: 142.971786
Train Epoch: 50 [48640/60000 (81%)]     Loss: 140.635880
Train Epoch: 50 [49920/60000 (83%)]     Loss: 145.872925
Train Epoch: 50 [51200/60000 (85%)]     Loss: 134.699051
Train Epoch: 50 [52480/60000 (87%)]     Loss: 146.803940
Train Epoch: 50 [53760/60000 (90%)]     Loss: 146.638092
Train Epoch: 50 [55040/60000 (92%)]     Loss: 132.822876
Train Epoch: 50 [56320/60000 (94%)]     Loss: 139.588501
Train Epoch: 50 [57600/60000 (96%)]     Loss: 139.583817
Train Epoch: 50 [58880/60000 (98%)]     Loss: 149.037277
====> Epoch: 50 Average loss: 1.1320
====> Test set loss: 148.8819

Latent space: (image: vae_latent_space)
VAE animation: (image: vae_off)
2-D digits manifold: (image: vae_digits_2d_manifiold)
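A 2-D digits manifold like the one above is typically built by decoding a regular grid of points in a 2-D latent space. The sketch below assumes a `decode` callable mapping a (batch, 2) latent tensor to 784 pixel values in [0, 1], as in the example's VAE; the grid spacing is an assumption (the Keras blog samples through the Gaussian inverse CDF instead):

```python
import torch

@torch.no_grad()
def plot_digits_manifold(decode, n=20, digit_size=28, span=3.0, device="cpu"):
    """Decode an n x n grid of 2-D latent points into one big image tensor."""
    grid_x = torch.linspace(-span, span, n)
    grid_y = torch.linspace(-span, span, n)
    figure = torch.zeros(n * digit_size, n * digit_size)
    for i, yi in enumerate(grid_y):
        for j, xi in enumerate(grid_x):
            z = torch.tensor([[xi.item(), yi.item()]], device=device)
            digit = decode(z).view(digit_size, digit_size).cpu()
            figure[i * digit_size:(i + 1) * digit_size,
                   j * digit_size:(j + 1) * digit_size] = digit
    return figure  # display with e.g. plt.imshow(figure, cmap="Greys_r")
```

Because neighboring grid points are close in latent space, the decoded digits morph smoothly across the figure, which is what makes this visualization useful for inspecting the learned manifold.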

Acknowledgements:
Calculating the loss using mean (and MSE) was inspired by the Keras version of the VAE.
Visualization-related parameters and some snippets were inspired by (and adapted from) this blog.

Note:
Tested successfully on both Windows 10 and Linux (via Google Colab) using PyTorch 1.2.0.

@Coderx7 Coderx7 mentioned this pull request Sep 24, 2019
@Johnson-yue Johnson-yue left a comment


I think the reconstruction loss should not be the MSE loss. Have you tested any dataset other than MNIST using the MSE and KLD losses?

In my experiments, the original reconstruction loss (BCE) performs better than MSE on other, larger datasets.


Coderx7 commented Nov 1, 2019

Hi, yes, it seems BCE usually performs better.
I included the MSE option because it also appears in the official Keras/TensorFlow repository; I added it for the sake of completeness, so that users can experiment freely with both options at their disposal.
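One way to give users "both options at their disposal" is a command-line switch. This is a hypothetical sketch; the flag names are my own and are not from the actual PR:

```python
# Hypothetical CLI wiring so users can pick the reconstruction loss and
# reduction when running the example; flag names are assumptions.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="VAE MNIST example")
    parser.add_argument("--recon-loss", choices=["bce", "mse"], default="bce",
                        help="reconstruction term of the VAE loss")
    parser.add_argument("--reduction", choices=["sum", "mean"], default="sum",
                        help="how the loss is reduced over the batch")
    return parser
```

With defaults matching the original example, existing users see no behavior change unless they opt in to the new variants.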


Coderx7 commented Feb 3, 2020

@Johnson-yue: What's the status of this PR?
Does it not add any value to the current example we have?
I'd appreciate it if you could decide.
Thanks in advance.

4 participants