
Image generation using Variational Autoencoders and Generative Adversarial Networks.


mehassanhmood/ComputerVision


Digging Deep into VAEs and GANs

Introduction:

Generative models play a vital role in various applications, including image generation, data augmentation, and anomaly detection. Two popular generative models are Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). In this project, I compared GANs and VAEs built from scratch on the CelebA dataset, a widely used benchmark for facial image generation.

Training and Evaluation:

  1. GANs:
    Training GANs can be challenging due to instability and mode-collapse issues. However, when properly trained, GANs can generate high-quality and diverse images. Evaluation metrics for GANs include the Inception Score (IS) and the Fréchet Inception Distance (FID), which measure the quality and diversity of generated images. Training progress was monitored with TensorBoard.
  2. VAEs:
    VAEs are more stable during training than GANs but may produce less realistic images. Evaluation metrics for VAEs include the reconstruction loss and the Kullback-Leibler (KL) divergence, whose sum gives the total loss, along with the quality of images generated from the latent space.
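The adversarial objective described above alternates between updating the discriminator (push real images toward label 1, generated images toward label 0) and the generator (push its fakes toward label 1). A minimal sketch of one such training step is below; the PyTorch framework, layer sizes, and flattened 28x28 inputs are illustrative assumptions, not the repository's actual architecture.

```python
import torch
import torch.nn as nn

latent_dim = 64

# Toy generator and discriminator (sizes are illustrative assumptions).
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, 784), nn.Tanh(),          # flattened 28x28 "image"
)
discriminator = nn.Sequential(
    nn.Linear(784, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),         # probability the input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(16, 784)                   # stand-in batch of real images
ones, zeros = torch.ones(16, 1), torch.zeros(16, 1)

# Discriminator step: classify real as 1 and (detached) fakes as 0.
fake = generator(torch.randn(16, latent_dim))
d_loss = bce(discriminator(real), ones) + bce(discriminator(fake.detach()), zeros)
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator into labelling fakes as 1.
g_loss = bce(discriminator(fake), ones)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(d_loss.item(), g_loss.item())
```

The `detach()` in the discriminator step keeps the generator's graph out of that update; both losses are what a TensorBoard run would typically log per step.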

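The VAE objective mentioned above (total loss = reconstruction loss + KL divergence) has a closed form when the encoder outputs a diagonal Gaussian and the prior is standard normal. A NumPy sketch follows; the batch shapes, latent size, and mean-squared-error reconstruction term are assumptions for illustration.

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """Total VAE loss: reconstruction error plus KL(q(z|x) || N(0, I))."""
    # Reconstruction term: squared error summed over pixels, averaged over the batch.
    recon = np.mean(np.sum((x - x_recon) ** 2, axis=1))
    # Closed-form KL divergence for a diagonal Gaussian vs. the standard-normal prior.
    kl = np.mean(-0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var), axis=1))
    return recon + kl, recon, kl

rng = np.random.default_rng(0)
x = rng.random((8, 784))              # batch of flattened images
x_recon = rng.random((8, 784))        # decoder output (stand-in values)
mu = rng.normal(size=(8, 32))         # encoder means
log_var = rng.normal(size=(8, 32))    # encoder log-variances
total, recon, kl = vae_loss(x, x_recon, mu, log_var)
print(total, recon, kl)
```

Both terms are non-negative, and the KL term is what regularises the latent space so that sampling from the prior yields plausible images.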
Scope of the project

  • The focus of this project leaned towards the theoretical side: the emphasis was on implementing the concepts rather than on the quality of the generated results.
