This repo contains the accompanying code for my master's thesis. It implements a multimodal variational autoencoder (VAE) and incorporates variants of the PixelCNN architecture. The goal is to learn shared representations from multiple image modalities and to provide a generative model for sampling plausible new configurations in data space. See jointvae.py for a multimodal VAE on image data, multimodalvae.py for a multimodal VAE on image and language data, and layers.py for the deep neural network architectures used by the VAE.
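The shared-latent idea behind a multimodal VAE can be sketched as follows. This is a minimal NumPy illustration of a two-modality forward pass, not the repo's actual implementation: the linear encoders/decoders, the dimensions, and the product-of-experts fusion are all hypothetical choices for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IMG, D_TXT, D_LATENT = 64, 32, 8  # hypothetical modality and latent sizes

# Hypothetical linear "encoders": each modality maps to a latent mean and log-variance.
W_img = rng.normal(0, 0.1, (D_IMG, 2 * D_LATENT))
W_txt = rng.normal(0, 0.1, (D_TXT, 2 * D_LATENT))
# Hypothetical linear "decoders": the shared latent maps back to each modality.
W_dec_img = rng.normal(0, 0.1, (D_LATENT, D_IMG))
W_dec_txt = rng.normal(0, 0.1, (D_LATENT, D_TXT))

def encode(x, W):
    h = x @ W
    return h[:, :D_LATENT], h[:, D_LATENT:]  # mu, logvar

def fuse(mu1, lv1, mu2, lv2):
    # Product-of-experts fusion of the two Gaussian posteriors (one common
    # choice for multimodal VAEs; the repo may use a different scheme).
    prec1, prec2 = np.exp(-lv1), np.exp(-lv2)
    prec = prec1 + prec2 + 1.0  # +1 for the standard-normal prior expert
    mu = (mu1 * prec1 + mu2 * prec2) / prec
    return mu, -np.log(prec)

x_img = rng.normal(size=(4, D_IMG))  # batch of image features
x_txt = rng.normal(size=(4, D_TXT))  # batch of language features

mu, logvar = fuse(*encode(x_img, W_img), *encode(x_txt, W_txt))
z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)  # reparameterization trick
recon_img, recon_txt = z @ W_dec_img, z @ W_dec_txt  # both modalities from one z
```

Because both decoders consume the same latent sample `z`, the model can be given one modality and asked to realize a plausible configuration of the other.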
Repository: punit-haria/multimodal-learning (Multimodal Representation Learning)