AVFR-Gan: Audio-Visual Face Reenactment

Official GitHub repository for Audio-Visual Face Reenactment (WACV 2023). For now, we have put the public release of the code on hold due to licensing and ethical concerns. Feel free to reach out with any specific queries!

Paper Link: arXiv:2210.02755

Introduction

This work proposes a novel method to generate realistic talking-head videos from audio and visual streams. We animate a source image by transferring head motion from a driving video through a dense motion field generated from learnable keypoints. We improve lip-sync quality by using audio as an additional input, which helps the network attend to the mouth region. We add further structural priors from face segmentation and a face mesh to improve the reconstructed faces. Finally, we improve the visual quality of the generations with a carefully designed identity-aware generator module, which takes the source image and the warped motion features as input and produces high-quality output with fine-grained details. Our method achieves state-of-the-art results and generalizes well to unseen faces, languages, and voices. We comprehensively evaluate our approach on multiple metrics and outperform current techniques both qualitatively and quantitatively. Our work opens up several applications, including low-bandwidth video calls.
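To give a feel for the keypoint-driven warping described above, here is a minimal, illustrative NumPy sketch of turning sparse keypoint displacements into a dense per-pixel motion field and warping an image with it. This is not the AVFR-Gan implementation (the real model learns the keypoints and the motion network end-to-end, and the code is unreleased); the function names, Gaussian distance weighting, and all parameters are assumptions made for illustration only.

```python
import numpy as np

def dense_motion_field(src_kp, drv_kp, height, width, sigma=10.0):
    """Interpolate sparse keypoint motion (driving -> source) into a
    per-pixel flow field using Gaussian distance weighting.

    src_kp, drv_kp: (K, 2) arrays of (x, y) keypoint coordinates.
    Returns a (height, width, 2) flow array.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    grid = np.stack([xs, ys], axis=-1).astype(np.float64)      # (H, W, 2)
    disp = src_kp - drv_kp                                     # (K, 2) per-keypoint motion
    # Weight each pixel's motion by its proximity to the driving keypoints.
    d2 = ((grid[None] - drv_kp[:, None, None]) ** 2).sum(-1)   # (K, H, W)
    w = np.exp(-d2 / (2 * sigma ** 2))
    w = w / (w.sum(axis=0, keepdims=True) + 1e-8)              # normalize over keypoints
    return (w[..., None] * disp[:, None, None]).sum(axis=0)    # (H, W, 2)

def warp(image, flow):
    """Backward-warp an image with nearest-neighbour sampling."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.round(xs + flow[..., 0]), 0, w - 1).astype(int)
    sy = np.clip(np.round(ys + flow[..., 1]), 0, h - 1).astype(int)
    return image[sy, sx]
```

In this toy version, identical source and driving keypoints yield a zero flow field, and a uniform keypoint shift yields a uniform flow; the actual method replaces the hand-set Gaussian weighting with learned attention over the motion field and feeds the warped features to the identity-aware generator.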

Release Notes

Aug 16, 2022: Our paper has been accepted to the Winter Conference on Applications of Computer Vision (WACV), 2023.


Citation

If you find this work useful for your research, please cite our paper:

@article{agarwal2022audio,
  title={Audio-Visual Face Reenactment},
  author={Agarwal, Madhav and Mukhopadhyay, Rudrabha and Namboodiri, Vinay and Jawahar, CV},
  journal={arXiv preprint arXiv:2210.02755},
  year={2022}
}

Contact

AVFR-Gan was developed by Madhav Agarwal, Rudrabha Mukhopadhyay, Dr. Vinay P. Namboodiri, and Dr. C. V. Jawahar.
For any queries, feel free to email Madhav Agarwal, explicitly mentioning 'AVFR-Gan' in the subject line.
