Deep Region and Multi-label Learning for Facial Action Unit Detection
A PyTorch re-implementation of Weakly Supervised Facial Action Unit Recognition through Adversarial Training
ROS bindings for OpenFace 2.1.0
A simple Action Unit player with a rendered face modeled with Candide-3.
ROS 2 Wrapper for OpenFace
Searches for a dependency between "annemo" annotations of valence and arousal from RECOLA and Action Units from OpenFace, using ARD regression and ARIMA.
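The ARD-regression half of an analysis like this can be sketched with scikit-learn's `ARDRegression` on synthetic data. The feature count, the indices of the "relevant" AUs, and the signal strengths below are illustrative assumptions, not values from RECOLA or OpenFace:

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 frames, 17 AU intensity features (OpenFace emits
# 17 AU intensity columns); which AUs drive arousal here is made up.
X = rng.normal(size=(200, 17))
true_w = np.zeros(17)
true_w[[4, 6, 12]] = [0.8, -0.5, 0.3]   # only a few AUs carry signal
y = X @ true_w + 0.05 * rng.normal(size=200)

# ARD regression learns a per-feature relevance prior, so coefficients of
# uninformative AUs shrink toward zero.
model = ARDRegression().fit(X, y)
relevant = np.flatnonzero(np.abs(model.coef_) > 0.1)
print(relevant)  # indices of AUs the model keeps as predictive
```

The appeal of ARD over plain least squares in this setting is exactly that per-feature pruning: it points at which Action Units plausibly relate to valence/arousal rather than spreading weight across all of them.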
ICface: Interpretable and Controllable Face Reenactment Using GANs
Examines the feasibility of using behavioral indicators of depression, including but not limited to visual and audio features, to design an effective screening model that can be more accessible than traditional testing methods.
An out-of-the-box replication of GANimation using PyTorch; pretrained weights are available!
Code for BMVC paper "Joint Action Unit localisation and intensity estimation through heatmap regression"
A semi-supervised approach to detecting emotions on in-the-wild faces using GANs.
PyTorch implementation of Multi-View Dynamic Facial Action Unit Detection (Image and Vision Computing, 2018).
A machine learning model mapping Action Units from OpenFace to blendshapes from ARKit LiveFace.
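A minimal sketch of such a mapping, assuming a single linear map from AU intensities to blendshape weights fitted by least squares (the 17-AU and 52-blendshape dimensions match OpenFace and ARKit conventions, but the data and the linearity assumption here are illustrative, not the repo's actual model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired recordings: AU intensities per frame from OpenFace and
# the ARKit blendshape weights captured for the same frames.
aus = rng.uniform(0, 5, size=(300, 17))      # 17 AU intensity features
W_true = rng.normal(size=(17, 52))           # 52 ARKit blendshape targets
blendshapes = aus @ W_true                   # synthetic ground truth

# Fit one linear map AU -> blendshapes with ordinary least squares.
W, *_ = np.linalg.lstsq(aus, blendshapes, rcond=None)

# Drive the face rig from a new frame of AU intensities.
new_frame = rng.uniform(0, 5, size=17)
pred = new_frame @ W
```

In practice a small neural network or per-blendshape regressors would likely replace the single linear map, but the input/output shapes stay the same.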
[WACV 2024] LibreFace: An Open-Source Toolkit for Deep Facial Expression Analysis
Guided Interpretable Facial Expression Recognition via Spatial Action Unit Cues