[CHI2021] Hidden emotion detection using multi-modal signals
Multi-Modal Attention-based Hierarchical Graph Neural Network for Object Interaction Recommendation in Internet of Things (IoT)
Focus on vision representation learning, towards robust general vision
Code for J. Wang, J. Li, Y. Shi, J. Lai and X. Tan, "AM3Net: Adaptive Mutual-learning-based Multimodal Data Fusion Network," IEEE TCSVT, 2022. Experiments were conducted on hyperspectral and LiDAR datasets (Houston and Trento) and on multispectral and synthetic aperture radar data (grss-dfc-2007 dataset).
[EMNLP2022] We propose a new collaborative reasoning method on multi-modal graphs for multimodal dialogue
This repository contains the source code for our paper: "Husformer: A Multi-Modal Transformer for Multi-Modal Human State Recognition". For more details, please refer to our paper at https://arxiv.org/abs/2209.15182.
Seed, Code, Harvest: Grow Your Own App with Tree of Thoughts!
Training for multi-modal image fusion with PyTorch.
Adaptive Confidence Multi-View Hashing
The official implementation of "TFormer: A throughout fusion transformer for multi-modal skin lesion diagnosis"
IEEE 802.11n CSI and camera synchronization toolkit.
SER-Fuse: An Emotion Recognition Application Utilizing Multi-Modal, Multi-Lingual, and Multi-Feature Fusion
[Paper][LREC-COLING 2024] Unleashing the Power of Imbalanced Modality Information for Multi-modal Knowledge Graph Completion
Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey
[Paper][SIGIR 2024] NativE: Multi-modal Knowledge Graph Completion in the Wild
[Paper][Preprint 2024] MyGO: Discrete Modality Information as Fine-Grained Tokens for Multi-modal Knowledge Graph Completion
[CVPR-2023 Workshop@NFVLR] Official PyTorch implementation of Learning CLIP Guided Visual-Text Fusion Transformer for Video-based Pedestrian Attribute Recognition
The open source implementation of the model from "Scaling Vision Transformers to 22 Billion Parameters"
Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in Pytorch and Zeta