multi-modal-fusion
Here are 25 public repositories matching this topic...
[Paper][Preprint 2024] Mixture of Modality Knowledge Experts for Robust Multi-modal Knowledge Graph Completion
Updated Jul 27, 2024 - Python
Focused on vision representation learning, towards robust general vision.
Updated Mar 23, 2022
[EMNLP 2022] We propose a new collaborative reasoning method on multi-modal graphs for multimodal dialogue
Updated Jun 24, 2023 - Python
The open-source implementation of the model from "Scaling Vision Transformers to 22 Billion Parameters"
Updated Jun 17, 2024 - Python
SER-Fuse: An Emotion Recognition Application Utilizing Multi-Modal, Multi-Lingual, and Multi-Feature Fusion
Updated Mar 25, 2024 - Jupyter Notebook
Adaptive Confidence Multi-View Hashing
Updated Dec 13, 2023 - Python
[ISPRS 2024] Sat-SINR: High-Resolution Species Distribution Models through Satellite Imagery
Updated Jun 27, 2024 - Python
[CVPR 2023 Workshop@NFVLR] Official PyTorch implementation of "Learning CLIP Guided Visual-Text Fusion Transformer for Video-based Pedestrian Attribute Recognition"
Updated Jun 11, 2024 - Python
[Paper][SIGIR 2024] NativE: Multi-modal Knowledge Graph Completion in the Wild
Updated May 23, 2024 - Python
Multi-Modal Attention-based Hierarchical Graph Neural Network for Object Interaction Recommendation in Internet of Things (IoT)
Updated Dec 15, 2021 - Python
[CHI 2021] Hidden emotion detection using multi-modal signals
Updated Sep 30, 2021 - Python
[IVS'24] The official implementation of UniBEV
Updated Jun 26, 2024 - Python
The official implementation of "TFormer: A throughout fusion transformer for multi-modal skin lesion diagnosis"
Updated Jan 29, 2024 - Python
Implementation of MoE-Mamba from the paper "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in PyTorch and Zeta
Updated Jun 18, 2024 - Python
[Paper][LREC-COLING 2024] Unleashing the Power of Imbalanced Modality Information for Multi-modal Knowledge Graph Completion
Updated Apr 16, 2024 - Python
Code for J. Wang, J. Li, Y. Shi, J. Lai and X. Tan, "AM3Net: Adaptive Mutual-learning-based Multimodal Data Fusion Network," in IEEE TCSVT, 2022. Experiments were conducted on hyperspectral and LiDAR datasets (Houston and Trento) and on multispectral and synthetic aperture radar data (grss-dfc-2007 datasets).
Updated Mar 27, 2023 - Python
IEEE 802.11n CSI and camera synchronization toolkit.
Updated Mar 9, 2024 - C
[Paper][Preprint 2024] MyGO: Discrete Modality Information as Fine-Grained Tokens for Multi-modal Knowledge Graph Completion
Updated May 28, 2024 - Python
Training for multi-modal image fusion with PyTorch.
Updated Nov 30, 2023 - Python