[Paper][Preprint 2024] Mixture of Modality Knowledge Experts for Robust Multi-modal Knowledge Graph Completion
Updated Jul 27, 2024 - Python
Achelous: A Fast Unified Water-surface Panoptic Perception Framework based on Fusion of Monocular Camera and 4D mmWave Radar
[IEEE TCYB 2023] The first large-scale tracking dataset by fusing RGB and Event cameras.
[ISPRS 2024] Sat-SINR: High-Resolution Species Distribution Models through Satellite Imagery
[IVS'24] The official implementation of UniBEV
Implementation of MoE-Mamba from the paper "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in PyTorch and Zeta
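Several of the repositories above (MoE-Mamba, Mixture of Modality Knowledge Experts) build on mixture-of-experts routing. As a rough illustration of the core idea, not the code from any listed repo, here is a minimal top-1 gated MoE layer in PyTorch; the class name, expert sizes, and gating scheme are all illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top1MoE(nn.Module):
    """Minimal top-1 gated mixture-of-experts layer (illustrative sketch)."""
    def __init__(self, dim, num_experts=4, hidden=64):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)  # router: scores one expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                            # x: (batch, dim)
        probs = F.softmax(self.gate(x), dim=-1)      # routing probabilities
        top_p, top_i = probs.max(dim=-1)             # pick the single best expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_i == e                        # tokens routed to expert e
            if mask.any():
                out[mask] = top_p[mask].unsqueeze(-1) * expert(x[mask])
        return out

moe = Top1MoE(dim=32)
y = moe(torch.randn(8, 32))
print(y.shape)  # torch.Size([8, 32])
```

Only the selected expert runs for each token, which is what gives MoE layers their capacity-vs-compute advantage over a dense feed-forward block.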
The open source implementation of the model from "Scaling Vision Transformers to 22 Billion Parameters"
[CVPR-2023 Workshop@NFVLR] Official PyTorch implementation of Learning CLIP Guided Visual-Text Fusion Transformer for Video-based Pedestrian Attribute Recognition
[Paper][Preprint 2024] MyGO: Discrete Modality Information as Fine-Grained Tokens for Multi-modal Knowledge Graph Completion
[Paper][SIGIR 2024] NativE: Multi-modal Knowledge Graph Completion in the Wild
Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey
[Paper][LREC-COLING 2024] Unleashing the Power of Imbalanced Modality Information for Multi-modal Knowledge Graph Completion
SER-Fuse: An Emotion Recognition Application Utilizing Multi-Modal, Multi-Lingual, and Multi-Feature Fusion
IEEE 802.11n CSI and camera synchronization toolkit.
The official implementation of "TFormer: A throughout fusion transformer for multi-modal skin lesion diagnosis"
Adaptive Confidence Multi-View Hashing
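Multi-view hashing, as in the repo above, maps features from several views into compact binary codes. A toy sketch of the general pattern (not the Adaptive Confidence Multi-View Hashing method itself; the averaging fusion and layer sizes are placeholder assumptions):

```python
import torch
import torch.nn as nn

class MultiViewHashHead(nn.Module):
    """Illustrative multi-view hashing head: average the view embeddings,
    then project to k-bit binary codes via a tanh relaxation."""
    def __init__(self, dim=64, bits=16):
        super().__init__()
        self.proj = nn.Linear(dim, bits)

    def forward(self, views):                    # views: (batch, n_views, dim)
        fused = views.mean(dim=1)                # naive fusion by averaging views
        relaxed = torch.tanh(self.proj(fused))   # differentiable surrogate for sign()
        return torch.sign(relaxed)               # ±1 binary codes at inference time

codes = MultiViewHashHead()(torch.randn(4, 3, 64))
print(codes.shape)  # torch.Size([4, 16])
```

Real methods replace the plain average with a learned (e.g. confidence-weighted) fusion and train the tanh outputs with retrieval losses before binarizing.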
Training code for multi-modal image fusion in PyTorch.
Seed, Code, Harvest: Grow Your Own App with Tree of Thoughts!
This repository contains the source code for our paper: "Husformer: A Multi-Modal Transformer for Multi-Modal Human State Recognition". For more details, please refer to our paper at https://arxiv.org/abs/2209.15182.
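Many of the fusion transformers listed here (Husformer, TFormer, the CLIP-guided visual-text transformer) share a common building block: cross-modal attention, where tokens from one modality attend over another modality's tokens. A self-contained sketch of that block, with hypothetical names and toy feature sizes, not any paper's exact architecture:

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Sketch of a cross-modal attention block: queries come from one
    modality, keys/values from the other. Illustrative only."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_mod, context_mod):
        # each query-modality token attends over the context modality's tokens
        fused, _ = self.attn(query_mod, context_mod, context_mod)
        return self.norm(query_mod + fused)      # residual connection + LayerNorm

audio = torch.randn(2, 10, 64)   # (batch, seq_len, dim) toy audio features
video = torch.randn(2, 16, 64)   # toy video features, different sequence length
out = CrossModalFusion()(audio, video)
print(out.shape)  # torch.Size([2, 10, 64])
```

Note the output keeps the query modality's sequence length, so two such blocks (one per direction) can be stacked to fuse both streams symmetrically.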