
Qrange-group


Repositories

Showing 6 of 6 repositories
  • SUR-adapter Public

    ACM MM'23 (oral). SUR-adapter lets pre-trained diffusion models acquire the semantic understanding and reasoning capabilities of large language models, building high-quality textual semantic representations for text-to-image generation.

    Python · 109 stars · MIT license · 2 forks · 7 issues · 0 pull requests · updated Apr 24, 2024
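The adapter idea in the description above can be sketched as a small residual bottleneck applied to the text encoder's embedding, which distillation could then align with an LLM's richer representation. This is a minimal illustrative sketch with made-up names and shapes, not the repository's actual implementation.

```python
import numpy as np

def adapter_refine(text_emb, w_down, w_up):
    """Residual bottleneck adapter (illustrative, not the repo's code).

    text_emb: (d,) embedding from the diffusion model's text encoder.
    w_down:   (d, k) down-projection, with k << d.
    w_up:     (k, d) up-projection.
    The residual connection preserves the original embedding while the
    bottleneck learns the LLM-aligned refinement during training.
    """
    hidden = np.maximum(text_emb @ w_down, 0.0)  # ReLU bottleneck
    return text_emb + hidden @ w_up              # residual update
```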
  • Mirror-Gradient Public

    WWW'24. Mirror Gradient (MG) lets multimodal recommendation models approach flat local minima more easily than models trained normally.

    Python · 9 stars · MIT license · 1 fork · 0 issues · 0 pull requests · updated Feb 22, 2024
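One plausible reading of the flat-minima idea is a two-phase update: a normal descent step followed by a smaller ascent step on the same batch, which penalizes sharp regions of the loss surface. This is a speculative sketch with hypothetical names; the actual MG update rule should be taken from the paper and repository.

```python
def mirror_gradient_step(w, grad_fn, lr=0.1, mirror_lr=0.05):
    """Hypothetical two-phase update sketch (not the repo's exact rule).

    w:         current parameter (scalar here for simplicity).
    grad_fn:   gradient of the batch loss at a given parameter.
    The descent step lowers the loss; the smaller 'mirror' ascent step
    discourages sharp minima where the gradient changes quickly.
    """
    w = w - lr * grad_fn(w)          # normal descent on the batch loss
    w = w + mirror_lr * grad_fn(w)   # smaller mirror (ascent) step
    return w
```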
  • SEM Public

    SEM automatically selects and integrates attention operators to compute attention maps.

    Python · 8 stars · MIT license · 2 forks · 0 issues · 0 pull requests · updated Jun 16, 2023
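At a high level, "select and integrate" can be read as a learned weighted mixture over candidate attention maps. The sketch below illustrates that pattern with hypothetical names; it is not the repository's actual mechanism.

```python
import numpy as np

def integrate_attention(candidate_maps, selection_logits):
    """Softly select among candidate attention operators (illustrative).

    candidate_maps:   array of shape (n, H, W), one map per operator.
    selection_logits: shape (n,), learned score per operator.
    A softmax turns the scores into mixture weights, so the combined
    map emphasizes whichever operators fit the input best.
    """
    z = selection_logits - selection_logits.max()         # stable softmax
    weights = np.exp(z) / np.exp(z).sum()
    return np.tensordot(weights, candidate_maps, axes=1)  # (H, W)
```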
  • SPEM Public

    MMM'23. SPEM adopts a self-adaptive pooling strategy that combines global max-pooling, global min-pooling, and a lightweight module to produce the attention map.

    Python · 2 stars · MIT license · 0 forks · 0 issues · 0 pull requests · updated Jun 16, 2023
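The pooling strategy described above can be sketched as: global max- and min-pooling summarize each channel, and a lightweight two-layer module turns those statistics into per-channel attention weights. Names and shapes here are illustrative assumptions, not the repository's code.

```python
import numpy as np

def spem_style_attention(x, w1, w2):
    """Channel attention from max/min pooling (illustrative sketch).

    x:  feature map of shape (C, H, W).
    w1: (2, k) and w2: (k, 1) form the lightweight module.
    Each channel's max and min are mapped through a tiny ReLU network
    and a sigmoid to a weight in (0, 1) that rescales that channel.
    """
    stats = np.stack([x.max(axis=(1, 2)), x.min(axis=(1, 2))], axis=1)  # (C, 2)
    hidden = np.maximum(stats @ w1, 0.0)           # ReLU, (C, k)
    attn = 1.0 / (1.0 + np.exp(-(hidden @ w2)))    # sigmoid, (C, 1)
    return x * attn[:, :, None]                    # reweight channels
```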
  • LSAS Public

    ICME'23. The lightweight sub-attention strategy (LSAS) uses high-order sub-attention modules to improve the original self-attention modules.

    Python · 3 stars · MIT license · 0 forks · 0 issues · 0 pull requests · updated Jun 16, 2023
  • CEM Public

    EMNLP'22. CEM improves MHCH performance by correcting prediction bias and training an auxiliary cost simulator on a causal graph of user states and labor costs, without requiring complex model crafting.

    Python · 11 stars · MIT license · 0 forks · 0 issues · 0 pull requests · updated Oct 9, 2022

People

This organization has no public members.
