AID

This repository is the official implementation of AID: Adapting Image2Video Diffusion Models for Instruction-guided Video Prediction. (We will release the code once the paper is accepted!)

AID: Adapting Image2Video Diffusion Models for Instruction-guided Video Prediction
Zhen Xing, Qi Dai, Zejia Weng, Zuxuan Wu, Yu-Gang Jiang


Abstract

Text-guided video prediction (TVP) involves predicting the motion of future frames from an initial frame according to an instruction, and has wide applications in virtual reality, robotics, and content creation. Previous TVP methods have made significant breakthroughs by adapting Stable Diffusion for this task, but they struggle with frame consistency and temporal stability, primarily due to the limited scale of video datasets. We observe that pretrained Image2Video diffusion models possess good priors for video dynamics but lack textual control. Transferring Image2Video models to leverage their video-dynamics priors while injecting instruction control to generate controllable videos is therefore both a meaningful and a challenging task. To achieve this, we introduce a Multi-Modal Large Language Model (MLLM) to predict future video states based on initial frames and text instructions. More specifically, we design a Dual Query Transformer (DQFormer) architecture that integrates the instructions and frames into the conditional embeddings used for future frame prediction. Additionally, we develop Long-Short Term Temporal Adapters and Spatial Adapters that can quickly transfer a general video diffusion model to specific scenarios with minimal training cost. Experimental results show that our method significantly outperforms state-of-the-art techniques on four datasets: Something-Something V2, Epic-Kitchens-100, Bridge Data, and UCF-101. Notably, AID achieves 91.2% and 55.5% FVD improvements on Bridge and SSv2 respectively, demonstrating its effectiveness across diverse domains.
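Since the official code has not yet been released, the following is only a minimal, hypothetical sketch of the general adapter pattern the abstract alludes to: a small trainable bottleneck module wrapped around a frozen temporal block of a pretrained Image2Video diffusion model, so the video-dynamics prior is preserved while only a few parameters train. All class names, dimensions, and the zero-initialization choice below are illustrative assumptions, not the paper's actual implementation or API.

```python
# Illustrative sketch only -- the official AID code is not released.
# Demonstrates the generic residual bottleneck-adapter pattern: a frozen
# pretrained temporal block is wrapped so that only the adapter trains.
# Every name and dimension here is an assumption for illustration.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Residual bottleneck adapter: down-project, nonlinearity, up-project."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # zero-init so the wrapped block
        nn.init.zeros_(self.up.bias)    # starts out behaving as before

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class AdaptedTemporalBlock(nn.Module):
    """A frozen temporal block followed by a trainable adapter.

    `frozen_block` stands in for one temporal layer of a pretrained
    Image2Video diffusion UNet; only the adapter's parameters train.
    """

    def __init__(self, frozen_block: nn.Module, dim: int):
        super().__init__()
        self.block = frozen_block
        for p in self.block.parameters():
            p.requires_grad_(False)  # keep the video-dynamics prior intact
        self.adapter = BottleneckAdapter(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch * spatial positions, num_frames, dim) tokens over time
        return self.adapter(self.block(x))


if __name__ == "__main__":
    dim = 320
    frozen = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
    layer = AdaptedTemporalBlock(frozen, dim)
    x = torch.randn(2, 16, dim)   # 2 spatial positions, 16 frames
    print(layer(x).shape)         # torch.Size([2, 16, 320])
```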

Contact

If you have any suggestions or find our work helpful, feel free to contact us:

Homepage: Zhen Xing

Email: zhenxingfd@gmail.com

If you find our work useful, please consider citing it:

@article{AID,
  title={AID: Adapting Image2Video Diffusion Models for Instruction-guided Video Prediction},
  author={Zhen Xing and Qi Dai and Zejia Weng and Zuxuan Wu and Yu-Gang Jiang}, 
  journal={arXiv preprint arXiv:2406.06465},
  year={2024}
}
