This is the ModelScope version of the CVPR 2023 paper "T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations". For usage instructions, please visit ModelScope: https://modelscope.cn/models/zhongchongyang/T2MGPT_text-driven_motion_generation/summary
If this project is helpful for your research, please consider citing:
@inproceedings{zhang2023generating,
  title={T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations},
  author={Zhang, Jianrong and Zhang, Yangsong and Cun, Xiaodong and Huang, Shaoli and Zhang, Yong and Zhao, Hongwei and Lu, Hongtao and Shen, Xi},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023},
}