My main research areas are Generative Models and Multimodal Learning. I am particularly interested in novel research on generating images or videos from diverse modalities such as audio and text. Just as humans can reason and infer across multiple senses, I believe that multimodal generative models will have a significant impact on our community in the future.
https://sites.google.com/view/taegyeonglee/home
Email: taegyeonglee@unist.ac.kr, taegyeong.leaf@gmail.com
International Publications
- Soyeong Kwon*, Taegyeong Lee* and Taehwan Kim, Zero-shot Text-guided Infinite Image Generation with LLM guidance, European Conference on Computer Vision (ECCV), 2024 [pdf][project page]
- Taegyeong Lee*, Soyeong Kwon* and Taehwan Kim, Grid Diffusion Models for Text-to-Video Generation, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024 [pdf][project page]
- Taegyeong Lee, Jeonghun Kang, Hyeonyu Kim and Taehwan Kim, Generating Realistic Images from In-the-wild Sounds, IEEE/CVF International Conference on Computer Vision (ICCV), 2023 [pdf][project page]