This Python script generates a video from text prompts using Stable Diffusion for image generation and MoviePy for video creation. It's a great way to explore AI-powered video creation directly from textual descriptions.
Before you begin, make sure you have Python 3.x installed on your system.
1. Clone this repository to your local machine:

   ```bash
   git clone https://github.com/krrishitejas/text-to-video.git
   cd text-to-video
   ```
2. Install the required Python libraries using pip:

   ```bash
   pip install torch Pillow moviepy diffusers
   ```

   - `torch`: PyTorch library for machine learning.
   - `Pillow`: Python Imaging Library fork for image processing.
   - `moviepy`: Library for video editing and creation.
   - `diffusers`: Library for running Stable Diffusion models.
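Before running the script, you can sanity-check that all four dependencies are importable. A minimal stdlib-only sketch (note that the pip package `Pillow` is imported as `PIL`):

```python
import importlib.util

def missing_packages(names):
    """Return the subset of package names that cannot be imported."""
    return [name for name in names if importlib.util.find_spec(name) is None]

# Module names of this project's four dependencies (not the pip names).
required = ["torch", "PIL", "moviepy", "diffusers"]
print(missing_packages(required) or "All dependencies are installed.")
```

If the printed list is non-empty, rerun the pip command above for the packages it names.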
3. Open `text_to_video.py` in a text editor or Python IDE.
4. Modify the `prompts` list to add or change the text prompts for generating different videos.
5. Save your changes.
6. Run the script using Python:

   ```bash
   python text_to_video.py
   ```

   The script will use your GPU if one is available (recommended for faster processing) and fall back to the CPU otherwise.
7. Once the script finishes running, check the project directory for the generated video file named `output_video.mp4`.
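The GPU/CPU fallback mentioned in the run step is typically a one-line check. A minimal sketch of that logic (the helper name is an assumption, not taken from the script):

```python
def pick_device(cuda_available: bool) -> str:
    """Prefer the GPU when one is present, otherwise fall back to the CPU."""
    return "cuda" if cuda_available else "cpu"

# In the actual script this would be driven by PyTorch, e.g.:
#   device = pick_device(torch.cuda.is_available())
# and the Stable Diffusion pipeline would then be moved to that device.
```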
The script includes predefined text prompts like:
- "A serene landscape with mountains"
- "A bustling city at night"
- "A calm beach with a sunset"
Modify these prompts directly in the script to generate videos matching your imagination.
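Inside `text_to_video.py`, the `prompts` list presumably looks something like the following; this is a sketch of the expected shape, not the script's exact code:

```python
# Hypothetical layout of the prompts list; edit the strings to taste.
# Each prompt produces one generated frame of the final video.
prompts = [
    "A serene landscape with mountains",
    "A bustling city at night",
    "A calm beach with a sunset",
]
```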
- Adjust the frame rate (`fps`) in the `ImageSequenceClip` constructor inside `text_to_video.py` to control the video's speed and smoothness.
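As a rule of thumb, the clip's length is the number of frames divided by `fps`, so raising `fps` makes the video shorter and smoother while lowering it holds each image on screen longer:

```python
def video_duration_seconds(num_frames: int, fps: float) -> float:
    """Each frame is displayed for 1/fps seconds, so duration = frames / fps."""
    return num_frames / fps

# Example: three generated frames shown at 1 fps yield a 3-second video.
```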