About the installation in windows, using powershell and miniconda3 #11
Comments
SOLVED, thanks!
Hey, thanks for the instructions. I have a small problem here and don't know what to do to move forward. Can anyone give me a hint about these steps? (pip install modelscope, create modeldownloader.py, mv ./checkpoints/iic/unianimate/* ./checkpoints/)
Still waiting for a good soul to reply ;)
It should be noted that on my side, hijacking an already existing Python env from A1111/ComfyUI worked without much need for additional dependencies (diffusers 0.28.2). Anyway, after pip install modelscope, simply copy any existing .py file, replace its entire contents with these two lines, and save it as modeldownloader.py:
from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('iic/unianimate', cache_dir='checkpoints/')
Then, in the Anaconda Prompt:
conda activate UniAnimate
cd \Path-to-UniAnimate
python modeldownloader.py
The models should download to \Path-to-UniAnimate\checkpoints. After the download, move everything out of the nested folder (mv ./checkpoints/iic/unianimate/* ./checkpoints/), or just download everything manually to \Path-to-UniAnimate\checkpoints.
I am not able to install the packages in requirements.txt.
I always find that I am missing many things from the requirements. In this case the code also uses NCCL for multi-GPU runs, which is not yet available on Windows (I did not try WSL due to space issues, and I don't have more GPUs anyway), so GPT-4 recommended that I disable it. Going back to the installation, I added more packages that I think are necessary but are not in the description; I asked GPT-4 what each one was for, and it finally worked. Although with my 12GB card it is very slow: I see it using 21GB of shared memory, and it never moved past 0%, even though I can see it processing. I'm sharing my notes in case anyone else ran into the same errors when trying to launch inference. I'm going to see if I can change something so it doesn't use so much VRAM, to make it usable.
UniAnimate
git clone https://github.com/ali-vilab/UniAnimate.git
cd UniAnimate
conda create -n UniAnimate python=3.9
conda activate UniAnimate
conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -r requirements.txt
pip install modelscope
# create modeldownloader.py containing the following two lines:
from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('iic/unianimate', cache_dir='checkpoints/')
mv ./checkpoints/iic/unianimate/* ./checkpoints/
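The `mv` step above assumes a Unix-style shell; since this thread is about PowerShell, here is a minimal cross-platform sketch that does the same flattening from Python. The helper name `flatten_checkpoints` is my own, not part of UniAnimate, and it assumes the snapshot landed in `checkpoints/iic/unianimate/` as shown above.

```python
# Hypothetical helper: flatten the modelscope cache layout after download,
# a cross-platform alternative to `mv ./checkpoints/iic/unianimate/* ./checkpoints/`.
import shutil
from pathlib import Path

def flatten_checkpoints(root: str = "checkpoints") -> list:
    src = Path(root) / "iic" / "unianimate"
    dst = Path(root)
    moved = []
    if src.is_dir():
        for item in src.iterdir():
            # shutil.move handles both files and subdirectories
            shutil.move(str(item), str(dst / item.name))
            moved.append(item.name)
    return sorted(moved)
```

You could append a `flatten_checkpoints()` call to the end of modeldownloader.py so the download and the move happen in one run.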
pip install opencv-python
#https://python.langchain.com/v0.2/docs/integrations/text_embedding/open_clip/
pip install --upgrade --quiet langchain-experimental
pip install --upgrade --quiet pillow open_clip_torch torch matplotlib
# Of course, everyone should check their own CUDA version.
pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu118
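For anyone unsure how the `cu118` tag above relates to their CUDA version: the suffix is just the CUDA version with the dot removed. A tiny illustrative helper (my own, purely for explanation) that builds the index URL from a version string:

```python
# Illustration only: map a CUDA version string like "11.8" to the
# matching PyTorch wheel index URL tag ("cu118").
def xformers_index_url(cuda_version: str) -> str:
    major, minor = cuda_version.split(".")[:2]
    return f"https://download.pytorch.org/whl/cu{major}{minor}"
```

So a CUDA 12.1 setup would use `--index-url https://download.pytorch.org/whl/cu121` instead.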
pip install rotary-embedding-torch
pip install fairscale
pip install nvidia-ml-py3
pip install easydict
pip install imageio
pip install pytorch-lightning
pip install args
conda install -c conda-forge pynvml
# Edit inference_unianimate_entrance.py and change the backend from nccl to gloo:
dist.init_process_group(backend='gloo', world_size=cfg.world_size, rank=cfg.rank)
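Instead of hard-coding 'gloo', the edit above could pick the backend at runtime, since NCCL is unavailable on Windows but preferred on Linux. A minimal sketch (the function name is my own, not part of UniAnimate):

```python
# Sketch: choose the torch.distributed backend by platform.
# NCCL does not ship for Windows, so fall back to gloo there.
import os

def pick_dist_backend() -> str:
    return "gloo" if os.name == "nt" else "nccl"
```

The `init_process_group` call would then become `dist.init_process_group(backend=pick_dist_backend(), world_size=cfg.world_size, rank=cfg.rank)`.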
python inference.py --cfg configs/UniAnimate_infer.yaml