## Demo (Web UI)

You need GeoChat-7B to run the demo locally. Download the model from GeoChat-7B, then launch the Gradio demo by passing the model path to the command below.

### Launch the demo

```bash
python geochat_demo.py --model-path /path/to/model
```

## Training

Please see the sample training scripts for LoRA.

We provide sample DeepSpeed configs. `zero3.json` behaves more like PyTorch FSDP, while `zero3_offload.json` can further reduce memory consumption by offloading parameters to the CPU. `zero3.json` is usually faster than `zero3_offload.json` but requires more GPU memory, so we recommend trying `zero3.json` first and falling back to `zero3_offload.json` if you run out of GPU memory. You can also tweak `per_device_train_batch_size` and `gradient_accumulation_steps` in the config to save memory; just make sure that the product `per_device_train_batch_size * gradient_accumulation_steps` stays the same, so the effective batch size is unchanged.
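
For example, halving the per-device batch size while doubling the gradient accumulation steps keeps the effective batch size fixed. A minimal sketch of the arithmetic (the variable names mirror the config fields, purely for illustration):

```bash
# The effective batch size per GPU is the product of the two settings; keep
# that product fixed when trading per-device batch size for accumulation.
per_device_train_batch_size=8
gradient_accumulation_steps=2
echo "effective batch size per GPU: $((per_device_train_batch_size * gradient_accumulation_steps))"  # -> 16, same as 16 x 1
```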

If you are having issues with the ZeRO-3 configs and there is enough VRAM, you may try `zero2.json`. It consumes slightly more memory than ZeRO-3 and behaves more similarly to PyTorch FSDP, while still supporting parameter-efficient tuning.
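
To show how these configs are typically wired into a LoRA run, here is a minimal sketch of a training launch. The entry point `geochat/train/train_mem.py`, the dataset path, and the LoRA flag names are assumptions modeled on the LLaVA-style scripts GeoChat builds on; treat the provided sample training scripts as the authoritative reference.

```bash
# Minimal sketch only: the entry point, dataset path, and flag names are
# assumptions modeled on LLaVA-style training scripts; see the provided
# sample LoRA scripts for the exact arguments.
# Swap zero3.json for zero3_offload.json (or zero2.json) as discussed above.
deepspeed geochat/train/train_mem.py \
    --deepspeed ./scripts/zero3.json \
    --lora_enable True --lora_r 128 --lora_alpha 256 \
    --model_name_or_path /path/to/base_model \
    --data_path /path/to/instruction_data.json \
    --output_dir ./checkpoints/geochat-7b-lora \
    --per_device_train_batch_size 16 \
    --gradient_accumulation_steps 1 \
    --bf16 True
```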

## Create Merged Checkpoints

```bash
python scripts/merge_lora_weights.py \
    --model-path /path/to/lora_model \
    --model-base /path/to/base_model \
    --save-model-path /path/to/merge_model
```
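
The merged checkpoint behaves like a regular full model; for example, it can be passed directly to the demo command shown above:

```bash
python geochat_demo.py --model-path /path/to/merge_model
```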