InternLM2

In this directory, you will find examples of how to apply IPEX-LLM INT4 optimizations to InternLM2 models. For illustration purposes, we use internlm/internlm2-chat-7b as a reference InternLM2 model.

0. Requirements

To run these examples with IPEX-LLM, we have some recommended requirements for your machine; please refer to here for more information.

Example: Predict Tokens using generate() API

In the example generate.py, we show a basic use case for an InternLM2 model to predict the next N tokens using the generate() API, with IPEX-LLM INT4 optimizations.
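
To make the flow concrete, below is a minimal sketch of what such a script can look like, assuming IPEX-LLM's drop-in AutoModelForCausalLM replacement with load_in_4bit=True (the actual generate.py may differ in details such as prompt formatting and timing):

from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "internlm/internlm2-chat-7b"  # repo id or local checkpoint folder

# load_in_4bit=True converts the model's linear layers to INT4 on load
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# chat prompt format matching the sample output in section 2.3
prompt = "<|User|>:AI是什么?\n<|Bot|>:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))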

1. Install

We suggest using conda to manage the environment:

On Linux:

conda create -n llm python=3.11 # Python 3.11 is recommended
conda activate llm

# install the latest ipex-llm nightly build with 'all' option
pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
pip install transformers==4.36.2
pip install huggingface_hub 

On Windows:

conda create -n llm python=3.11
conda activate llm

pip install --pre --upgrade ipex-llm[all]
pip install transformers==4.36.2
pip install huggingface_hub 

2. Run

Set a local MODEL_PATH and run the following Python code to download the right version of the model from Hugging Face (the repo id and local path below are examples):

from huggingface_hub import snapshot_download
repo_id = "internlm/internlm2-chat-7b"  # example: the reference InternLM2 model
MODEL_PATH = "./internlm2-chat-7b"      # example local download directory
snapshot_download(repo_id=repo_id, local_dir=MODEL_PATH, local_dir_use_symlinks=False, revision="v1.1.0")

Then run the example with the downloaded model:

python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT

Arguments info (see the parsing sketch after this list):

  • --repo-id-or-model-path REPO_ID_OR_MODEL_PATH: argument defining the Hugging Face repo id for the InternLM2 model to be downloaded, or the path to the Hugging Face checkpoint folder. The default is 'internlm/internlm2-chat-7b'.
  • --prompt PROMPT: argument defining the prompt on which inference is run (with the integrated prompt format for chat). The default is 'AI是什么?' ("What is AI?").
  • --n-predict N_PREDICT: argument defining the maximum number of tokens to predict. The default is 32.
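
As a reference for how these flags might be wired up, here is a hypothetical argparse sketch (names and defaults taken from the list above; the real generate.py may be organized differently):

import argparse

parser = argparse.ArgumentParser(description="InternLM2 generate() example")
parser.add_argument("--repo-id-or-model-path", type=str,
                    default="internlm/internlm2-chat-7b",
                    help="Hugging Face repo id or local checkpoint folder")
parser.add_argument("--prompt", type=str, default="AI是什么?",
                    help="prompt to run inference on")
parser.add_argument("--n-predict", type=int, default=32,
                    help="max number of new tokens to predict")
args = parser.parse_args()  # values are exposed as args.repo_id_or_model_path, etc.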

Note: When loading the model in 4-bit, IPEX-LLM converts the linear layers in the model into INT4 format. In theory, an XB model saved in 16-bit will require approximately 2X GB of memory for loading, and about 0.5X GB for further inference.
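
The factors in that rule of thumb are just bytes per parameter (2 bytes per weight in 16-bit, roughly 0.5 bytes in INT4), so the estimate is quick to compute; the helper below is only illustrative:

# hypothetical helper: rough memory estimate from the rule of thumb above
def estimate_memory_gb(n_params_billions):
    load_gb = 2.0 * n_params_billions   # 16-bit weights: ~2 bytes per parameter
    infer_gb = 0.5 * n_params_billions  # INT4 weights: ~0.5 bytes per parameter
    return load_gb, infer_gb

print(estimate_memory_gb(7))  # internlm2-chat-7b: (14.0, 3.5) GB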

Please select the appropriate size of the InternLM2 model based on the capabilities of your machine.

2.1 Client

On a client Windows machine, it is recommended to run directly with full utilization of all cores:

python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH

2.2 Server

For optimal performance on a server, it is recommended to set several environment variables (refer to here for more information) and run the example with all the physical cores of a single socket.

E.g. on Linux,

# set IPEX-LLM env variables
source ipex-llm-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH

2.3 Sample Output

Inference time: xxxx s
-------------------- Prompt --------------------
<|User|>:AI是什么?
<|Bot|>:
-------------------- Output --------------------
<|User|>:AI是什么?
<|Bot|>:AI是人工智能的缩写,是计算机科学的一个分支,旨在使计算机能够像人类一样思考、学习和执行任务。AI技术包括机器学习、自然
Inference time: xxxx s
-------------------- Prompt --------------------
<|User|>:What is AI?
<|Bot|>:
-------------------- Output --------------------
<|User|>:What is AI?
<|Bot|>:AI is the ability of machines to perform tasks that would normally require human intelligence, such as perception, reasoning, learning, and decision-making. AI is made possible