Fix/llm launcher disable token #3230

Merged
merged 3 commits into master from fix/llm_launcher_disable_token on Jul 5, 2024

Conversation

@mreso (Collaborator) commented Jul 5, 2024

Description

This PR fixes the LLM deployment after the changes to the disable-token-auth parameter.

Fixes #3229
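
For background on what the flag does at launch time, below is a minimal sketch of a launcher that accepts --disable_token_auth and forwards it when assembling a torchserve start command. This is illustrative only, not the actual ts.llm_launcher code; the specific torchserve arguments shown (--ncs, --model-store) and the default model_id are assumptions.

# Illustrative sketch only (not the actual ts.llm_launcher implementation):
# accept a --disable_token_auth flag and forward it to the torchserve CLI.
import argparse
import subprocess

parser = argparse.ArgumentParser()
parser.add_argument("--model_id", default="meta-llama/Meta-Llama-3-8B-Instruct")
parser.add_argument(
    "--disable_token_auth",
    action="store_true",
    help="Start TorchServe with token authorization disabled",
)
args = parser.parse_args()

# Assumed baseline command; real launchers pass more options (model store, config, etc.).
cmd = ["torchserve", "--start", "--ncs", "--model-store", "model_store"]
if args.disable_token_auth:
    # Forward the option in the hyphenated form the torchserve CLI expects.
    cmd.append("--disable-token-auth")

subprocess.run(cmd, check=True)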

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • This change requires a documentation update

Feature/Issue validation/testing

Please describe the unit or integration tests that you ran to verify your changes and summarize the relevant results. Provide instructions so they can be reproduced.
Please also list any relevant details of your test configuration.

  • Test A
docker build . -f docker/Dockerfile.llm -t ts/llm
docker run --rm -ti --gpus all --shm-size 1g -e HUGGING_FACE_HUB_TOKEN=$token -p 8080:8080 -v data:/data ts/llm --model_id meta-llama/Meta-Llama-3-8B-Instruct --disable_token_auth

Logs for Test A

...
2024-07-05T14:32:43,689 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INFO 07-05 14:32:43 model_runner.py:965] Graph capturing finished in 2 secs.
2024-07-05T14:32:43,784 [INFO ] epollEventLoopGroup-5-1 org.pytorch.serve.wlm.AsyncBatchAggregator - Predictions is empty. This is from initial load....
2024-07-05T14:32:43,785 [INFO ] epollEventLoopGroup-5-1 org.pytorch.serve.wlm.AsyncWorkerThread - Worker loaded the model successfully
2024-07-05T14:32:43,785 [DEBUG] epollEventLoopGroup-5-1 org.pytorch.serve.wlm.WorkerThread - W-9000-model_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2024-07-05T14:32:43,785 [INFO ] epollEventLoopGroup-5-1 TS_METRICS - WorkerLoadTime.Milliseconds:19486.0|#WorkerName:W-9000-model_1.0,Level:Host|#hostname:16c99e392b9c,timestamp:1720189963
2024-07-05T14:32:43,785 [INFO ] W-9000-model_1.0 org.pytorch.serve.wlm.AsyncBatchAggregator - Getting requests from model: org.pytorch.serve.wlm.Model@4115ab4f

Run

curl -X POST -d '{"prompt":"Hello, my name is", "max_new_tokens": 50, "temperature": 0}' --header "Content-Type: application/json" "http://localhost:8080/predictions/model"

Log

{"text": " [", "tokens": 510}{"text": "Your", "tokens": 7927}{"text": " Name", "tokens": 4076}{"text": "].", "tokens": 948}{"text": " I", "tokens": 358}{"text": " am", "tokens": 1097}{"text": " a", "tokens": 264}{"text": " [", "tokens": 510}{"text": "Your", "tokens": 7927}{"text": " Profession", "tokens": 50311}{"text": "/", "tokens": 14}{"text": "Student", "tokens": 14428}{"text": "]", "tokens": 60}{"text": " and", "tokens": 323}{"text": " I", "tokens": 358}{"text": " am", "tokens": 1097}
  • Test B
python -m ts.llm_launcher --disable_token_auth

Logs for Test B

...
2024-07-05T14:34:30,882 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INFO 07-05 14:34:30 model_runner.py:965] Graph capturing finished in 1 secs.
2024-07-05T14:34:30,915 [INFO ] epollEventLoopGroup-5-1 org.pytorch.serve.wlm.AsyncBatchAggregator - Predictions is empty. This is from initial load....
2024-07-05T14:34:30,916 [INFO ] epollEventLoopGroup-5-1 org.pytorch.serve.wlm.AsyncWorkerThread - Worker loaded the model successfully
2024-07-05T14:34:30,916 [DEBUG] epollEventLoopGroup-5-1 org.pytorch.serve.wlm.WorkerThread - W-9000-model_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2024-07-05T14:34:30,916 [INFO ] epollEventLoopGroup-5-1 TS_METRICS - WorkerLoadTime.Milliseconds:15557.0|#WorkerName:W-9000-model_1.0,Level:Host|#hostname:ip-172-31-15-101,timestamp:1720190070
2024-07-05T14:34:30,917 [INFO ] W-9000-model_1.0 org.pytorch.serve.wlm.AsyncBatchAggregator - Getting requests from model: org.pytorch.serve.wlm.Model@6475501a
2024-07-05T14:34:33,457 [INFO ] epollEventLoopGroup-3-1 TS_METRICS - ts_inference_requests_total.Count:1.0|#model_name:model,model_version:default|#hostname:ip-172-31-15-101,timestamp:1720190073
2024-07-05T14:34:33,458 [DEBUG] W-9000-model_1.0 org.pytorch.serve.wlm.AsyncBatchAggregator - Adding job to jobs: 0ec8378f-d4c8-431d-be9a-5c6c97899e3e
2024-07-05T14:34:33,458 [DEBUG] W-9000-model_1.0 org.pytorch.serve.wlm.AsyncWorkerThread - Flushing req.cmd PREDICT repeats 1 to backend at: 1720190073458
2024-07-05T14:34:33,459 [DEBUG] W-9000-model_1.0 org.pytorch.serve.wlm.AsyncWorkerThread - Successfully flushed req
2024-07-05T14:34:33,459 [INFO ] W-9000-model_1.0 org.pytorch.serve.wlm.AsyncBatchAggregator - Getting requests from model: org.pytorch.serve.wlm.Model@6475501a
2024-07-05T14:34:33,459 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - Backend received inference at: 1720190073
2024-07-05T14:34:33,460 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - self._entry_point=<bound method VLLMHandler.handle of <ts.torch_handler.vllm_handler.VLLMHandler object at 0x7faa4da5c0d0>>
2024-07-05T14:34:33,461 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INFO 07-05 14:34:33 async_llm_engine.py:564] Received request 0ec8378f-d4c8-431d-be9a-5c6c97899e3e: prompt: 'Hello, my name is', params: SamplingParams(n=1, best_of=1,
presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0, top_p=1.0, top_k=-1, min_p=0.0, seed=None, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=[], stop_token_ids=[], include_stop_str
_in_output=False, ignore_eos=False, max_tokens=16, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None), prompt_token_ids: None, lora_request: None
.
2024-07-05T14:34:33,489 [INFO ] epollEventLoopGroup-5-1 ACCESS_LOG - /127.0.0.1:53454 "POST /predictions/model HTTP/1.1" 200 34
2024-07-05T14:34:33,489 [INFO ] epollEventLoopGroup-5-1 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:ip-172-31-15-101,timestamp:1720190073
2024-07-05T14:34:33,811 [INFO ] W-9000-model_1.0-stdout MODEL_LOG - INFO 07-05 14:34:33 async_llm_engine.py:133] Finished request 0ec8378f-d4c8-431d-be9a-5c6c97899e3e.
2024-07-05T14:34:33,811 [INFO ] W-9000-model_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]HandlerTime.Milliseconds:350.89|#ModelName:model,Level:Model|#type:GAUGE|#hostname:ip-172-31-15-101,1720190073,0ec8378f-d4c8
-431d-be9a-5c6c97899e3e, pattern=[METRICS]

Run

curl -X POST -d '{"prompt":"Hello, my name is", "max_new_tokens": 50, "temperature": 0}' --header "Content-Type: application/json" "http://localhost:8080/predictions/model"

Logs

{"text": " [", "tokens": 510}{"text": "Your", "tokens": 7927}{"text": " Name", "tokens": 4076}{"text": "].", "tokens": 948}{"text": " I", "tokens": 358}{"text": " am", "tokens": 1097}{"text": " a", "tokens": 264}{"text": " [", "tokens": 510}{"text": "Your", "tokens": 7927}{"text": " Profession", "tokens": 50311}{"text": "/", "tokens": 14}{"text": "Student", "tokens": 14428}{"text": "]", "tokens": 60}{"text": " and", "tokens": 323}{"text": " I", "tokens": 358}{"text": " am", "tokens": 1097}

Checklist:

  • Did you have fun?
  • Have you added tests that prove your fix is effective or that this feature works?
  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?

@udaij12 (Collaborator) left a comment

LGTM

@mreso added this pull request to the merge queue Jul 5, 2024
Merged via the queue into master with commit cbe9340 Jul 5, 2024
9 of 12 checks passed
@mreso deleted the fix/llm_launcher_disable_token branch July 5, 2024 20:20
@RafLit mentioned this pull request Jul 15, 2024