
Add example of integration with vLLM #435

Merged: 3 commits merged into main from vllm-integration on Dec 17, 2023

Conversation

rlouf (Member) commented on Dec 14, 2023

This example currently does not work with multiple prompts, because self.fsm_state is updated every time self.__call__ is called. With several prompts, self.fsm_state is therefore updated, at each step, as many times as there are sequences. This can be avoided by making self.fsm_states a DefaultDict keyed by sequence id and passing the seq_id to the logits_processor.
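
A minimal sketch of that fix, assuming a hypothetical FSM object exposing `next_state(state, token_id)` and `allowed_token_ids(state)` (illustrative names, not necessarily the library's actual interface), and a logits processor that is handed the `seq_id` as suggested:

```python
from collections import defaultdict
from typing import Optional

import torch


class FSMLogitsProcessor:
    """Sketch: one FSM state per sequence id, so each sequence
    advances its own state exactly once per decoding step."""

    def __init__(self, fsm, initial_state: int = 0):
        self.fsm = fsm
        # States are created lazily, each starting from the initial state.
        self.fsm_states = defaultdict(lambda: initial_state)

    def __call__(
        self, seq_id: int, last_token_id: Optional[int], logits: torch.Tensor
    ) -> torch.Tensor:
        # Advance only this sequence's state; other sequences are untouched.
        if last_token_id is not None:
            self.fsm_states[seq_id] = self.fsm.next_state(
                self.fsm_states[seq_id], last_token_id
            )

        # Forbid every token the FSM does not allow in the current state.
        mask = torch.full_like(logits, float("-inf"))
        mask[self.fsm.allowed_token_ids(self.fsm_states[seq_id])] = 0
        return logits + mask
```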

Looking at the code, it might be a good idea to revert to the original tokenizer interface.

rlouf force-pushed the vllm-integration branch 4 times, most recently from 844f467 to e0f9e76, on December 15, 2023 at 17:51.
rlouf merged commit 7ee827f into main on Dec 17, 2023 (4 checks passed).
rlouf deleted the vllm-integration branch on December 17, 2023 at 08:22.