
Add LlamaSequenceGenerator #701

Merged

Conversation

rlouf (Member) commented Feb 22, 2024

We currently store the logits processor on the `LlamaCpp` instance. This causes issues when running successive generations with different generators. In this PR we create a new `LlamaSequenceGenerator` instance every time we create a new generator and store the logits processor on that instance, which solves the issue.

Fixes #700.
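A minimal sketch of the pattern this PR describes: mutable per-generation state (the logits processor) moves off the shared model object and onto a per-generator object, so successive generators no longer overwrite each other. The class and attribute names other than `LlamaCpp` and `LlamaSequenceGenerator` are illustrative assumptions, not the library's actual API.

```python
class LlamaCpp:
    """Stand-in for the shared model wrapper (illustrative, not the real API)."""

    def __init__(self, name: str):
        self.name = name


class LlamaSequenceGenerator:
    """Each generator owns its logits processor instead of storing it on the model."""

    def __init__(self, model: LlamaCpp, logits_processor: str):
        self.model = model
        self.logits_processor = logits_processor  # per-generator state

    def __call__(self, prompt: str) -> str:
        # A real implementation would run the model with this processor;
        # here we only show that each generator keeps its own processor.
        return f"{self.model.name}:{self.logits_processor}({prompt})"


model = LlamaCpp("llama")
gen_a = LlamaSequenceGenerator(model, "json_processor")
gen_b = LlamaSequenceGenerator(model, "regex_processor")
# Creating gen_b does not clobber gen_a's processor, because nothing
# was written to the shared `model` instance.
```

With the pre-PR design (processor stored on `LlamaCpp`), constructing `gen_b` would have replaced the processor `gen_a` relied on; giving each generator its own copy removes that coupling.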

@rlouf added the bug and llama.cpp labels Feb 22, 2024
@rlouf merged commit d23807b into outlines-dev:main Feb 22, 2024
5 checks passed
@rlouf rlouf deleted the llamacpp-fix-logits-processor-overwrite branch February 22, 2024 12:00
Labels: bug, llama.cpp (Related to the `llama.cpp` integration)
Development

Successfully merging this pull request may close these issues.

llamacpp: Controlled generation overwrites model instance
1 participant