Support streaming as part of thread runs and LLM generations #38

Open
multipletwigs opened this issue Mar 30, 2024 · 0 comments

Comments


multipletwigs commented Mar 30, 2024

Problem Statement

  1. A previous PR, Assistant responds to message #9, introduced the concept of thread runs, where we await a response based on the content of the thread.
  2. We should have the option to stream the answer back to the consumer of the API to accommodate the slow response time of LLMs.
  3. While the adapters are currently generators, we have not yet supported streaming over the network connection.
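Since the adapters already yield chunks as generators, one common approach would be to wrap those chunks in Server-Sent Events (SSE) framing so the HTTP layer can flush partial output as it arrives. A minimal sketch (the adapter below is a hypothetical stand-in, not this project's actual API):

```python
import json
from typing import Iterator

def fake_llm_adapter(prompt: str) -> Iterator[str]:
    # Hypothetical stand-in for an adapter generator; a real adapter
    # would yield tokens from the underlying LLM as they arrive.
    for token in ["Hello", ", ", "world", "!"]:
        yield token

def sse_events(chunks: Iterator[str]) -> Iterator[str]:
    # Wrap each chunk in SSE framing ("data: ...\n\n") so the consumer
    # can render partial answers before the full generation finishes.
    for chunk in chunks:
        yield f"data: {json.dumps({'delta': chunk})}\n\n"
    # Sentinel event so the client knows the run is complete.
    yield "data: [DONE]\n\n"

events = list(sse_events(fake_llm_adapter("Hi")))
print("".join(events))
```

The web framework would then stream `sse_events(...)` as a `text/event-stream` response body instead of buffering the full answer.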