Fix for issue #505 #510
Conversation
In `generate_user_intent`, `generate_next_step`, and `generate_value`, the temperature is now set explicitly while all other parameters are taken from the `GenerationOptions`.
Thanks for this @uvnikgupta! Looking closer into this, the `llm_params` from the generation options should actually not be used for the `generate_user_intent`, `generate_next_steps`, and `generate_value` tasks. But we need the fix for `generate_intent_next_steps`.
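To illustrate the rule being described, here is a minimal sketch (not the project's actual code; `params_for_task` is a hypothetical helper, and the task name string is taken from the comment above): the user-supplied `llm_params` would be forwarded only for the combined `generate_intent_next_steps` task, while the internal tasks keep fixed settings.

```python
# Hypothetical helper sketching the dispatch rule from the comment above.
def params_for_task(task_name, generation_options, lowest_temperature=0.0):
    if task_name == "generate_intent_next_steps":
        # This task produces the user-visible result, so the caller's
        # llm_params (e.g. max_tokens) may be honored here.
        return dict((generation_options and generation_options.llm_params) or {})
    # Internal tasks (generate_user_intent, generate_next_steps,
    # generate_value) only get pinned parameters, e.g. temperature 0.
    return {"temperature": lowest_temperature}
```

The result would then feed the existing context manager, e.g. `with llm_params(llm, **params_for_task(task_name, generation_options)):`.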
```diff
-with llm_params(llm, temperature=self.config.lowest_temperature):
+generation_options: GenerationOptions = generation_options_var.get()
+with llm_params(
+    llm, **((generation_options and generation_options.llm_params) or {})
+):
```
The `generation_options.llm_params` should not be used here. The `llm_params` are meant for the "generation LLM call", i.e., the one that produces the result that the user ultimately sees. This could, for example, set the maximum number of tokens. For the other "internal" calls, only certain parameters should be set, e.g., temperature 0.
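To make the split concrete, here is a small runnable toy. The `llm_params` below is a stand-in written for illustration (it temporarily overrides attributes on the LLM object), not the library's implementation; the point is only that the internal call pins the temperature while the final generation call receives the caller's `llm_params`.

```python
from contextlib import contextmanager
from types import SimpleNamespace

@contextmanager
def llm_params(llm, **params):
    # Stand-in: temporarily override attributes, then restore them.
    saved = {name: getattr(llm, name) for name in params}
    llm.__dict__.update(params)
    try:
        yield
    finally:
        llm.__dict__.update(saved)

llm = SimpleNamespace(temperature=0.7, max_tokens=None)
user_llm_params = {"max_tokens": 50}  # as it would arrive via GenerationOptions

# Internal call (e.g. generate_user_intent): temperature pinned,
# the caller's max_tokens deliberately NOT applied.
with llm_params(llm, temperature=0.0):
    assert llm.temperature == 0.0 and llm.max_tokens is None

# Final "generation" call: the caller's llm_params are honored.
with llm_params(llm, **user_llm_params):
    assert llm.max_tokens == 50
```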
```python
generation_options: GenerationOptions = generation_options_var.get()
with llm_params(
    llm, **((generation_options and generation_options.llm_params) or {})
):
```
Same as above.
```python
generation_options: GenerationOptions = generation_options_var.get()
with llm_params(
    llm, **((generation_options and generation_options.llm_params) or {})
):
```
Same as above.
```python
generation_options: GenerationOptions = generation_options_var.get()
additional_params = {
    **((generation_options and generation_options.llm_params) or {}),
    "temperature": self.config.lowest_temperature,
}
```
To remove `llm_params`.
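If that suggestion is read correctly, the hunk above would then keep only the pinned temperature, something like:

```python
additional_params = {
    "temperature": self.config.lowest_temperature,
}
```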
```python
generation_options: GenerationOptions = generation_options_var.get()
additional_params = {
    **((generation_options and generation_options.llm_params) or {}),
    "temperature": self.config.lowest_temperature,
}
```
To remove `llm_params`.
```python
generation_options: GenerationOptions = generation_options_var.get()
additional_params = {
    **((generation_options and generation_options.llm_params) or {}),
    "temperature": self.config.lowest_temperature,
}
```
Here it's ok to use the `llm_params` because everything is happening in a single call.
On second thought, this is tricky too. For example, if the original prompt has `max_tokens` set to 1, say because we want the LLM to respond with yes/no, then this would break the logic: the single LLM call mode needs to alter the prompt and generate more tokens. I don't have a solution for this.
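A tiny worked example of the conflict (the values are made up for illustration): naively merging the caller's params into the single-call parameters forwards a token budget that only made sense for the original prompt.

```python
# The caller budgets 1 token because their original prompt expects "yes"/"no".
user_llm_params = {"max_tokens": 1}

# Single-call mode rewrites the prompt to produce intent, next steps, and
# the final message in one completion, yet the inherited cap still applies,
# so the combined output would be truncated after a single token.
single_call_params = {**user_llm_params, "temperature": 0.0}
print(single_call_params)  # {'max_tokens': 1, 'temperature': 0.0}
```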
Closing this for now. The right approach for when to pass the original params to "sub-tasks" needs more thought and design. Thanks @uvnikgupta for trying to sort this out. We'll move this to the backlog for now.
`llm_params` from `GenerationOptions` are not considered for the following operations:
- `generate_user_intent`
- `generate_next_step`
- `generate_value`