
Fix for issue #505 #510

Closed
wants to merge 3 commits into from

Conversation

uvnikgupta

llm_params from GenerationOptions are not considered during the following operations:
generate_user_intent,
generate_next_step,
generate_value

This fix applies the llm_params in generate_user_intent, generate_next_step, and generate_value, while taking all other parameters from the GenerationOptions.
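
For context, llm_params reach the toolkit through the GenerationOptions object passed at generation time, roughly as in the sketch below (the exact generate() signature and the import paths are assumptions for illustration, not taken from this PR):

from nemoguardrails import LLMRails, RailsConfig
from nemoguardrails.rails.llm.options import GenerationOptions  # assumed import path

config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# llm_params is expected to influence the underlying LLM calls,
# e.g. sampling temperature and the maximum number of generated tokens.
response = rails.generate(
    messages=[{"role": "user", "content": "Hello!"}],
    options=GenerationOptions(llm_params={"temperature": 0.2, "max_tokens": 100}),
)
print(response)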
@drazvan drazvan self-assigned this Jun 6, 2024

@drazvan drazvan left a comment


Thanks for this @uvnikgupta!
Looking closer into this, the llm_params from the generation options should actually not be used for the generate_user_intent, generate_next_steps, generate_value tasks. But we need the fix for generate_intent_next_steps.

-    with llm_params(llm, temperature=self.config.lowest_temperature):
+    generation_options: GenerationOptions = generation_options_var.get()
+    with llm_params(
+        llm, **((generation_options and generation_options.llm_params) or {})

The generation_options.llm_params should not be used here. The llm_params are meant for the "generation LLM call", i.e., the one that produces the result that the user ultimately sees. This could, for example, set the maximum number of tokens. For the other "internal" calls, only certain parameters should be set, e.g., temperature 0.
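
In other words, the intended split looks roughly like the sketch below (the import paths and the llm.ainvoke placeholder are assumptions standing in for the library's own helpers):

from nemoguardrails.context import generation_options_var  # assumed path
from nemoguardrails.llm.params import llm_params  # assumed path
from nemoguardrails.rails.llm.options import GenerationOptions  # assumed path


async def run_internal_task(llm, prompt, lowest_temperature: float = 0.001):
    # "Internal" tasks (generate_user_intent, generate_next_step, generate_value)
    # pin only library-controlled parameters, e.g. a near-zero temperature.
    with llm_params(llm, temperature=lowest_temperature):
        return await llm.ainvoke(prompt)  # placeholder for the actual LLM call


async def run_generation_task(llm, prompt):
    # The "generation" call, whose output the user ultimately sees, is where
    # llm_params from GenerationOptions (e.g. max_tokens) should be honored.
    generation_options: GenerationOptions = generation_options_var.get()
    with llm_params(
        llm, **((generation_options and generation_options.llm_params) or {})
    ):
        return await llm.ainvoke(prompt)  # placeholder for the actual LLM call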

Comment on lines 571 to 574
generation_options: GenerationOptions = generation_options_var.get()
with llm_params(
    llm, **((generation_options and generation_options.llm_params) or {})
):

Same as above.

Comment on lines +970 to +973
generation_options: GenerationOptions = generation_options_var.get()
with llm_params(
    llm, **((generation_options and generation_options.llm_params) or {})
):

Same as above.

Comment on lines +400 to +404
generation_options: GenerationOptions = generation_options_var.get()
additional_params = {
    **((generation_options and generation_options.llm_params) or {}),
    "temperature": self.config.lowest_temperature,
}

To remove llm_params.
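
i.e., with llm_params dropped, only the pinned temperature would remain (sketch):

additional_params = {
    "temperature": self.config.lowest_temperature,
}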

Comment on lines +573 to +577
generation_options: GenerationOptions = generation_options_var.get()
additional_params = {
    **((generation_options and generation_options.llm_params) or {}),
    "temperature": self.config.lowest_temperature,
}

To remove llm_params.

Comment on lines +1142 to +1147
generation_options: GenerationOptions = generation_options_var.get()
additional_params = {
    **((generation_options and generation_options.llm_params) or {}),
    "temperature": self.config.lowest_temperature,
}


Here it's ok to use the llm_params because everything is happening in a single call.


On second thought, this is tricky too. For example, if the original prompt has max_tokens set to 1, let's say because we want the LLM to respond with yes/no, then this would break the logic, as the single LLM call mode needs to alter the prompt and generate more tokens. I don't have a solution for this.
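
To make the concern concrete, a hypothetical caller could look like this (illustrative only; the import path is assumed as above):

from nemoguardrails.rails.llm.options import GenerationOptions  # assumed path

# The caller only wants a terse final answer, e.g. "yes" or "no".
options = GenerationOptions(llm_params={"max_tokens": 1})

# In single LLM call mode the prompt is rewritten so that the same completion
# also emits the user intent and next steps; forwarding max_tokens=1 to that
# call would truncate the output and break the downstream parsing.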

@drazvan drazvan added the bug Something isn't working label Jul 10, 2024
@drazvan drazvan added this to the v0.10.0 milestone Jul 10, 2024
@drazvan

drazvan commented Aug 20, 2024

Closing this for now. The right approach for when to pass the original params to "sub-tasks" needs more thinking and design. Thanks @uvnikgupta for trying to sort this. We'll move this to the backlog for now.

@drazvan drazvan closed this Aug 20, 2024