Improving Resilience of MRKL Agent #5014
Conversation
@vowelparrot kindly check out this pull request.
this looks pretty solid to me. let's maybe not change the default value of `handle_parsing_errors`
in this PR... I'm down to do it in a future one, but want to update the docs
Sure, I have changed the default value back to `False`. You can merge this PR now (@vowelparrot, @hwchase17). Also, as I asked in the previous PR #3269, do you want me to apply this feature to the react, self_ask_with_search & conversational agents too, in a future PR?
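The behavior under discussion can be sketched roughly as follows. This is a minimal, hypothetical reconstruction (the function and exception names here are illustrative, not LangChain's actual internals): with the restored default `handle_parsing_errors=False`, a parse failure propagates as an error; with `True`, it is converted into an observation string the agent can see and retry on.

```python
class OutputParserException(ValueError):
    """Raised when the LLM output does not match the expected agent format."""


def take_next_step(llm_output: str, handle_parsing_errors: bool = False) -> str:
    """Sketch of one agent step; expects an "Action:" line per the MRKL format."""
    try:
        if "Action:" not in llm_output:
            raise OutputParserException(f"Could not parse LLM output: {llm_output}")
        return llm_output.split("Action:", 1)[1].strip()
    except OutputParserException as exc:
        if not handle_parsing_errors:
            raise  # default: surface the parsing error to the caller
        # Feed the error text back to the agent as an observation so it can retry.
        return f"Invalid or incomplete response: {exc}"
```

Keeping the default at `False` preserves backward compatibility: existing callers who relied on the exception still get it, and recovery stays opt-in.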
lgtm!
Hi, this is a fix for #5014 (langchain-ai#5985). That PR forgot to add the ability to self-solve the `ValueError(f"Could not parse LLM output: {llm_output}")` error in `_atake_next_step`.
This is a highly optimized update to pull request #3269.
Summary:
This PR adds the ability to self-solve the `ValueError(f"Could not parse LLM output: {llm_output}")` error, whenever the LLM (especially gpt-3.5-turbo) does not follow the format of the MRKL Agent by returning "Action:" & "Action Input:". For a detailed explanation, look at the previous pull request.
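To illustrate when that error fires, here is a hypothetical helper (not LangChain's actual parser) that extracts the "Action:" & "Action Input:" pair the MRKL format requires; when a model like gpt-3.5-turbo replies in free-form chat instead, the match fails and the `ValueError` this PR teaches the agent to recover from is raised:

```python
import re


def parse_mrkl(llm_output: str) -> tuple[str, str]:
    # The MRKL format expects lines like:
    #   Action: <tool name>
    #   Action Input: <tool input>
    match = re.search(r"Action:\s*(.*?)\nAction Input:\s*(.*)", llm_output, re.DOTALL)
    if match is None:
        # The error the agent can now self-solve instead of crashing.
        raise ValueError(f"Could not parse LLM output: {llm_output}")
    return match.group(1).strip(), match.group(2).strip()
```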
New Updates: