AutoGPT's and BabyAGI's prompts contain instructions describing the expected output format, but then rely on custom parsing code for the response. I suggest letting the user define the expected response format with Pydantic, and building the prompt directly from the Pydantic schema.
So the following instruction:
{
    "thoughts": {
        "text": "thought",
        "reasoning": "reasoning",
        "plan": "- short bulleted\n- list that conveys\n- long-term plan",
        "criticism": "constructive self-criticism",
        "speak": "thoughts summary to say to user"
    }
}
Could be passed by first defining and decorating the following schema:
import outlines.text as text
from pydantic import BaseModel, Field


@text.response
class Thoughts(BaseModel):
    text: str = Field(description="thought")
    reasoning: str = Field(description="reasoning")
    plan: str = Field(description="short bulleted list that conveys long-term plan")
    criticism: str = Field(description="constructive self-criticism")
    speak: str = Field(description="thoughts summary to say to users")
So we can do:
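As a hedged sketch of the idea (using plain Pydantic v2; `build_prompt` is a hypothetical helper, not the outlines API), the format instruction can be rendered from the schema's field descriptions instead of being hand-written:

```python
from pydantic import BaseModel, Field


class Thoughts(BaseModel):
    text: str = Field(description="thought")
    reasoning: str = Field(description="reasoning")
    plan: str = Field(description="short bulleted list that conveys long-term plan")
    criticism: str = Field(description="constructive self-criticism")
    speak: str = Field(description="thoughts summary to say to users")


def build_prompt(model: type[BaseModel]) -> str:
    """Render the expected-response instruction from the schema (hypothetical helper)."""
    # Pydantic v2 exposes field metadata via `model_fields`.
    lines = [
        f'    "{name}": "{field.description}"'
        for name, field in model.model_fields.items()
    ]
    return "Respond only with JSON of the form:\n{\n" + ",\n".join(lines) + "\n}"


print(build_prompt(Thoughts))
```

This keeps the prompt and the parser in sync by construction: editing a `Field` description changes both the instruction shown to the model and the schema used for validation.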
And on an LLM response:
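A hedged sketch of the response side (again assuming plain Pydantic v2, and an invented example completion): the raw LLM output is parsed and validated in one step, replacing the custom parsing code:

```python
from pydantic import BaseModel, Field


class Thoughts(BaseModel):
    text: str = Field(description="thought")
    reasoning: str = Field(description="reasoning")
    plan: str = Field(description="short bulleted list that conveys long-term plan")
    criticism: str = Field(description="constructive self-criticism")
    speak: str = Field(description="thoughts summary to say to users")


# A raw completion from the model (made-up example), expected to follow the schema.
raw = (
    '{"text": "I should list the files first", '
    '"reasoning": "I need context before acting", '
    '"plan": "- inspect repo\\n- pick a task", '
    '"criticism": "Avoid over-planning", '
    '"speak": "Let me look around first."}'
)

# model_validate_json parses the JSON and validates it against the schema,
# raising a ValidationError if the model's output does not conform.
thoughts = Thoughts.model_validate_json(raw)
print(thoughts.plan)
```

If the completion is malformed or misses a field, the `ValidationError` pinpoints exactly which field failed, which is more informative than ad-hoc string parsing.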