After changing chat_models from ChatOpenAI to ChatVertexAI, I get an error 'ValueError: SystemMessage should be the first in the history.' #628
Hi @weatherbetter, I'm assuming you are using the multi-agent notebook for this? For Vertex or other model providers, I'd replace the system messages with human messages with the content wrapped in XML, since they don't support the full OpenAI API.
Hi @hinthornw, first of all, thank you for your reply. I tried using `convert_system_message_to_human`, but it still doesn't work. Am I using the wrong method? What should I do?
I was thinking more along the lines of this:

```python
def sanitize(messages: list):
    return [
        HumanMessage(content=f"<system-message>{m.content}</system-message>")
        if m.type == "system"
        else m
        for m in messages
    ]

llm = sanitize | ChatVertexAI(model_name="gemini-pro")
```
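For anyone trying this, here is a self-contained sketch of the sanitize idea. To keep it runnable without cloud credentials, it uses minimal stand-in message classes in place of `langchain_core`'s `SystemMessage`/`HumanMessage`; with the real library the `sanitize` function is identical and is piped in front of `ChatVertexAI` as shown above.

```python
from dataclasses import dataclass

# Stand-ins for langchain_core message classes (illustrative only).
@dataclass
class SystemMessage:
    content: str
    type: str = "system"

@dataclass
class HumanMessage:
    content: str
    type: str = "human"

def sanitize(messages: list) -> list:
    """Rewrap system messages as human messages with the content wrapped
    in XML tags, for providers that reject mid-history system messages."""
    return [
        HumanMessage(content=f"<system-message>{m.content}</system-message>")
        if m.type == "system"
        else m
        for m in messages
    ]

history = [
    SystemMessage(content="You are a web researcher."),
    HumanMessage(content="Find the weather."),
]
print([m.content for m in sanitize(history)])
# ['<system-message>You are a web researcher.</system-message>', 'Find the weather.']
```

Note that `sanitize | ChatVertexAI(...)` works because piping a plain function into a runnable wraps it in a `RunnableLambda`, so the message list is rewritten on every invocation, not just once at setup.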
Same issue with my code.

@chocky18 did you try the suggested fix?

@hinthornw I tried, but it still doesn't work. The same error occurs.
```python
import functools
import os
from langgraph.graph import END, StateGraph
# agent_node, create_agent: helper functions (truncated in the original post)

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "json key"

def setup_supervisor_chain(members: List[str], options: List[str], prompt: str):
    ...

# Example usage:
members = ["Researcher", "Coder"]
prompt = ChatPromptTemplate.from_messages([...])
supervisor_chain = setup_supervisor_chain(members, options, prompt)

# Initialize the language model
llm = ChatVertexAI(model="gemini-pro", project="agents-426216")

# Create the supervisor chain
supervisor_chain = (
    prompt
    | llm.bind_tools([function_def], function_call="route")
    | JsonOutputFunctionsParser()
)

def supervisor_chain(state):
    state["next"] = "FINISH"
    return state

# Define a simple _convert_to_prompt function
def _convert_to_prompt(part: str) -> Dict[str, Any]:
    ...

# Define the agents and nodes
research_agent = create_agent(llm, [tavily_tool], "You are a web researcher.")
code_agent = create_agent(...)

# Define the workflow
workflow = StateGraph(AgentState)

# Define members and add edges
members = ["Researcher", "Coder"]

# Define conditional map
conditional_map = {k: k for k in members}

# Set entry point
workflow.set_entry_point("supervisor")

# Compile the workflow
graph = workflow.compile()

# Define the function to convert messages
def _convert_to_parts(message: BaseMessage) -> List[Dict[str, Any]]:
    ...

def convert_messages_to_vertex_format(history: List[BaseMessage], convert_system_message_to_human: bool = False):
    ...

def _parse_tool_message_content(content: Any) -> Dict[Any, Any]:
    ...

def _parse_content(raw_content: Any) -> Dict[Any, Any]:
    ...

# Prepare the message sequence
initial_message = SystemMessage(content="Starting the workflow")

# Convert messages
system_instruction, vertex_messages = convert_messages_to_vertex_format(...)

# Debugging: print the converted messages
print("System instruction:", system_instruction)
print("Vertex messages:", vertex_messages)

input_messages = [initial_message, human_message]

# Stream the graph with the initial SystemMessage followed by other messages
try:
    ...
except ValueError as e:
    ...
```
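For reference, the conversion this thread keeps circling around can be sketched as a plain function: lift a leading SystemMessage out as the Vertex system instruction, and downgrade any later system message to a human message. This is a hedged sketch with a stand-in message class, not the actual langchain-google-vertexai implementation; the names `convert_messages_to_vertex_format` and `convert_system_message_to_human` simply mirror the ones used above.

```python
from dataclasses import dataclass

@dataclass
class Message:  # stand-in for BaseMessage (illustrative only)
    content: str
    type: str

def convert_messages_to_vertex_format(history, convert_system_message_to_human=False):
    """Split history into (system_instruction, messages) so that no
    system message remains anywhere past the first position."""
    system_instruction = None
    out = []
    for i, m in enumerate(history):
        if m.type == "system":
            if i == 0:
                # Leading system message becomes the system instruction.
                system_instruction = m.content
            elif convert_system_message_to_human:
                # Mid-history system messages get downgraded to human.
                out.append(Message(content=m.content, type="human"))
            else:
                raise ValueError("SystemMessage should be the first in the history.")
        else:
            out.append(m)
    return system_instruction, out

history = [
    Message("Starting the workflow", "system"),
    Message("Quais são os componentes do catálogo?", "human"),
]
si, msgs = convert_messages_to_vertex_format(history)
print(si)  # Starting the workflow
```

The key point is where the `ValueError` comes from: it is raised for a system message that is *not* at index 0, which is exactly what happens when a graph node injects one mid-history.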
Check this, it's working.
Hey! I got your code, but it didn't work. Can you help me?

```python
import os
from typing import Sequence, TypedDict, List, Optional, Any, Dict
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

class AgentState(TypedDict):
    ...

# Function to set up the supervisor chain
def setup_supervisor_chain(members: List[str], options: List[str], prompt: str):
    ...

def create_agent(llm, tools, system_message):
    ...

def agent_node(state, agent, name):
    ...

members = ["Components", "Records", "Details"]

# Our team supervisor is an LLM node. It just picks the next agent to process
# and decides when the work is completed
options = ["FINISH"] + members

# Using openai function calling can make output parsing easier for us
prompt = ChatPromptTemplate.from_messages([...])
supervisor_chain = setup_supervisor_chain(members, options, prompt)

llm = ChatVertexAI(model=model, model_kwargs=model_kwargs, safety_settings=safety_settings)

def supervisor_chain(state):
    ...

def _convert_to_prompt(part: str) -> Dict[str, Any]:
    ...

def create_and_bind_agent(llm, tools, system_message, name):
    ...

components_agent = create_and_bind_agent(llm, [list_components], "Você é o Agente de Componentes da uMov.me. Você deve fornecer informações sobre os componentes disponíveis no catálogo uMov.me.", "Components")

tools = [get_component_details, list_components, list_component_records]

workflow = StateGraph(AgentState)
for member in members:
    ...

# The supervisor populates the "next" field in the graph state
# which routes to a node or finishes
conditional_map = {k: k for k in members}

# Finally, add entrypoint
workflow.set_entry_point("supervisor")
graph = workflow.compile()

def _convert_to_parts(message: BaseMessage) -> List[Dict[str, Any]]:
    ...

def convert_messages_to_vertex_format(history: List[BaseMessage], convert_system_message_to_human: bool = False):
    ...

def _parse_tool_message_content(content: Any) -> Dict[Any, Any]:
    ...

def _parse_content(raw_content: Any) -> Dict[Any, Any]:
    ...

initial_message = SystemMessage(content="Starting the workflow")

# Convert messages
system_instruction, vertex_messages = convert_messages_to_vertex_format(...)

# Debugging: print the converted messages
print("System instruction:", system_instruction)

try:
    ...
except ValueError as e:
    ...
```

Answer: `{'supervisor': {'messages': [SystemMessage(content='Starting the workflow'), HumanMessage(content='Quais são os componentes do catálogo?')], 'next': 'FINISH'}}`
Can you help me? Please, I got the same error. I'm using ChatVertexAI.

```python
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.output_parsers.openai_functions import JsonOutputFunctionsParser

def create_agent(llm: ChatVertexAI, tools: list, system_prompt: str):
    ...

def agent_node(state, agent, name):
    ...

class AgentState(TypedDict):
    ...

members = ["Components", "Records", "Details"]

# Our team supervisor is an LLM node. It just picks the next agent to process
# and decides when the work is completed
options = ["FINISH"] + members

# Using openai function calling can make output parsing easier for us
function_def = {...}

llm = ChatVertexAI(model_name="gemini-1.5-pro-001", convert_system_message_to_human=True)

supervisor_chain = (...)

components_agent = create_agent(llm, [list_components], "Você é o Agente de Componentes da uMov.me. Você deve fornecer informações sobre os componentes disponíveis no catálogo uMov.me.")

tools = [get_component_details, list_components, list_component_records]

workflow = StateGraph(AgentState)
for member in members:
    ...

# The supervisor populates the "next" field in the graph state
# which routes to a node or finishes
conditional_map = {k: k for k in members}

# Finally, add entrypoint
workflow.set_entry_point("supervisor")
graph = workflow.compile()

def sanitize(messages: list):
    ...

messages = [HumanMessage(content="Quais são os componentes disponíveis no catálogo?", type="human")]
sanitized_messages = sanitize(messages)
input_dict = {"messages": sanitized_messages}  # Wrapping the messages in a dict

for s in graph.stream(input_dict):  # Passing the dict as the argument
    ...
```

Error: `ValueError: SystemMessage should be the first in the history.`

I really need this code as soon as possible, thanks anyway!
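One thing worth noting about the snippet above: sanitizing only the initial input cannot help if prompt templates or agents inside the graph inject their own SystemMessage mid-history, which is where the error actually fires. A defensive option (an assumption on my part, not the library's behavior) is to normalize right before the model call: merge all system content into a single leading system message. Sketch with a stand-in message class:

```python
from dataclasses import dataclass

@dataclass
class Msg:  # stand-in message class (illustrative only)
    content: str
    type: str

def system_first(messages):
    """Merge every system message into one leading system message,
    preserving the relative order of the remaining messages."""
    system_parts = [m.content for m in messages if m.type == "system"]
    rest = [m for m in messages if m.type != "system"]
    if not system_parts:
        return rest
    return [Msg("\n".join(system_parts), "system")] + rest

history = [
    Msg("You pick the next agent.", "system"),
    Msg("Quais são os componentes disponíveis no catálogo?", "human"),
    Msg("Route to Components.", "system"),  # mid-history system message
]
normalized = system_first(history)
print([m.type for m in normalized])  # ['system', 'human']
```

With the real library this would be piped in front of the model (e.g. `system_first | llm`) inside `create_agent`, so it runs on the messages the graph actually produces, not just on the user's initial input.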
@hinthornw Can you help me? Please, thanks anyway!
I have the same issue.
Same problem here |
Checked other resources
Example Code
Description
After changing chat_models from ChatOpenAI to ChatVertexAI, I get an error 'ValueError: SystemMessage should be the first in the history.'
file : LLMCompiler.ipynb
System Info
python -m langchain_core.sys_info