I am using the new create_agent() function from LangChain 1.0 to build an agent, but I am getting inconsistent types for the response content.
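For reference, the agent is created roughly like this (simplified: my real tools and system prompt are omitted, and the in-memory checkpointer is just for the example):
# Simplified agent setup; the actual tool list and prompt are omitted.
from langchain.agents import create_agent
from langchain.chat_models import init_chat_model
from langgraph.checkpoint.memory import InMemorySaver

model = init_chat_model("google_genai:gemini-2.5-flash")
agent = create_agent(model, tools=[], checkpointer=InMemorySaver())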
Case 1: str as output
When the user query is straightforward, such as "What is a banana?", the content property is a plain str:
response3 = agent.invoke(
    {"messages": [{"role": "user", "content": "What is a banana?"}]},
    {"configurable": {"thread_id": "2"}},
)
print(response3["messages"][-1].content)
Output:
A banana is an elongated, edible fruit botanically a berry, produced by several kinds of large herbaceous flowering plants in the genus Musa.
Case 2: list as output
But if the user query is ambiguous, such as "What's its name?", the content property is a list:
response3 = agent.invoke(
    {"messages": [{"role": "user", "content": "What's its name?"}]},
    {"configurable": {"thread_id": "2"}},
)
print(response3["messages"][-1].content)
The output is:
[{'type': 'text', 'text': 'I\'m sorry, I don\'t understand what "it" refers to. Could you please provide more context?', 'extras': {'signature': 'CscCAdHtim+SJIpPCDrUbhw9W'}}]
This happens only when I am using gemini-2.5-flash. It does not happen with OpenAI models.
This inconsistency can cause unexpected bugs downstream. Is there a proper way to handle it without conditional type checks?
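The only workaround I can think of is a manual normalization helper, which is exactly the kind of conditional type check I'd like to avoid (content_to_text is just my own sketch, assuming content is either a str or a list of {"type": "text", ...} blocks):
# My own normalization sketch; handles both str content and a list of text blocks.
def content_to_text(content) -> str:
    if isinstance(content, str):
        return content
    return "".join(
        block.get("text", "")
        for block in content
        if isinstance(block, dict) and block.get("type") == "text"
    )

print(content_to_text(response3["messages"][-1].content))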
Edit: This seems to happen only with gemini-2.5-flash. I tried gemini-2.0-flash and I got consistent str types.