I am following the tutorial given here, trying to set up a very simple tool-calling workflow. I have followed the code exactly, changing only the tools themselves:
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint
from langchain_core.messages import HumanMessage, AIMessage
from langchain_core.tools import tool


@tool
def duddify(first_number: int, second_number: int):
    """Concatenate two numbers twice."""
    return int(f"{first_number}{second_number}{first_number}{second_number}")


@tool
def pippify(number: int):
    """Repeats the number three times."""
    return f"{number} {number} {number}"


llm = ChatHuggingFace(
    llm=HuggingFaceEndpoint(
        repo_id="mistralai/Mistral-7B-Instruct-v0.3",
        task="text-generation",
    ),
    verbose=True,
)
llm_with_tools = llm.bind_tools([duddify, pippify])

query = "pippify the number 102"
messages = [HumanMessage(content=query)]

# First invocation: the model decides which tool to call
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)

# Run each requested tool; invoking a tool with the tool_call dict
# returns a ToolMessage, which is appended to the conversation
for tool_call in ai_msg.tool_calls:
    selected_tool = {"duddify": duddify, "pippify": pippify}[tool_call["name"].lower()]
    tool_msg = selected_tool.invoke(tool_call)
    messages.append(tool_msg)

print(f"Messages:\n{messages}\n")
for message in messages:
    print(f"{message}\ntype {type(message)}\n")

# Second invocation: pass the tool results back to the model
final_response = llm_with_tools.invoke(messages)
print(final_response)
After the tool call, there are three messages in the messages list: the HumanMessage (the query), the AIMessage (from the first invocation, containing the tool call), and the ToolMessage (the tool's result). This is shown by the print output below:
Messages:
[HumanMessage(content='pippify the number 102', additional_kwargs={}, response_metadata={}), AIMessage(content='', additional_kwargs={'tool_calls': [ChatCompletionOutputToolCall(function=ChatCompletionOutputFunctionDefinition(arguments={'number': 102}, name='pippify', description=None), id='0', type='function')]}, response_metadata={'token_usage': ChatCompletionOutputUsage(completion_tokens=19, prompt_tokens=277, total_tokens=296), 'model': '', 'finish_reason': 'stop'}, id='run-b00d61a6-0a4f-4b02-8cac-30663ae5e5e3-0', tool_calls=[{'name': 'pippify', 'args': {'number': 102}, 'id': '0', 'type': 'tool_call'}]), ToolMessage(content='102 102 102', name='pippify', tool_call_id='0')]
content='pippify the number 102' additional_kwargs={} response_metadata={}
type <class 'langchain_core.messages.human.HumanMessage'>
content='' additional_kwargs={'tool_calls': [ChatCompletionOutputToolCall(function=ChatCompletionOutputFunctionDefinition(arguments={'number': 102}, name='pippify', description=None), id='0', type='function')]} response_metadata={'token_usage': ChatCompletionOutputUsage(completion_tokens=19, prompt_tokens=277, total_tokens=296), 'model': '', 'finish_reason': 'stop'} id='run-f65ed238-9c6f-42f0-84f1-c7b521920f09-0' tool_calls=[{'name': 'pippify', 'args': {'number': 102}, 'id': '0', 'type': 'tool_call'}]
type <class 'langchain_core.messages.ai.AIMessage'>
content='102 102 102' name='pippify' tool_call_id='0'
type <class 'langchain_core.messages.tool.ToolMessage'>
This seems to be working well. However, the final invocation of llm_with_tools gives this error:
Template error: unknown filter: filter string is unknown (in <string>:79)
Full error message (this particular traceback is from invoking llm_with_tools.invoke([messages[-1]]), which fails in the same way):
Traceback (most recent call last):
File "C:\Users\path\.venv\Lib\site-packages\huggingface_hub\utils\_http.py", line 406, in hf_raise_for_status
response.raise_for_status()
File "C:\Users\path\.venv\Lib\site-packages\requests\models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.3/v1/chat/completions
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\path\testing_langchain_agents.py", line 57, in <module>
print(f"\n{llm_with_tools.invoke([messages[-1]])}\n")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\path\.venv\Lib\site-packages\langchain_core\runnables\base.py", line 5354, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "C:\Users\path\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 286, in invoke
self.generate_prompt(
File "C:\Users\path\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 786, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\path\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 643, in generate
raise e
File "C:\Users\path\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 633, in generate
self._generate_with_cache(
File "C:\Users\path\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 851, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "C:\Users\path\.venv\Lib\site-packages\langchain_huggingface\chat_models\huggingface.py", line 370, in _generate
answer = self.llm.client.chat_completion(messages=message_dicts, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\path\.venv\Lib\site-packages\huggingface_hub\inference\_client.py", line 892, in chat_completion
data = self.post(model=model_url, json=payload, stream=stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\path\.venv\Lib\site-packages\huggingface_hub\inference\_client.py", line 306, in post
hf_raise_for_status(response)
File "C:\Users\path\.venv\Lib\site-packages\huggingface_hub\utils\_http.py", line 477, in hf_raise_for_status
raise _format(HfHubHTTPError, str(e), response) from e
huggingface_hub.errors.HfHubHTTPError: 422 Client Error: Unprocessable Entity for url: https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.3/v1/chat/completions (Request ID: z1qkv0HDiNWIwsBGxA-xp)
Template error: unknown filter: filter string is unknown (in <string>:79)
I have searched all over the web but can't find any similar problems. I have:

- uninstalled and reinstalled langchain, langchain_huggingface, and other packages
- created venvs with other Python versions and tested the same code
  - the original version was 3.11
  - tried 3.12: the same error occurred
  - tried 3.13: torch could not be installed
- tried invoking with different messages, and found that the error occurs only when a ToolMessage object is in the list of messages (see the reproduction sketch after this list). For example:
  - llm_with_tools.invoke(HumanMessage(content=query)) works
  - llm_with_tools.invoke([HumanMessage(content=query)]) works
  - llm_with_tools.invoke([HumanMessage(content=query), ai_msg]) works
  - llm_with_tools.invoke(messages[-1]) does not work, because the input must be a PromptValue, str, or list of BaseMessages, not a bare ToolMessage. This is understandable. Note that messages[-1] is the ToolMessage most recently appended to the messages list.
  - llm_with_tools.invoke([messages[-1]]) does not work and raises the error above.

So evidently the error is triggered by a ToolMessage being passed as part of a list into llm_with_tools.invoke().
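To make the failing case easy to reproduce without re-running the whole script, here is a stripped-down sketch. The ToolMessage is built by hand with the same fields as the one printed above, and llm_with_tools is the bound model from my script:

from langchain_core.messages import HumanMessage, ToolMessage

# Hand-built copy of the ToolMessage printed above
tool_msg = ToolMessage(content="102 102 102", name="pippify", tool_call_id="0")

llm_with_tools.invoke([HumanMessage(content="pippify the number 102")])  # works
llm_with_tools.invoke([tool_msg])  # raises the 422 "Template error: unknown filter"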
It is not a problem with the HuggingFace model or my login credentials, as I can easily invoke the model with strings and HumanMessages. HfHubHTTPError itself does not give any useful information; it seems to be thrown as a catch-all for miscellaneous errors.
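For anyone wanting more detail than the traceback shows: as far as I understand, HfHubHTTPError subclasses requests.HTTPError, so the raw HTTP response should be attached to the exception. A minimal sketch:

from huggingface_hub.errors import HfHubHTTPError

try:
    final_response = llm_with_tools.invoke(messages)
except HfHubHTTPError as e:
    # HfHubHTTPError subclasses requests.HTTPError, so the raw response is attached
    print(e.response.status_code)  # 422 in my case
    print(e.response.text)         # should contain the "Template error" body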
I have also tried searching the source code for an explanation of why a ToolMessage cannot be part of the list, but to no avail.

Can anyone help me, please? All replies are much appreciated.