
I'm writing an agent using the LangChain and LangGraph libraries. I want my agent to be able to interact with files, but only inside a local directory, so I am writing a family of tools that are member functions of a class instance that stores the location of the local directory (and can later be expanded with tracing/rollback functionality). The structure looks like this:

from pathlib import Path
from langchain_core.tools import tool

class FileInterface:
    def __init__(self):
        self.localdir = Path.cwd()  # remember the local directory

    def get_tools(self):
        return [self.read_file, ...]

    @tool
    def read_file(self, path: str) -> str:
        """Tool for reading the file at path."""
        if ...:  # check that path is inside self.localdir
            return (self.localdir / path).read_text()

etc.

I then pass the list to the langchain.agents.create_agent function to test the capabilities of this simple agent:

file_interface = FileInterface()
agent = create_agent(
            model=init_model(...),
            state_schema=MyState,
            tools=file_interface.get_tools(),
            system_prompt="You are a helpful agent [...]",
            checkpointer=InMemorySaver()
        )

print(agent.invoke({"messages": [{"role": "human", "content": "tell me the contents of file example.txt"}]}, context=context))

But when I invoke the agent with a query that prompts it to use one of these tools, I get the error:

TypeError: StructuredTool._run() got multiple values for argument 'self'

I would like to be able to have a family of tools that all refer to the data inside of an object. What is the intended way to achieve this?

Here is the full traceback of an agent call with tool invocation:

Traceback (most recent call last):
  File "/home/ubuntu/dev/elster/.venv/bin/elster", line 10, in <module>
    sys.exit(main())
             ~~~~^^
  File "/home/ubuntu/dev/elster/src/elster/main.py", line 49, in main
    resp = agent.invoke(args.command,args.request)
  File "/home/ubuntu/dev/elster/src/elster/agent.py", line 61, in invoke
    return self.agent.invoke(
           ~~~~~~~~~~~~~~~~~^
        {"messages":[{"role":"human","content":request}]},
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        config=CONFIG
        ^^^^^^^^^^^^^
    )
    ^
  File "/home/ubuntu/dev/elster/.venv/lib/python3.13/site-packages/langgraph/pregel/main.py", line 3094, in invoke
    for chunk in self.stream(
                 ~~~~~~~~~~~^
        input,
        ^^^^^^
    ...<10 lines>...
        **kwargs,
        ^^^^^^^^^
    ):
    ^
  File "/home/ubuntu/dev/elster/.venv/lib/python3.13/site-packages/langgraph/pregel/main.py", line 2679, in stream
    for _ in runner.tick(
             ~~~~~~~~~~~^
        [t for t in loop.tasks.values() if not t.writes],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<2 lines>...
        schedule_task=loop.accept_push,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ):
    ^
  File "/home/ubuntu/dev/elster/.venv/lib/python3.13/site-packages/langgraph/pregel/_runner.py", line 167, in tick
    run_with_retry(
    ~~~~~~~~~~~~~~^
        t,
        ^^
    ...<10 lines>...
        },
        ^^
    )
    ^
  File "/home/ubuntu/dev/elster/.venv/lib/python3.13/site-packages/langgraph/pregel/_retry.py", line 42, in run_with_retry
    return task.proc.invoke(task.input, config)
           ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/dev/elster/.venv/lib/python3.13/site-packages/langgraph/_internal/_runnable.py", line 656, in invoke
    input = context.run(step.invoke, input, config, **kwargs)
  File "/home/ubuntu/dev/elster/.venv/lib/python3.13/site-packages/langgraph/_internal/_runnable.py", line 400, in invoke
    ret = self.func(*args, **kwargs)
  File "/home/ubuntu/dev/elster/.venv/lib/python3.13/site-packages/langchain/tools/tool_node.py", line 702, in _func
    outputs = list(executor.map(self._run_one, tool_calls, input_types, tool_runtimes))
  File "/home/ubuntu/.local/share/uv/python/cpython-3.13.9-linux-x86_64-gnu/lib/python3.13/concurrent/futures/_base.py", line 619, in result_iterator
    yield _result_or_cancel(fs.pop())
          ~~~~~~~~~~~~~~~~~^^^^^^^^^^
  File "/home/ubuntu/.local/share/uv/python/cpython-3.13.9-linux-x86_64-gnu/lib/python3.13/concurrent/futures/_base.py", line 317, in _result_or_cancel
    return fut.result(timeout)
           ~~~~~~~~~~^^^^^^^^^
  File "/home/ubuntu/.local/share/uv/python/cpython-3.13.9-linux-x86_64-gnu/lib/python3.13/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ~~~~~~~~~~~~~~~~~^^
  File "/home/ubuntu/.local/share/uv/python/cpython-3.13.9-linux-x86_64-gnu/lib/python3.13/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/home/ubuntu/.local/share/uv/python/cpython-3.13.9-linux-x86_64-gnu/lib/python3.13/concurrent/futures/thread.py", line 59, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/ubuntu/dev/elster/.venv/lib/python3.13/site-packages/langchain_core/runnables/config.py", line 546, in _wrapped_fn
    return contexts.pop().run(fn, *args)
           ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
  File "/home/ubuntu/dev/elster/.venv/lib/python3.13/site-packages/langchain/tools/tool_node.py", line 911, in _run_one
    return self._execute_tool_sync(tool_request, input_type, config)
           ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/dev/elster/.venv/lib/python3.13/site-packages/langchain/tools/tool_node.py", line 860, in _execute_tool_sync
    content = _handle_tool_error(e, flag=self._handle_tool_errors)
  File "/home/ubuntu/dev/elster/.venv/lib/python3.13/site-packages/langchain/tools/tool_node.py", line 389, in _handle_tool_error
    content = flag(e)  # type: ignore [assignment, call-arg]
  File "/home/ubuntu/dev/elster/.venv/lib/python3.13/site-packages/langchain/tools/tool_node.py", line 352, in _default_handle_tool_errors
    raise e
  File "/home/ubuntu/dev/elster/.venv/lib/python3.13/site-packages/langchain/tools/tool_node.py", line 815, in _execute_tool_sync
    response = tool.invoke(call_args, config)
  File "/home/ubuntu/dev/elster/.venv/lib/python3.13/site-packages/langchain_core/tools/base.py", line 591, in invoke
    return self.run(tool_input, **kwargs)
           ~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/dev/elster/.venv/lib/python3.13/site-packages/langchain_core/tools/base.py", line 856, in run
    raise error_to_raise
  File "/home/ubuntu/dev/elster/.venv/lib/python3.13/site-packages/langchain_core/tools/base.py", line 825, in run
    response = context.run(self._run, *tool_args, **tool_kwargs)
TypeError: StructuredTool._run() got multiple values for argument 'self'
During task with name 'tools' and id 'da9f60b1-3238-488d-27f9-b25e894f60b0'

1 Answer


You get that error because @tool does not support instance methods. When you decorate a method like:

@tool
def read_file(self, path: str) -> str:
    ...

LangChain wraps the function as a StructuredTool, but the function's signature still contains `self`, so `self` ends up in the tool's input schema. At invocation time the tool passes the model-supplied arguments to its (already bound) `_run` method, which then receives a second value for `self`, and thus:

TypeError: StructuredTool._run() got multiple values for argument 'self'

In a nutshell: the function you register as a tool must not have `self` in its signature; it needs to be a plain callable rather than a method.
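
You can see the leaked `self` directly by inspecting the generated tool's schema. A minimal sketch (the Demo class is hypothetical, and the exact schema output can vary between langchain-core versions):

from langchain_core.tools import tool

class Demo:
    @tool
    def read_file(self, path: str) -> str:
        """Read the file at path."""
        return path

# The decorator ran on the plain function at class-definition time, so the
# StructuredTool is just a class attribute and `self` was never bound away.
print(Demo.read_file.args)
# expected: something like {'self': {...}, 'path': {'title': 'Path', 'type': 'string'}}
# `self` is part of the input schema, so the model is free to supply a value for it.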

So, to fix this you can wrap the instance method in a closure that captures `self`:

class FileInterface:

    def __init__(self, base_dir: str):
        self.base_dir = Path(base_dir)

    def _read_file(self, filename: str) -> str:
        path = self.base_dir / filename
        return path.read_text()

    def tools(self):
        # ---- you must wrap in a closure like this ↓ ----
        @tool
        def read_file(filename: str) -> str:
            """Read a file from disk inside base_dir"""
            return self._read_file(filename)

        return [read_file]
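
As an aside, an equivalent alternative (not from the original answer) is to build the tool from the already-bound method at runtime. By the time tools() runs, self._read_file is bound, so `self` is no longer part of its signature and the inferred schema only exposes filename. A sketch using StructuredTool.from_function, which takes the tool's name and description from the method name and docstring:

from pathlib import Path
from langchain_core.tools import StructuredTool


class FileInterface:

    def __init__(self, base_dir: str):
        self.base_dir = Path(base_dir)

    def _read_file(self, filename: str) -> str:
        """Read a file from disk inside base_dir"""
        path = self.base_dir / filename
        return path.read_text()

    def tools(self):
        # self._read_file is a bound method here: inspect.signature() no
        # longer reports `self`, so the schema contains only `filename`.
        return [StructuredTool.from_function(self._read_file)]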

Here's a full working solution using the closure approach:

from pathlib import Path

from langchain_core.tools import tool
from langchain.agents import create_agent
from langchain_ollama.chat_models import ChatOllama


class FileInterface:

    def __init__(self, base_dir: str):
        self.base_dir = Path(base_dir)

    def _read_file(self, filename: str) -> str:
        path = self.base_dir / filename
        return path.read_text()

    def tools(self):
        # ---- you must wrap in a closure like this ↓ ----
        @tool
        def read_file(filename: str) -> str:
            """Read a file from disk inside base_dir"""
            return self._read_file(filename)

        return [read_file]


# -------------- usage --------------
# 1) instantiate your class
fs = FileInterface(".")

# 2) define your model

llm = ChatOllama(
    model="llama3.2:latest"
)

# 3) build an agent with your tools
agent = create_agent(llm, tools=fs.tools())
# 4) call it

response = agent.invoke({"messages": [{"role": "user", "content": "Open the file README.md and summarize it in one bullet point."}]})
print(response)

output (response):

{
  'messages': [
    HumanMessage(content='Open the file README.md and summarize it in one bullet point.',
    additional_kwargs={},
    response_metadata={},
    id='c478a0e7-1ba7-453c-886e-d608809f0e44'),
    AIMessage(content='',
    additional_kwargs={},
    response_metadata={
      'model': 'llama3.2:latest',
      'created_at': '2025-11-04T06:10:16.1762027Z',
      'done': True,
      'done_reason': 'stop',
      'total_duration': 1753878000,
      'load_duration': 129724100,
      'prompt_eval_count': 164,
      'prompt_eval_duration': 1115885900,
      'eval_count': 18,
      'eval_duration': 485763600,
      'model_name': 'llama3.2:latest',
      'model_provider': 'ollama'
    },
    id='lc_run--d7091ec6-a07a-4e95-b0f6-45f9f00adcaf-0',
    tool_calls=[
      {
        'name': 'read_file',
        'args': {
          'filename': 'README.md'
        },
        'id': '3c96bb95-4e77-4a80-9b41-a21ed5016f8a',
        'type': 'tool_call'
      }
    ],
    usage_metadata={
      'input_tokens': 164,
      'output_tokens': 18,
      'total_tokens': 182
    }),
    ToolMessage(content='# Tools\n\nMany AI applications interact with users via natural language. [... rest of the LangChain tools documentation page elided ...]',
    name='read_file',
    id='8be31b19-73f7-4d6e-aa1d-6f805230df8f',
    tool_call_id='3c96bb95-4e77-4a80-9b41-a21ed5016f8a'),
    AIMessage(content='* The README.md file provides an introduction to the LangChain project, its features, and usage examples. It also includes information on how to install and use the tooling, as well as troubleshooting tips and FAQs.',
    additional_kwargs={},
    response_metadata={
      'model': 'llama3.2:latest',
      'created_at': '2025-11-04T06:10:23.9334343Z',
      'done': True,
      'done_reason': 'stop',
      'total_duration': 7745077300,
      'load_duration': 216327300,
      'prompt_eval_count': 2995,
      'prompt_eval_duration': 2610589400,
      'eval_count': 44,
      'eval_duration': 4829328900,
      'model_name': 'llama3.2:latest',
      'model_provider': 'ollama'
    },
    id='lc_run--1789d615-edea-46fd-83d6-0b28f5778cc9-0',
    usage_metadata={
      'input_tokens': 2995,
      'output_tokens': 44,
      'total_tokens': 3039
    })
  ]
}

To test this code, I created a README.md file containing the LangChain tools documentation page (the contents you see in the ToolMessage above).

And you can see the tool call in the response:

tool_calls=[
      {
        'name': 'read_file',
        'args': {
          'filename': 'README.md'
        },
'id': '3c96bb95-4e77-4a80-9b41-a21ed5016f8a',
        'type': 'tool_call'
      }
    ]

which shows that the agent called the defined read_file tool and then followed the prompt to summarize its output (check the final AIMessage):

print(response["messages"][-1].content)

output:

Here is a summary of the README.md file in one bullet point:

* The LangChain project provides a set of tools that enable models to interact with external systems such as APIs, databases, or file systems using structured input. These tools extend model capabilities by letting them interface directly with the world through well-defined inputs and outputs.
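
One requirement from the question that the solution above doesn't yet enforce is keeping reads inside base_dir. A minimal sketch of that check, as a drop-in replacement for _read_file (assuming Python 3.9+ for Path.is_relative_to; the error string is just an illustration):

    def _read_file(self, filename: str) -> str:
        """Read a file from disk, refusing paths that escape base_dir."""
        path = (self.base_dir / filename).resolve()
        # resolve() collapses ".." and follows symlinks, so traversal tricks
        # like "../../etc/passwd" fail the containment check below.
        if not path.is_relative_to(self.base_dir.resolve()):
            return f"Error: {filename} is outside the allowed directory."
        return path.read_text()

Returning an error string instead of raising lets the model see why the call failed and try again with a different path.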

