2025, Oct 16 05:00

Fixing LangChain Structured Chat Agent ValueError: Use agent_scratchpad in the human prompt, not MessagesPlaceholder

Fix the LangChain ValueError on the agent_scratchpad type in structured chat agents. See why MessagesPlaceholder fails and how to fix it: put agent_scratchpad in the human prompt.

When wiring up a structured chat agent in LangChain, a deceptively small detail can trigger an exception that looks unrelated to your code: ValueError: variable agent_scratchpad should be a list of base messages, got of type <class 'str'>. The root cause is how the agent expects its intermediate reasoning to be threaded through the prompt: a structured chat agent formats agent_scratchpad as a string, so passing it through a MessagesPlaceholder fails.

Reproducing the issue

The following minimal setup invokes a fake LLM, asks it to call a simple tool and then return a final answer. The prompt is built for a structured chat agent, but agent_scratchpad is incorrectly provided as a messages placeholder.

import asyncio
import json
from langchain.agents import AgentExecutor, create_structured_chat_agent, Tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import AIMessage
from langchain_community.chat_models.fake import FakeMessagesListChatModel

# 1. Define a predictable tool

def echo_utility(text: str) -> str:
    print(f"Tool called with input: '{text}'")
    return "The tool says hello back!"

utility_catalog = [
    Tool(
        name="simple_tool",
        func=echo_utility,
        description="A simple test tool.",
    )
]

# 2. Responses following the structured chat format

mock_outputs = [
    AIMessage(
        content=json.dumps({
            "action": "simple_tool",
            "action_input": {"input": "hello"}
        })
    ),
    AIMessage(
        content=json.dumps({
            "action": "Final Answer",
            "action_input": "The tool call was successful. The tool said: 'The tool says hello back!'"
        })
    ),
]

fake_llm = FakeMessagesListChatModel(responses=mock_outputs)

# 3. Prompt with an incorrect placement of agent_scratchpad

broken_prompt = ChatPromptTemplate.from_messages([
    (
        "system",
        """Respond to the human as helpfully and accurately as possible. You have access to the following tools:

{tools}

Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).

Valid "action" values: "Final Answer" or {tool_names}

Provide only ONE action per $JSON_BLOB, as shown:

{{
  "action": $TOOL_NAME,
  "action_input": $INPUT
}}

Follow this format:

Question: input question to answer
Thought: consider previous and subsequent steps
Action:
{{
$JSON_BLOB
}}
Observation: action result
... (repeat Thought/Action/Observation as needed)
Thought: I know what to respond
Action:
{{
  "action": "Final Answer",
  "action_input": "Final response to human"
}}

Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation"""
    ),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

# 4. Agent and executor

structured_agent = create_structured_chat_agent(fake_llm, utility_catalog, broken_prompt)
runner = AgentExecutor(
    agent=structured_agent,
    tools=utility_catalog,
    verbose=True,
    handle_parsing_errors=True,
    max_iterations=3,
)

# 5. Invoke

result = asyncio.run(runner.ainvoke({"input": "call the tool"}))

Dependencies in use: langchain==0.3.27, langchain-community==0.3.27, langchain-core==0.3.74, langchain-aws==0.2.30, langchain-openai==0.3.29. Python version: 3.9.

What actually goes wrong

AgentExecutor drives a loop where each iteration builds on the previous step's output. Different agent implementations thread those intermediate steps through the prompt differently. Some agents convert intermediate steps into messages and extend the conversation with them. Others convert intermediate steps into a string and append it to the user prompt. A structured chat agent belongs to the second group: it formats agent_scratchpad as a string to be injected into the user message, not as a list of messages. Providing a MessagesPlaceholder for agent_scratchpad therefore hands that string to a slot that expects a list of base messages, which triggers the ValueError.

This distinction is documented for the structured chat agent. In practice it means that for this agent you place agent_scratchpad directly into the human prompt. By contrast, agents that rely on message-based intermediate steps use MessagesPlaceholder.

Concretely, structured_chat_agent, react_agent, self_ask_with_search_agent, the default sql_agent when agent_type is not specified, and xml_agent expect agent_scratchpad to be part of the user prompt as a string. Meanwhile, json_chat_agent, openai_tools_agent, sql_agent when agent_type is set to "tool-calling", and tool_calling_agent expect agent_scratchpad to be a messages placeholder.
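The two strategies can be illustrated in plain Python. These helper functions are hypothetical: they mimic what the agent constructors wire up internally, not LangChain's actual source, but they show why the resulting value only fits one kind of prompt slot.

```python
# Plain-Python illustration of the two scratchpad strategies.
# These helpers are hypothetical -- they mimic what the agent
# constructors do internally, not LangChain's actual source.

def scratchpad_as_string(intermediate_steps):
    """String strategy (structured chat, ReAct, XML, self-ask agents):
    fold every prior action/observation into one text block that is
    substituted for {agent_scratchpad} inside the human message."""
    text = ""
    for action_log, observation in intermediate_steps:
        text += f"{action_log}\nObservation: {observation}\nThought: "
    return text  # a str -> only fits a "{agent_scratchpad}" template slot

def scratchpad_as_messages(intermediate_steps):
    """Message strategy (tool-calling, openai-tools, json chat agents):
    turn every prior step into chat messages that fill a
    MessagesPlaceholder("agent_scratchpad") slot."""
    messages = []
    for action_log, observation in intermediate_steps:
        messages.append(("ai", action_log))
        messages.append(("tool", observation))
    return messages  # a list -> only fits a MessagesPlaceholder slot
```

Passing the output of the first function into a slot built for the second is exactly the type mismatch behind the ValueError.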

The fix

Adjust the prompt so that agent_scratchpad is part of the human message instead of a messages placeholder. Nothing else in the control flow or tool setup needs to change.

import asyncio
import json
from langchain.agents import AgentExecutor, create_structured_chat_agent, Tool
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import AIMessage
from langchain_community.chat_models.fake import FakeMessagesListChatModel

# 1. Tool definition remains the same

def echo_utility(text: str) -> str:
    print(f"Tool called with input: '{text}'")
    return "The tool says hello back!"

utility_catalog = [
    Tool(
        name="simple_tool",
        func=echo_utility,
        description="A simple test tool.",
    )
]

# 2. Mock LLM outputs remain the same

mock_outputs = [
    AIMessage(
        content=json.dumps({
            "action": "simple_tool",
            "action_input": {"input": "hello"}
        })
    ),
    AIMessage(
        content=json.dumps({
            "action": "Final Answer",
            "action_input": "The tool call was successful. The tool said: 'The tool says hello back!'"
        })
    ),
]

fake_llm = FakeMessagesListChatModel(responses=mock_outputs)

# 3. Correct placement: agent_scratchpad is appended to the user prompt

fixed_prompt = ChatPromptTemplate.from_messages([
    (
        "system",
        """Respond to the human as helpfully and accurately as possible. You have access to the following tools:

{tools}

Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).

Valid "action" values: "Final Answer" or {tool_names}

Provide only ONE action per $JSON_BLOB, as shown:

{{
  "action": $TOOL_NAME,
  "action_input": $INPUT
}}

Follow this format:

Question: input question to answer
Thought: consider previous and subsequent steps
Action:
{{
$JSON_BLOB
}}
Observation: action result
... (repeat Thought/Action/Observation as needed)
Thought: I know what to respond
Action:
{{
  "action": "Final Answer",
  "action_input": "Final response to human"
}}

Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation"""
    ),
    (
        "human",
        "{input}\n{agent_scratchpad}"
    ),
])

structured_agent = create_structured_chat_agent(fake_llm, utility_catalog, fixed_prompt)
runner = AgentExecutor(
    agent=structured_agent,
    tools=utility_catalog,
    verbose=True,
    handle_parsing_errors=True,
    max_iterations=3,
)

result = asyncio.run(runner.ainvoke({"input": "call the tool"}))

Why this detail matters

AgentExecutor composes prompts iteratively. If the agent’s design assumes that intermediate steps are concatenated into the user message, a messages placeholder breaks that assumption and the executor can’t reconcile the types. Knowing which agents append messages and which concatenate strings prevents brittle prompt wiring, spares you from confusing type errors, and keeps your tool-calling loops predictable.
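To make that loop concrete, here is a simplified sketch of an executor-style loop. The names and structure are hypothetical, not LangChain's implementation; the point is that the agent's planning step, not the executor, decides how intermediate_steps reach the prompt.

```python
# Simplified, hypothetical sketch of an AgentExecutor-style loop.
# "plan" stands in for the agent: it folds intermediate_steps into the
# prompt (as a string or as messages, depending on the agent type)
# and calls the LLM.

def run_loop(plan, tools, max_iterations=3):
    intermediate_steps = []  # (action_log, observation) pairs from prior turns
    for _ in range(max_iterations):
        decision = plan(intermediate_steps)
        if decision["action"] == "Final Answer":
            return decision["action_input"]
        observation = tools[decision["action"]](decision["action_input"])
        intermediate_steps.append((decision["log"], observation))
    return None  # hit the iteration limit without a final answer
```

Every pass re-renders the prompt from the accumulated steps, which is why the scratchpad's type must match the prompt slot it fills.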

Practical takeaways

For a structured chat agent, place agent_scratchpad inside the user message. If you switch to agents that extend the message list, move agent_scratchpad to a MessagesPlaceholder instead. If you run into the exact error described above, revisit the prompt wiring first—the fix is typically a one-line change.

The article is based on a question from StackOverflow by hitesh and an answer by cottontail.