2025, Oct 16 11:00
Fix OpenAI Agents SDK + Ollama TypeError: Cannot instantiate typing.Union by pinning openai<1.99.0
OpenAI Agents with Ollama throwing TypeError: Cannot instantiate typing.Union? See why tool calls break on openai>=1.99.0 and fix it by pinning openai<1.99.0.
Function calling with OpenAI Agents SDK wired to an Ollama backend can unexpectedly break with a TypeError coming from typing.Union. If your agent crashes on a simple tool call and the traceback ends with “TypeError: Cannot instantiate typing.Union”, you are likely hitting a compatibility gap between the agents code and the latest openai package.
Minimal setup that reproduces the crash
The following program wires an OpenAIChatCompletionsModel to an Ollama HTTP endpoint and exposes a small function tool. The logic is straightforward: create a client, register a tool, ask the agent to compute 2 + 2.
from dotenv import load_dotenv
from agents import Agent, Runner, AsyncOpenAI, OpenAIChatCompletionsModel, function_tool
import asyncio
load_dotenv(override=True)
host_url = "http://localhost:11434/v1"
api_token = "dummy"
model_id = "qwen3:4b"
@function_tool
def add_numbers(x: int, y: int) -> int:
    """Send two integers and return their sum"""
    print(f"Adding {x} and {y}")
    return x + y
async def entrypoint():
    async_client = AsyncOpenAI(base_url=host_url, api_key=api_token)
    chat_model = OpenAIChatCompletionsModel(model=model_id, openai_client=async_client)
    base_agent = Agent(
        name="basic_worker",
        instructions="You are a simple agent that can perform basic tasks",
        tools=[add_numbers],
        model=chat_model,
    )
    result = await Runner.run(base_agent, "What is 2 + 2?")
    print(result)
asyncio.run(entrypoint())
On affected setups, this fails with a traceback that ends in:
TypeError: Cannot instantiate typing.Union
What’s actually failing and why
The failure comes from how the agents code attempts to construct a tool call message. Under the hood it builds a ChatCompletionMessageToolCallParam like this:
new_tool_call = ChatCompletionMessageToolCallParam(
    id=fs["id"],
    type="function",
    function={
        "name": "file_search_call",
        "arguments": json.dumps({
            "queries": fs.get("queries", []),
            "status": fs.get("status"),
        }),
    },
)
However, in openai>=1.99.0, ChatCompletionMessageToolCallParam is no longer a concrete type: it is declared as a type alias for a Union of two TypedDicts:
ChatCompletionMessageToolCallParam: TypeAlias = Union[
    ChatCompletionMessageFunctionToolCallParam,
    ChatCompletionMessageCustomToolCallParam,
]
Attempting to instantiate a typing.Union like a class is not valid in Python. A Union is not a concrete type you can call; it’s a type expression that says the value can be one of several types. To see why this explodes, here’s the same kind of misuse in isolation:
from typing import Union
UType = Union[int, str]
UType("hello")
This reliably raises:
TypeError: Cannot instantiate typing.Union
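The version sensitivity comes down to what the alias points at. In openai releases before 1.99.0, the same name referred to a single TypedDict, and a TypedDict can be called like a constructor (it simply produces a dict), which is why the same construction used to work. The minimal sketch below uses hypothetical stand-in names to show the contrast:
from typing import TypedDict, Union

# Hypothetical stand-in for a single, concrete *Param TypedDict.
class FunctionToolCallParam(TypedDict):
    id: str
    type: str

# Calling a TypedDict works: it just builds a plain dict.
ok = FunctionToolCallParam(id="call_1", type="function")

# Calling a Union alias does not: it raises the same kind of TypeError as above.
BrokenAlias = Union[FunctionToolCallParam, dict]
BrokenAlias(id="call_1", type="function")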
In short, the code path in the agents package that constructs a ChatCompletionMessageToolCallParam by calling it is incompatible with the most recent openai package, where that name is only a Union alias. The incompatibility is tracked upstream and confirmed to be version-related.
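For reference, here is a minimal sketch of a construction that sidesteps the problem entirely: because the concrete *Param types are typing-only TypedDicts, a plain dict with the same shape carries the same data at runtime, and nothing needs to call the Union alias. The fs payload below is a hypothetical stand-in for the value the agents code receives; this only illustrates the type-level issue, it is not a patch you need to apply.
import json

# Hypothetical file-search payload, standing in for the `fs` value in the snippet above.
fs = {"id": "fs_123", "queries": ["2 + 2"], "status": "completed"}

# A plain dict with the expected keys is valid at runtime; only the Union
# alias itself cannot be called like a constructor.
new_tool_call = {
    "id": fs["id"],
    "type": "function",
    "function": {
        "name": "file_search_call",
        "arguments": json.dumps({
            "queries": fs.get("queries", []),
            "status": fs.get("status"),
        }),
    },
}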
The fix
Pin the openai package to a version below 1.99.0. According to the upstream issue, downgrading resolves the error.
If you manage dependencies with a constraints file or similar, adjust the spec like this:
openai<1.99.0
openai-agents>=0.0.15
The application code itself does not need changes; once the openai version is constrained, the agent’s tool call path stops trying to instantiate a Union and the flow proceeds as expected.
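A quick way to confirm the pin took effect is to check the installed version at runtime before running the agent:
import openai

# The fix applies when this prints a version below 1.99.0.
print(openai.__version__)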
Working code after adjusting the dependency
With openai<1.99.0 installed, the same program runs successfully:
from dotenv import load_dotenv
from agents import Agent, Runner, AsyncOpenAI, OpenAIChatCompletionsModel, function_tool
import asyncio
load_dotenv(override=True)
host_url = "http://localhost:11434/v1"
api_token = "dummy"
model_id = "qwen3:4b"
@function_tool
def add_numbers(x: int, y: int) -> int:
    """Send two integers and return their sum"""
    print(f"Adding {x} and {y}")
    return x + y
async def entrypoint():
    async_client = AsyncOpenAI(base_url=host_url, api_key=api_token)
    chat_model = OpenAIChatCompletionsModel(model=model_id, openai_client=async_client)
    base_agent = Agent(
        name="basic_worker",
        instructions="You are a simple agent that can perform basic tasks",
        tools=[add_numbers],
        model=chat_model,
    )
    result = await Runner.run(base_agent, "What is 2 + 2?")
    print(result)
asyncio.run(entrypoint())
Why this matters for your stack
Typed SDKs evolve quickly, and small shifts in model definitions can invalidate assumptions in higher-level wrappers. Here, a Union-based type alias in the openai package collides with code that treats it like a concrete class. Keeping an eye on version compatibility prevents hard-to-debug crashes in production, especially when bridging multiple layers like OpenAI Agents SDK and an Ollama backend.
Takeaways
If your OpenAI Agents + Ollama integration fails with “Cannot instantiate typing.Union,” the immediate and verified remedy is to pin openai to a version below 1.99.0 while using openai-agents>=0.0.15. This eliminates the invalid Union instantiation path and restores function call handling. Track the upstream discussion for when the incompatibility is addressed and revisit the pin once the modules align.
The article is based on a question from StackOverflow by Dileep17 and an answer by Tom McLean.