Describe the bug
When using openai-agents v0.2.0 with Anthropic's Claude models (either through AsyncOpenAI with Anthropic's base URL or through LiteLLM), tool calls fail with an error indicating that the tool ID doesn't match Anthropic's required pattern `^[a-zA-Z0-9_-]+$`. This was working correctly in v0.1.0.
The error occurs when the framework attempts to send tool use messages to Anthropic's API:
```
litellm.exceptions.BadRequestError: AnthropicException - b'{"type":"error","error":{"type":"invalid_request_error","message":"messages.1.content.1.tool_use.id: String should match pattern \'^[a-zA-Z0-9_-]+$\'"}}'
```
This affects BOTH direct AsyncOpenAI integration and LiteLLM integration methods.
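For reference, the constraint from the error message can be reproduced with a small regex check. This is an illustrative sketch only — the helper name and the sample IDs are made up, not taken from the SDK:

```python
import re

# The pattern Anthropic's API reports in the 400 error above
ANTHROPIC_ID_PATTERN = re.compile(r"^[a-zA-Z0-9_-]+$")

def is_valid_anthropic_tool_id(tool_id: str) -> bool:
    """Return True if the id would pass Anthropic's tool_use.id validation."""
    return ANTHROPIC_ID_PATTERN.fullmatch(tool_id) is not None

# A plain alphanumeric/underscore id passes...
print(is_valid_anthropic_tool_id("toolu_01A2b3C4"))  # True
# ...but an id containing any other punctuation is rejected.
print(is_valid_anthropic_tool_id("call|abc123"))     # False
```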
Debug information
- Agents SDK version: v0.2.0 (works fine with v0.1.0)
- Python version: Python 3.11.6
Repro steps
```python
import asyncio

from agents import Agent, Runner, function_tool


@function_tool
async def text_processor(query: str) -> str:
    """Simple text processor that formats text"""
    return f"Processed: {query.strip().title()}"


@function_tool
async def word_counter(text: str) -> str:
    """Simple word counter tool"""
    word_count = len(text.split())
    return f"Word count: {word_count}"


async def main():
    agent = Agent(
        name="Simple Test Agent",
        instructions="You are a helpful assistant. Use the text processor and word counter tools.",
        tools=[text_processor, word_counter],
        model="litellm/anthropic/claude-opus-4-20250514",
    )

    # Run the agent with streaming
    result = Runner.run_streamed(
        starting_agent=agent,
        input="Please process this text: hello world from the agent and count the words",
    )

    async for event in result.stream_events():
        # Stream raw text deltas for real-time output
        if event.type == "raw_response_event":
            data = getattr(event, "data", None)
            if hasattr(data, "delta"):
                print(data.delta, end="", flush=True)
        # Handle tool calls
        elif event.type == "run_item_stream_event":
            if event.item.type == "tool_call_output_item":
                print(f"\n[✅ Tool output: {event.item.output}]")

    print(f"\n\n✅ Final Result: {result.final_output}")


if __name__ == "__main__":
    asyncio.run(main())
```
Error Output (LiteLLM example)
```
I'll help you process the text "hello world from the agent" and count the words.{"query": "hello world from the agent"}{"text": "hello world from the agent"}
[✅ Tool output: Processed: Hello World From The Agent]
[✅ Tool output: Word count: 5]

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.

Traceback (most recent call last):
[truncated stack trace]
httpx.HTTPStatusError: Client error '400 Bad Request' for url 'https://api.anthropic.com/v1/messages'

During handling of the above exception, another exception occurred:
[truncated stack trace]
litellm.llms.anthropic.common_utils.AnthropicError: b'{"type":"error","error":{"type":"invalid_request_error","message":"messages.1.content.1.tool_use.id: String should match pattern \'^[a-zA-Z0-9_-]+$\'"}}'

During handling of the above exception, another exception occurred:
[truncated stack trace]
litellm.exceptions.BadRequestError: litellm.BadRequestError: AnthropicException - b'{"type":"error","error":{"type":"invalid_request_error","message":"messages.1.content.1.tool_use.id: String should match pattern \'^[a-zA-Z0-9_-]+$\'"}}'
```
Expected behavior
The agent should successfully call tools when using Anthropic's API (through either method), as it did in v0.1.0. Tool IDs should be generated in a format that matches Anthropic's requirements (only alphanumeric characters, hyphens, and underscores).
Additional context
- The error suggests that tool IDs are being generated with characters that don't match the pattern `^[a-zA-Z0-9_-]+$`
- This appears to be a regression in v0.2.0 as the same code works correctly with v0.1.0
- Anthropic's API has stricter validation for tool IDs compared to OpenAI's API
- The issue affects both direct AsyncOpenAI integration AND LiteLLM integration, suggesting the problem is in how openai-agents generates tool IDs
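As an illustration of what a fix might look like, characters outside Anthropic's allowed set could be mapped to underscores before the request is sent. This is a hypothetical helper, not the SDK's actual code — a real fix would have to apply the same mapping to both the `tool_use` ID and its matching `tool_result` ID so they stay paired:

```python
import re

def sanitize_tool_id(tool_id: str) -> str:
    """Replace any character Anthropic's tool_use.id validation would reject
    with an underscore. Illustrative only; a real fix must sanitize the
    tool_use id and the corresponding tool_result id identically."""
    return re.sub(r"[^a-zA-Z0-9_-]", "_", tool_id)

# An already-valid id is unchanged; offending characters are replaced.
print(sanitize_tool_id("toolu_01A2b3C4"))  # toolu_01A2b3C4
print(sanitize_tool_id("call|abc:123"))    # call_abc_123
```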
Workaround
Downgrading to v0.1.0 resolves the issue: `pip install "openai-agents==0.1.0"`