Models
The Agents SDK comes with out-of-the-box support for OpenAI models in two flavors:
- Recommended: the OpenAIResponsesModel, which calls OpenAI APIs using the new Responses API.
- The OpenAIChatCompletionsModel, which calls OpenAI APIs using the Chat Completions API.
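For reference, here is a minimal sketch of constructing each flavor directly. It assumes both classes take a model name and an AsyncOpenAI client, mirroring the example further down this page.

from agents import AsyncOpenAI, OpenAIChatCompletionsModel, OpenAIResponsesModel

client = AsyncOpenAI()

# Recommended: the Responses API model.
responses_model = OpenAIResponsesModel(model="gpt-4o", openai_client=client)

# Alternative: the Chat Completions API model.
chat_completions_model = OpenAIChatCompletionsModel(model="gpt-4o", openai_client=client)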
Mixing and matching models
Within a single workflow, you may want to use different models for each agent. For example, you could use a smaller, faster model for triage, while using a larger, more capable model for complex tasks. When configuring an Agent, you can select a specific model by either:
- Passing the name of an OpenAI model.
- Passing any model name + a ModelProvider that can map that name to a Model instance.
- Directly providing a Model implementation.
Note
While our SDK supports both the OpenAIResponsesModel and the OpenAIChatCompletionsModel shapes, we recommend using a single model shape for each workflow because the two shapes support a different set of features and tools. If your workflow requires mixing and matching model shapes, make sure that all the features you're using are available on both.
from agents import Agent, Runner, AsyncOpenAI, OpenAIChatCompletionsModel
import asyncio

spanish_agent = Agent(
    name="Spanish agent",
    instructions="You only speak Spanish.",
    model="o3-mini", # (1)!
)

english_agent = Agent(
    name="English agent",
    instructions="You only speak English.",
    model=OpenAIChatCompletionsModel( # (2)!
        model="gpt-4o",
        openai_client=AsyncOpenAI()
    ),
)

triage_agent = Agent(
    name="Triage agent",
    instructions="Handoff to the appropriate agent based on the language of the request.",
    handoffs=[spanish_agent, english_agent],
    model="gpt-3.5-turbo",
)

async def main():
    result = await Runner.run(triage_agent, input="Hola, ¿cómo estás?")
    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())

1. Sets the name of an OpenAI model directly.
2. Provides a Model implementation.
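The example above covers the first and third options. The second option, mapping a model name through a ModelProvider, might look like the sketch below; it assumes the ModelProvider interface exposes a get_model(model_name) hook and that Runner.run accepts a RunConfig(model_provider=...), as in the SDK's model_providers examples.

import asyncio

from agents import (
    Agent,
    AsyncOpenAI,
    Model,
    ModelProvider,
    OpenAIChatCompletionsModel,
    RunConfig,
    Runner,
)

class CustomModelProvider(ModelProvider):
    def get_model(self, model_name: str | None) -> Model:
        # Map whatever name the agent asked for onto a concrete Model instance.
        # The default client here is a placeholder.
        return OpenAIChatCompletionsModel(
            model=model_name or "gpt-4o", openai_client=AsyncOpenAI()
        )

agent = Agent(name="Assistant", instructions="You are a helpful assistant.")

async def main():
    result = await Runner.run(
        agent,
        input="Hello!",
        run_config=RunConfig(model_provider=CustomModelProvider()),
    )
    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())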
Using other LLM providers
You can use other LLM providers in 3 ways (examples here):
1. set_default_openai_client is useful in cases where you want to globally use an instance of AsyncOpenAI as the LLM client. This is for cases where the LLM provider has an OpenAI compatible API endpoint, and you can set the base_url and api_key. See a configurable example in examples/model_providers/custom_example_global.py.
2. ModelProvider is at the Runner.run level. This lets you say "use a custom model provider for all agents in this run". See a configurable example in examples/model_providers/custom_example_provider.py.
3. Agent.model lets you specify the model on a specific Agent instance. This enables you to mix and match different providers for different agents. See a configurable example in examples/model_providers/custom_example_agent.py.
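As a rough illustration of options 1 and 3, the sketch below points the SDK at a hypothetical OpenAI-compatible endpoint; the base URL, API key, and model name are placeholders for whatever your provider uses.

from agents import Agent, AsyncOpenAI, OpenAIChatCompletionsModel, set_default_openai_client

# Placeholders: swap in your provider's OpenAI-compatible endpoint, key, and model name.
custom_client = AsyncOpenAI(base_url="https://example-provider.test/v1", api_key="PROVIDER_API_KEY")

# Option 1: use this client globally, for every agent in the process.
set_default_openai_client(custom_client)

# Option 3: or pin a specific provider/model to a single agent.
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model=OpenAIChatCompletionsModel(model="provider-model-name", openai_client=custom_client),
)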
In cases where you do not have an API key from platform.openai.com, we recommend disabling tracing via set_tracing_disabled(), or setting up a different tracing processor.
Note
In these examples, we use the Chat Completions API/model, because most LLM providers don't yet support the Responses API. If your LLM provider does support it, we recommend using Responses.
Common issues with using other LLM providers
Tracing client error 401
If you get errors related to tracing, this is because traces are uploaded to OpenAI servers, and you don't have an OpenAI API key. You have three options to resolve this:
- Disable tracing entirely: set_tracing_disabled(True).
- Set an OpenAI key for tracing: set_tracing_export_api_key(...). This API key will only be used for uploading traces, and must be from platform.openai.com.
- Use a non-OpenAI trace processor. See the tracing docs.
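A minimal sketch of the first two options; the environment variable name is a placeholder for wherever you keep a platform.openai.com key.

import os

from agents import set_tracing_disabled, set_tracing_export_api_key

# Option 1: turn tracing off entirely.
set_tracing_disabled(True)

# Option 2: keep tracing, but upload traces with a platform.openai.com key.
# set_tracing_export_api_key(os.environ["OPENAI_TRACING_API_KEY"])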
 
Responses API support
The SDK uses the Responses API by default, but most other LLM providers don't yet support it. You may see 404s or similar issues as a result. To resolve, you have two options:
- Call set_default_openai_api("chat_completions"). This works if you are setting OPENAI_API_KEY and OPENAI_BASE_URL via environment vars.
- Use OpenAIChatCompletionsModel. There are examples here.
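A short sketch of the first option, assuming OPENAI_API_KEY and OPENAI_BASE_URL are already exported for your provider:

from agents import set_default_openai_api

# Route the SDK's default OpenAI calls through the Chat Completions API instead of Responses.
set_default_openai_api("chat_completions")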
Structured outputs support
Some model providers don't have support for structured outputs. This sometimes results in an error that looks something like this:
BadRequestError: Error code: 400 - {'error': {'message': "'response_format.type' : value is not one of the allowed values ['text','json_object']", 'type': 'invalid_request_error'}}
This is a shortcoming of some model providers - they support JSON outputs, but don't allow you to specify the json_schema to use for the output. We are working on a fix for this, but we suggest relying on providers that do have support for JSON schema output, because otherwise your app will often break because of malformed JSON.
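For context, this error typically surfaces when an agent declares a structured output, which the SDK translates into a JSON-schema response format. The sketch below assumes the Agent output_type parameter described elsewhere in the SDK docs; on a provider without JSON-schema support, running such an agent can produce the 400 above.

from pydantic import BaseModel

from agents import Agent

class WeatherAnswer(BaseModel):
    city: str
    temperature_c: float

# On providers without json_schema support, running this agent can raise the
# BadRequestError shown above, because the SDK requests a schema-constrained output.
agent = Agent(
    name="Weather extractor",
    instructions="Extract the city and temperature from the user's message.",
    output_type=WeatherAnswer,
)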