OpenAI SDK Integration
Use the official OpenAI SDK with Martian to access 200+ models from multiple providers through a single, unified interface.
Ensure you have your Martian API key from the Martian Dashboard before continuing.
Installation
Install the OpenAI SDK for your language:

Python
pip install openai

Node.js / TypeScript
npm install openai
Configuration
Configure the OpenAI SDK to use Martian's base URL:
Python
import os
import openai

client = openai.OpenAI(
    base_url="https://api.withmartian.com/v1",
    api_key=os.environ.get("MARTIAN_API_KEY"),
)
Node.js / TypeScript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.withmartian.com/v1',
  apiKey: process.env.MARTIAN_API_KEY,
});
Basic Usage
Chat Completions
Make a simple chat completion request:
response = client.chat.completions.create(
    model="openai/gpt-4.1-nano",
    messages=[
        {"role": "user", "content": "What is Olympus Mons?"}
    ]
)

print(response.choices[0].message.content)
Using Models from Different Providers
Access models from any provider using the OpenAI SDK:
# Use Anthropic models
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4-20250514",
    messages=[{"role": "user", "content": "Hello!"}]
)

# Use Google models
response = client.chat.completions.create(
    model="google/gemini-2.5-flash",
    messages=[{"role": "user", "content": "Hello!"}]
)

# Use Meta models
response = client.chat.completions.create(
    model="meta-llama/llama-3.3-70b-instruct",
    messages=[{"role": "user", "content": "Hello!"}]
)
See the Available Models page for the complete list of supported models.
Advanced Features
Streaming Responses
stream = client.chat.completions.create(
    model="openai/gpt-4.1-nano",
    messages=[{"role": "user", "content": "Write a story about Mars"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
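If you need the complete response text rather than incremental printing, the deltas can be accumulated as the stream arrives; a minimal sketch (collect_stream is a hypothetical helper, not part of the SDK):

```python
def collect_stream(stream):
    """Accumulate streamed content deltas into the full response text."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # some chunks carry no content (e.g. the final chunk)
            parts.append(delta)
    return "".join(parts)
```

Pass the stream object returned by the call above, e.g. full_text = collect_stream(stream).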
Function Calling
response = client.chat.completions.create(
    model="openai/gpt-4.1-nano",
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather in a location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string"}
                    }
                }
            }
        }
    ]
)

if response.choices[0].message.tool_calls:
    print(response.choices[0].message.tool_calls)
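When the model requests a tool call, your application runs the function locally and sends the result back in a follow-up request. A sketch of the dispatch step, assuming a local get_weather implementation (both get_weather and run_tool_call here are hypothetical helpers, not part of the SDK):

```python
import json

def get_weather(location):
    # Stand-in for a real weather lookup.
    return f"Sunny in {location}"

def run_tool_call(tool_call):
    """Execute one tool call locally and build the 'tool' message
    to append to the conversation for the follow-up request."""
    args = json.loads(tool_call.function.arguments)
    result = get_weather(**args)
    return {
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": result,
    }
```

Append the assistant message and each resulting tool message to your messages list, then call client.chat.completions.create again so the model can produce its final answer.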
Cost Optimization
Use the :cheap suffix for automatic cost optimization:
response = client.chat.completions.create(
    model="openai/gpt-4.1-nano:cheap",
    messages=[{"role": "user", "content": "Summarize this article..."}]
)
Error Handling
import openai

try:
    response = client.chat.completions.create(
        model="openai/gpt-4.1-nano",
        messages=[{"role": "user", "content": "Hello!"}]
    )
except openai.AuthenticationError:
    print("Invalid API key")
except openai.RateLimitError:
    print("Rate limit exceeded")
except openai.APIError as e:
    print(f"API error: {e}")
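Rate-limit errors are usually transient, so retrying with exponential backoff pairs well with the handler above; a minimal sketch (with_retries is a hypothetical helper, not part of the SDK):

```python
import time

def with_retries(call, retryable=(Exception,), max_attempts=3, base_delay=1.0):
    """Invoke call(), retrying on `retryable` exceptions with
    exponential backoff; re-raise after the final attempt."""
    for attempt in range(max_attempts):
        try:
            return call()
        except retryable:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

For example, wrap a request as with_retries(lambda: client.chat.completions.create(...), retryable=(openai.RateLimitError,)).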