Quickstart Guide
Get started with Martian in less than three minutes and with minimal code changes.
Authentication and Setup
Step 1: Obtain Your API Key
Get your API key from the Martian Dashboard.
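A common convention (assumed in the examples below, not required by Martian) is to export the key as an environment variable so your code never hard-codes it:

```shell
# Store your Martian API key in an environment variable.
# Replace the placeholder with the key from your dashboard.
export MARTIAN_API_KEY="your-key-here"
```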
Step 2: Choose Your API Format
Choose from the following API formats supported by Martian:
- OpenAI Chat Completions API: Compatible with OpenAI SDKs and frameworks
- Anthropic Messages API: Native Anthropic format with advanced features
Choose a Model
Select a model from our 200+ AI model catalog and copy its Model Name. You will populate the model field in the API with that name. For example:
- openai/gpt-4.1-nano: OpenAI's lightweight model
- anthropic/claude-sonnet-4-20250514: Anthropic's lightweight model
- google/gemini-2.5-flash: Google's lightweight model

You need the full provider/model-name string (e.g., openai/gpt-4.1-nano), not just the model name (gpt-4.1-nano). The provider prefix is required.
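As a quick sanity check, you can verify that a model string includes the required provider prefix before sending a request. The split_model helper below is a hypothetical illustration, not part of any Martian SDK:

```python
def split_model(model: str) -> tuple[str, str]:
    """Split a full 'provider/model-name' string into its two parts.

    Raises ValueError when the required provider prefix is missing.
    """
    provider, sep, name = model.partition("/")
    if not sep or not provider or not name:
        raise ValueError(f"expected 'provider/model-name', got {model!r}")
    return provider, name

# The provider prefix is required:
print(split_model("openai/gpt-4.1-nano"))  # → ('openai', 'gpt-4.1-nano')
```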
Invoke API Endpoints
OpenAI Chat Completions API
Endpoint: POST /v1/chat/completions
The Martian API is OpenAI-compatible. This means you can use any OpenAI SDK to make requests via Martian.
cURL Example:
# Note: Replace $MARTIAN_API_KEY with your assigned API key
curl https://api.withmartian.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $MARTIAN_API_KEY" \
-d '{
"model": "openai/gpt-4.1-nano",
"messages": [
{
"role": "user",
"content": "What is Olympus Mons?"
}
]
}'
Python Example:
import os
import openai

# Note: Set the MARTIAN_API_KEY environment variable to your assigned API key
oai_client = openai.OpenAI(
    base_url="https://api.withmartian.com/v1",
    api_key=os.environ["MARTIAN_API_KEY"],
)

response = oai_client.chat.completions.create(
    model="openai/gpt-4.1-nano",
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that calculates Mars orbital period.",
        }
    ],
)

print(response.choices[0].message.content)
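For interactive use cases you may want tokens as they arrive rather than one final response. The sketch below assumes streaming works through Martian exactly as it does with the OpenAI SDK (the stream=True parameter); treat it as a starting point, and note that it only contacts the API when MARTIAN_API_KEY is set:

```python
import os

# Request body for a streaming chat completion (OpenAI format).
request = {
    "model": "openai/gpt-4.1-nano",
    "messages": [{"role": "user", "content": "Name three Martian volcanoes."}],
    "stream": True,  # ask for incremental chunks instead of one final message
}

api_key = os.environ.get("MARTIAN_API_KEY")
if api_key:  # only contact the API when a key is configured
    import openai

    client = openai.OpenAI(base_url="https://api.withmartian.com/v1", api_key=api_key)
    for chunk in client.chat.completions.create(**request):
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
```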
Anthropic Messages API
Endpoint: POST /v1/messages
Martian's /messages endpoint enables you to make Anthropic-compatible requests with native support for advanced features like thinking, tool use, and structured outputs. Use it to access LLMs from multiple providers through a single, Anthropic-compatible interface.
cURL Example:
# Note: Replace $MARTIAN_API_KEY with your assigned API key
curl https://api.withmartian.com/v1/messages \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $MARTIAN_API_KEY" \
-d '{
"model": "anthropic/claude-sonnet-4-20250514",
"max_tokens": 1024,
"messages": [
{
"role": "user",
"content": "What is Olympus Mons?"
}
]
}'
Python Example:
import os
import anthropic

# Note: Set the MARTIAN_API_KEY environment variable to your assigned API key
anth_client = anthropic.Anthropic(
    base_url="https://api.withmartian.com/v1",
    api_key=os.environ["MARTIAN_API_KEY"],
)

response = anth_client.messages.create(
    model="anthropic/claude-3-haiku-20240307",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Write a haiku about Mars (the planet, not the god).",
        }
    ],
)

print(response.content[0].text)
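Tool use in the Messages format is declared through a tools array in the request body. The payload below is a hand-built sketch of that shape using a hypothetical get_weather tool; consult the Anthropic Messages API reference for the full schema:

```python
import json

# A Messages API request body declaring one tool (hypothetical example).
body = {
    "model": "anthropic/claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "tools": [
        {
            "name": "get_weather",
            "description": "Get the current weather for a location.",
            "input_schema": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        }
    ],
    "messages": [
        {"role": "user", "content": "What's the weather at Olympus Mons base camp?"}
    ],
}

# This JSON is what you would POST to /v1/messages.
print(json.dumps(body, indent=2))
```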
Next Steps
Now that you've made your first API call, explore these resources:
Monitoring & Analytics
Track your usage and optimize costs through the Martian Dashboard:
- Real-time Usage: Monitor API calls, tokens, and costs
- Model Performance: Compare accuracy and latency across models
- Cost Analysis: Identify savings opportunities with smart routing
- Request History: Debug and analyze API interactions
Getting Help
- Documentation: Comprehensive guides at docs.withmartian.com
- Support: Contact our team through the dashboard
- Community: Join our Discord for discussions and updates
Ready to build something amazing? Explore our Advanced Features or browse the Available Models to find the perfect model for your use case.