Integrate the Martian Code Router With Codex

This document describes how to set up the OpenAI Codex CLI so that all of your LLM requests are routed through Martian.

Ensure you have your Martian API key from the Martian Dashboard before continuing.

Prerequisites

Ensure you have Codex installed. See the Codex quickstart for other installation options, or install it globally with npm:

npm install -g @openai/codex
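After installing, you can confirm the CLI is on your PATH (this assumes Codex supports the conventional --version flag):

```shell
# Confirm the codex binary is installed and reachable
command -v codex && codex --version
```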

Configuration

Step 1: Store Your API Key

Add your Martian API key to your shell profile (~/.zshrc, ~/.bashrc, etc.) or a secure location like a .env file:

# Add to ~/.zshrc, ~/.bashrc, etc.
export MARTIAN_API_KEY="your-martian-api-key"

Replace your-martian-api-key with your actual Martian API key from the Martian Dashboard.

Then reload your shell:

source ~/.zshrc  # or source ~/.bashrc etc.
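To confirm the key is actually exported in your current shell, a quick check like the following works (it prints only the key's length, never the key itself):

```shell
# Verify MARTIAN_API_KEY is set without echoing its value
if [ -n "${MARTIAN_API_KEY:-}" ]; then
  echo "MARTIAN_API_KEY is set (${#MARTIAN_API_KEY} chars)"
else
  echo "MARTIAN_API_KEY is not set" >&2
fi
```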

Step 2: Create Codex Configuration

Create or edit ~/.codex/config.toml with the following content:

[model_providers.martian-responses]
name = "Martian /responses"
base_url = "https://api.withmartian.com/v1"
env_key = "MARTIAN_API_KEY"
wire_api = "responses"

[model_providers.martian-chat-completions]
name = "Martian /chat/completions"
base_url = "https://api.withmartian.com/v1"
env_key = "MARTIAN_API_KEY"
wire_api = "chat"

[profiles."gpt-5.2"]
model = "openai/gpt-5.2"
model_provider = "martian-responses"

[profiles."gpt-5.2-pro"]
model = "openai/gpt-5.2-pro"
model_provider = "martian-responses"

[profiles."gpt-5-nano"]
model = "openai/gpt-5-nano"
model_provider = "martian-chat-completions"

[profiles."claude-opus-4-5"]
model = "anthropic/claude-opus-4-5"
model_provider = "martian-chat-completions"

[profiles."claude-sonnet-4-5"]
model = "anthropic/claude-sonnet-4-5"
model_provider = "martian-chat-completions"

See the Codex configuration documentation for more options and Available Models for the complete list of supported models.
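The providers above point Codex at Martian's OpenAI-compatible endpoint. If you want to sanity-check your API key and the base_url outside of Codex, you can send a one-off request with curl (requires MARTIAN_API_KEY from Step 1; the model name here is one of the profiles defined above):

```shell
# One-off request against the same /chat/completions endpoint the
# "martian-chat-completions" provider uses. Requires a valid
# MARTIAN_API_KEY exported in this shell.
curl -s https://api.withmartian.com/v1/chat/completions \
  -H "Authorization: Bearer ${MARTIAN_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "openai/gpt-5-nano",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```

A successful JSON response confirms the key and endpoint are working before you launch Codex.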

Start Using Codex

Navigate to your project directory and start Codex with a profile:

cd your-project
codex --profile gpt-5.2

Codex will now route all requests through Martian. You can switch between profiles using the --profile flag or the /model command within Codex.


Next Steps

View Available Models

Browse 200+ AI models from leading providers with real-time pricing.


View Other Integrations

Explore other ways to integrate Martian with your development workflow.
