Integrate the Martian Code Router With OpenCode

This document describes how to set up OpenCode to route all your LLM requests through Martian.

Ensure you have your Martian API key from the Martian Dashboard before continuing.

Prerequisites

Ensure you have OpenCode installed. See the OpenCode installation guide for other installation methods, or use the install script:

curl -fsSL https://opencode.ai/install | bash
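
After the install script finishes, you can confirm the binary is on your PATH with a plain shell check (nothing OpenCode-specific is assumed here):

command -v opencode   # prints the install path if OpenCode is available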

Configuration

Choose one of the following configuration methods:

Option 1: Global Configuration

This configuration applies to all projects using OpenCode.

Step 1: Store Your API Key

Add your Martian API key to your shell profile (~/.zshrc, ~/.bashrc, etc.) or a secure location like a .env file:

# Add to ~/.zshrc, ~/.bashrc, etc.
export MARTIAN_API_KEY="your-martian-api-key"

Replace your-martian-api-key with your actual Martian API key from the Martian Dashboard.

Then reload your shell:

source ~/.zshrc  # or source ~/.bashrc etc.
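
Before wiring the key into OpenCode, you can optionally check that it is exported and accepted by the Martian endpoint. The snippet below is a sketch that assumes the API is OpenAI-compatible and exposes a /models route with Bearer authentication (the base URL comes from the configuration later in this guide; the exact route may differ):

# List models visible to your key; a 401 response usually means the key is not being picked up.
curl -s https://api.withmartian.com/v1/models \
  -H "Authorization: Bearer $MARTIAN_API_KEY"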

Step 2: Create Global OpenCode Configuration

You can also add the API key directly in the config file by replacing {env:MARTIAN_API_KEY} with your actual API key, but using an environment variable is recommended for security.

Create or edit ~/.config/opencode/opencode.json with the following content:

{
  "$schema": "https://opencode.ai/config.json",
  "model": "martian-anthropic/anthropic/claude-sonnet-4-5",
  "provider": {
    "martian-openai-compat": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Martian (OpenAI Compatible)",
      "options": {
        "baseURL": "https://api.withmartian.com/v1",
        "apiKey": "{env:MARTIAN_API_KEY}",
        "timeout": 600000
      },
      "models": {
        "google/gemini-3-pro-preview": {
          "name": "Gemini 3 Pro Preview",
          "modalities": { "input": ["image", "text"], "output": ["text"] },
          "limit": {
            "context": 1000000,
            "output": 65536
          }
        }
      }
    },
    "martian-openai": {
      "npm": "@ai-sdk/openai",
      "name": "Martian (OpenAI)",
      "options": {
        "baseURL": "https://api.withmartian.com/v1",
        "apiKey": "{env:MARTIAN_API_KEY}",
        "timeout": 600000
      },
      "models": {
        "openai/gpt-5.2": {
          "name": "GPT-5.2",
          "api": "responses",
          "modalities": { "input": ["image", "text"], "output": ["text"] },
          "limit": {
            "context": 400000,
            "output": 128000
          }
        },
        "openai/gpt-5.2-pro": {
          "name": "GPT-5.2 Pro",
          "api": "responses",
          "modalities": { "input": ["image", "text"], "output": ["text"] },
          "limit": {
            "context": 400000,
            "output": 128000
          }
        },
        "openai/gpt-5.2-codex": {
          "name": "GPT-5.2 Codex",
          "api": "responses",
          "modalities": { "input": ["image", "text"], "output": ["text"] },
          "limit": {
            "context": 400000,
            "output": 128000
          }
        }
      }
    },
    "martian-anthropic": {
      "npm": "@ai-sdk/anthropic",
      "name": "Martian (Anthropic)",
      "options": {
        "baseURL": "https://api.withmartian.com/v1",
        "apiKey": "{env:MARTIAN_API_KEY}",
        "timeout": 600000
      },
      "models": {
        "anthropic/claude-sonnet-4-5": {
          "name": "Claude Sonnet 4.5",
          "modalities": { "input": ["image", "text"], "output": ["text"] },
          "limit": {
            "context": 200000,
            "output": 64000
          }
        },
        "anthropic/claude-opus-4-5": {
          "name": "Claude Opus 4.5",
          "modalities": { "input": ["image", "text"], "output": ["text"] },
          "limit": {
            "context": 200000,
            "output": 32000
          }
        }
      }
    }
  }
}
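
A stray comma or quote is easy to introduce when editing a file this long. If you keep the file as strict JSON (as shown above), you can sanity-check it with Python's built-in validator before launching OpenCode:

python3 -m json.tool ~/.config/opencode/opencode.json > /dev/null && echo "config OK"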

Option 2: Per-Project Configuration

This configuration applies only to a specific project directory.

Step 1: Store Your API Key

Create a .env file in your project root:

echo "MARTIAN_API_KEY=your-martian-api-key" >> .env
echo ".env" >> .gitignore

Replace your-martian-api-key with your actual Martian API key from the Martian Dashboard.
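
Because the key now lives in a file inside the repository, it is worth confirming that the ignore rule actually applies before your next commit:

# Exits with status 0 and prints the matching rule when .env is ignored.
git check-ignore -v .env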

Step 2: Create Project OpenCode Configuration

Create an opencode.json file in your project root with the same configuration content as above. OpenCode will automatically detect and use this config when you run it from the project directory.

Per-project settings override global settings. Use this option if you need different configurations for different projects. See the OpenCode config documentation for more details and Available Models for the complete list of 200+ supported models.
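
If your global configuration already defines the Martian providers, the per-project file does not have to repeat them. Below is a minimal sketch that only changes the default model, assuming OpenCode merges it with the global provider definitions (the provider and model IDs come from the configuration shown earlier); if you are not using a global config, copy the full provider block instead:

{
  "$schema": "https://opencode.ai/config.json",
  "model": "martian-openai/openai/gpt-5.2"
}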

Start Using OpenCode

Navigate to your project directory and start OpenCode:

cd your-project
opencode

OpenCode will now route all requests through Martian. You can switch between models using the /model command within OpenCode.


Next Steps

View Available Models

Browse 200+ AI models from leading providers with real-time pricing.

View Other Integrations

Explore other ways to integrate Martian with your development workflow.