# Integrate the Martian Code Router With Aider

This document describes how to set up Aider to route all your LLM requests through Martian. Ensure you have your Martian API key before continuing.
## Prerequisites

Ensure you have Aider installed. To install it using pip, execute:

```shell
pip install aider-chat
```
## Configuration

### Step 1: Configure Base Settings
- Locate or create a YAML file named `.aider.conf.yml`. On Mac/Linux, use either the global location (`~/`) or the local repo root.
- Populate the following fields:

```yaml
openai-api-base: https://api.withmartian.com/v1
openai-api-key: sk-abc1234.....
model: openai/gpt-4.1
model-settings-file: ~/.aider.model.settings.yml
```
- `openai-api-base`: Martian's base URL, `https://api.withmartian.com/v1`. This routes all Aider requests through Martian.
- `openai-api-key`: Your Martian API key.
- `model`: Name of the default model.
- `model-settings-file`: Name of the model settings YAML file (created in subsequent steps).

See Available Models for a list of supported models.
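As a quick sanity check, the base settings above can also be generated programmatically. The following sketch writes a minimal `.aider.conf.yml` using only the standard library; the API key is the placeholder from the example above, not a real credential:

```python
from pathlib import Path

# Values mirror the example above; the API key is a placeholder.
config = {
    "openai-api-base": "https://api.withmartian.com/v1",
    "openai-api-key": "sk-abc1234.....",
    "model": "openai/gpt-4.1",
    "model-settings-file": "~/.aider.model.settings.yml",
}

# The file is flat key: value pairs, so plain string formatting
# produces valid YAML without an external library.
yaml_text = "".join(f"{key}: {value}\n" for key, value in config.items())
Path(".aider.conf.yml").write_text(yaml_text)
print(yaml_text, end="")
```

Running `aider` from the same directory then picks up these settings automatically.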
### Step 2: Configure Model Metadata
- Locate or create a JSON file named `.aider.model.metadata.json`. On Mac/Linux, use either the global location (`~/`) or the local repo root.
- Create a JSON object keyed `openai/{provider-name}/{model-name}` and populate the following fields:

```json
{
  "openai/openai/gpt-3.5": {
    "max_tokens": 8192,
    "max_input_tokens": 200000,
    "max_output_tokens": 8192,
    "input_cost_per_token": 0.80e-6,
    "output_cost_per_token": 4.00e-6,
    "mode": "chat"
  }
}
```
- `max_tokens`: Soft fallback limit.
- `max_input_tokens`: Maximum number of input tokens. Set to the smallest value across router models.
- `max_output_tokens`: Maximum number of output tokens. Set to the smallest value across router models.
- `input_cost_per_token`: Estimated average input cost.
- `output_cost_per_token`: Estimated average output cost.
- `mode`: Set to `chat`.

If you're adding to an existing file, insert the block as a new key.
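Since a JSON syntax error would make the metadata file unreadable, it may be worth validating the block before pointing Aider at it. A minimal standard-library check over the example entry above:

```python
import json

# The example entry from above, verbatim.
raw = """
{
  "openai/openai/gpt-3.5": {
    "max_tokens": 8192,
    "max_input_tokens": 200000,
    "max_output_tokens": 8192,
    "input_cost_per_token": 0.80e-6,
    "output_cost_per_token": 4.00e-6,
    "mode": "chat"
  }
}
"""

metadata = json.loads(raw)  # raises a ValueError on malformed JSON
for model_key, entry in metadata.items():
    # The output cap should not exceed the soft fallback limit.
    assert entry["max_output_tokens"] <= entry["max_tokens"], model_key
    assert entry["mode"] == "chat", model_key
print(f"{len(metadata)} model entry(ies) OK")  # → 1 model entry(ies) OK
```

The same loop works unchanged if your file holds several model keys.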
### Step 3: Configure Model Settings
- Locate or create a YAML file named `.aider.model.settings.yml`. On Mac/Linux, use either the global location (`~/`) or the local repo root.
- Populate the following fields:

```yaml
- name: openai/{vendor-name}/{model-name}
  edit_format: diff
  use_repo_map: true
  reminder: sys
  examples_as_sys_msg: true
  caches_by_default: true
  extra_params:
    routing_constraint:
      quality_constraint:
        numeric_value: 0.1
```
Field descriptions:

- `name`: Set to `openai/{vendor-name}/{model-name}`. The `openai/` prefix before the model or router name (e.g. `openai/.../...`) is required to signal to Aider that the model is served via an OpenAI-compatible API. This ensures Aider formats requests and routes them correctly.
- `edit_format`: Set to `diff`. This sets the edit format the LLM should use (the default depends on the model).
- `use_repo_map`: Set to `true`. This configures Aider to use the repo map to understand your code base.
- `reminder`: Set to `sys`. This configures Aider to take the system prompt it normally uses and automatically re-inject it as the reminder.
- `examples_as_sys_msg`: Set to `true` to put examples inside the system message, or `false` to put them into the chat history as user/assistant messages.
- `caches_by_default`: Set to `true`. This configures the LLM API to cache system messages and other repeated context.
- `extra_params.routing_constraint`: Defines routing constraints for model selection.
  - `quality_constraint`: Set to either `numeric_value` (e.g. `0.1`) to specify how strict the quality requirement is, or `model_name` (e.g. `openai/gpt-4o`) to require a specific model for quality.
  - `cost_constraint`: Set to either `numeric_value` (e.g. `0.1`) to specify how strict the cost requirement is, or `model_name` (e.g. `openai/gpt-4o`) to require a specific model for cost.

These follow Martian's standard OpenAI-compatible router semantics.
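To make the two constraint forms concrete, here is an illustrative sketch (the helper name `make_quality_constraint` is hypothetical, not part of Aider or Martian) that builds the resulting `extra_params` mapping for either form:

```python
def make_quality_constraint(kind: str, value) -> dict:
    """Build an extra_params block with a quality constraint.

    kind must be "numeric_value" (a strictness threshold) or
    "model_name" (pin quality to a specific model).
    """
    if kind not in ("numeric_value", "model_name"):
        raise ValueError(f"unknown constraint kind: {kind!r}")
    return {"routing_constraint": {"quality_constraint": {kind: value}}}

# Matches the YAML example above: a lenient numeric quality requirement.
print(make_quality_constraint("numeric_value", 0.1))

# Alternatively, require a specific model for quality.
print(make_quality_constraint("model_name", "openai/gpt-4o"))
```

A `cost_constraint` would follow the same nested shape under `routing_constraint`.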
## Common Commands

The following are common commands for running Aider when configured with Martian:

```shell
# Run Aider with the default model (from .aider.conf.yml)
aider

# Run Aider with the router explicitly
aider --model openai/martian/code

# Prevent Aider from committing changes to Git
aider --no-auto-commit

# Use both flags
aider --model openai/martian/code --no-auto-commit

# Call a specific model (e.g. OpenAI GPT-4.1 via Martian)
aider --model openai/gpt-4.1
```
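Under the hood, each of these commands results in POST requests to the OpenAI-compatible `/chat/completions` route on Martian's base URL. This sketch assembles such a request without sending it; the helper is illustrative, and note that Aider's `openai/` prefix is a client-side hint, not part of the upstream model name:

```python
import json

def build_chat_request(api_base: str, api_key: str, model: str, prompt: str):
    """Assemble the URL, headers, and body for an OpenAI-compatible
    chat completion call; nothing is sent over the network."""
    url = f"{api_base}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,  # bare name, without Aider's openai/ prefix
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = build_chat_request(
    "https://api.withmartian.com/v1",
    "sk-abc1234.....",  # placeholder key from the examples above
    "martian/code",
    "Summarize this repository.",
)
print(url)  # → https://api.withmartian.com/v1/chat/completions
```

Inspecting a payload this way can help when debugging authentication or model-name issues independently of Aider.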