Switching To The Router

Switch To The Router

To use the router, just remove the model field from your request; we'll find the right model for you.

curl https://route.withmartian.com/api/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <YOUR_MARTIAN_API_KEY>" \
  -d '{
  "messages": [
    {
      "role": "user",
      "content": "Hello world!"
    }
  ],
  "temperature": 1
}'

By default, we route each request to the model expected to give the highest performance on your specific prompt. You can read about how we do that here.

By routing each prompt to the best-performing model rather than relying on a single model, we're able to outperform GPT-4 on the evaluation set OpenAI uses (openai/evals). You can reproduce those results here.
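
As an illustration of what this looks like in practice, the sketch below sends the same request and prints which model the router selected. It assumes the response follows the standard OpenAI chat-completions schema, that its model field reports the underlying model the router chose, and that jq is installed; treat it as a sketch rather than a guaranteed output format.

# Send a request and print the model the router selected
# (assumes an OpenAI-compatible response with a "model" field)
curl -s https://route.withmartian.com/api/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <YOUR_MARTIAN_API_KEY>" \
  -d '{
  "messages": [
    {
      "role": "user",
      "content": "Hello world!"
    }
  ],
  "temperature": 1
}' | jq -r '.model'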

Set Routing Parameters

Once you switch to the router, you can control the criteria used for routing.

For example, if you want to route between a specific set of models, you can include them as a list in the model field. We'll only route between the models you specify in that field.

curl https://route.withmartian.com/api/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <YOUR_MARTIAN_API_KEY>" \
  -d '{
  "model": ["gpt-3.5-turbo", "gpt-4", "anthropic/claude-v2", "anthropic/claude-instant-v1"],
  "messages": [
    {
      "role": "user",
      "content": "Hello world!"
    }
  ],
  "temperature": 1
}'
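
To check the routing decision for a constrained request, the same response inspection applies; the selected model should be one of the candidates you listed. As above, this is a sketch that assumes an OpenAI-compatible response schema and jq being available.

# Print the selected model and the assistant's reply
# (assumes an OpenAI-compatible response schema)
curl -s https://route.withmartian.com/api/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <YOUR_MARTIAN_API_KEY>" \
  -d '{
  "model": ["gpt-3.5-turbo", "gpt-4", "anthropic/claude-v2", "anthropic/claude-instant-v1"],
  "messages": [
    {
      "role": "user",
      "content": "Hello world!"
    }
  ],
  "temperature": 1
}' | jq -r '"\(.model): \(.choices[0].message.content)"'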
