Introduction

The Martian Model Router dynamically routes each request to the best LLM in real time.

This lets you:

  • Get higher performance and lower costs

  • Future-proof your application as new models are added

  • Make your application more resilient by routing away from models with degraded performance

  • Focus on building a better product instead of finding a better model
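The core idea behind cost/performance routing can be sketched in miniature: given per-request estimates of each model's quality and cost, send the request to the cheapest model that is good enough. The model names and scores below are purely illustrative, not the router's actual internals or pricing.

```python
# Illustrative sketch of quality/cost routing -- NOT Martian's actual algorithm.
# Quality scores and costs here are made-up placeholder values.
MODELS = {
    "gpt-4": {"quality": 0.95, "cost_per_1k_tokens": 0.03},
    "gpt-3.5-turbo": {"quality": 0.80, "cost_per_1k_tokens": 0.002},
    "small-model": {"quality": 0.60, "cost_per_1k_tokens": 0.0005},
}

def route(required_quality: float) -> str:
    """Return the cheapest model whose estimated quality clears the bar."""
    eligible = {name: m for name, m in MODELS.items()
                if m["quality"] >= required_quality}
    if not eligible:
        # Nothing clears the bar: fall back to the highest-quality model.
        return max(MODELS, key=lambda name: MODELS[name]["quality"])
    return min(eligible, key=lambda name: eligible[name]["cost_per_1k_tokens"])

print(route(0.75))  # an easy request goes to a cheap model
print(route(0.99))  # a demanding request falls back to the strongest model
```

A real router replaces the static scores with per-request predictions of how each model will perform, which is what lets it beat any single fixed model on the cost/quality trade-off.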

The Model Router is the first tool we've released in our mission to build better AI tooling by understanding how AI models work.


The simplest way to get started is to go through our Hello World! example.

Once you've completed that example and understand how the library works, you can use the router by following the steps in Integrating Martian Into Your Codebase.


When properly integrated, the Model Router can outperform any individual model (including GPT-4) on both cost and performance. You can learn more and replicate the results for yourself in this Colab notebook.


Once you've integrated the router and need specific details about how its API works, check out the API Reference.