```python
from openai import OpenAI

client = OpenAI(
    api_key="My OPENAI Key",  # defaults to os.environ.get("OPENAI_API_KEY")
)

chat_completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
)
```
becomes
```python
from openai import OpenAI

client = OpenAI(
    api_key="My MARTIAN Key",  # defaults to os.environ.get("OPENAI_API_KEY")
    base_url="https://withmartian.com/api/openai/v1",
)

chat_completion = client.chat.completions.create(
    model="router",
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
)
```
Setting Parameters
To route across only a subset of models, specify the array of allowed models:
```python
chat_completion = client.chat.completions.create(
    model="router",
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    extra_body={"models": ["gpt-3.5-turbo", "claude-2.1"]},
)
```
To specify a maximum cost per request, along with how much you are willing to pay for a 10% improvement in model quality, set the parameters in the following way:
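The original snippet for this step is truncated, so the field names below are illustrative assumptions only (not confirmed by this document); consult the Martian API reference for the actual parameter names. The sketch assumes the cost controls are passed through `extra_body`, like the `models` list above:

```python
# Hypothetical field names -- assumed for illustration, not confirmed.
extra_body = {
    "max_cost": 0.02,            # assumed: maximum spend (USD) for this request
    "willingness_to_pay": 0.01,  # assumed: extra USD you would pay for a
                                 # 10% improvement in model quality
}

# This dict would then be passed to the same call shown above, e.g.:
# client.chat.completions.create(model="router", messages=..., extra_body=extra_body)
```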