Vercel AI SDK Integration

Use the Vercel AI SDK to access 200+ models from multiple providers through a unified interface with the Martian Gateway.

Ensure you have your Martian API key from the Martian Dashboard before continuing.

Installation

Install the Martian provider and the Vercel AI SDK:

npm install @withmartian/ai-sdk-provider ai

If you already have the Vercel AI SDK installed, you only need to add @withmartian/ai-sdk-provider.

Configuration

Set your Martian API key in your .env file:

.env

MARTIAN_API_KEY=your_api_key_here

The martianProvider will read the API key from the MARTIAN_API_KEY environment variable.
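If you want to fail fast when the key is missing, you can check for it at startup. A minimal sketch (the requireEnv helper below is hypothetical, not part of the Martian SDK):

```typescript
// Hypothetical helper, not part of the Martian SDK: fail fast with a
// clear message when a required environment variable is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Call once at startup, before making any requests:
// requireEnv("MARTIAN_API_KEY");
```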

Basic Usage

Quick Start

Use martianProvider with any AI SDK function to access models through the Martian Gateway:

import { generateText } from "ai";
import { martianProvider } from "@withmartian/ai-sdk-provider";

const { text } = await generateText({
  model: martianProvider("gpt-4o-mini"),
  prompt: "What is Olympus Mons?"
});

console.log(text);

The provider automatically prefixes bare OpenAI model names with openai/ for routing; for example, gpt-4o-mini becomes openai/gpt-4o-mini.

Model Name Resolution

The Martian provider handles model naming:

  • OpenAI models: Automatically prefixed with openai/ if not already namespaced
  • Other providers: Already-namespaced models (e.g., anthropic/..., google/...) are passed through unchanged

import { generateText } from "ai";
import { martianProvider } from "@withmartian/ai-sdk-provider";

// OpenAI models - auto-prefixed
await generateText({
  model: martianProvider("gpt-4o-mini"), // → openai/gpt-4o-mini
  prompt: "Hello"
});

await generateText({
  model: martianProvider("openai/gpt-4o"), // → openai/gpt-4o
  prompt: "Hello"
});

// Any other model - passed through
await generateText({
  model: martianProvider("anthropic/claude-sonnet-4-20250514"),
  prompt: "Hello"
});
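The resolution rule above can be sketched as a small pure function. This is an illustration of the documented behavior, not the provider's actual implementation:

```typescript
// Sketch of the model-name resolution rule: bare IDs are treated as
// OpenAI models; anything already namespaced passes through unchanged.
function resolveModelId(modelId: string): string {
  return modelId.includes("/") ? modelId : `openai/${modelId}`;
}

console.log(resolveModelId("gpt-4o-mini")); // → openai/gpt-4o-mini
console.log(resolveModelId("anthropic/claude-sonnet-4-20250514")); // unchanged
```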

See the Available Models page for the complete list of supported models.

Chat Completions

Use the .chat() method for chat-based interactions:

import { generateText } from "ai";
import { martianProvider } from "@withmartian/ai-sdk-provider";

const { text } = await generateText({
  model: martianProvider.chat("gpt-4o"),
  messages: [
    { role: "user", content: "What is Olympus Mons?" }
  ]
});

console.log(text);

Text Completions

Use the .text() method for text completions:

import { generateText } from "ai";
import { martianProvider } from "@withmartian/ai-sdk-provider";

const { text } = await generateText({
  model: martianProvider.text("gpt-4o"),
  prompt: "Write a story about Mars. A long time ago... "
});

console.log(text);

Advanced Features

Streaming Responses

Use streamText for real-time streaming:

import { streamText } from "ai";
import { martianProvider } from "@withmartian/ai-sdk-provider";

const { textStream } = streamText({
  model: martianProvider("gpt-4o"),
  prompt: "Write a story about Mars"
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
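The for await loop works on any AsyncIterable<string>, which makes the consumption pattern easy to try in isolation. A self-contained sketch using a mock stream in place of a live model response:

```typescript
// Mock stand-in for streamText's textStream: any AsyncIterable<string>
// is consumed the same way.
async function* mockTextStream(): AsyncIterable<string> {
  yield "Mars ";
  yield "is ";
  yield "red.";
}

// Accumulate chunks as they arrive, as the loop above does.
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let full = "";
  for await (const chunk of stream) {
    full += chunk; // in a real app: process.stdout.write(chunk)
  }
  return full;
}

const full = await collect(mockTextStream());
console.log(full); // → Mars is red.
```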

Tool Calling

Use the AI SDK's tool functionality with the Martian Gateway for function calling:

import { generateText, stepCountIs } from "ai";
import { martianProvider } from "@withmartian/ai-sdk-provider";
import { z } from "zod";

const { text, toolCalls } = await generateText({
  model: martianProvider("anthropic/claude-sonnet-4-20250514"),
  prompt: "What's the weather in San Francisco?",
  tools: {
    getWeather: {
      description: "Get weather for a city",
      inputSchema: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, temp: 60, conditions: "sunny" }),
    },
  },
  stopWhen: stepCountIs(3),
});

Structured Outputs

Generate structured data using schemas with generateObject:

import { generateObject } from "ai";
import { martianProvider } from "@withmartian/ai-sdk-provider";
import { z } from "zod";

const { object } = await generateObject({
  model: martianProvider("gpt-4o-mini"),
  schema: z.object({
    recipe: z.string(),
    ingredients: z.array(z.string()),
    steps: z.array(z.string()),
  }),
  prompt: "Generate a simple pasta recipe.",
});

console.log(object.recipe);
console.log(object.ingredients);

Advanced Generation

Use advanced parameters like system messages, multi-turn conversations, and generation settings:

import { generateText } from "ai";
import { martianProvider } from "@withmartian/ai-sdk-provider";

const { text, finishReason, usage } = await generateText({
  model: martianProvider("google/gemini-2.0-flash"),
  system: "You are a creative writing assistant.",
  messages: [
    { role: "user", content: "Write a short story about an alien." },
    { role: "assistant", content: "Once upon a time, there was an alien from Mars..." },
    { role: "user", content: "Continue the story with a twist ending." },
  ],
  temperature: 0.9,
  topP: 0.95,
  maxOutputTokens: 800,
  stopSequences: ["THE END", "---"],
});

console.log(text);
console.log(finishReason); // e.g. "stop"
console.log(usage); // token usage for the request

Next Steps

View Available Models

Browse 200+ AI models from leading providers with real-time pricing.

Read more

View Other Integrations

Explore other ways to integrate the Martian Gateway with your development workflow.

Read more

Code Examples

Practical examples and code snippets for common use cases.

Read more