OpenAI SDK Integration

OpenRouter is fully compatible with OpenAI's SDK, allowing you to access 100+ models from various providers using the same familiar interface. Simply change your base URL and API key to get started.

Installation

If you haven't already, install the OpenAI SDK:

npm install openai
# or
yarn add openai
# or
pnpm add openai
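The Python examples below use the same SDK; if you're working in Python, install it with:

```shell
pip install openai
```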

Configuration

Configure the OpenAI client to use OpenRouter's endpoint:

JavaScript/TypeScript

import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
  apiKey: process.env.OPENROUTER_API_KEY,
  defaultHeaders: {
    'HTTP-Referer': process.env.YOUR_SITE_URL, // Optional
    'X-Title': process.env.YOUR_APP_NAME, // Optional
  }
});

Python

import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.getenv("OPENROUTER_API_KEY"),
    default_headers={
        "HTTP-Referer": os.getenv("YOUR_SITE_URL"),  # Optional
        "X-Title": os.getenv("YOUR_APP_NAME"),  # Optional
    },
)

Basic Usage

Use the OpenAI SDK exactly as you normally would, but now with access to many more models:

Chat Completions

const completion = await openai.chat.completions.create({
  model: 'anthropic/claude-3-opus', // Use any OpenRouter model
  messages: [
    {
      role: 'system',
      content: 'You are a helpful assistant.',
    },
    {
      role: 'user',
      content: 'Explain the theory of relativity in simple terms.',
    },
  ],
  temperature: 0.7,
  max_tokens: 500,
});

console.log(completion.choices[0].message.content);

Streaming Responses

const stream = await openai.chat.completions.create({
  model: 'openai/gpt-4-turbo',
  messages: [
    { role: 'user', content: 'Write a haiku about programming.' }
  ],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
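The streaming loop above can be factored into a reusable helper that accumulates the deltas into a single string. This is a convenience sketch, not part of the SDK; it assumes only the chunk shape shown above:

```javascript
// Collect all text deltas from a chat-completions stream into one string.
// Works with any async iterable that yields OpenAI-style chunks.
async function collectStream(stream) {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content || '';
  }
  return text;
}
```

For example, `const haiku = await collectStream(stream);` gives you the full response once the stream ends.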

Model Selection

OpenRouter provides access to models from multiple providers. Use the provider/model format:

// Popular models available through OpenRouter
const models = {
  // Anthropic
  'anthropic/claude-3-opus': 'Most capable Claude model',
  'anthropic/claude-3-sonnet': 'Balanced performance and cost',
  'anthropic/claude-3-haiku': 'Fast and efficient',
  
  // OpenAI
  'openai/gpt-4-turbo': 'Latest GPT-4 Turbo',
  'openai/gpt-4': 'GPT-4 base model',
  'openai/gpt-3.5-turbo': 'Fast and cost-effective',
  
  // Google
  'google/gemini-pro': "Google's Gemini Pro",
  'google/gemini-pro-vision': 'Multimodal Gemini',
  
  // Meta
  'meta-llama/llama-3-70b-instruct': 'Llama 3 70B',
  'meta-llama/llama-3-8b-instruct': 'Llama 3 8B',
  
  // Mistral
  'mistralai/mistral-large': "Mistral's largest model",
  'mistralai/mixtral-8x7b': 'Mixture of experts model',
};

// Use any model in your requests
const completion = await openai.chat.completions.create({
  model: 'anthropic/claude-3-sonnet',
  messages: [{ role: 'user', content: 'Hello!' }],
});
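Since every OpenRouter model id uses the provider/model format, a small parser can be handy for logging or per-provider logic. `parseModelId` is an illustrative helper, not an SDK function:

```javascript
// Split an OpenRouter model id into its provider and model parts,
// e.g. 'anthropic/claude-3-sonnet' -> { provider: 'anthropic', model: 'claude-3-sonnet' }
function parseModelId(id) {
  const slash = id.indexOf('/');
  if (slash === -1) {
    throw new Error(`Expected provider/model format, got "${id}"`);
  }
  return {
    provider: id.slice(0, slash),
    model: id.slice(slash + 1),
  };
}
```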

Advanced Features

Function Calling

Note that the `functions` / `function_call` parameters shown here are the legacy format; newer SDK versions also support the `tools` / `tool_calls` interface.

const completion = await openai.chat.completions.create({
  model: 'openai/gpt-4-turbo',
  messages: [
    { role: 'user', content: "What's the weather in San Francisco?" }
  ],
  functions: [
    {
      name: 'get_weather',
      description: 'Get the current weather in a location',
      parameters: {
        type: 'object',
        properties: {
          location: {
            type: 'string',
            description: 'The city and state, e.g. San Francisco, CA',
          },
          unit: {
            type: 'string',
            enum: ['celsius', 'fahrenheit'],
          },
        },
        required: ['location'],
      },
    },
  ],
  function_call: 'auto',
});

const functionCall = completion.choices[0].message.function_call;
if (functionCall) {
  console.log('Function:', functionCall.name);
  console.log('Arguments:', JSON.parse(functionCall.arguments));
}
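A `function_call` result still needs to be routed to your own code. One common pattern is a name-to-handler map; `get_weather` below is a local stub standing in for a real weather lookup:

```javascript
// Map function names returned by the model to local implementations.
const handlers = {
  get_weather: ({ location, unit = 'celsius' }) => {
    // Stub: a real app would call a weather API here.
    return { location, unit, temperature: 20 };
  },
};

// Parse the model's arguments and invoke the matching handler.
function dispatchFunctionCall(functionCall) {
  const handler = handlers[functionCall.name];
  if (!handler) {
    throw new Error(`No handler for function "${functionCall.name}"`);
  }
  return handler(JSON.parse(functionCall.arguments));
}
```

The returned value is what you would send back to the model in a follow-up message.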

Vision Models

const completion = await openai.chat.completions.create({
  model: 'openai/gpt-4-vision-preview',
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: "What's in this image?",
        },
        {
          type: 'image_url',
          image_url: {
            url: 'https://example.com/image.jpg',
            detail: 'high',
          },
        },
      ],
    },
  ],
  max_tokens: 500,
});
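If you send many vision requests, a small constructor for the mixed text/image content array keeps them consistent. `visionMessage` is a convenience sketch around the message shape shown above:

```javascript
// Build a multimodal user message from a prompt and a list of image URLs.
function visionMessage(text, imageUrls, detail = 'auto') {
  return {
    role: 'user',
    content: [
      { type: 'text', text },
      ...imageUrls.map((url) => ({
        type: 'image_url',
        image_url: { url, detail },
      })),
    ],
  };
}
```

You can then pass `visionMessage('Describe these.', urls, 'high')` directly in the `messages` array.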

Model Routing

Use OpenRouter's automatic model routing for cost optimization:

// Use the 'auto' model for automatic routing
const completion = await openai.chat.completions.create({
  model: 'openrouter/auto',
  messages: [
    { role: 'user', content: 'Translate this to French: Hello, world!' }
  ],
  // OpenRouter automatically selects an appropriate model for this prompt.
});

OpenRouter also accepts additional routing fields, such as a `models` fallback list, in the request body; see the OpenRouter documentation for the currently supported routing parameters.

Migration Guide

Migrating from OpenAI to OpenRouter is straightforward:

1. Update Configuration

// Before (OpenAI)
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// After (OpenRouter)
const openai = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
  apiKey: process.env.OPENROUTER_API_KEY,
});

2. Update Model Names

// Before
model: 'gpt-4-turbo-preview'

// After (add provider prefix)
model: 'openai/gpt-4-turbo-preview'

// Or use a different provider
model: 'anthropic/claude-3-opus'

3. Environment Variables

# .env
# Before
OPENAI_API_KEY=sk-...

# After
OPENROUTER_API_KEY=sk-or-v1-...
YOUR_SITE_URL=https://yourapp.com
YOUR_APP_NAME=YourApp
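It can help to fail fast when a required variable is missing rather than sending requests with an undefined key. A minimal check, using the variable names from this guide (`requireEnv` is an illustrative helper):

```javascript
// Ensure required environment variables are set before creating the client.
// Returns their values in the same order as the names given.
function requireEnv(names) {
  const missing = names.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
  return names.map((name) => process.env[name]);
}
```

For example: `const [apiKey] = requireEnv(['OPENROUTER_API_KEY']);` before constructing the client.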

Error Handling

OpenRouter uses the same error format as OpenAI:

try {
  const completion = await openai.chat.completions.create({
    model: 'anthropic/claude-3-opus',
    messages: [{ role: 'user', content: 'Hello!' }],
  });
} catch (error) {
  if (error instanceof OpenAI.APIError) {
    console.error('API Error:', error.status, error.message);
    
    switch (error.status) {
      case 401:
        console.error('Invalid API key');
        break;
      case 429:
        console.error('Rate limit exceeded');
        break;
      case 500:
        console.error('Server error');
        break;
      default:
        console.error('Unknown error');
    }
  } else {
    console.error('Unexpected error:', error);
  }
}
Model Fallbacks

With many models behind one endpoint, you can also fall back to an alternate model when a request fails:

async function completionWithFallback(messages: any[]) {
  const models = [
    'anthropic/claude-3-opus',
    'openai/gpt-4-turbo',
    'google/gemini-pro',
  ];
  
  for (const model of models) {
    try {
      return await openai.chat.completions.create({
        model,
        messages,
      });
    } catch (error) {
      console.warn(`Failed with ${model}, trying next...`);
      if (model === models[models.length - 1]) {
        throw error; // Re-throw if all models failed
      }
    }
  }
}
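Rate-limit errors (429) are usually transient, so a retry wrapper with exponential backoff pairs well with the fallback loop above. `withRetry` below is a generic sketch; it assumes errors expose an HTTP `status` property, as the SDK's APIError does:

```javascript
// Retry an async operation with exponential backoff on 429/5xx errors.
async function withRetry(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const retryable = error.status === 429 || error.status >= 500;
      if (!retryable || attempt >= retries) throw error;
      // Exponential backoff: 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```

Usage: `await withRetry(() => openai.chat.completions.create({ model, messages }))`.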

Best Practices

  • Always specify the provider prefix (e.g., 'openai/', 'anthropic/') for clarity
  • Use environment variables for API keys and configuration
  • Implement proper error handling and retries
  • Set appropriate timeout values for long-running requests
  • Monitor your usage through the OpenRouter dashboard
  • Use streaming for better user experience with long responses
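For the timeout bullet above, a generic wrapper works with any promise-returning call, including chat completions. `withTimeout` is an illustrative helper (the OpenAI SDK also accepts its own `timeout` client option):

```javascript
// Race a promise against a timeout; rejects if the operation takes too long.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

Usage: `await withTimeout(openai.chat.completions.create({ model, messages }), 60_000)`.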

Next Steps

Now that you're set up with the OpenAI SDK, you're ready to explore OpenRouter's full model catalog and start building.