Migrating from OpenAI

Since OmniaKey is fully compatible with the OpenAI API, migration requires just two changes:
  1. Base URL: Change to https://api.omniakey.com/v1
  2. API Key: Use your OmniaKey API key
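If your code constructs the client without explicit arguments, you may not need to touch the source at all: recent versions of the official `openai` Python SDK also read both values from the environment. A minimal sketch — the key value is a placeholder, and in practice you would export these in your shell or deployment config rather than set them in code:

```python
import os

# The stock OpenAI SDK picks these up automatically, so existing code
# that does `client = OpenAI()` can switch to OmniaKey with no edits.
os.environ["OPENAI_API_KEY"] = "your-omniakey-api-key"         # placeholder key
os.environ["OPENAI_BASE_URL"] = "https://api.omniakey.com/v1"

# from openai import OpenAI
# client = OpenAI()  # now talks to OmniaKey
```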

Before (OpenAI)

from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)

After (OmniaKey)

from openai import OpenAI

client = OpenAI(
    api_key="your-omniakey-api-key",
    base_url="https://api.omniakey.com/v1"  # Add this line
)

response = client.chat.completions.create(
    model="gpt-4o",  # Same model names work
    messages=[{"role": "user", "content": "Hello!"}]
)

That’s it! All request parameters, response formats, streaming, and error handling work exactly the same way.
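Streaming, for example, keeps the same `stream=True` flag and chunk shape. The sketch below defines a small helper for joining the text deltas; the commented-out request assumes the `client` from the example above:

```python
def join_deltas(deltas):
    """Concatenate incremental text deltas from a streamed response,
    skipping the None deltas that frame the stream."""
    return "".join(d for d in deltas if d)

# Identical to the OpenAI streaming API:
# stream = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": "Hello!"}],
#     stream=True,
# )
# print(join_deltas(chunk.choices[0].delta.content for chunk in stream))
```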

Migrating from Azure OpenAI

Azure OpenAI uses a different URL format and API version. Here’s how to switch:

Before (Azure)

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="your-azure-key",
    api_version="2024-02-15-preview",
    azure_endpoint="https://your-resource.openai.azure.com"
)

response = client.chat.completions.create(
    model="gpt-4o",  # deployment name
    messages=[{"role": "user", "content": "Hello!"}]
)

After (OmniaKey)

from openai import OpenAI  # Use standard OpenAI client

client = OpenAI(
    api_key="your-omniakey-api-key",
    base_url="https://api.omniakey.com/v1"
)

response = client.chat.completions.create(
    model="gpt-4o",  # Use standard model names
    messages=[{"role": "user", "content": "Hello!"}]
)
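Azure code often passes custom deployment names where OmniaKey expects standard model names. If renaming every call site at once is impractical, a small translation table can bridge the gap during migration; the deployment names below are hypothetical examples:

```python
# Hypothetical Azure deployment names mapped to standard model names.
DEPLOYMENT_TO_MODEL = {
    "my-gpt4o-deployment": "gpt-4o",
    "my-gpt4o-mini-deployment": "gpt-4o-mini",
}

def to_standard_model(name: str) -> str:
    # Unmapped names pass through unchanged, since most standard
    # model names already work as-is.
    return DEPLOYMENT_TO_MODEL.get(name, name)
```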

Migrating from Direct Provider APIs

If you’re calling Anthropic, Google, or other providers directly, OmniaKey gives you a unified interface:

Before (Multiple SDKs)

# Anthropic
import anthropic
claude_client = anthropic.Anthropic(api_key="sk-ant-xxx")
claude_response = claude_client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}]
)

# Google
import google.generativeai as genai
genai.configure(api_key="your-google-key")
gemini = genai.GenerativeModel("gemini-2.0-flash")
gemini_response = gemini.generate_content("Hello!")

After (Single SDK)

from openai import OpenAI

client = OpenAI(
    api_key="your-omniakey-api-key",
    base_url="https://api.omniakey.com/v1"
)

# Access all models through the same interface
claude_response = client.chat.completions.create(
    model="claude-3-5-sonnet",
    messages=[{"role": "user", "content": "Hello!"}]
)

gemini_response = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[{"role": "user", "content": "Hello!"}]
)
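Because every provider sits behind the same endpoint, the request payload is identical regardless of model. A sketch of fanning one prompt out to several models; the commented loop assumes the `client` from the example above:

```python
def build_request(model: str, prompt: str) -> dict:
    """Same chat-completion payload for any model behind OmniaKey."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

MODELS = ["gpt-4o", "claude-3-5-sonnet", "gemini-2.0-flash"]

# for model in MODELS:
#     response = client.chat.completions.create(**build_request(model, "Hello!"))
#     print(model, response.choices[0].message.content)
```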

What’s Compatible

Feature                          Status
Chat Completions                 Fully compatible
Streaming (SSE)                  Fully compatible
Function/Tool Calling            Fully compatible
Image Generation                 Fully compatible
Video Generation                 Extended API
Response Format (JSON Mode)      Fully compatible
Embeddings                       Coming soon
Audio/TTS                        Coming soon
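Since function/tool calling is fully compatible, existing OpenAI-style tool schemas carry over unchanged. A sketch with a hypothetical `get_weather` tool; the commented request assumes an OmniaKey client as shown above:

```python
# A tool definition in the standard OpenAI function-calling schema.
# `get_weather` is a hypothetical example tool.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Passed exactly as with OpenAI:
# response = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": "Weather in Paris?"}],
#     tools=[weather_tool],
# )
```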

Migration Checklist

  1. Get your OmniaKey API key: Sign up at omniakey.com and generate an API key from the Console.
  2. Update the base URL and API key: Change your client initialization to use https://api.omniakey.com/v1 as the base URL and your OmniaKey API key.
  3. Verify model names: Most model names work as-is. Check the Supported Models page if you encounter any issues.
  4. Test your integration: Run your existing test suite. Since the API is fully compatible, everything should work without changes.
  5. Monitor in the Console: Use the Console to monitor usage, latency, and costs in real time.
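The middle steps of the checklist can be rolled into a one-request smoke test. A sketch, assuming the `openai` package is installed and a hypothetical `OMNIAKEY_API_KEY` environment variable holds your key:

```python
import os

def smoke_test() -> str:
    """Send one request through OmniaKey and return the reply text."""
    from openai import OpenAI  # imported lazily so the helper is cheap to define
    client = OpenAI(
        api_key=os.environ["OMNIAKEY_API_KEY"],
        base_url="https://api.omniakey.com/v1",
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Reply with the word: pong"}],
    )
    return response.choices[0].message.content

# Only runs when a key is configured:
if os.environ.get("OMNIAKEY_API_KEY"):
    print(smoke_test())
```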

Need Help?

If you run into any issues during migration, reach out to us: