English | 中文

AI Gateway

Reliably route to 200+ LLMs with 1 fast & friendly API

Gateway Demo


Gateway streamlines requests to 200+ open & closed source models with a unified API. It is also production-ready, with support for caching, fallbacks, retries, timeouts, and load balancing, and can be edge-deployed for minimum latency.

✅  Blazing fast (9.9x faster) with a tiny footprint (~45kb installed)
✅  Load balance across multiple models, providers, and keys
✅  Fallbacks make sure your app stays resilient
✅  Automatic Retries with exponential backoff come by default
✅  Configurable Request Timeouts to easily handle unresponsive LLM requests
✅  Multimodal support for routing across Vision, TTS, STT, Image Gen, and more models
✅  Plug-in middleware as needed
✅  Battle-tested over 300B tokens
✅  Enterprise-ready for enhanced security, scale, and custom deployments

How to Run the Gateway?

  1. Run it Locally for complete control & customization
  2. Hosted by Portkey for quick setup without infrastructure concerns
  3. Enterprise On-Prem for advanced features and dedicated support

Run it Locally

Run the following command in your terminal and it will spin up the Gateway on your local system:

npx @portkey-ai/gateway

Your AI Gateway is now running on http://localhost:8787 🚀
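
To verify it's up, you can hit the chat completions endpoint directly. Here's a minimal sketch in Python (the provider, key, and model are placeholders; the x-portkey-provider header is explained in the REST section below):

import os
import requests  # pip install requests

# Quick sanity check against the local Gateway; any supported provider works
resp = requests.post(
    "http://localhost:8787/v1/chat/completions",
    headers={
        "x-portkey-provider": "openai",
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={"model": "gpt-4o", "messages": [{"role": "user", "content": "Hi"}]},
)
print(resp.json()["choices"][0]["message"]["content"])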

Gateway is also edge-deployment ready. Explore deployment guides for Cloudflare, Docker, AWS, etc. here.

Gateway Hosted by Portkey

This same open-source Gateway powers the Portkey API, which processes billions of tokens daily and is in production with companies like Postman, Haptik, Turing, MultiOn, SiteGPT, and more.

Sign up for the free developer plan (10K requests/month) here, or discuss enterprise deployments here.


How to Use the Gateway?

Compatible with OpenAI API & SDK

Gateway is fully compatible with the OpenAI API & SDK, and extends them to call 200+ LLMs reliably. To use the Gateway through the OpenAI SDK, you only need to update the base URL and pass the provider name in the headers.

  • To use the hosted Gateway through Portkey, set your base URL to https://api.portkey.ai/v1
  • To run locally, set it to http://localhost:8787/v1

Let's see how we can use the Gateway to make an Anthropic request in the OpenAI spec below; the same pattern applies to all other providers.

Python

pip install openai portkey-ai

While instantiating your OpenAI client,

  1. Set the base_url to http://localhost:8787/v1 (or to PORTKEY_GATEWAY_URL through the Portkey SDK if you're using the hosted version)
  2. Pass the provider name in the default_headers param (here we use the createHeaders method from the Portkey SDK to auto-create the full header)

from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

gateway = OpenAI(
    api_key="ANTHROPIC_API_KEY",
    base_url=PORTKEY_GATEWAY_URL, # Or http://localhost:8787/v1 when running locally
    default_headers=createHeaders(
        provider="anthropic",
        api_key="PORTKEY_API_KEY" # Grab from https://app.portkey.ai # Not needed when running locally
    )
)

chat_complete = gateway.chat.completions.create(
    model="claude-3-sonnet-20240229",
    messages=[{"role": "user", "content": "What's a fractal?"}],
    max_tokens=512
)

If you want to run the Gateway locally, don't forget to run npx @portkey-ai/gateway in your terminal before this! Otherwise just sign up on Portkey and keep your Portkey API Key handy.
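
Streaming works through the same client. A minimal sketch, assuming your chosen provider supports streaming:

stream = gateway.chat.completions.create(
    model="claude-3-sonnet-20240229",
    messages=[{"role": "user", "content": "What's a fractal?"}],
    max_tokens=512,
    stream=True,  # chunks come back in the OpenAI streaming format
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")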

Node.js

Works the same as in Python. Add baseURL & defaultHeaders while instantiating your OpenAI client and pass the relevant provider details.

npm install openai portkey-ai

import OpenAI from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai'
 
const gateway = new OpenAI({
  apiKey: "ANTHROPIC_API_KEY",
  baseURL: PORTKEY_GATEWAY_URL, // Or http://localhost:8787/v1 when running locally
  defaultHeaders: createHeaders({
    provider: "anthropic",
    apiKey: "PORTKEY_API_KEY" // Grab from https://app.portkey.ai (not needed when running locally)
  })
});

async function main() {
  const chatCompletion = await gateway.chat.completions.create({
    messages: [{ role: 'user', content: 'Who are you?' }],
    model: 'claude-3-sonnet-20240229',
    max_tokens: 512
  });
  console.log(chatCompletion.choices[0].message.content);
}

main();

REST

In your OpenAI REST request,

  1. Change the request URL to https://api.portkey.ai/v1 (or http://localhost:8787/v1 if you're hosting locally)
  2. Pass an additional x-portkey-provider header with the provider's name
  3. Change the model name to the provider's model (here, claude-3-haiku-20240307)

curl 'http://localhost:8787/v1/chat/completions' \
  -H 'x-portkey-provider: anthropic' \
  -H "Authorization: Bearer $ANTHROPIC_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{ "model": "claude-3-haiku-20240229", "messages": [{"role": "user","content": "Hi"}] }'

For other providers, change the provider & model to their respective values.

Gateway Cookbooks

Trending Cookbooks

Latest Cookbooks

Supported Providers

Explore Gateway integrations with 20+ providers and 6+ frameworks.

The following providers are supported, including streaming:

  • OpenAI
  • Azure OpenAI
  • Anyscale
  • Google Gemini & PaLM
  • Anthropic
  • Cohere
  • Together AI
  • Perplexity
  • Mistral
  • Nomic
  • AI21
  • Stability AI
  • DeepInfra
  • Ollama
  • Novita AI

View the complete list of 200+ supported models here


Reliability Features

Fallbacks: Specify a prioritized list of LLMs. If the primary LLM fails, the Gateway automatically falls back to the next LLM in the list to ensure reliability.

Automatic Retries: The Gateway can automatically retry failed requests up to 5 times. A backoff strategy spaces out retry attempts to prevent network overload.

Load Balancing: Distribute load effectively across multiple API keys or providers based on custom weights to ensure high availability and optimal performance.

Request Timeouts: Manage unruly LLMs & latencies by setting up granular request timeouts, allowing automatic termination of requests that exceed a specified duration.

Reliability features are configured by passing a Gateway Config (JSON) with the x-portkey-config header, or with the config param in the SDKs.
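
For example, a minimal sketch that attaches a retry config to every request from the Python client above (assuming createHeaders accepts a config argument, per the Portkey docs; the keys are placeholders):

import json
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

# This Gateway Config is sent as the x-portkey-config header on every request
config = {"retry": {"attempts": 3}}

gateway = OpenAI(
    api_key="ANTHROPIC_API_KEY",
    base_url=PORTKEY_GATEWAY_URL,  # or http://localhost:8787/v1 when running locally
    default_headers=createHeaders(
        provider="anthropic",
        api_key="PORTKEY_API_KEY",  # not needed when running locally
        config=json.dumps(config),
    ),
)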

Example: Setting up Fallback from OpenAI to Anthropic

Write the fallback logic

{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "provider": "openai", "api_key": "OPENAI_API_KEY" },
    { "provider": "anthropic", "api_key": "ANTHROPIC_API_KEY" }
  ]
}

Use it while making your request

Portkey Gateway will automatically trigger Anthropic if the OpenAI request fails. Here $CONFIG holds the fallback config written above; the provider keys travel inside its targets:

REST

curl 'http://localhost:8787/v1/chat/completions' \
  -H "x-portkey-config: $CONFIG" \
  -H 'Content-Type: application/json' \
  -d '{ "model": "gpt-4o", "messages": [{"role": "user","content": "Hi"}] }'

You can also trigger Fallbacks only on specific status codes by passing an array of status codes with the on_status_codes param in strategy.
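
For instance, a sketch of such a config that falls back only on rate-limit and server errors (the specific status codes here are illustrative):

{
  "strategy": {
    "mode": "fallback",
    "on_status_codes": [429, 500, 502, 503]
  },
  "targets": [
    { "provider": "openai", "api_key": "OPENAI_API_KEY" },
    { "provider": "anthropic", "api_key": "ANTHROPIC_API_KEY" }
  ]
}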

Read the full Fallback documentation here.

Example: Loadbalance Requests across 3 Accounts

Write the load balancer config. Weights are relative, so these three targets with weight 1 each receive roughly a third of the traffic:

{
  "strategy": { "mode": "loadbalance" },
  "targets": [
    { "provider": "openai", "api_key": "ACCOUNT_1_KEY", "weight": 1 },
    { "provider": "openai", "api_key": "ACCOUNT_2_KEY", "weight": 1 },
    { "provider": "openai", "api_key": "ACCOUNT_3_KEY", "weight": 1 }
  ]
}

Pass the config while instantiating the OpenAI client

import OpenAI from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai'
 
const gateway = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    apiKey: "PORTKEY_API_KEY",
    config: "CONFIG_ID"
  })
});

Read the Loadbalancing docs here.

Automatic Retries

Similarly, you can write a Config that will attempt retries up to 5 times:

{
    "retry": { "attempts": 5 }
}

Read the full Retries documentation here.

Request Timeouts

Here, the request timeout of 10 seconds (10,000 ms) will be applied to *all* the targets:

{
  "strategy": { "mode": "fallback" },
  "request_timeout": 10000,
  "targets": [
    { "virtual_key": "open-ai-xxx" },
    { "virtual_key": "azure-open-ai-xxx" }
  ]
}
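
Timeouts can also be tightened per target. A sketch, assuming target-level request_timeout overrides as described in the Request Timeouts docs (an assumption; verify against the linked docs), where the Azure target gets a stricter 5-second limit:

{
  "strategy": { "mode": "fallback" },
  "request_timeout": 10000,
  "targets": [
    { "virtual_key": "open-ai-xxx" },
    { "virtual_key": "azure-open-ai-xxx", "request_timeout": 5000 }
  ]
}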

Read the full Request Timeouts documentation here.

Using Gateway Configs

Here's a guide to use the config object in your request.


Supported SDKs

  • Node.js / JS / TS: Portkey SDK, OpenAI SDK, LangchainJS, LlamaIndex.TS
  • Python: Portkey SDK, OpenAI SDK, Langchain, LlamaIndex
  • Go: go-openai
  • Java: openai-java
  • Rust: async-openai
  • Ruby: ruby-openai

Deploying the AI Gateway

See docs on installing the AI Gateway locally or deploying it on popular locations.


Gateway Enterprise Version

Make your AI app more reliable and forward-compatible, while ensuring complete data security and privacy.

✅  Secure Key Management - for role-based access control and tracking
✅  Simple & Semantic Caching - to serve repeat queries faster & save costs
✅  Access Control & Inbound Rules - to control which IPs and Geos can connect to your deployments
✅  PII Redaction - to automatically remove sensitive data from your requests and prevent inadvertent exposure
✅  SOC2, ISO, HIPAA, GDPR compliance - for best security practices
✅  Professional Support - along with feature prioritization

Schedule a call to discuss enterprise deployments


Contributing

The easiest way to contribute is to pick any issue with the good first issue tag 💪. Read the Contributing guidelines here.

Bug Report? File here | Feature Request? File here


Community

Join our growing community around the world, for help, ideas, and discussions on AI.

