
bifrost

Fastest enterprise AI gateway (50x faster than LiteLLM) with adaptive load balancing, cluster mode, guardrails, support for 1,000+ models, and <100 µs overhead at 5k RPS.


README

Bifrost AI Gateway


The fastest way to build AI applications that never go down

Bifrost is a high-performance AI gateway that unifies access to 15+ providers (OpenAI, Anthropic, AWS Bedrock, Google Vertex, and more) through a single OpenAI-compatible API. Deploy in seconds with zero configuration and get automatic failover, load balancing, semantic caching, and enterprise-grade features.

Quick Start

Get started

Go from zero to production-ready AI gateway in under a minute.

Step 1: Start Bifrost Gateway

# Install and run locally
npx -y @maximhq/bifrost

# Or use Docker
docker run -p 8080:8080 maximhq/bifrost

Step 2: Configure via Web UI

# Open the built-in web interface
open http://localhost:8080

Step 3: Make your first API call

curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello, Bifrost!"}]
  }'

That's it! Your AI gateway is running with a web interface for visual configuration, real-time monitoring, and analytics.
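The same call can be prepared from Python with only the standard library. This sketch mirrors the curl request above; it only builds and inspects the request, since actually sending it assumes a gateway running on localhost:8080.

```python
import json
import urllib.request

# Build the same chat-completions payload as the curl example above.
payload = {
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello, Bifrost!"}],
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With a gateway running locally, send it with:
#   with urllib.request.urlopen(req) as resp:
#       reply = json.load(resp)
print(req.get_method(), req.full_url)
```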

Complete Setup Guides: see the documentation at https://docs.getbifrost.ai


Enterprise Deployments

Bifrost supports enterprise-grade private deployments for teams running production AI systems at scale. Beyond private networking, custom security controls, and governance, enterprise deployments unlock advanced capabilities including adaptive load balancing, clustering, guardrails, and an MCP gateway, all designed for enterprise-scale reliability.

Book a Demo


Key Features

Core Infrastructure

  • Unified Interface - Single OpenAI-compatible API for all providers
  • Multi-Provider Support - OpenAI, Anthropic, AWS Bedrock, Google Vertex, Azure, Cerebras, Cohere, Mistral, Ollama, Groq, and more
  • Automatic Fallbacks - Seamless failover between providers and models with zero downtime
  • Load Balancing - Intelligent request distribution across multiple API keys and providers
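Weighted key selection of the kind the load balancer performs can be sketched in a few lines of Python. This is purely illustrative of the idea (Bifrost's actual selection logic is implemented in Go); the key names and weights are made up.

```python
import random

random.seed(42)  # deterministic for the demo

def pick_key(weighted_keys):
    """Pick an API key with probability proportional to its weight."""
    keys = [name for name, _ in weighted_keys]
    weights = [weight for _, weight in weighted_keys]
    return random.choices(keys, weights=weights, k=1)[0]

# Hypothetical key pool: three keys sharing traffic 70/25/5.
pool = [("key-primary", 70), ("key-secondary", 25), ("key-backup", 5)]
counts = {name: 0 for name, _ in pool}
for _ in range(10_000):
    counts[pick_key(pool)] += 1
# Draw frequencies track the weights: key-primary ~70%, key-backup ~5%.
```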

Advanced Features

  • Model Context Protocol (MCP) - Enable AI models to use external tools (filesystem, web search, databases)
  • Semantic Caching - Intelligent response caching based on semantic similarity to reduce costs and latency
  • Multimodal Support - Text, images, audio, and streaming, all behind a common interface
  • Custom Plugins - Extensible middleware architecture for analytics, monitoring, and custom logic
  • Governance - Usage tracking, rate limiting, and fine-grained access control
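The idea behind semantic caching can be sketched with a toy in-memory cache keyed on embedding similarity. The class, threshold, and vectors below are all illustrative; Bifrost's semanticcache plugin uses real embedding models and a vector store.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class SemanticCache:
    """Toy semantic cache: reuse a stored response when a new query's
    embedding is close enough to a cached one."""

    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def get(self, embedding):
        for cached_emb, response in self.entries:
            if cosine(embedding, cached_emb) >= self.threshold:
                return response  # cache hit: skip the provider call
        return None

    def put(self, embedding, response):
        self.entries.append((embedding, response))

cache = SemanticCache()
cache.put([1.0, 0.0, 0.1], "Paris is the capital of France.")
hit = cache.get([0.99, 0.02, 0.11])  # near-identical phrasing -> hit
miss = cache.get([0.0, 1.0, 0.0])    # unrelated query -> miss
```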

Enterprise & Security

  • Budget Management - Hierarchical cost control with virtual keys, teams, and customer budgets
  • SSO Integration - Google and GitHub authentication support
  • Observability - Native Prometheus metrics, distributed tracing, and comprehensive logging
  • Vault Support - Secure API key management with HashiCorp Vault integration
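Hierarchical budget control of the kind described above (virtual keys rolling up to teams and customers) can be illustrated with a toy tree where spending is checked and recorded at every level. All names and limits here are hypothetical, not Bifrost's governance schema.

```python
class Budget:
    """Toy hierarchical budget node: spend recorded on a virtual key also
    counts against its team and customer ancestors."""

    def __init__(self, name, limit, parent=None):
        self.name = name
        self.limit = limit
        self.spent = 0.0
        self.parent = parent

    def can_spend(self, amount):
        # Walk up the tree; every ancestor's limit must allow the spend.
        node = self
        while node is not None:
            if node.spent + amount > node.limit:
                return False
            node = node.parent
        return True

    def spend(self, amount):
        if not self.can_spend(amount):
            raise ValueError(f"spend of {amount} rejected at {self.name}")
        node = self
        while node is not None:
            node.spent += amount
            node = node.parent

customer = Budget("acme-corp", limit=100.0)
team = Budget("ml-team", limit=40.0, parent=customer)
vkey = Budget("vk-staging", limit=10.0, parent=team)

vkey.spend(8.0)  # allowed: 8.0 fits under 10.0, 40.0, and 100.0
# A further 5.0 would break the virtual key's 10.0 limit and be rejected.
```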


Repository Structure

Bifrost uses a modular architecture for maximum flexibility:

bifrost/
├── npx/                 # NPX script for easy installation
├── core/                # Core functionality and shared components
│   ├── providers/       # Provider-specific implementations (OpenAI, Anthropic, etc.)
│   ├── schemas/         # Interfaces and structs used throughout Bifrost
│   └── bifrost.go       # Main Bifrost implementation
├── framework/           # Framework components for data persistence
│   ├── configstore/     # Configuration stores
│   ├── logstore/        # Request logging stores
│   └── vectorstore/     # Vector stores
├── transports/          # HTTP gateway and other interface layers
│   └── bifrost-http/    # HTTP transport implementation
├── ui/                  # Web interface for HTTP gateway
├── plugins/             # Extensible plugin system
│   ├── governance/      # Budget management and access control
│   ├── jsonparser/      # JSON parsing and manipulation utilities
│   ├── logging/         # Request logging and analytics
│   ├── maxim/           # Maxim's observability integration
│   ├── mocker/          # Mock responses for testing and development
│   ├── semanticcache/   # Intelligent response caching
│   └── telemetry/       # Monitoring and observability
├── docs/                # Documentation and guides
└── tests/               # Comprehensive test suites

Getting Started Options

Choose the deployment method that fits your needs:

1. Gateway (HTTP API)

Best for: Language-agnostic integration, microservices, and production deployments

# NPX - Get started in 30 seconds
npx -y @maximhq/bifrost

# Docker - Production ready
docker run -p 8080:8080 -v $(pwd)/data:/app/data maximhq/bifrost

Features: Web UI, real-time monitoring, multi-provider management, zero-config startup

Learn More: Gateway Setup Guide

2. Go SDK

Best for: Direct Go integration with maximum performance and control

go get github.com/maximhq/bifrost/core

Features: Native Go APIs, embedded deployment, custom middleware integration

Learn More: Go SDK Guide

3. Drop-in Replacement

Best for: Migrating existing applications with zero code changes

# OpenAI SDK
- base_url = "https://api.openai.com"
+ base_url = "http://localhost:8080/openai"

# Anthropic SDK  
- base_url = "https://api.anthropic.com"
+ base_url = "http://localhost:8080/anthropic"

# Google GenAI SDK
- api_endpoint = "https://generativelanguage.googleapis.com"
+ api_endpoint = "http://localhost:8080/genai"

Learn More: Integration Guides
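The base-URL swaps above all follow one pattern: keep the provider SDK, point it at the matching gateway route. A hypothetical helper makes that mapping explicit (the route prefixes come from the diffs above; the helper itself is not part of any Bifrost SDK).

```python
# Route prefixes taken from the base-URL diffs above; GATEWAY and the
# helper function are illustrative, not part of any Bifrost SDK.
GATEWAY = "http://localhost:8080"

ROUTES = {
    "openai": f"{GATEWAY}/openai",
    "anthropic": f"{GATEWAY}/anthropic",
    "genai": f"{GATEWAY}/genai",
}

def gateway_base_url(provider: str) -> str:
    """Return the Bifrost base URL to hand to a provider's SDK."""
    if provider not in ROUTES:
        raise ValueError(f"no gateway route configured for {provider!r}")
    return ROUTES[provider]

# e.g. pass gateway_base_url("openai") as base_url to the OpenAI SDK client
```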


Performance

Bifrost adds virtually zero overhead to your AI requests. In sustained 5,000 RPS benchmarks, the gateway added only 11 µs of overhead per request.

| Metric | t3.medium | t3.xlarge | Improvement |
| --- | --- | --- | --- |
| Added latency (Bifrost overhead) | 59 µs | 11 µs | -81% |
| Success rate @ 5k RPS | 100% | 100% | No failed requests |
| Avg. queue wait time | 47 µs | 1.67 µs | -96% |
| Avg. request latency (incl. provider) | 2.12 s | 1.61 s | -24% |

Key Performance Highlights:

  • Perfect Success Rate - 100% request success rate even at 5k RPS
  • Minimal Overhead - Less than 15 µs additional latency per request
  • Efficient Queuing - Sub-microsecond average wait times
  • Fast Key Selection - ~10 ns to pick weighted API keys
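To put the t3.xlarge numbers above in perspective, the 11 µs of gateway overhead is a tiny fraction of the 1.61 s average end-to-end latency:

```python
overhead_s = 11e-6      # added latency (t3.xlarge row in the table above)
avg_latency_s = 1.61    # avg. request latency including the provider call
share = overhead_s / avg_latency_s
print(f"Gateway overhead is {share:.7%} of total request latency")
# -> well under 0.001% of the total
```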

Complete Benchmarks: Performance Analysis


Documentation

Complete Documentation: https://docs.getbifrost.ai

  • Quick Start
  • Features
  • Integrations
  • Enterprise


Need Help?

Join our Discord for community support and discussions.

Get help with:

  • Quick setup assistance and troubleshooting
  • Best practices and configuration tips
  • Community discussions and support
  • Real-time help with integrations

Contributing

We welcome contributions of all kinds! See our Contributing Guide for:

  • Setting up the development environment
  • Code conventions and best practices
  • How to submit pull requests
  • Building and testing locally

For development requirements and build instructions, see our Development Setup Guide.


License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.

Built with ❤️ by Maxim

Release History

  • ent-v1.4.0-prerelease4-base (High, 4/21/2026) - fix(helm): use correct templating for mcpClientConfig by @crust3780 in https://github.com/maximhq/bifrost/pull/2673; docs updates by @akshaydeo in https://github.com/maximhq/bifrost/pull/2691; changelog update by @akshaydeo in https://github.com/maximhq/bifrost/pull/2693; fix(bedrock): preserve image content in tool results for Converse API by @Edward-Upton in https://github.com/maximhq/bifrost/pull/2658; chore: updates compat plugin docs by @sammaji
  • ent-v1.3.19-base (High, 4/18/2026) - change list identical to ent-v1.4.0-prerelease4-base above
  • helm-chart-v2.1.1 (High, 4/15/2026) - Helm chart release for Bifrost v2.1.1
  • helm-chart-v2.0.18-rc.1 (High, 4/14/2026) - Helm chart release for Bifrost v2.0.18-rc.1
  • plugins/telemetry/v1.5.2 (High, 4/13/2026) - chore: upgraded core to v1.5.2 and framework to v1.3.2; install with `go get github.com/maximhq/bifrost/plugins/telemetry@v1.5.2`
  • transports/v1.4.22 (High, 4/11/2026) - Features: next-step hints in OAuth MCP client creation response, Azure passthrough support, 272k token tier pricing, flex and priority tier pricing. Fixed: response parameter backfilling for chat completion and responses requests
  • ent-v1.4.0-prerelease2-base (High, 4/8/2026) - feat: Modifies Maxim plugin to support image generation requests by @roroghost17 in https://github.com/maximhq/bifrost/pull/2422; v1.5.0-prerelease1 docs by @akshaydeo in https://github.com/maximhq/bifrost/pull/2458; adds v1.5.0 changelogs by @akshaydeo in https://github.com/maximhq/bifrost/pull/2459; enterprise release notes by @akshaydeo in https://github.com/maximhq/bifrost/pull/2461; fix index to contain enterprise release notes by @akshaydeo


Similar Packages

  • ai-gateway (v1.0.4) - One API for 25+ LLMs: OpenAI, Anthropic, Bedrock, Azure. Caching, guardrails & cost controls. Go-native LiteLLM & Kong AI Gateway alternative.
  • mcp-yandex-tracker (main@2026-04-21) - Manage and automate tasks in Yandex Tracker using a robust MCP integration for efficient issue tracking and project control.
  • litellm (v1.83.7-stable) - Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, load balancing and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropi
  • plano (0.4.20) - Plano is an AI-native proxy and data plane for agentic apps, with built-in orchestration, safety, observability, and smart LLM routing so you stay focused on your agent's core logic.
  • developers-guide-to-ai (main@2026-04-09) - The Developer's Guide to AI - A Field Guide for the Working Developer