
axonhub

⚡️ Open-source AI Gateway — Use any SDK to call 100+ LLMs. Built-in failover, load balancing, cost control & end-to-end tracing.


README

AxonHub - All-in-one AI Development Platform

Use any SDK. Access any model. Zero code changes.



NOTE

  1. This project is maintained by an individual. The author makes no warranties and assumes no liability for risks arising from its use. Please evaluate carefully.
  2. The core scope of this project does not include 2api (subscription-to-API conversion). If you need that, consider other open-source projects focused on 2api.

📖 Project Introduction

All-in-one AI Development Platform

AxonHub is the AI gateway that lets you switch between model providers without changing a single line of code.

Whether you're using the OpenAI SDK, the Anthropic SDK, or any other AI SDK, AxonHub transparently translates your requests to work with any supported model provider. No refactoring and no SDK swaps: just change a configuration setting and you're done.

What it solves:

  • 🔒 Vendor lock-in - Switch from GPT-4 to Claude or Gemini instantly
  • 🔧 Integration complexity - One API format for 10+ providers
  • 📊 Observability gap - Complete request tracing out of the box
  • 💸 Cost control - Real-time usage tracking and budget management
AxonHub Architecture

Core Features

| Feature | What You Get |
| --- | --- |
| 🔄 Any SDK → Any Model | Use the OpenAI SDK to call Claude, or the Anthropic SDK to call GPT. Zero code changes. |
| 🔍 Full Request Tracing | Complete request timelines with thread-aware observability. Debug faster. |
| 🔐 Enterprise RBAC | Fine-grained access control, usage quotas, and data isolation. |
| ⚡ Smart Load Balancing | Auto failover in <100 ms. Always route to the healthiest channel. |
| 💰 Real-time Cost Tracking | Per-request cost breakdown. Input, output, and cache tokens are all tracked. |
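The per-request cost breakdown can be sketched as a simple calculation. The token categories and the per-million-token prices below are illustrative assumptions for the sketch, not AxonHub's actual pricing schema:

```python
# Illustrative sketch of a per-request cost breakdown.
# Field names and prices are hypothetical examples, not AxonHub's schema.

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int,
                 price_in: float, price_out: float, price_cached: float) -> float:
    """Cost in USD, with prices given per 1M tokens."""
    return (input_tokens * price_in
            + output_tokens * price_out
            + cached_tokens * price_cached) / 1_000_000

# Example: 1,200 input / 300 output / 800 cached tokens
# at $3.00 / $15.00 / $0.30 per 1M tokens.
cost = request_cost(1200, 300, 800, 3.00, 15.00, 0.30)
print(f"${cost:.6f}")  # → $0.008340
```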

📚 Documentation

For detailed technical documentation, API references, architecture design, and more, please visit the project documentation site.

Try AxonHub live at our demo instance!

Note: The demo instance is currently configured with free models from Zhipu and OpenRouter.

Demo Account


⭐ Features

📸 Screenshots

Here are some screenshots of AxonHub in action:

  • System Dashboard
  • Channel Management
  • Model Price
  • Models
  • Trace Viewer
  • Request Monitoring

🚀 API Types

| API Type | Status | Description | Document |
| --- | --- | --- | --- |
| Text Generation | ✅ Done | Conversational interface | OpenAI API, Anthropic API, Gemini API |
| Image Generation | ✅ Done | Image generation | Image Generation |
| Rerank | ✅ Done | Results ranking | Rerank API |
| Embedding | ✅ Done | Vector embedding generation | Embedding API |
| Realtime | 📝 Todo | Live conversation capabilities | - |

🤖 Supported Providers

| Provider | Status | Supported Models | Compatible APIs |
| --- | --- | --- | --- |
| OpenAI | ✅ Done | GPT-4, GPT-4o, GPT-5, etc. | OpenAI, Anthropic, Gemini, Embedding, Image Generation |
| Anthropic | ✅ Done | Claude 3.5, Claude 3.0, etc. | OpenAI, Anthropic, Gemini |
| Zhipu AI | ✅ Done | GLM-4.5, GLM-4.5-air, etc. | OpenAI, Anthropic, Gemini |
| Moonshot AI (Kimi) | ✅ Done | kimi-k2, etc. | OpenAI, Anthropic, Gemini |
| DeepSeek | ✅ Done | DeepSeek-V3.1, etc. | OpenAI, Anthropic, Gemini |
| ByteDance Doubao | ✅ Done | doubao-1.6, etc. | OpenAI, Anthropic, Gemini, Image Generation |
| Gemini | ✅ Done | Gemini 2.5, etc. | OpenAI, Anthropic, Gemini, Image Generation |
| Fireworks | ✅ Done | MiniMax-M2.5, GLM-5, Kimi K2.5, etc. | OpenAI |
| Jina AI | ✅ Done | Embeddings, Reranker, etc. | Jina Embedding, Jina Rerank |
| OpenRouter | ✅ Done | Various models | OpenAI, Anthropic, Gemini, Image Generation |
| ZAI | ✅ Done | - | Image Generation |
| AWS Bedrock | 🔄 Testing | Claude on AWS | OpenAI, Anthropic, Gemini |
| Google Cloud | 🔄 Testing | Claude on GCP | OpenAI, Anthropic, Gemini |
| NanoGPT | ✅ Done | Various models, Image Gen | OpenAI, Anthropic, Gemini, Image Generation |

🚀 Quick Start

30-Second Local Start

# Download and extract (macOS ARM64 example)
curl -sSL https://github.com/looplj/axonhub/releases/latest/download/axonhub_darwin_arm64.tar.gz | tar xz
cd axonhub_*

# Run with SQLite (default)
./axonhub

# Open http://localhost:8090
# First run: Follow the setup wizard to initialize the system (create admin account, password must be at least 6 characters)

That's it! Now configure your first AI channel and start calling models through AxonHub.

Zero-Code Migration Example

Your existing code works without any changes. Just point your SDK to AxonHub:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8090/v1",  # Point to AxonHub
    api_key="your-axonhub-api-key"        # Use AxonHub API key
)

# Call Claude using OpenAI SDK!
response = client.chat.completions.create(
    model="claude-3-5-sonnet",  # Or gpt-4, gemini-pro, deepseek-chat...
    messages=[{"role": "user", "content": "Hello!"}]
)

Switch models by changing one line: model="gpt-4" β†’ model="claude-3-5-sonnet". No SDK changes needed.

1-click Deploy to Render

Deploy AxonHub to Render with one click, for free.


🚀 Deployment Guide

💻 Personal Computer Deployment

Perfect for individual developers and small teams. No complex configuration required.

Quick Download & Run

  1. Download the latest release from GitHub Releases

    • Choose the appropriate version for your operating system.
  2. Extract and run

    # Extract the downloaded file
    unzip axonhub_*.zip
    cd axonhub_*

    # Add execution permission (Linux/macOS only)
    chmod +x axonhub

    # Option A: run directly with the default SQLite database
    ./axonhub

    # Option B: install AxonHub as a system service
    sudo ./install.sh

    # Start the AxonHub service
    ./start.sh

    # Stop the AxonHub service
    ./stop.sh
  3. Access the application

    http://localhost:8090
    

🖥️ Server Deployment

For production environments, high availability, and enterprise deployments.

Database Support

AxonHub supports multiple databases to meet different scale deployment needs:

| Database | Supported Versions | Recommended Scenario | Auto Migration | Links |
| --- | --- | --- | --- | --- |
| TiDB Cloud | Starter | Serverless, free tier, auto-scaling | ✅ Supported | TiDB Cloud |
| TiDB Cloud | Dedicated | Distributed deployment, large scale | ✅ Supported | TiDB Cloud |
| TiDB | v8.0+ | Distributed deployment, large scale | ✅ Supported | TiDB |
| Neon DB | - | Serverless, free tier, auto-scaling | ✅ Supported | Neon DB |
| PostgreSQL | 15+ | Production, medium to large deployments | ✅ Supported | PostgreSQL |
| MySQL | 8.0+ | Production, medium to large deployments | ✅ Supported | MySQL |
| SQLite | 3.0+ | Development, small deployments | ✅ Supported | SQLite |

Configuration

AxonHub uses YAML configuration files with environment variable override support:

# config.yml
server:
  port: 8090
  name: "AxonHub"
  debug: false

db:
  dialect: "tidb"
  dsn: "<USER>.root:<PASSWORD>@tcp(gateway01.us-west-2.prod.aws.tidbcloud.com:4000)/axonhub?tls=true&parseTime=true&multiStatements=true&charset=utf8mb4"

log:
  level: "info"
  encoding: "json"

Environment variables:

AXONHUB_SERVER_PORT=8090
AXONHUB_DB_DIALECT="tidb"
AXONHUB_DB_DSN="<USER>.root:<PASSWORD>@tcp(gateway01.us-west-2.prod.aws.tidbcloud.com:4000)/axonhub?tls=true&parseTime=true&multiStatements=true&charset=utf8mb4"
AXONHUB_LOG_LEVEL=info

For detailed configuration instructions, please refer to configuration documentation.
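The override behavior can be sketched as follows. The mapping rule (an AXONHUB_ prefix with underscores joining the nested YAML keys) is inferred from the variable names shown above; it is an assumption for illustration, not AxonHub's actual loader code:

```python
import os

# Sketch of env-var overrides for a nested config, assuming
# AXONHUB_<SECTION>_<KEY> maps onto config[section][key].
def apply_env_overrides(config: dict, prefix: str = "AXONHUB") -> dict:
    for section, values in config.items():
        for key in values:
            env_name = f"{prefix}_{section}_{key}".upper()
            if env_name in os.environ:
                values[key] = os.environ[env_name]
    return config

# The env var wins over the value loaded from config.yml.
config = {"server": {"port": 8090}, "log": {"level": "info"}}
os.environ["AXONHUB_LOG_LEVEL"] = "debug"
print(apply_env_overrides(config)["log"]["level"])  # → debug
```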

Docker Compose Deployment

# Clone project
git clone https://github.com/looplj/axonhub.git
cd axonhub

# Set environment variables
export AXONHUB_DB_DIALECT="tidb"
export AXONHUB_DB_DSN="<USER>.root:<PASSWORD>@tcp(gateway01.us-west-2.prod.aws.tidbcloud.com:4000)/axonhub?tls=true&parseTime=true&multiStatements=true&charset=utf8mb4"

# Start services
docker-compose up -d

# Check status
docker-compose ps

Helm Kubernetes Deployment

Deploy AxonHub on Kubernetes using the official Helm chart:

# Quick installation
git clone https://github.com/looplj/axonhub.git
cd axonhub
helm install axonhub ./deploy/helm

# Production deployment
helm install axonhub ./deploy/helm -f ./deploy/helm/values-production.yaml

# Access AxonHub
kubectl port-forward svc/axonhub 8090:8090
# Visit http://localhost:8090

Key Configuration Options:

Parameter Description Default
axonhub.replicaCount Replicas 1
axonhub.dbPassword DB password axonhub_password
postgresql.enabled Embedded PostgreSQL true
ingress.enabled Enable ingress false
persistence.enabled Data persistence false

For detailed configuration and troubleshooting, see Helm Chart Documentation.

Virtual Machine Deployment

Download the latest release from GitHub Releases

# Extract and run
unzip axonhub_*.zip
cd axonhub_*

# Set environment variables
export AXONHUB_DB_DIALECT="tidb"
export AXONHUB_DB_DSN="<USER>.root:<PASSWORD>@tcp(gateway01.us-west-2.prod.aws.tidbcloud.com:4000)/axonhub?tls=true&parseTime=true&multiStatements=true&charset=utf8mb4"

sudo ./install.sh

# Configuration file check
axonhub config check

# For simplicity, we recommend managing the AxonHub service with the helper scripts:

# Start
./start.sh

# Stop
./stop.sh

📖 Usage Guide

Unified API Overview

AxonHub provides a unified API gateway that supports both OpenAI Chat Completions and Anthropic Messages APIs. This means you can:

  • Use OpenAI API to call Anthropic models - Keep using your OpenAI SDK while accessing Claude models
  • Use Anthropic API to call OpenAI models - Use Anthropic's native API format with GPT models
  • Use Gemini API to call OpenAI models - Use Gemini's native API format with GPT models
  • Automatic API translation - AxonHub handles format conversion automatically
  • Zero code changes - Your existing OpenAI or Anthropic client code continues to work
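As a sketch of the Anthropic direction, the same gateway can be addressed in Anthropic Messages format. The payload fields and headers below follow Anthropic's public API convention; the exact endpoint path on AxonHub (/v1/messages here) is an assumption to verify against your deployment:

```python
import json
import os
import urllib.request

# Anthropic Messages-format request aimed at an AxonHub gateway.
# The /v1/messages path mirrors Anthropic's public API; whether AxonHub
# serves it at exactly this path is an assumption.
payload = {
    "model": "gpt-4o",  # an OpenAI model, called via Anthropic's format
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    "http://localhost:8090/v1/messages",
    data=json.dumps(payload).encode(),
    headers={
        "content-type": "application/json",
        "x-api-key": os.environ.get("AXONHUB_API_KEY", ""),
        "anthropic-version": "2023-06-01",
    },
)

# Only send the request when an API key is configured.
if os.environ.get("AXONHUB_API_KEY"):
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["content"][0]["text"])
```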

1. Initial Setup

  1. Access Management Interface

    http://localhost:8090
    
  2. Configure AI Providers

    • Add API keys in the management interface
    • Test connections to ensure correct configuration
  3. Create Users and Roles

    • Set up permission management
    • Assign appropriate access permissions

2. Channel Configuration

Configure AI provider channels in the management interface. For detailed information on channel configuration, including model mappings, parameter overrides, and troubleshooting, see the Channel Configuration Guide.

3. Model Management

AxonHub provides a flexible model management system that supports mapping abstract models to specific channels and model implementations through Model Associations. This enables:

  • Unified Model Interface - Use abstract model IDs (e.g., gpt-4, claude-3-opus) instead of channel-specific names
  • Intelligent Channel Selection - Automatically route requests to optimal channels based on association rules and load balancing
  • Flexible Mapping Strategies - Support for precise channel-model matching, regex patterns, and tag-based selection
  • Priority-based Fallback - Configure multiple associations with priorities for automatic failover

For comprehensive information on model management, including association types, configuration examples, and best practices, see the Model Management Guide.
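The priority-based fallback described above might behave like the following sketch. The association records and the health flag are hypothetical illustrations of the concept, not AxonHub's internal data model:

```python
# Hypothetical sketch of priority-based association fallback.
# The record shape and "healthy" flag are illustrative, not AxonHub's schema.
associations = [
    {"model": "gpt-4", "channel": "openai-primary",   "priority": 1, "healthy": False},
    {"model": "gpt-4", "channel": "azure-backup",     "priority": 2, "healthy": True},
    {"model": "gpt-4", "channel": "openrouter-spill", "priority": 3, "healthy": True},
]

def pick_channel(model: str) -> str:
    """Route to the healthy association with the best (lowest) priority."""
    candidates = [a for a in associations if a["model"] == model and a["healthy"]]
    if not candidates:
        raise RuntimeError(f"no healthy channel for {model}")
    return min(candidates, key=lambda a: a["priority"])["channel"]

# The primary channel is unhealthy, so traffic fails over to the next priority.
print(pick_channel("gpt-4"))  # → azure-backup
```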

4. Create API Keys

Create API keys to authenticate your applications with AxonHub. Each API key can be configured with multiple profiles that define:

  • Model Mappings - Transform user-requested models to actual available models using exact match or regex patterns
  • Channel Restrictions - Limit which channels an API key can use by channel IDs or tags
  • Model Access Control - Control which models are accessible through a specific profile
  • Profile Switching - Change behavior on-the-fly by activating different profiles

For detailed information on API key profiles, including configuration examples, validation rules, and best practices, see the API Key Profile Guide.
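Exact-match and regex model mappings of the kind described might resolve like this sketch; the mapping format is a hypothetical illustration, not AxonHub's actual profile configuration:

```python
import re

# Hypothetical illustration of profile model mappings, checked in order:
# exact matches or regex patterns, with unmapped models passed through.
mappings = [
    {"match": "gpt-4", "target": "gpt-4o", "regex": False},
    {"match": r"^claude-3-.*$", "target": "claude-3-5-sonnet", "regex": True},
]

def resolve_model(requested: str) -> str:
    for m in mappings:
        if m["regex"]:
            if re.match(m["match"], requested):
                return m["target"]
        elif m["match"] == requested:
            return m["target"]
    return requested  # pass through unmapped models unchanged

print(resolve_model("gpt-4"))           # → gpt-4o
print(resolve_model("claude-3-haiku"))  # → claude-3-5-sonnet
print(resolve_model("deepseek-chat"))   # → deepseek-chat
```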

5. AI Coding Tools Integration

See the dedicated guides for detailed setup steps, troubleshooting, and tips on combining these tools with AxonHub model profiles:


6. SDK Usage

For detailed SDK usage examples and code samples, please refer to the API documentation.

πŸ› οΈ Development Guide

For detailed development instructions, architecture design, and contribution guidelines, please see docs/en/development/development.md.


🤝 Acknowledgments


📄 License

This project is dual-licensed under Apache-2.0 and LGPL-3.0. See the LICENSE file for the detailed licensing overview and terms.


AxonHub - All-in-one AI Development Platform, making AI development simpler

🏠 Homepage • 📚 Documentation • 🐛 Issue Feedback

Built with ❤️ by the AxonHub team
