🎉 v0.5.0 Released! Structured error envelope (AdkError redesign), OpenAI Responses API client, OpenRouter deep integration, config validation, typed Runner::run() parameters, labs feature preset, provider_from_env() auto-detection, adk::run() one-liner, encrypted sessions with key rotation, graph durable resume, MCP resource API, Deepgram streaming STT, ToolSearchConfig for Anthropic. Breaking: AdkError is now a multi-axis struct, and Runner::run() takes UserId/SessionId types. See the CHANGELOG for full details and a migration guide.
Contributors: Many thanks to @mikefaille (AdkIdentity design, realtime audio, LiveKit bridge, skill system), @rohan-panickar (OpenAI-compatible providers, xAI, multimodal content), @dhruv-pant (Gemini service account auth), @danielsan (Google deps issue & PR: #181, #203; RAG crash report: #205), @CodingFlow (Gemini 3 thinking level, global endpoint, citationSources: #177, #178, #179), @ctylx (skill discovery fix, #204), and @poborin (project config proposal, #176).
Announcements: The ADK-Rust Roadmap for 2026 has launched; we welcome suggestions, comments, and ideas. The ADK Playground has also launched! You can now run 70+ ADK-Rust AI agents online for free at https://playground.adk-rust.com. Compile and click: no login, no install. More discussions are open as well; feel free to join in.
ADK-Rust is a production-ready Rust framework for building AI agents, enabling you to create powerful, high-performance agent systems with a flexible, modular architecture. Model-agnostic. Type-safe. Async.
cargo install cargo-adk
cargo adk new my-agent
cd my-agent && cargo run

Or pick a template: --template tools | rag | api | openai. See Quick Start for details.
ADK-Rust provides a comprehensive framework for building AI agents in Rust, featuring:
- Type-safe agent abstractions with async execution and event streaming
- Multiple agent types: LLM agents, workflow agents (sequential, parallel, loop), and custom agents
- Realtime voice agents: Bidirectional audio streaming with OpenAI Realtime API and Gemini Live API
- Tool ecosystem: Function tools, Google Search, MCP (Model Context Protocol) integration
- RAG pipeline: Document chunking, vector embeddings, semantic search with 6 vector store backends
- Security: Role-based access control, declarative scope-based tool security, SSO/OAuth, audit logging
- Agentic commerce: ACP and AP2 payment orchestration with durable transaction journals and evidence-backed recall
- Production features: Session management, artifact storage, memory systems, REST/A2A APIs
- Developer experience: Interactive CLI, 120+ working examples, comprehensive documentation
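To make the RAG chunking stage above concrete: splitting documents is essentially a sliding window with overlap over the source text. A minimal std-only sketch of the idea (not the adk-rag API; the function name and sizes here are illustrative):

```rust
/// Split text into overlapping character chunks, as a RAG ingester might.
/// `size` is the chunk length; `overlap` is how many chars adjacent chunks share.
fn chunk_chars(text: &str, size: usize, overlap: usize) -> Vec<String> {
    assert!(overlap < size, "overlap must be smaller than chunk size");
    let chars: Vec<char> = text.chars().collect();
    let mut chunks = Vec::new();
    let step = size - overlap;
    let mut start = 0;
    while start < chars.len() {
        let end = (start + size).min(chars.len());
        chunks.push(chars[start..end].iter().collect());
        if end == chars.len() {
            break; // final (possibly short) chunk emitted
        }
        start += step;
    }
    chunks
}

fn main() {
    for chunk in chunk_chars("The quick brown fox jumps over the lazy dog", 16, 4) {
        println!("{chunk:?}");
    }
}
```

The overlap keeps context that straddles a chunk boundary retrievable from either neighbor; production chunkers split on tokens or sentences rather than raw characters.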
Status: Production-ready, actively maintained
ADK-Rust follows a clean layered architecture from application interface down to foundational services.
LLM Agents: Powered by large language models with tool use, function calling, and streaming responses.
Workflow Agents: Deterministic orchestration patterns.
- SequentialAgent: Execute agents in sequence
- ParallelAgent: Execute agents concurrently
- LoopAgent: Iterative execution with exit conditions
Custom Agents: Implement the Agent trait for specialized behavior.
Realtime Voice Agents: Build voice-enabled AI assistants with bidirectional audio streaming.
Graph Agents: LangGraph-style workflow orchestration with state management and checkpointing.
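The graph-agent idea reduces to threading a shared state through nodes and snapshotting it between steps so a run can resume from the last checkpoint. A toy sketch in plain Rust (not the adk-graph API; the `State`/`Node` types are invented for illustration):

```rust
use std::collections::HashMap;

type State = HashMap<String, String>;
type Node = fn(&mut State);

/// Run nodes in order, cloning the state after each step as a checkpoint.
fn run_graph(nodes: &[(&str, Node)], mut state: State) -> (State, Vec<State>) {
    let mut checkpoints = Vec::new();
    for (name, node) in nodes {
        node(&mut state);
        println!("checkpoint after node {name:?}");
        checkpoints.push(state.clone());
    }
    (state, checkpoints)
}

fn main() {
    let translate: Node = |s| {
        let input = s.get("input").cloned().unwrap_or_default();
        s.insert("translation".into(), format!("fr({input})"));
    };
    let summarize: Node = |s| {
        s.insert("summary".into(), "one sentence".into());
    };
    let mut state = State::new();
    state.insert("input".into(), "hello".into());
    let (done, cps) = run_graph(&[("translate", translate), ("summarize", summarize)], state);
    println!("{done:?} ({} checkpoints)", cps.len());
}
```

Durable resume then amounts to persisting each checkpoint and replaying from the last one instead of from scratch.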
ADK supports multiple LLM providers with a unified API:
| Provider | Model Examples | Feature Flag |
|---|---|---|
| Gemini | gemini-2.5-flash, gemini-2.5-pro, gemini-3-pro-preview, gemini-3-flash-preview | (default) |
| OpenAI | gpt-5, gpt-5-mini, gpt-5-nano | openai |
| OpenAI Responses API | gpt-4.1, o3, o4-mini | openai |
| Anthropic | claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5 | anthropic |
| DeepSeek | deepseek-chat, deepseek-reasoner | deepseek |
| Groq | meta-llama/llama-4-scout-17b-16e-instruct, llama-3.3-70b-versatile | groq |
| Ollama | llama3.2:3b, qwen2.5:7b, mistral:7b | ollama |
| Fireworks AI | accounts/fireworks/models/llama-v3p1-8b-instruct | openai (preset) |
| Together AI | meta-llama/Llama-3.3-70B-Instruct-Turbo | openai (preset) |
| Mistral AI | mistral-small-latest | openai (preset) |
| Perplexity | sonar | openai (preset) |
| Cerebras | llama-3.3-70b | openai (preset) |
| SambaNova | Meta-Llama-3.3-70B-Instruct | openai (preset) |
| xAI (Grok) | grok-3-mini | openai (preset) |
| Amazon Bedrock | anthropic.claude-sonnet-4-20250514-v1:0 | bedrock |
| Azure AI Inference | (endpoint-specific) | azure-ai |
| mistral.rs | Phi-3, Mistral, Llama, Gemma, LLaVa, FLUX | git dependency |
All providers support streaming, function calling, and multimodal inputs (where available).
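"Unified API" here means agent code programs against one model abstraction and providers slot in behind it, so swapping Gemini for OpenAI is a one-line change. A simplified sketch of the pattern (the trait and names are illustrative, not ADK's actual model trait):

```rust
use std::sync::Arc;

/// A provider-agnostic model interface, in the spirit of ADK's abstraction.
trait Model {
    fn name(&self) -> &str;
    fn generate(&self, prompt: &str) -> String;
}

/// Stand-in for a concrete provider such as GeminiModel or OpenAIClient.
struct EchoModel;

impl Model for EchoModel {
    fn name(&self) -> &str { "echo" }
    fn generate(&self, prompt: &str) -> String {
        format!("echo: {prompt}")
    }
}

/// Agent code only ever sees `dyn Model`, never a concrete provider.
fn ask(model: &Arc<dyn Model>, prompt: &str) -> String {
    model.generate(prompt)
}

fn main() {
    let model: Arc<dyn Model> = Arc::new(EchoModel);
    println!("[{}] {}", model.name(), ask(&model, "hi"));
}
```

This is why the examples below all end with the same `.model(Arc::new(model))` call regardless of provider.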
Define tools with zero boilerplate using the #[tool] macro:
use adk_tool::{tool, AdkError};
use schemars::JsonSchema;
use serde::Deserialize;
use serde_json::{json, Value};
#[derive(Deserialize, JsonSchema)]
struct WeatherArgs {
/// The city to look up
city: String,
}
/// Get the current weather for a city.
#[tool]
async fn get_weather(args: WeatherArgs) -> std::result::Result<Value, AdkError> {
Ok(json!({ "temp": 72, "city": args.city }))
}
// Use it: agent_builder.tool(Arc::new(GetWeather))

The macro reads the doc comment as the description, derives the JSON schema from the args type, and generates a Tool impl. No manual schema writing, no boilerplate.
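Roughly, the macro hand-writes what you would otherwise write yourself: a unit struct implementing a tool trait whose metadata comes from the doc comment and the argument type's derived schema. A hand-written approximation of that shape (the trait and method names below are illustrative, not adk-tool's exact expansion):

```rust
/// Approximation of the trait the `#[tool]` macro implements for you.
trait Tool {
    fn name(&self) -> &'static str;
    fn description(&self) -> &'static str;
    fn schema(&self) -> &'static str; // JSON schema derived from the args struct
    fn call(&self, city: &str) -> String;
}

/// What `#[tool] async fn get_weather(...)` conceptually expands to.
struct GetWeather;

impl Tool for GetWeather {
    fn name(&self) -> &'static str { "get_weather" }
    fn description(&self) -> &'static str { "Get the current weather for a city." }
    fn schema(&self) -> &'static str {
        r#"{"type":"object","properties":{"city":{"type":"string"}}}"#
    }
    fn call(&self, city: &str) -> String {
        format!(r#"{{"temp":72,"city":"{city}"}}"#)
    }
}

fn main() {
    let tool = GetWeather;
    println!("{} ({}): {}", tool.name(), tool.description(), tool.call("Paris"));
}
```

The name, description, and schema are what the LLM sees when deciding whether and how to call the tool.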
Built-in tools:
- #[tool] macro (zero-boilerplate custom tools)
- Function tools (custom Rust functions)
- Google Search
- Artifact loading
- Loop termination
MCP Integration: Connect to Model Context Protocol servers for extended capabilities. Supports MCP Elicitation β servers can request additional user input at runtime via structured forms or URLs.
- Session Management: In-memory and SQLite-backed sessions with state persistence, encrypted sessions with AES-256-GCM and key rotation
- Memory System: Long-term memory with semantic search and vector embeddings
- Servers: REST API with SSE streaming, A2A protocol for agent-to-agent communication
- Guardrails: PII redaction, content filtering, JSON schema validation
- Payments: ACP and AP2 commerce support through adk-payments
- Observability: OpenTelemetry tracing, structured logging
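To make the guardrail bullet concrete: PII redaction is at heart a scan-and-mask pass over model input or output. A deliberately naive std-only sketch of the mechanism (adk-guardrail's real filters are configurable and match far more patterns):

```rust
/// Mask any run of 7+ consecutive digits (phone- or card-like numbers).
/// This shows only the mechanism; real guardrails match many PII patterns.
fn redact_long_digit_runs(text: &str) -> String {
    let mut out = String::new();
    let mut run = String::new();
    // Sentinel '\0' flushes a trailing digit run at end of input.
    for ch in text.chars().chain(std::iter::once('\0')) {
        if ch.is_ascii_digit() {
            run.push(ch);
        } else {
            if run.len() >= 7 {
                out.push_str("[REDACTED]");
            } else {
                out.push_str(&run);
            }
            run.clear();
            if ch != '\0' {
                out.push(ch);
            }
        }
    }
    out
}

fn main() {
    println!("{}", redact_long_digit_runs("Call 5551234567, order #42"));
}
```

A guardrail layer runs passes like this on input before the model sees it and on output before the user does.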
| Crate | Purpose | Key Features |
|---|---|---|
| adk-core | Foundational traits and types | Agent trait, Content, Part, error types, streaming primitives |
| adk-agent | Agent implementations | LlmAgent, SequentialAgent, ParallelAgent, LoopAgent, builder patterns |
| adk-skill | AgentSkills parsing and selection | Skill markdown parser, .skills discovery/indexing, lexical matching, prompt injection helpers |
| adk-model | LLM integrations | Gemini, OpenAI, Anthropic, DeepSeek, Groq, Ollama, Bedrock, Azure AI + OpenAI-compatible presets (Fireworks, Together, Mistral, Perplexity, Cerebras, SambaNova, xAI) |
| adk-gemini | Gemini client | Google Gemini API client with streaming and multimodal support |
| adk-anthropic | Anthropic client | Dedicated Anthropic API client with streaming, thinking, caching, citations, vision, PDF, pricing |
| adk-mistralrs | Native local inference | mistral.rs integration, ISQ quantization, LoRA adapters (git-only) |
| adk-tool | Tool system and extensibility | FunctionTool, Google Search, MCP protocol with elicitation, schema validation |
| adk-session | Session and state management | SQLite/in-memory backends, conversation history, state persistence |
| adk-artifact | Artifact storage system | File-based storage, MIME type handling, image/PDF/video support |
| adk-memory | Long-term memory | Vector embeddings, semantic search, Qdrant integration |
| adk-payments | Agentic commerce orchestration | ACP/AP2 adapters, canonical transaction kernel, durable journals, evidence-backed payment flows |
| adk-rag | RAG pipeline | Document chunking, embeddings, vector search, reranking, 6 backends |
| adk-runner | Agent execution runtime | Context management, event streaming, session lifecycle, callbacks |
| adk-server | Production API servers | REST API, A2A protocol, middleware, health checks |
| adk-cli | Command-line interface | Interactive REPL, session management, MCP server integration |
| adk-realtime | Real-time voice agents | OpenAI Realtime API, Gemini Live API, bidirectional audio, VAD |
| adk-graph | Graph-based workflows | LangGraph-style orchestration, state management, checkpointing, human-in-the-loop |
| adk-browser | Browser automation | 46 WebDriver tools, navigation, forms, screenshots, PDF generation |
| adk-eval | Agent evaluation | Test definitions, trajectory validation, LLM-judged scoring, rubrics |
| adk-guardrail | Input/output validation | PII redaction, content filtering, JSON schema validation |
| adk-auth | Access control | Role-based permissions, declarative scope-based security, SSO/OAuth, audit logging |
| adk-telemetry | Observability | Structured logging, OpenTelemetry tracing, span helpers |
Extracted to standalone repos: adk-ui (dynamic UI generation), adk-studio (visual agent builder), adk-playground (120+ examples).
cargo install cargo-adk
cargo adk new my-agent # basic Gemini agent
cargo adk new my-agent --template tools # agent with #[tool] custom tools
cargo adk new my-agent --template rag # RAG with vector search
cargo adk new my-agent --template api # REST server
cargo adk new my-agent --template openai # OpenAI-powered agent
cd my-agent
cp .env.example .env # add your API key
cargo run

Requires Rust 1.85 or later (Rust 2024 edition). Add to your Cargo.toml:
[dependencies]
adk-rust = "0.5.0" # Standard: agents, models, tools, sessions, runner, server, CLI
# Need graph, browser, eval, realtime, audio, RAG?
# adk-rust = { version = "0.5.0", features = ["full"] }

Set your API key:
# For Gemini (default)
export GOOGLE_API_KEY="your-api-key"
# For OpenAI
export OPENAI_API_KEY="your-api-key"
# For Anthropic
export ANTHROPIC_API_KEY="your-api-key"
# For DeepSeek
export DEEPSEEK_API_KEY="your-api-key"
# For Groq
export GROQ_API_KEY="your-api-key"
# For Fireworks AI
export FIREWORKS_API_KEY="your-api-key"
# For Together AI
export TOGETHER_API_KEY="your-api-key"
# For Mistral AI
export MISTRAL_API_KEY="your-api-key"
# For Perplexity
export PERPLEXITY_API_KEY="your-api-key"
# For Cerebras
export CEREBRAS_API_KEY="your-api-key"
# For SambaNova
export SAMBANOVA_API_KEY="your-api-key"
# For Azure AI Inference
export AZURE_AI_API_KEY="your-api-key"
# For Amazon Bedrock (uses AWS IAM credentials)
# Configure via: aws configure
# For Ollama (no key; just run: ollama serve)

The simplest way to run an agent is one function call that auto-detects your provider from environment variables:
use adk_rust::run;
#[tokio::main]
async fn main() -> anyhow::Result<()> {
dotenvy::dotenv().ok();
// Set ANTHROPIC_API_KEY, OPENAI_API_KEY, or GOOGLE_API_KEY
let response = run("You are a helpful assistant.", "What is 2 + 2?").await?;
println!("{response}");
Ok(())
}

provider_from_env() checks env vars in order: ANTHROPIC_API_KEY, then OPENAI_API_KEY, then GOOGLE_API_KEY. First match wins.
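That precedence check behaves like a first-match scan over known key names. A sketch of the logic (the lookup is injected as a closure so it is easy to test; this mirrors, but is not, the adk-rust implementation):

```rust
/// Return the first provider whose API-key variable is set, in ADK's
/// documented precedence order: Anthropic, then OpenAI, then Gemini.
fn detect_provider(get_var: impl Fn(&str) -> Option<String>) -> Option<&'static str> {
    const ORDER: [(&str, &str); 3] = [
        ("ANTHROPIC_API_KEY", "anthropic"),
        ("OPENAI_API_KEY", "openai"),
        ("GOOGLE_API_KEY", "gemini"),
    ];
    for (var, provider) in ORDER {
        if get_var(var).is_some() {
            return Some(provider);
        }
    }
    None
}

fn main() {
    // In real code you would pass `|k| std::env::var(k).ok()`.
    let provider = detect_provider(|k| std::env::var(k).ok());
    println!("detected provider: {provider:?}");
}
```

One consequence worth knowing: if both ANTHROPIC_API_KEY and OPENAI_API_KEY are set, Anthropic wins.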
use adk_rust::prelude::*;
use adk_rust::Launcher;
#[tokio::main]
async fn main() -> AnyhowResult<()> {
dotenvy::dotenv().ok();
let api_key = std::env::var("GOOGLE_API_KEY")?;
let model = GeminiModel::new(&api_key, "gemini-2.5-flash")?;
let agent = LlmAgentBuilder::new("assistant")
.description("Helpful AI assistant")
.instruction("You are a helpful assistant. Be concise and accurate.")
.model(Arc::new(model))
.build()?;
Launcher::new(Arc::new(agent)).run().await?;
Ok(())
}

use adk_rust::prelude::*;
use adk_rust::Launcher;
#[tokio::main]
async fn main() -> AnyhowResult<()> {
dotenvy::dotenv().ok();
let api_key = std::env::var("OPENAI_API_KEY")?;
let model = OpenAIClient::new(OpenAIConfig::new(api_key, "gpt-5-mini"))?;
let agent = LlmAgentBuilder::new("assistant")
.instruction("You are a helpful assistant.")
.model(Arc::new(model))
.build()?;
Launcher::new(Arc::new(agent)).run().await?;
Ok(())
}

Uses the /v1/responses endpoint, recommended for reasoning models (o3, o4-mini) and built-in tools:
use adk_rust::prelude::*;
use adk_rust::Launcher;
use adk_model::openai::{OpenAIResponsesClient, OpenAIResponsesConfig};
#[tokio::main]
async fn main() -> AnyhowResult<()> {
dotenvy::dotenv().ok();
let api_key = std::env::var("OPENAI_API_KEY")?;
let config = OpenAIResponsesConfig::new(api_key, "gpt-4.1-mini");
let model = OpenAIResponsesClient::new(config)?;
let agent = LlmAgentBuilder::new("assistant")
.instruction("You are a helpful assistant.")
.model(Arc::new(model))
.build()?;
Launcher::new(Arc::new(agent)).run().await?;
Ok(())
}

use adk_rust::prelude::*;
use adk_rust::Launcher;
#[tokio::main]
async fn main() -> AnyhowResult<()> {
dotenvy::dotenv().ok();
let api_key = std::env::var("ANTHROPIC_API_KEY")?;
let model = AnthropicClient::new(AnthropicConfig::new(api_key, "claude-sonnet-4-6"))?;
let agent = LlmAgentBuilder::new("assistant")
.instruction("You are a helpful assistant.")
.model(Arc::new(model))
.build()?;
Launcher::new(Arc::new(agent)).run().await?;
Ok(())
}

use adk_rust::prelude::*;
use adk_rust::Launcher;
#[tokio::main]
async fn main() -> AnyhowResult<()> {
dotenvy::dotenv().ok();
let api_key = std::env::var("DEEPSEEK_API_KEY")?;
// Standard chat model
let model = DeepSeekClient::chat(api_key)?;
// Or use reasoner for chain-of-thought reasoning
// let model = DeepSeekClient::reasoner(api_key)?;
let agent = LlmAgentBuilder::new("assistant")
.instruction("You are a helpful assistant.")
.model(Arc::new(model))
.build()?;
Launcher::new(Arc::new(agent)).run().await?;
Ok(())
}

use adk_rust::prelude::*;
use adk_rust::Launcher;
#[tokio::main]
async fn main() -> AnyhowResult<()> {
dotenvy::dotenv().ok();
let api_key = std::env::var("GROQ_API_KEY")?;
let model = GroqClient::new(GroqConfig::llama70b(api_key))?;
let agent = LlmAgentBuilder::new("assistant")
.instruction("You are a helpful assistant.")
.model(Arc::new(model))
.build()?;
Launcher::new(Arc::new(agent)).run().await?;
Ok(())
}

use adk_rust::prelude::*;
use adk_rust::Launcher;
#[tokio::main]
async fn main() -> AnyhowResult<()> {
dotenvy::dotenv().ok();
// Requires: ollama serve && ollama pull llama3.2
let model = OllamaModel::new(OllamaConfig::new("llama3.2"))?;
let agent = LlmAgentBuilder::new("assistant")
.instruction("You are a helpful assistant.")
.model(Arc::new(model))
.build()?;
Launcher::new(Arc::new(agent)).run().await?;
Ok(())
}

Examples live in the dedicated adk-playground repo (120+ examples covering every feature and provider).
git clone https://github.com/zavora-ai/adk-playground.git
cd adk-playground
cargo run --example quickstart

| Project | Description |
|---|---|
| adk-studio | Visual agent builder: drag-and-drop canvas, code generation, live testing |
| adk-ui | Dynamic UI generation: 28 components, React client, streaming updates |
| adk-playground | 120+ working examples for every feature and provider |
Build voice-enabled AI assistants using the adk-realtime crate:
use adk_realtime::{RealtimeAgent, openai::OpenAIRealtimeModel, RealtimeModel};
use std::sync::Arc;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("OPENAI_API_KEY")?;
    let model: Arc<dyn RealtimeModel> = Arc::new(
        OpenAIRealtimeModel::new(&api_key, "gpt-4o-realtime-preview-2024-12-17")
    );
let agent = RealtimeAgent::builder("voice_assistant")
.model(model)
.instruction("You are a helpful voice assistant.")
.voice("alloy")
.server_vad() // Enable voice activity detection
.build()?;
Ok(())
}

Supported Realtime Models:
| Provider | Model | Transport | Feature Flag |
|---|---|---|---|
| OpenAI | gpt-4o-realtime-preview-2024-12-17 | WebSocket | openai |
| OpenAI | gpt-realtime | WebSocket | openai |
| OpenAI | gpt-4o-realtime-* | WebRTC | openai-webrtc |
| Gemini | gemini-live-2.5-flash-native-audio | WebSocket | gemini |
| Gemini via Vertex AI | | WebSocket + OAuth2 | vertex-live |
| LiveKit | Any (bridge to Gemini/OpenAI) | WebRTC | livekit |
Features:
- OpenAI Realtime API and Gemini Live API support
- Vertex AI Live with Application Default Credentials (ADC)
- LiveKit WebRTC bridge for production-grade audio routing
- OpenAI WebRTC transport with Opus codec and data channels
- Bidirectional audio streaming (PCM16, G711, Opus)
- Server-side Voice Activity Detection (VAD)
- Mid-session context mutation: swap instructions and tools without dropping the call
- Real-time tool calling during voice conversations
- Multi-agent handoffs for complex workflows
- Zero-allocation LiveKit audio output path
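Voice Activity Detection, listed above, comes down to deciding per audio frame whether speech energy exceeds a threshold. A toy energy-based detector over PCM16 samples (illustrative only; the VAD used by the Realtime APIs runs server-side and is far more sophisticated):

```rust
/// Classify a PCM16 frame as speech if its mean absolute amplitude exceeds
/// a threshold. Real VADs add smoothing, hangover, and spectral features.
fn is_speech(frame: &[i16], threshold: f64) -> bool {
    if frame.is_empty() {
        return false;
    }
    let mean_abs: f64 =
        frame.iter().map(|s| (*s as f64).abs()).sum::<f64>() / frame.len() as f64;
    mean_abs > threshold
}

fn main() {
    let silence = vec![0i16; 160]; // 10 ms of silence at 16 kHz
    let speech: Vec<i16> = (0..160).map(|i| if i % 2 == 0 { 8000 } else { -8000 }).collect();
    println!("silence: {}", is_speech(&silence, 500.0));
    println!("speech:  {}", is_speech(&speech, 500.0));
}
```

Server-side VAD means the model provider runs this kind of decision and signals turn boundaries back to your agent, so you stream raw audio and react to events.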
Run realtime examples (from adk-playground):
# OpenAI Realtime (WebSocket)
cargo run --example realtime_basic --features realtime-openai
cargo run --example realtime_tools --features realtime-openai
cargo run --example realtime_handoff --features realtime-openai
# Vertex AI Live (requires gcloud auth application-default login)
cargo run -p adk-realtime --example vertex_live_voice --features vertex-live
cargo run -p adk-realtime --example vertex_live_tools --features vertex-live
# LiveKit Bridge (requires LiveKit server)
cargo run -p adk-realtime --example livekit_bridge --features livekit,openai
# OpenAI WebRTC (requires cmake)
cargo run -p adk-realtime --example openai_webrtc --features openai-webrtc
# Mid-session context mutation
cargo run -p adk-realtime --example openai_session_update --features openai
cargo run -p adk-realtime --example gemini_context_mutation --features gemini

Build complex, stateful workflows using the adk-graph crate (LangGraph-style):
use adk_graph::{prelude::*, node::AgentNode};
use adk_agent::LlmAgentBuilder;
use adk_model::GeminiModel;
use serde_json::json;
use std::collections::HashMap;
use std::sync::Arc;
// Create LLM agents for different tasks, sharing one model instance
let model = Arc::new(GeminiModel::new(&api_key, "gemini-2.5-flash")?);
let translator = Arc::new(LlmAgentBuilder::new("translator")
    .model(model.clone())
    .instruction("Translate the input text to French.")
    .build()?);
let summarizer = Arc::new(LlmAgentBuilder::new("summarizer")
    .model(model.clone())
    .instruction("Summarize the input text in one sentence.")
    .build()?);
// Create AgentNodes with custom input/output mappers
let translator_node = AgentNode::new(translator)
.with_input_mapper(|state| {
let text = state.get("input").and_then(|v| v.as_str()).unwrap_or("");
adk_core::Content::new("user").with_text(text)
})
.with_output_mapper(|events| {
let mut updates = HashMap::new();
for event in events {
if let Some(content) = event.content() {
let text: String = content.parts.iter()
.filter_map(|p| p.text())
.collect::<Vec<_>>()
.join("");
updates.insert("translation".to_string(), json!(text));
}
}
updates
});
// Build graph with parallel execution
let agent = GraphAgent::builder("text_processor")
.description("Translates and summarizes text in parallel")
.channels(&["input", "translation"])
// … (remaining graph construction elided)

Release History

| Version | Changes | Urgency | Date |
|---|---|---|---|
| v0.6.0 | 🎉 ADK-Rust v0.6.0: 31 crates published to crates.io. Listen to the podcast: https://github.com/zavora-ai/adk-rust#-rust--beyond-podcast--episode-1-what-is-adk-rust. 🎧 Rust & Beyond Podcast Episode 1 is live on the README: a 2:21 podcast about ADK-Rust generated entirely by the framework using Gemini 3.1 Flash TTS. Two AI hosts, natural voices, zero manual editing. Highlights: Multimodal Function Responses (adk-core, adk-gemini, adk-model, adk-agent). | High | 4/16/2026 |
| v0.5.0 | ADK-Rust v0.5.0: 31 crates published to crates.io, the largest ADK-Rust release to date. Highlights: Zero-Config Ergonomics: provider_from_env() auto-detects the LLM provider from environment variables (Anthropic, then OpenAI, then Gemini precedence); adk::run(instructions, input) gives single-function agent invocation with auto provider detection, session creation, and execution. Prompt Caching Enabled by Default: Anthropic prompt_caching now defaults to true. | Medium | 3/29/2026 |

