
Agently


Description

[GenAI Application Development Framework] 🚀 Build GenAI applications quickly and easily 💬 Interact with GenAI agents in code using structured data and chained-call syntax 🧩 Use the event-driven flow engine *TriggerFlow* to manage complex GenAI working logic 🔀 Switch to any model without rewriting application code

README


Agently 4.1 — AI Application Development Framework

Build production-grade AI applications with stable outputs, composable agents, observable actions, and testable workflows.

English | 中文介绍

🔥 Docs · 🚀 Quickstart · 🏗️ Architecture · 💡 Capabilities · 🧩 Ecosystem


Why Agently?

LangChain, CrewAI, and AutoGen each solve a real problem — but they optimize for exploration, not delivery. Teams that ship AI-powered products into production consistently run into the same walls:

| Framework | What it's great at | Where production teams hit walls |
| --- | --- | --- |
| LangChain | Ecosystem breadth, quick prototypes | Untyped outputs, chains hard to unit-test, state management complexity |
| CrewAI | Role-based agent teams, natural language coordination | Black-box routing, limited observability, hard to debug failures |
| AutoGen | Conversational multi-agent, research exploration | Unpredictable loops, no built-in state persistence, hard to deploy deterministically |
| Agently | Engineering-grade AI applications | Contract-first outputs · testable/pausable/serializable TriggerFlow · full action logs · project-scale config management |

Agently is designed from the start for the gap between "works in a notebook" and "runs reliably in production":

  • Stable outputs — contract-first schema with mandatory field enforcement and automatic retries
  • Testable orchestration — every TriggerFlow branch is a plain Python function, independently unit-testable
  • Observable actions — every tool/MCP/sandbox call is logged with input, output, and timing
  • Pause, resume, persist — TriggerFlow executions can be saved to disk and restored after process restart
  • Project-scale config — hierarchical YAML/TOML/JSON settings files, env-variable substitution, and scaffolding via agently-devtools init

Agently 4.1 adds a fully rewritten Action Runtime: a three-layer extensible plugin stack (planning → loop → execution) with native support for local functions, MCP servers, Python/Bash sandboxes, and custom backends.


Architecture

Layer Model

Agently organizes every AI application into four clear layers. Each layer has a stable interface — replaceable, extendable, and independently testable.

graph TB
    subgraph APP["Your Application"]
        UserCode["Application / Business Logic"]
    end

    subgraph GLOBAL["Agently Global"]
        Global["Global Settings  ·  Plugin Manager  ·  Default Model Config"]
    end

    subgraph AGENT["Agent Layer"]
        direction TB
        AgentInst["Agent Instance"]
        subgraph AGENT_INNER["Per-Agent Components"]
            Prompt["Prompt\ninput · instruct · info · output"]
            Session["Session\nmulti-turn memory · memo · persistence"]
            Action["Action\nplanning · dispatch · logs"]
            Settings2["Hierarchical Settings\n(inherits from global)"]
        end
    end

    subgraph MODEL["Model Layer"]
        ModelReq["Model Request\nbuilt from prompt slots + settings"]
        ModelResp["Model Response\nstructured · streaming · instant events"]
    end

    subgraph WORKFLOW["Workflow Orchestration"]
        TF["TriggerFlow\nto · if/elif/else · match/case · batch · for_each · when · emit · pause/resume · persist"]
    end

    subgraph LLMS["LLM APIs"]
        LLMAPIs["OpenAI · DeepSeek · Claude · Qwen · Ollama · any OpenAI-compatible endpoint"]
    end

    UserCode -->|"configure & create"| Global
    Global -->|"inherited settings"| AgentInst
    AgentInst --- AGENT_INNER
    AgentInst -->|"build & send"| ModelReq
    ModelReq -->|"HTTP"| LLMAPIs
    LLMAPIs -->|"response stream"| ModelResp
    UserCode -->|"orchestrate"| TF
    TF -->|"trigger agent steps"| AgentInst

Action Runtime (v4.1)

Three independently replaceable layers — swap only what you need.

graph LR
    subgraph AGENT2["Agent"]
        AExt["ActionExtension\nprepares visible actions & logs"]
    end

    subgraph RUNTIME["Action Runtime (replaceable)"]
        AR["ActionRuntime Plugin\nAgentlyActionRuntime\n\nplanning protocol · call normalization · round control"]
        AF["ActionFlow Plugin\nTriggerFlowActionFlow\n\naction loop · pause/resume · concurrency"]
    end

    subgraph EXECUTORS["ActionExecutor (replaceable)"]
        E1["LocalFunctionExecutor\n@action_func / @tool_func"]
        E2["MCPExecutor\nstdio · http"]
        E3["PythonSandboxExecutor"]
        E4["BashSandboxExecutor"]
        E5["Custom Plugin"]
    end

    AExt -->|"delegate planning"| AR
    AR -->|"run action loop"| AF
    AF -->|"dispatch call"| E1
    AF -->|"dispatch call"| E2
    AF -->|"dispatch call"| E3
    AF -->|"dispatch call"| E4
    AF -->|"dispatch call"| E5

Core Capabilities

1. Contract-First Output Control

Define the schema once. Agently enforces it on every call, with automatic retries when critical fields are missing.

result = (
    agent
    .input("Analyze this review: 'Great product, but slow shipping.'")
    .output({
        "sentiment": (str, "positive / neutral / negative"),
        "key_issues": [(str, "issue summary")],
        "priority": (int, "1–5, 5 is most urgent"),
    })
    .start(ensure_keys=["sentiment", "key_issues[*]"])
)
# Always a dict — "sentiment" and every "key_issues" item guaranteed present
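A rough mental model of what `ensure_keys` enforcement does (this is an illustrative sketch, not Agently's implementation — the helper name `validate_keys` and its behavior are my own assumptions):

```python
# Illustrative sketch of ensure_keys-style validation: check that required
# key paths exist in a result dict, treating a trailing "[*]" as "the list
# must exist and every item in it must be present (non-None)".

def validate_keys(result: dict, ensure_keys: list[str]) -> list[str]:
    """Return the list of required key paths missing from `result`."""
    missing = []
    for path in ensure_keys:
        if path.endswith("[*]"):
            key = path[:-3]
            items = result.get(key)
            # the list itself must exist and contain no None items
            if not isinstance(items, list) or any(i is None for i in items):
                missing.append(path)
        elif result.get(path) is None:
            missing.append(path)
    return missing

result = {"sentiment": "positive", "key_issues": ["slow shipping"]}
print(validate_keys(result, ["sentiment", "key_issues[*]"]))   # → []
print(validate_keys({}, ["sentiment"]))                        # → ['sentiment']
```

When the framework sees a non-empty missing list, it can retry the model call — that is the "automatic retries" behavior the section describes.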

2. Structured Streaming — Instant Events

Each output field streams independently. Drive UI updates or downstream logic as fields complete, not after the whole response.

response = (
    agent
    .input("Explain recursion with 3 examples")
    .output({
        "definition": (str, "one-sentence definition"),
        "examples": [(str, "code example with explanation")],
    })
    .get_response()
)

for event in response.get_generator(type="instant"):
    if event.path == "definition" and event.delta:
        ui.update_header(event.delta)             # stream definition character by character
    if event.wildcard_path == "examples[*]" and event.is_complete:
        ui.append_example(event.value)            # append each complete example
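Matching a pattern like `examples[*]` against concrete event paths such as `examples[0]` is simple index-wildcarding. A minimal self-contained sketch of the idea (my own helper, not Agently internals):

```python
import re

def matches_wildcard(path: str, wildcard: str) -> bool:
    """Check whether a concrete event path like 'examples[0]' matches
    a wildcard pattern like 'examples[*]'."""
    # escape the whole pattern, then turn the escaped "[*]" into "[<digits>]"
    pattern = re.escape(wildcard).replace(r"\[\*\]", r"\[\d+\]")
    return re.fullmatch(pattern, path) is not None

print(matches_wildcard("examples[0]", "examples[*]"))   # → True
print(matches_wildcard("definition", "examples[*]"))    # → False
```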

3. Action Runtime — Functions, MCP, Sandboxes (v4.1)

Mount any combination. The runtime handles planning, execution, retries, and full structured logs.

@agent.action_func
def search_docs(query: str) -> str:
    """Search internal documentation."""
    return docs_db.search(query)

agent.use_mcp("docs-server", transport="stdio", command=["python", "mcp_server.py"])
agent.use_sandbox("python")                          # isolated Python execution

agent.use_actions([search_docs, "docs-server", "python"])

response = agent.input("Find auth docs and show a login code example.").get_response()

# Every call: what was invoked, with what args, what it returned
print(response.result.full_result_data["extra"]["action_logs"])

Legacy tool APIs (@agent.tool_func, agent.use_tool()) continue to work and map to the same runtime.

4. TriggerFlow — Serious Workflow Orchestration

TriggerFlow goes well beyond chaining functions. It's a full workflow engine with concurrency, event-driven branching, human-in-the-loop interrupts, and execution persistence.

Concurrency — batch and for_each

Run steps in parallel with a configurable concurrency limit:

# Process a list of URLs, max 5 in parallel
(
    flow.for_each(url_list, concurrency=5)
    .to(fetch_page)
    .to(summarize)
    .end()
)

# Fan out to N fixed branches simultaneously
flow.batch(3).to(strategy_a).collect("results", "a")
flow.batch(3).to(strategy_b).collect("results", "b")
flow.batch(3).to(strategy_c).collect("results", "c")
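Conceptually, `for_each(..., concurrency=5)` is a semaphore-bounded fan-out. A self-contained asyncio sketch of the same idea (`bounded_for_each` and `fetch_page` are illustrative stand-ins, not Agently API):

```python
import asyncio

async def bounded_for_each(items, worker, concurrency: int):
    """Run `worker` over `items`, never more than `concurrency` at once."""
    sem = asyncio.Semaphore(concurrency)

    async def run(item):
        async with sem:            # blocks while `concurrency` tasks are in flight
            return await worker(item)

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(run(i) for i in items))

async def fetch_page(url):
    await asyncio.sleep(0.01)      # stand-in for real I/O
    return f"content of {url}"

urls = [f"https://example.com/{i}" for i in range(10)]
results = asyncio.run(bounded_for_each(urls, fetch_page, concurrency=5))
print(len(results))                # → 10
```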

Event-driven — when and emit

Branch on signals, chunk completions, or data changes — not just linear sequence:

flow.when("UserInput").to(process_input).to(plan_next_step)
flow.when("ToolResult").to(evaluate_result)
flow.when({"runtime_data": "user_decision"}).to(apply_decision)

# Emit from inside a chunk to trigger other branches
async def plan_next_step(data: TriggerFlowEventData):
    if needs_tool:
        await data.async_emit("ToolCall", tool_args)
    else:
        await data.async_emit("UserInput", final_reply)

Pause, Resume, and Persistence

Save execution state to disk, restore it after a process restart — critical for long-running or human-in-the-loop workflows:

# Start execution, save checkpoint immediately
execution = flow.start_execution(initial_input, wait_for_result=False)
execution.save("checkpoint.json")

# Later — new process, restored state, continue from where it paused
restored = flow.create_execution()
restored.load("checkpoint.json")
restored.emit("UserFeedback", {"approved": True, "note": "Looks good."})
result = restored.get_result(timeout=30)

This makes Agently workflows genuinely restart-safe — suitable for approval gates, multi-day pipelines, and human review loops.
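The save/load cycle above is ordinary state serialization: everything the execution needs to continue lives in a serializable structure. A toy sketch with plain JSON (the `Execution` class here is a hypothetical stand-in, not Agently's):

```python
import json
import os
import tempfile

class Execution:
    """Toy stand-in for a pausable execution: all progress lives
    in a JSON-serializable state dict."""
    def __init__(self):
        self.state = {"step": 0, "events": []}

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.state, f)

    def load(self, path):
        with open(path) as f:
            self.state = json.load(f)

    def emit(self, event, payload):
        self.state["events"].append({"event": event, "payload": payload})

# "process 1": run a bit, then checkpoint to disk
path = os.path.join(tempfile.gettempdir(), "checkpoint_demo.json")
ex = Execution()
ex.state["step"] = 3
ex.save(path)

# "process 2": restore the checkpoint and continue
restored = Execution()
restored.load(path)
restored.emit("UserFeedback", {"approved": True})
print(restored.state["step"])   # → 3
```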

Blueprint Serialization

The flow topology itself can be exported to JSON/YAML and reloaded, enabling dynamic workflow definitions and versioned flow configurations:

flow.get_yaml_flow(save_to="flows/main_flow.yaml")
# later
flow.load_flow_config("flows/main_flow.yaml")

5. Session — Multi-Turn Memory

Activate a session by ID. Agently maintains chat history, applies window trimming, supports custom memo strategies, and persists state to JSON/YAML.

agent.activate_session(session_id="user-42")
agent.set_settings("session.max_length", 10000)

session = agent.activated_session
session.register_analysis_handler(decide_when_to_summarize)
session.register_resize_handler("summarize_oldest", summarize_handler)

reply1 = agent.input("My name is Alice.").start()
reply2 = agent.input("What's my name?").start()   # correctly returns "Alice"
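Window trimming keeps the session under a size budget by dropping the oldest turns first. A minimal sketch of the idea (my own, assuming a character-based budget — the real `session.max_length` semantics may differ):

```python
def trim_history(history: list[dict], max_length: int) -> list[dict]:
    """Drop the oldest messages until total content length fits max_length."""
    total = sum(len(m["content"]) for m in history)
    trimmed = list(history)
    while trimmed and total > max_length:
        removed = trimmed.pop(0)          # oldest turn goes first
        total -= len(removed["content"])
    return trimmed

history = [
    {"role": "user", "content": "My name is Alice."},
    {"role": "assistant", "content": "Nice to meet you, Alice!"},
    {"role": "user", "content": "What's my name?"},
]
print(len(trim_history(history, max_length=40)))   # → 2
```

A custom resize handler (like `summarize_oldest` above) would replace the dropped turns with a summary instead of discarding them outright.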

6. Project-Scale Configuration Management

Real AI projects involve multiple agents, multiple prompt templates, and multiple environments. Agently's hierarchical settings system supports YAML/JSON/TOML config files at every layer, with ${ENV.VAR} substitution and layered inheritance.

Recommended project structure:

my_ai_project/
├── .env                          # API keys and secrets
├── config/
│   ├── global.yaml               # global model + runtime settings
│   └── agents/
│       ├── researcher.yaml       # per-agent model overrides
│       └── writer.yaml
├── prompts/
│   ├── researcher_role.yaml      # reusable prompt templates
│   └── writer_role.yaml
├── flows/
│   ├── main_flow.py              # TriggerFlow definitions
│   └── main_flow.yaml            # serialized flow blueprint (optional)
├── agents/
│   ├── researcher.py
│   └── writer.py
└── main.py

Config hierarchy — each layer inherits and overrides:

graph LR
    A["global.yaml\n(model, runtime defaults)"]
    B["agents/researcher.yaml\n(per-agent model overrides)"]
    C["agent.set_settings(...)\n(per-request overrides)"]

    A -->|"inherited by"| B
    B -->|"inherited by"| C
# global.yaml loaded once at startup
Agently.load_settings("yaml", "config/global.yaml", auto_load_env=True)

# each agent loads its own overrides
researcher = Agently.create_agent()
researcher.load_settings("yaml", "config/agents/researcher.yaml")

# request-level override if needed
researcher.set_settings("OpenAICompatible.request_options", {"temperature": 0.2})
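The hierarchy above amounts to a deep merge plus `${ENV.VAR}` substitution. A simplified self-contained sketch of those two mechanics (my own helpers; the real loader also handles YAML/TOML files):

```python
import os
import re

def substitute_env(value):
    """Recursively replace ${ENV.NAME} placeholders with environment values."""
    if isinstance(value, str):
        return re.sub(r"\$\{ENV\.(\w+)\}",
                      lambda m: os.environ.get(m.group(1), ""), value)
    if isinstance(value, dict):
        return {k: substitute_env(v) for k, v in value.items()}
    return value

def deep_merge(base: dict, override: dict) -> dict:
    """Child layers inherit from base and override individual keys."""
    merged = dict(base)
    for k, v in override.items():
        if isinstance(v, dict) and isinstance(merged.get(k), dict):
            merged[k] = deep_merge(merged[k], v)
        else:
            merged[k] = v
    return merged

os.environ["DEEPSEEK_API_KEY"] = "sk-demo"
global_cfg = {"OpenAICompatible": {"model": "deepseek-chat",
                                   "auth": "${ENV.DEEPSEEK_API_KEY}"}}
agent_cfg = {"OpenAICompatible": {"model": "deepseek-reasoner"}}

cfg = substitute_env(deep_merge(global_cfg, agent_cfg))
print(cfg["OpenAICompatible"])
# → {'model': 'deepseek-reasoner', 'auth': 'sk-demo'}
```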

Scaffold a new project instantly:

pip install agently-devtools
agently-devtools init my_project


7. Layered Prompt Management

Prompts are structured slots, not raw strings. Each slot has a role: input (the task), instruct (constraints), info (context data), output (schema). Agent-level slots persist across requests; request-level slots apply once.

agent.role("You are a senior Python code reviewer.")   # always present

result = (
    agent
    .input(user_code)
    .instruct("Focus on security and performance.")
    .info({"context": "Public-facing API handler", "framework": "FastAPI"})
    .output({"issues": [(str, "issue description")], "score": (int, "0–100")})
    .start()
)

Prompt templates can be loaded from YAML/JSON files via the configure_prompt extension for team-level prompt governance.

8. Unified Model Settings

One config object, any provider, no vendor lock-in.

Agently.set_settings(
    "OpenAICompatible",
    {
        "base_url": "https://api.deepseek.com/v1",
        "model": "deepseek-chat",
        "auth": "DEEPSEEK_API_KEY",   # reads from env automatically
    },
)
# Change base_url + model to switch providers — business code unchanged

Supported: OpenAI · DeepSeek · Anthropic Claude (via proxy) · Qwen · Mistral · Llama · local Ollama · any OpenAI-compatible endpoint.


Quickstart

pip install -U agently

Python ≥ 3.10 required.

from agently import Agently

Agently.set_settings("OpenAICompatible", {
    "base_url": "https://api.deepseek.com/v1",
    "model": "deepseek-chat",
    "auth": "DEEPSEEK_API_KEY",
})

agent = Agently.create_agent()

result = (
    agent.input("Introduce Python in one sentence and list 3 strengths")
    .output({
        "intro": (str, "one sentence"),
        "strengths": [(str, "strength")],
    })
    .start(ensure_keys=["intro", "strengths[*]"])
)

print(result)
# {"intro": "Python is ...", "strengths": ["...", "...", "..."]}

Ecosystem

Agently Skills — Coding Agent Extensions

Official Agently Skills give AI coding assistants (Claude Code, Cursor, etc.) the knowledge to implement Agently patterns correctly, without re-explaining the framework each session.

Covers: single-request design · TriggerFlow orchestration · multi-agent · MCP · Session · FastAPI integration · LangChain/LangGraph migration playbooks.

Agently DevTools — Runtime Observation & Scaffolding

agently-devtools is an optional companion package for runtime inspection and project scaffolding.

pip install agently-devtools
agently-devtools init my_project    # scaffold a new Agently project
  • Runtime observation: ObservationBridge, create_local_observation_app
  • Examples: examples/devtools/
  • Compatibility: agently-devtools 0.1.x targets agently >=4.1.0,<4.2.0

Integrations

| Integration | What it enables |
| --- | --- |
| agently.integrations.chromadb | ChromaCollection — RAG knowledge base with embedding agent |
| agently.integrations.fastapi | SSE streaming, WebSocket, and standard POST endpoint patterns |

Extensibility — Customize at Every Layer

Agently is designed to be extended at multiple independent levels. You don't have to fork the framework to change how it behaves — every major component is a replaceable plugin, hook, or registered handler.

graph TB
    subgraph USER["Your Application"]
        App["Business Logic  ·  TriggerFlow Orchestration"]
    end

    subgraph AGENT_EXT["Agent Extension Layer"]
        AE["Built-in Extensions\nSession · Action · AutoFunc · ConfigurePrompt · KeyWaiter · StreamingPrint"]
        AEC["Custom Agent Extension\nregister your own lifecycle hooks\nand agent-level capabilities"]
    end

    subgraph HOOKS["Request Lifecycle Hooks"]
        H1["request_prefixes\nmodify Prompt + Settings before request"]
        H2["broadcast_prefixes\nreact when response starts"]
        H3["broadcast_suffixes\nreact per streaming event"]
        H4["finally\nrun after response completes"]
    end

    subgraph CORE_PIPELINE["Core Pipeline Plugins (replaceable)"]
        PG["PromptGenerator\ncontrols how slots are assembled into the final prompt"]
        MR["ModelRequester\nOpenAICompatible (default) · custom provider"]
        RP["ResponseParser\ncontrols how raw model output is parsed"]
    end

    subgraph ACTION_STACK["Action Runtime Plugins (replaceable)"]
        AR["ActionRuntime\nplanning protocol · round control"]
        AF["ActionFlow\nloop shape · pause/resume · concurrency"]
        AX["ActionExecutor\nLocalFunction · MCP · PythonSandbox · BashSandbox · Custom"]
    end

    subgraph TF_EXT["TriggerFlow Extensions"]
        TC["Custom Chunk Handlers\n@flow.chunk or register_chunk_handler"]
        TCC["Custom Condition Handlers\nregister_condition_handler"]
    end

    subgraph HOOKERS["Runtime Event Hookers"]
        RH["ConsoleSink · StorageSink · ChannelSink · Custom\nattach to any runtime event stream"]
    end

    App --> AGENT_EXT
    AGENT_EXT --> HOOKS
    HOOKS --> CORE_PIPELINE
    CORE_PIPELINE --> ACTION_STACK
    App --> TF_EXT
    CORE_PIPELINE -.->|"emit runtime events"| HOOKERS
    ACTION_STACK -.->|"emit runtime events"| HOOKERS

Extension Points at a Glance

| Layer | Extension type | What you can customize |
| --- | --- | --- |
| Agent Extensions | Register a custom extension class | Add new capabilities to every agent: new prompt slots, new response hooks, new lifecycle behavior |
| Request lifecycle hooks | request_prefixes / broadcast_prefixes / broadcast_suffixes / finally | Intercept and modify requests, responses, or streaming events at each stage |
| PromptGenerator (plugin) | Replace the built-in plugin | Control exactly how prompt slots are assembled into the final message list sent to the model |
| ModelRequester (plugin) | Register a new provider class | Add any non-OpenAI-compatible model API — the interface contract stays the same |
| ResponseParser (plugin) | Replace the built-in plugin | Change how raw model output is parsed into structured data and streaming events |
| ActionRuntime (plugin) | Replace AgentlyActionRuntime | Change planning protocol, call normalization, or round-limit logic |
| ActionFlow (plugin) | Replace TriggerFlowActionFlow | Change how the action loop is orchestrated — different concurrency, pause/resume, or branching |
| ActionExecutor (plugin) | Register alongside or replace builtins | Add a new execution backend: cloud functions, RPC, custom sandboxes |
| TriggerFlow chunks | @flow.chunk / register_chunk_handler | Any Python function or coroutine becomes a composable flow step |
| TriggerFlow conditions | register_condition_handler | Custom routing logic between branches |
| Runtime hookers | Implement and register a hooker | Attach to the runtime event stream for observability, storage, or channel forwarding |

Example: Registering a Custom ActionExecutor

from agently.types.plugins import ActionExecutor, ActionRunContext, ActionExecutionRequest, ActionResult

class MyCloudExecutor:
    name = "my-cloud-executor"
    DEFAULT_SETTINGS = {}

    async def execute(
        self,
        context: ActionRunContext,
        request: ActionExecutionRequest,
    ) -> list[ActionResult]:
        # call your cloud function / RPC / custom backend
        ...

Agently.plugin_manager.register("ActionExecutor", MyCloudExecutor)

Example: Adding a Request Lifecycle Hook

agent = Agently.create_agent()

# Inject context into every request this agent makes
def inject_tenant_context(prompt, settings):
    prompt.info({"tenant_id": get_current_tenant()})

agent.extension_handlers.append("request_prefixes", inject_tenant_context)

Agently and the "Harness" Concept

The term AI application harness describes a layer that wraps LLM calls with engineering controls — stable interfaces, observable internals, pluggable components. It's an architectural quality, not a product category.

Agently is a development framework, but it's designed to satisfy exactly those properties:

| Harness property | How Agently delivers it |
| --- | --- |
| Stable output interfaces | output() + ensure_keys guarantee field presence regardless of model variation |
| Observable internals | action_logs, tool_logs, DevTools ObservationBridge, per-layer structured logs |
| Pluggable runtime layers | ActionRuntime, ActionFlow, and ActionExecutor are independent plugin slots |
| Separation of concerns | Prompt slots, settings hierarchy, Session, and TriggerFlow are distinct composable layers |
| Testability | Each TriggerFlow chunk is a plain function; structured outputs have fixed schemas to assert against |

These properties are a consequence of Agently's design philosophy — what you get when you structure an AI application the Agently way.


Who Uses Agently?

"Agently helped us turn evaluation rules into executable workflows and keep key scoring accuracy at 75%+, significantly improving bid-evaluation efficiency." — Project lead at a large energy SOE

"Agently enabled a closed loop from clarification to query planning to rendering, reaching 90%+ first-response accuracy and stable production performance." — Data lead at a large energy group

"Agently's orchestration and session capabilities let us ship a teaching assistant for course management and Q&A quickly, with continuous iteration." — Project lead at a university teaching-assistant initiative

📢 Share your case on GitHub Discussions →


FAQ

Q: What makes Agently different from LangChain? LangChain is excellent for prototyping and has a broad ecosystem. Agently is optimized for the post-prototype phase: contract-first outputs prevent interface drift, TriggerFlow branches are individually unit-testable, and the project-scale config system supports real engineering workflows. If you've shipped with LangChain and hit maintainability walls, Agently is built for exactly that.

Q: How is Agently different from CrewAI or AutoGen? CrewAI and AutoGen are designed around agent teams with natural-language coordination — great for exploration, hard to make deterministic. Agently uses explicit code-based orchestration (TriggerFlow) where every branch is a Python function with clear inputs and outputs, every action call is logged, and executions can be paused, serialized, and resumed — properties that matter when you're shipping to users.

Q: What is the Action Runtime, and why was it rewritten in v4.1? The old tool system was a single flat layer — enough for simple use cases, but not extensible. The new Action Runtime separates planning ("what to call"), loop orchestration ("how many rounds, with what concurrency"), and execution ("actually run the function/MCP/sandbox"). Each layer is a plugin. You can swap just the sandbox backend without touching the planning logic, or replace just the planning algorithm without changing how loops run.

Q: How do I deploy an Agently service? Agently doesn't prescribe a deployment model. It provides full async APIs. The examples/fastapi/ directory covers SSE streaming, WebSocket, and standard POST. See Agently-Talk-to-Control for a complete deployed example.

Q: Is there enterprise support? Yes. The core framework is open-source under Apache 2.0. Enterprise extensions, private deployment support, governance modules, and SLA-backed collaboration are available under separate commercial agreements. Contact us via the community.


Docs

| Resource | Link |
| --- | --- |
| Documentation (EN) | https://agently.tech/docs |
| Documentation (中文) | https://agently.cn/docs |
| Quickstart | https://agently.tech/docs/en/quickstart.html |
| Output Control | https://agently.tech/docs/en/output-control/overview.html |
| Instant Streaming | https://agently.tech/docs/en/output-control/instant-streaming.html |
| Session & Memo | https://agently.tech/docs/en/agent-extensions/session-memo/ |
| TriggerFlow | https://agently.tech/docs/en/triggerflow/overview.html |
| Actions & MCP | https://agently.tech/docs/en/agent-extensions/tools.html |
| Prompt Management | https://agently.tech/docs/en/prompt-management/overview.html |
| Agent Systems Playbook | https://agently.tech/docs/en/agent-systems/overview.html |
| Agently Skills | https://github.com/AgentEra/Agently-Skills |

Community

License

VersionChangesUrgencyDate
v4.0.9# v4.0.9 ## Features ### Runtime Observation And DevTools Companion 1. [Runtime] Added the runtime event bus and run-lineage foundation for request, model, agent-turn, action, and workflow observation. 2. [Runtime] Added model request observation lifecycle events, including prompt build, request, streaming, retry, completion, and meta stages. 3. [DevTools] Introduced `agently-devtools` as an optional companion package for local observation, evaluation, logs, and playground workflows. Medium3/28/2026
v4.0.7## Features ### TriggerFlow Concurrency Control 1. **[TriggerFlow]**: Global concurrency control for each execution (pass `concurrency` when creating execution or starting flow). 2. **[TriggerFlow]**: `.batch()` supports concurrency control. 3. **[TriggerFlow]**: `.for_each()` supports concurrency control. 4. **[Example]**: New concurrency control example. ### Python Sandbox Utility 1. **[Utils]**: New Python Sandbox utility (safer isolated execution environment). ## Updates Low1/8/2026
v4.0.6## Features ### ChromaDB Integrations You can use Agently ChromaDB Integrations to simplify the use case of ChromaDB ```python from agently import Agently from agently.integrations.chromadb import ChromaData, ChromaEmbeddingFunction from chromadb import Client as ChromaDBClient embedding = Agently.create_agent() embedding.set_settings( "OpenAICompatible", { "model": "qwen3-embedding:0.6b", "base_url": "http://127.0.0.1:11434/v1/", "auth": "notLow11/10/2025
v4.0.5## Key Feature Updates ### TriggerFlow 1. Rewrite for each process (`.for_each()`) to support nested for each loops perfectly. [[Example Code](https://github.com/AgentEra/Agently/blob/main/examples/trigger_flow/nested_for.py)] 2. Add `.____(<comments>, log_info=<True | False>, print_info=<True | False>, show_value=<True | False>)` to help developers to write flow chain beautifully. 3. Add `.when()` to support developers to wait chunk done event, runtime data change event, flow data changLow10/9/2025
v4.0.3Some major bugs fixed and add more examples: Trigger Flow Feature Examples: https://github.com/AgentEra/Agently/tree/main/examples/trigger_flow Trigger Flow WebSocket Server Example: https://github.com/AgentEra/Agently/tree/main/examples/trigger_flow/ws_serverLow9/16/2025
v4.0.1New Features: [Trigger Flow]: 1. if condition chain expression; 2. flow.when(), flow.when_event(), flow.when_runtime_data(), flow.when_flow_data(); 3. quick start from flow instance(create a temp execution); 4. separator method .____(); 5. support specific event type when create new process; Low9/12/2025
v4.0.0New Feature: TriggerFlow([Examples](https://github.com/AgentEra/Agently/tree/main/examples/trigger_flow))Low9/11/2025
v4.0.0-beta-3v4.0.0b3 (https://github.com/AgentEra/Agently/pull/232) New Features: 1. Support MCP as Tools: https://github.com/AgentEra/Agently/blob/main/examples/mcp_agent.py 2. Key Waiter Extension: https://github.com/AgentEra/Agently/blob/main/examples/key_waiter_agent.py 3. Auto Func Decorator: https://github.com/AgentEra/Agently/blob/main/examples/auto_func_decorator.py 4. Use system message to manage in-frame events 5. Streaming debug logs (new Debug Console CUI still need to be optimized) Low8/18/2025
v4.0.0-beta-2Key feature: Agent Extension `Tool` ```python import asyncio from agently import Agently Agently.set_settings( "OpenAICompatible", { "base_url": "http://localhost:11434/v1", "model": "qwen2.5:7b", "model_type": "chat", }, ) agent = Agently.create_agent() @agent.tool_func async def add(a: int, b: int) -> int: """ Get result of `a(int)` add `b(int)` """ await asyncio.sleep(1) print(a, b, a + b) return a + b Low8/13/2025
v4.0.0-beta-1<img width="640" alt="image" src="https://github.com/user-attachments/assets/c645d031-c8b0-4dba-a515-9d7a4b0a6881" /> # Agently 4 (v4.0.0.Beta1) [English Introduction](https://github.com/AgentEra/Agently/README.md) | [中文介绍](https://github.com/AgentEra/Agently/README_CN.md) > *Speed Up Your GenAI Application Development* [![license](https://img.shields.io/badge/license-Apache2.0-blue.svg?style=flat-square)](https://github.com/AgentEra/Agently/blob/main/LICENSE) [![PyPI - Downloads]Low7/19/2025
v3.5.1.2Maybe the final version of Agently v3.x - Fix the warning from IDE when try to use methods on agent instance like `agent.input()` We're preparing brand new version of Agently v4 for better development experience!Low6/25/2025
v3.5.1.0## What's Changed * new example: MCP support by @Maplemx in https://github.com/AgentEra/Agently/pull/209 * fix: get_generator break case by @gouzil in https://github.com/AgentEra/Agently/pull/211 * fix: load json `think` or `thinking` condition by @gouzil in https://github.com/AgentEra/Agently/pull/212 * updates by @Maplemx in https://github.com/AgentEra/Agently/pull/213 * Keepup by @Maplemx in https://github.com/AgentEra/Agently/pull/214 * update by @Maplemx in https://github.com/AgentEraLow4/18/2025
v3.5.0.1Optimize MCP tool using results and fixed a bug when using MCP configsLow4/6/2025
v3.5.0.0- New feature: FastServer( only support fast transform agent and workflow into MCP server right now ) - Rewrite Tool Using logic, support use MCP server as tools - Optimize several core codes - Remove old version WebSocket server codes, lite the package dependencies. (WebSocket will be supported in future version of FastServer) - Add lexer(realtime json fixer) to fix json before llm fixing. ## What's Changed * Dev by @Maplemx in https://github.com/AgentEra/Agently/pull/203 * v3.5.0.0 byLow3/30/2025
v3.4.2.7## What's Changed * new example: planning loop and attach workflow to agent by @Maplemx in https://github.com/AgentEra/Agently/pull/198 * fest: AWS Bedrock by @cnbeining in https://github.com/AgentEra/Agently/pull/197 * fix: remove `ResponseGenerator` init create `Stage` by @gouzil in https://github.com/AgentEra/Agently/pull/202 ## New Contributors * @gouzil made their first contribution in https://github.com/AgentEra/Agently/pull/202 **Full Changelog**: https://github.com/AgentEra/AgeLow3/21/2025
v3.4.2.6[Core] Feat: Realtime reasoning console output in debug mode; [Core] Update: Add setting key 'current_client' which is the same as 'current_model' but more accurate; [Core] Update: Clean request plugins' dependencies, move them from package initial stage to plugin class initial stage; [Request: OAIClient] Update: Support some APIs those put reasoning content both in key 'reasoning_content' and key 'content'. [Request: Qianfan] Bug fixed: Fixed qianfan package import error. ## What's ChangLow2/19/2025
v3.4.2.3> ⚠️ Notice: version --> 3.4.2.3 because poetry can not support uppercase package name so we have to republish with twine. 1. Optimize reasoning content handling code 2. Add DeepSeek Reasoner(R1) requesting example 3. Clean dependencies 4. Add poetry to manage dependencies ## What's Changed * v.3.4.2.1 by @Maplemx in https://github.com/AgentEra/Agently/pull/193 **Full Changelog**: https://github.com/AgentEra/Agently/compare/v3.4.2.0...v3.4.2.1Low2/11/2025
v3.4.2.0**Full Changelog**: https://github.com/AgentEra/Agently/compare/v3.4.1.3...v3.4.2.0 ## What's Changed * update: add support of DeepSeek Reasoner model official API data format by @Maplemx in https://github.com/AgentEra/Agently/pull/192 **Full Changelog**: https://github.com/AgentEra/Agently/compare/v3.4.1.6...v3.4.2.0Low2/11/2025
v3.4.1.61. [OAIClient] Optimize the handle process of messages those without key 'content'. 2. [ResponseGenerator] Remove dependency agently-stage, reuse old built-in version and optimize Response Generator code to prevent main thread will not shutdown at the end. **Full Changelog**: https://github.com/AgentEra/Agently/compare/v3.4.1.3...v3.4.1.6Low2/8/2025
v3.4.1.3## What's Changed * fest: Add LiteLLM to provide generic support for providers by @cnbeining in https://github.com/AgentEra/Agently/pull/191 * update: Adapt dependency Agenty-Stage 0.2.0-alpha-1 * bug fixed: OAIClient can handle some messages without "content" key now ## New Contributors * @cnbeining made their first contribution in https://github.com/AgentEra/Agently/pull/191 **Full Changelog**: https://github.com/AgentEra/Agently/compare/v3.4.1.1...v3.4.1.3Low1/30/2025
v3.4.1.1New Features: - MessageCenter - EventEmitter **Full Changelog**: https://github.com/Maplemx/Agently/compare/v3.4.1.0...v3.4.1.1Low12/13/2024
v3.4.1.0[New Feature: new utils Stage and Tunnel to help manage coroutine and threads](https://github.com/Maplemx/Agently/pull/186/commits/d89fc871494ae2a2066c9fda7756927172174b37) [Update: rewrite ResponseGenerator using Stage and Tunnel](https://github.com/Maplemx/Agently/pull/186/commits/272d23e5c5baba6fe52c3451f04d4f3dfedb5b2d) [Bug Fix: fixed a bug that will cause agent can not inhert debug setti…](https://github.com/Maplemx/Agently/pull/186/commits/b1fa5b88325d360c8fec8ca5ed46dd4d4350cd54) Low12/7/2024
**v3.4.0.5** · 11/14/2024

## What's Changed
* Fixed broken link in Readme.md by @Shreyas0410 in https://github.com/Maplemx/Agently/pull/178
* refactor: Refactored Status.py by @ghimirebibek in https://github.com/Maplemx/Agently/pull/179
* update by @Maplemx in https://github.com/Maplemx/Agently/pull/182
* v3.4.0.5 by @Maplemx in https://github.com/Maplemx/Agently/pull/183

## New Contributors
* @Shreyas0410 made their first contribution in https://github.com/Maplemx/Agently/pull/178
* @ghimirebibek made their fir…
**v3.4.0.4** · 10/30/2024

### Instant Mode (former Realtime Mode)
1. Rename the "Realtime" component to "Instant" to avoid improper association with OpenAI Realtime;
2. Optimize the key-index combination expression handler and add "&" symbol support;
3. Remove the ".$complete" mark for string items and add a ".$delta" mark so string items behave the same as items of other types;

### Response Generator
1. Rewrite the ResponseGenerator component to provide 4 agentic request response generators; developers can use these alias…
**v3.4.0.3** · 10/24/2024

**New Feature:**
- **ResponseGenerator**: a better way to get a streaming response generator from an Agently AgenticRequest. Try the code below to feel the improved developer experience provided by **Agently Realtime x ResponseGenerator**:

```python
generator = (
    agent
    .input("Generate 10 sentences")
    .output({
        "sentences": [("str", )],
    })
    .get_realtime_generator()
)
for item in generator:
    print(item["key"], item["delta"])
```
**v3.4.0.2** · 10/16/2024

## What's Changed
* Update Realtime by @Maplemx in https://github.com/Maplemx/Agently/pull/172

**Full Changelog**: https://github.com/Maplemx/Agently/compare/v3.4.0.1...v3.4.0.2
**v3.4.0.1** · 10/9/2024

## What's Changed
* Add `delta` to realtime response data by @Maplemx in https://github.com/Maplemx/Agently/pull/171

**Full Changelog**: https://github.com/Maplemx/Agently/compare/v3.4.0.0...v3.4.0.1
**v3.4.0.0** · 10/8/2024

(Will update soon)

## What's Changed
* Update README.md by @moyueheng in https://github.com/Maplemx/Agently/pull/167
* keep up by @Maplemx in https://github.com/Maplemx/Agently/pull/169
* Dev by @Maplemx in https://github.com/Maplemx/Agently/pull/170

## New Contributors
* @moyueheng made their first contribution in https://github.com/Maplemx/Agently/pull/167

**Full Changelog**: https://github.com/Maplemx/Agently/compare/v3.3.4.8...v3.4.0.0
**v3.3.4.8** · 9/24/2024

### Update
1. `[AppConnector]` Support Gradio additional inputs.
2. `[DataGenerator]` Add new methods `.future()` and `.join()` to support pre-defining a generator handler before `.add()` is called.
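The "pre-define a handler before `.add()` is called" pattern can be sketched in plain Python. This is a conceptual illustration only — the class and method names mirror the release note but are not the real Agently `DataGenerator` implementation:

```python
import queue
import threading

class DataGenerator:
    """Conceptual sketch: a consumer handler can attach before any data is added."""
    _SENTINEL = object()

    def __init__(self):
        self._queue = queue.Queue()
        self._results = []
        self._worker = None

    def future(self, handler):
        # Pre-register a handler that will process every item added later.
        def run():
            while True:
                item = self._queue.get()
                if item is self._SENTINEL:
                    break
                self._results.append(handler(item))
        self._worker = threading.Thread(target=run)
        self._worker.start()
        return self

    def add(self, item):
        # Producer side: push data after the handler is already waiting.
        self._queue.put(item)

    def join(self):
        # Signal end of data and wait until the handler has drained the queue.
        self._queue.put(self._SENTINEL)
        self._worker.join()
        return self._results

gen = DataGenerator().future(lambda x: x * 2)  # handler set before any .add()
for n in (1, 2, 3):
    gen.add(n)
results = gen.join()
```

The key design point is that `.future()` can be called first because the handler blocks on an empty queue instead of requiring data up front.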
**v3.3.4.7** · 9/18/2024

Yeah... I'm sorry I didn't update the release notes here in time, and as you know, like a small snowball rolling down a slope, the backlog grew bigger and bigger, and it got harder and harder for me to catch up on this document... In fact, from v3.2.2.3 to v3.3.4.7 we made a lot of progress, such as:
- We improved Agently Workflow to make it both powerful and easy to use; read the [Agently Workflow Official Document](http://agently.cn/guides/workflow/index.html) to explor…
**v3.2.2.3** · 4/8/2024

## New Features
- `Agent.load_yaml_prompt()`: a new way for developers to manage request prompt templates in a YAML file!
- HOW TO USE — YAML file:

```yaml
input: ${user_input}
use_public_tools:
  - browse
set_tool_proxy: http://127.0.0.1:7890
instruct:
  output language: English
output:
  page_topic:
    $type: str
    $desc: ""
  summ…
```
**v3.2.1.3** · 3/27/2024

## New Features
- `[Request: OAIClient]` Add a new request plugin for models whose API format is like OpenAI's but with additional rules, such as not supporting multiple system messages or requiring a strict user-assistant message order. Very useful for local models served by a model-serving library like [Xinference](https://github.com/xorbitsai/inference).

HOW TO USE:

```python
import Agently

agent_factory = (
    Agently.AgentFactory(is_debug=True)
    …
```
**v3.2.1.0** · 3/11/2024

## New Features
1. `[Request]` New models are supported!
   - **Claude**:

```python
import Agently

agent_factory = Agently.AgentFactory()
(
    agent_factory
    .set_settings("current_model", "Claude")
    .set_settings("model.Claude.auth", { "api_key": "" })
)
# switch model
# model list: https://docs.anthropic.com/claude/docs/models-overview
# default: claude-3-sonnet-202402…
```
**v3.2.0.1** · 2/28/2024

## New Feature
1. `[Agently Workflow]` We're glad to introduce a brand-new feature of Agently v3.2 to you all: `Agently Workflow`! With this new feature, you can arrange and manage your LLM-based application workflow in just 3 steps, simple and easy:
   1. Define and program your application logic into different workflow chunks;
   2. Connect the chunks in order using `chunk.connect_to()` (loops and condition judgments are supported);
   3. Start the workflow using `workflow.st…`
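The three steps above can be sketched in plain Python. This is a minimal conceptual model — `Chunk`, `Workflow`, and `start()` here are illustrative stand-ins, not the real Agently Workflow API (which also supports loops and conditional branching):

```python
class Chunk:
    """A workflow chunk wraps one piece of application logic."""
    def __init__(self, func):
        self.func = func
        self.next = None

    def connect_to(self, other):
        # Step 2: wire chunks together in execution order.
        self.next = other
        return other

class Workflow:
    def __init__(self, entry):
        self.entry = entry

    def start(self, data):
        # Step 3: run each chunk in sequence, passing data along.
        chunk = self.entry
        while chunk is not None:
            data = chunk.func(data)
            chunk = chunk.next
        return data

# Step 1: define application logic as chunks.
draft = Chunk(lambda text: f"draft({text})")
review = Chunk(lambda text: f"review({text})")
# Step 2: connect them in order.
draft.connect_to(review)
# Step 3: start the workflow.
result = Workflow(draft).start("idea")
```

The appeal of the chunk model is that each piece of logic is a plain function, so it can be tested on its own before being wired into a workflow.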
**v3.1.5.5** · 2/5/2024

## New Feature
1. `[Agent Component: Session]` Added manual chat-history management methods and settings:

   **Methods:**
   - `.add_chat_history(role: str, content: str)`
   - `.get_chat_history(*, is_shorten: bool=False)`
   - `.rewrite_chat_history(new_chat_history: list)`

   **Settings:**
   - `strict_orders`: set to `True` by default. When `True`, the component ensures chat history is appended in "User-Assistant-User-Ass…
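The `strict_orders` behavior described above — keeping chat history in alternating user/assistant order — can be illustrated with a small standalone function. This is a conceptual sketch (merging consecutive same-role messages is one plausible strategy), not the actual Agently Session implementation:

```python
def enforce_strict_orders(history):
    """Return history with strict role alternation by merging same-role runs."""
    ordered = []
    for msg in history:
        if ordered and ordered[-1]["role"] == msg["role"]:
            # Two messages from the same role in a row: merge into one turn.
            ordered[-1]["content"] += "\n" + msg["content"]
        else:
            ordered.append(dict(msg))
    return ordered

history = [
    {"role": "user", "content": "Hi"},
    {"role": "user", "content": "Are you there?"},
    {"role": "assistant", "content": "Yes!"},
]
cleaned = enforce_strict_orders(history)
```

Enforcement like this matters because some model APIs reject requests whose message list does not alternate strictly between user and assistant roles.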
**v3.1.5** · 1/18/2024

**New Features:**
1. Support ZhipuAI GLM-4 and GLM-3-turbo. [Read here to see how to set settings](https://github.com/Maplemx/Agently/blob/main/playground/create_event_listeners_with_alias_or_decorator.ipynb)
2. Agent Component `Decorator`:
   - Move `@agent.auto_func` into the Decorator component
   - New decorator `@agent.on_event(<event_name>)` to help developers add event listeners more easily. [Read Example](https://github.com/Maplemx/Agently/blob/main/playground/create_event_listeners_with_alias_or_decorator.ipynb)
**v3.1.4** · 1/7/2024

**New Feature: Summon a Genie 🧞‍♂️ (Function Decorator) to Generate an Agent-Powered Function at Runtime**

As a developer, have you ever dreamed of writing just some definitions and annotations in code and then, "boom!", all of a sudden some genies 🧞‍♂️ appear and make all your wishes come true? Note: **the genies do not write the code for you; instead, they just _finish the work_ for you!** Now the Agently framework presents a brand-new **"agent auto function decorator"** feature in **vers…
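The "genie" idea — a decorated function whose body is empty, with the signature and docstring turned into a model request at call time — can be sketched like this. The model call is faked so the sketch runs standalone; `auto_func` and `fake_llm` here are illustrative names, not the real `@agent.auto_func` implementation:

```python
import inspect

def fake_llm(prompt):
    # Stand-in for a real model request built from the generated prompt.
    return f"[LLM answer for: {prompt}]"

def auto_func(func):
    """Turn a definition-only function into an agent-powered function."""
    signature = inspect.signature(func)

    def wrapper(*args, **kwargs):
        # Bind the call's arguments to the declared parameter names,
        # then combine them with the docstring into a prompt.
        bound = signature.bind(*args, **kwargs)
        prompt = f"{func.__doc__.strip()} | inputs: {dict(bound.arguments)}"
        return fake_llm(prompt)  # the genie "finishes the work"

    return wrapper

@auto_func
def translate(text: str, to_lang: str):
    """Translate the given text into the target language."""

result = translate("hello", to_lang="French")
```

The developer only writes the contract (name, parameters, docstring); the decorator supplies the behavior at runtime.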
**v3.1.3** · 1/6/2024

**Agently v3.1.3 — Embedding Facility**

**New Feature:** add an embedding plugin to the facility. You can now use `Agently.facility.embedding.<ProviderName>`; providers include OpenAI, Google, ERNIE, and ZhipuAI:

```python
import Agently

# OpenAI
(
    Agently.facility
    .set_settings("embedding.OpenAI.auth", { "api_key": "" })
)
# Baidu ERNIE
(
    Agently.facility
    .set_settings("embedding.ERNIE.auth", { "aistudio": "" })
)
# ZhipuAI
(
    Agently.facility
    .set…
```
**v3.1.2** · 1/3/2024

Using tools is the new feature in Agently v3.1. You can register your functions as tools on an agent instance or with the global tool manager. You can also use the default tool plugins in the `/plugins/tool/` directory, or create more tool-package plugins — go to `/plugins/tool/` to see how simple it is to create your own. You can then let the agent instance plan whether to use tools and in what order to use them. You can also use the `.must_call()` alias to tell the agent instance to generate a d…
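Registering plain functions as named, described tools that an agent can later look up and call can be sketched as follows. This is a conceptual illustration — `register_tool` and `plan_and_call` are hypothetical names, not the Agently tool-manager API:

```python
# Global registry mapping tool names to their description and callable.
tool_registry = {}

def register_tool(name, desc):
    """Decorator: register a function as a tool under a name and description."""
    def decorator(func):
        tool_registry[name] = {"desc": desc, "func": func}
        return func
    return decorator

@register_tool("add", "Add two numbers")
def add(a, b):
    return a + b

@register_tool("upper", "Uppercase a string")
def upper(s):
    return s.upper()

def plan_and_call(tool_name, *args):
    # A real agent would choose the tool (and order) from its plan based on
    # the registered descriptions; here we dispatch directly by name.
    return tool_registry[tool_name]["func"](*args)

result = plan_and_call("add", 2, 3)
```

The descriptions exist so the planning step can be driven by the model: the agent sees the name/description pairs and decides which callables to invoke.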
**v3.0.6** · 12/24/2023

New Feature: **You can use Gemini Pro now!**

How to use:

```python
(
    agent_factory
    .set_settings("model.Google.auth.api_key", "<Your-Google-API-Key-Here>")
    .set_settings("current_model", "Google")
    .set_settings("proxy", "http://127.0.0.1:7890")  # You can set a proxy if needed
)
```
**v3.0.4** · 11/27/2023

New Feature: Agent Component — Search!

How to use:

```python
agent\
    .set_role("As a preschool teacher, I'm here to turn complex and hard-to-understand knowledge into stories that little ones can understand. Even though it's a story, it's essential to ensure the accuracy and truthfulness of the information.")\
    .toggle_component("Search", True)\
    .instruct("If the search results or extra information contain a lot of content, try to organize it into multiple structured story s…
```
**v3.0.3** · 11/25/2023

**Introducing Agently 3.0 — An AI-Agent-Native Development Framework**

Agently is a development framework that helps developers build AI-agent-native applications really fast. You can use and build AI agents in your code in an extremely simple way: create an AI agent instance, then interact with it like calling a function, in very few lines of code. In an AI-agent-native application, we put an AI agent instance into our code, then ask it to execute / solve the problem with natural lan…
**v2.0.5-python** · 10/22/2023

Bugs fixed:
1. Fixed an error sometimes reported when trying to fix JSON format in a workflow: cannot find Type 'customize' (https://github.com/Maplemx/Agently/issues/23)
2. Fixed a bug where requesting the XunFei Spark model returned None (https://github.com/Maplemx/Agently/issues/20)

> v2.0.5 is the last stable version of Agently v2.0 (Python).
>
> We will soon publish the brand-new Agently v2.1 (Python) agent framework with a completely new architectural design and refactoring.
**v2.0.4-python** · 9/18/2023

New Features:
1. Support the Baidu WenXin model workshop and the XunFei Spark model;
2. Provide a very easy-to-use [one-command launch shell script](https://github.com/Maplemx/Agently/tree/main/demo/python/quick_launch_sh_cn) for starting the agent demo; see the blueprint directory to learn how to use it to launch your own agent blueprints;
3. Provide an agent blueprint that analyzes user intent and gives more appropriate answers: [SmartOne](https://github.com/Maplemx/Agently/blob/main/demo/python/quick_launch_sh_cn/examples/blueprints/smart_one.py);
4. Add an `async_start()` method on Session to make it easier to write coroutines in async environments.
**v2.0.1-python** · 8/31/2023

Agently is now available in Python!

☄️ Only a few lines of code bring you an LLM agent worker that helps you correct JSON-string format errors, generate API request parameters from natural language, etc.
🎭 Easy ways to manage agent role settings, switch models, and control status and context.
🧩 Arrange your agents' own workflows and modify your own work nodes' working logic in code. Test your own chain of thoughts to make your agent smarter.
👥 Share your own designed agent throu…
**v1.1.3** · 7/31/2023

This version fixes a serious error that occurred when making streaming requests, and it is a stable version to use for now. It should help everyone understand Agently's philosophy, do agent-oriented development, and validate agent-based ideas for LLM collaboration.

The Agently development team is now working hard on the brand-new v2.0.0, which will provide a clearer, more robust framework core and a collaboration model that supports community-driven plugin development. We hope to get this version to you as soon as possible. Have fun!
**v1.1.0** · 7/17/2023

### What's New?
Did you know the full name of GPT is "Generative Pre-trained Transformer"? "Pre-trained" tells us that an LLM is not the kind of model that can keep up with current news and events. As is well known, when GPT-3.5 first launched, its knowledge only reached up to the year 2021. Because it's "pre-trained", it stays frozen until its next training run. If we want LLMs to catch up in some areas, what can we do? One idea is to equip LLM-based agents with some skill…
**v1.0.0** · 7/15/2023

HOW TO INSTALL?
- npm: `npm install agently`
- yarn: `yarn add agently`

HOW TO USE?
README: [English](https://github.com/Maplemx/Agently/blob/main/README.md) | [中文](https://github.com/Maplemx/Agently/blob/main/README_CN.md)

🤵 Agently is a framework that helps developers create amazing LLM-based applications.
🎭 You can use it to easily create an LLM-based agent instance with a role set and memory.
⚙️ You can use an Agently agent instance just like an async function and put it a…
**v0.0.1** · 7/12/2023

Version v0.0.1 of Agently is a tool I developed to speed up feasibility tests of my language-model-based application ideas. I didn't think much about making it work in a production environment, so there are still many features to work on, like streaming replies, better memory management, etc. But I've noticed some people are already using this version via npm install... Because the next version (v1.0.0) will be published soon and is totally changed, I hope I can keep a version snapshot by re…

Similar Packages

- **pattern8** — Enforce zero-trust rules for AI agents to prevent hallucinations, unsafe actions, and policy bypasses (0.0.0)
- **Auto-Pentest-LLM** — 🔍 Automate penetration testing with an intelligent agent that organizes security assessments, leveraging local LLMs and Kali Linux for effective exploitation (main@2026-04-21)
- **SimpleLLMFunc** — A simple and well-tailored LLM application framework that enables you to seamlessly integrate LLM capabilities in the most "Code-Centric" manner. LLM As Function, Prompt As Code. (v0.7.8)
- **@falai/agent** — Standalone, strongly-typed AI Agent framework with route DSL and AI provider strategy (1.2.0)
- **@neyugn/agent-kits** — Universal AI Agent Toolkit — Skills, Agents, and Workflows for any AI coding assistant (0.5.8)