freshcrate

Description

Knowledge Engine for AI Agent Memory in 6 lines of code

README


Cognee - Build AI memory with a Knowledge Engine that learns

Demo · Docs · Learn More · Join Discord · Join r/AIMemory · Community Plugins & Add-ons



Use our knowledge engine to build personalized and dynamic memory for AI Agents.

🌐 Available Languages: Deutsch | Español | Français | 日本語 | 한국어 | Português | Русский | 中文

Why cognee?

About Cognee

Cognee is an open-source knowledge engine that lets you ingest data in any format or structure and continuously learns to provide the right context for AI agents. It combines vector search, graph databases, and cognitive-science approaches so your documents are both searchable by meaning and connected by relationships, even as they change and evolve.
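As a toy illustration of that hybrid idea (not cognee's actual implementation; the document names, vectors, and edges below are invented): rank documents by vector similarity, then expand the top hit with its graph neighbors.

```python
from math import sqrt

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Tiny corpus: embeddings capture meaning, edges capture relationships.
docs = {
    "invoice_faq": [1.0, 0.1],
    "sync_bug_postmortem": [0.9, 0.2],
    "roadmap": [0.0, 1.0],
}
edges = {"invoice_faq": ["sync_bug_postmortem"]}

query = [1.0, 0.0]  # embedding of a billing-related question
best = max(docs, key=lambda d: cosine(docs[d], query))
context = [best] + edges.get(best, [])  # vector hit plus its graph neighbors
print(context)
```

The vector index finds what the query *means*; the graph edge pulls in a related document a pure similarity search might rank too low.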

⭐ Help us reach more developers and grow the cognee community. Star this repo!

📚 Check our detailed documentation for setup and configuration.

🦀 Available as a plugin for your OpenClaw: cognee-openclaw

Why use Cognee:

  • Knowledge infrastructure: unified ingestion, graph/vector search, local execution, ontology grounding, and multimodal support
  • Persistent, learning agents: feedback-driven learning, context management, and cross-agent knowledge sharing
  • Reliable, trustworthy agents: per-user/tenant isolation, traceability, an OTEL collector, and audit trails

Product Features

Cognee Products

Basic Usage & Feature Guide

To learn more, check out this short, end-to-end Colab walkthrough of Cognee's core features.


Quickstart

Let’s try Cognee in just a few lines of code.

Prerequisites

  • Python 3.10 to 3.13

Step 1: Install Cognee

You can install Cognee with pip, poetry, uv, or your preferred Python package manager.

uv pip install cognee

Step 2: Configure the LLM

import os
os.environ["LLM_API_KEY"] = "YOUR_OPENAI_API_KEY"

Alternatively, create a .env file using our template.

To integrate other LLM providers, see our LLM Provider Documentation.
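If you go the .env route, the file only needs the same variable. Here is a minimal stdlib loader as an illustration (the helper is not part of cognee's API; cognee simply reads the variable from the environment):

```python
import os

def load_env_file(path: str) -> None:
    """Load KEY=VALUE pairs from a .env-style file into os.environ."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks and comments
            key, _, value = line.partition("=")
            # setdefault: never overwrite variables already in the environment
            os.environ.setdefault(key.strip(), value.strip().strip('"'))

# Usage: a .env file containing LLM_API_KEY="sk-..." next to your script
# load_env_file(".env")
```

In practice a library like python-dotenv does the same job with more edge cases handled.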

Step 3: Run the Pipeline

Cognee's API gives you four operations: remember, recall, forget, and improve.

import cognee
import asyncio


async def main():
    # Store permanently in the knowledge graph (runs add + cognify + improve)
    await cognee.remember("Cognee turns documents into AI memory.")

    # Store in session memory (fast cache, syncs to graph in background)
    await cognee.remember("User prefers detailed explanations.", session_id="chat_1")

    # Query with auto-routing (picks best search strategy automatically)
    results = await cognee.recall("What does Cognee do?")
    for result in results:
        print(result)

    # Query session memory first, fall through to graph if needed
    results = await cognee.recall("What does the user prefer?", session_id="chat_1")
    for result in results:
        print(result)

    # Delete when done
    await cognee.forget(dataset="main_dataset")


if __name__ == '__main__':
    asyncio.run(main())

Use the Cognee CLI

cognee-cli remember "Cognee turns documents into AI memory."

cognee-cli recall "What does Cognee do?"

cognee-cli forget --all

To open the local UI, run:

cognee-cli -ui

Use with AI Agents

Claude Code

Install the Cognee memory plugin to give Claude Code persistent memory across sessions. The plugin automatically captures tool calls into session memory via hooks and syncs to the permanent knowledge graph at session end.

Setup:

# Install cognee
pip install cognee

# Configure
export LLM_API_KEY="your-openai-key"
export CACHING=true

# Clone the plugin
git clone https://github.com/topoteretes/cognee-integrations.git

# Enable it (add to ~/.zshrc for permanent use)
claude --plugin-dir ./cognee-integrations/integrations/claude-code

Or connect to Cognee Cloud instead of running locally:

export COGNEE_SERVICE_URL="https://your-instance.cognee.ai"
export COGNEE_API_KEY="ck_..."

The plugin hooks into Claude Code's lifecycle: SessionStart initializes memory, PostToolUse captures actions, UserPromptSubmit injects relevant context, PreCompact preserves memory across context resets, and SessionEnd bridges session data into the permanent graph.
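As a rough sketch of what a capture hook does: condense the tool-call payload into a one-line memory entry. The field names below are illustrative, not Claude Code's exact schema, and the cognee call is shown only as a comment:

```python
def summarize_tool_event(event: dict) -> str:
    """Condense a PostToolUse-style payload into a one-line memory entry.
    The 'tool_name' and 'is_error' fields are assumed, not the exact schema."""
    tool = event.get("tool_name", "unknown-tool")
    status = "error" if event.get("is_error") else "ok"
    return f"{tool} finished with status={status}"

# A hook script would read the event from stdin and store the summary:
#   event = json.load(sys.stdin)
#   await cognee.remember(summarize_tool_event(event),
#                         session_id=event["session_id"])
```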

Hermes Agent

Enable Cognee as the memory provider in Hermes Agent for session-aware knowledge graph memory with auto-routing recall.

Setup:

# ~/.hermes/config.yaml
memory:
  provider: cognee

Then set your key and start the agent:

export LLM_API_KEY="your-openai-key"
hermes  # start chatting; session memory and graph persistence are automatic

Or run hermes memory setup and select Cognee. For Cognee Cloud, set COGNEE_SERVICE_URL and COGNEE_API_KEY in ~/.hermes/.env.

Each conversation turn is stored in the session cache. When the session ends, improve() bridges session data into the permanent knowledge graph: applying feedback weights, persisting Q&A, enriching triplet embeddings, and syncing the graph back to the session cache.

Connect to Cognee Cloud

Point any Python agent at a managed Cognee instance; all SDK calls route to the cloud:

import cognee

await cognee.serve(url="https://your-instance.cognee.ai", api_key="ck_...")

await cognee.remember("important context")
results = await cognee.recall("what happened?")

await cognee.disconnect()

Examples

Browse more examples in the examples/ folder β€” demos, guides, custom pipelines, and database configurations.

Use Case 1: Customer Support Agent

Goal: Resolve customer issues using their personal data across finance, support, and product history.

User: "My invoice looks wrong and the issue is still not resolved."

Cognee tracks: past interactions, failed actions, resolved cases, product history

# Agent response:
Agent: "I found 2 similar billing cases resolved last month.
        The issue was caused by a sync delay between payment
        and invoice systems; a fix was applied on your account."

# What happens under the hood:
- Unifies data sources from various company channels
- Reconstructs the interaction timeline and tracks outcomes
- Retrieves similar resolved cases
- Maps to the best resolution strategy
- Updates memory after execution so the agent never repeats the same mistake
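The timeline-reconstruction step above can be sketched on its own (event fields and values are invented; in cognee, the ordered events would feed the knowledge graph):

```python
from datetime import datetime

def reconstruct_timeline(events: list[dict]) -> list[dict]:
    """Order raw events from different channels into a single timeline."""
    return sorted(events, key=lambda e: datetime.fromisoformat(e["ts"]))

events = [
    {"ts": "2026-03-02T10:00:00", "channel": "support", "note": "ticket opened"},
    {"ts": "2026-03-01T09:30:00", "channel": "billing", "note": "invoice issued"},
    {"ts": "2026-03-03T16:45:00", "channel": "billing", "note": "sync fix applied"},
]
timeline = reconstruct_timeline(events)
print([e["note"] for e in timeline])
# ['invoice issued', 'ticket opened', 'sync fix applied']
```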

Use Case 2: Expert Knowledge Distillation (SQL Copilot)

Goal: Help junior analysts solve tasks by reusing expert-level queries, patterns, and reasoning.

User: "How do I calculate customer retention for this dataset?"

Cognee tracks: expert SQL queries, workflow patterns, schema structures, successful implementations

# Agent response:
Agent: "Here's how senior analysts solved a similar retention query.
        Cognee matched your schema to a known structure and adapted
        the expert's logic to fit your dataset."

# What happens under the hood:
- Extracts and stores patterns from expert SQL queries and workflows
- Maps the current schema to previously seen structures
- Retrieves similar tasks and their successful implementations
- Adapts expert reasoning to the current context
- Updates memory with new successful patterns so junior analysts perform at near-expert level
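The schema-matching step can be approximated with a simple column-name overlap score (a toy stand-in for cognee's matching; table and column names are invented):

```python
def schema_similarity(schema_a: set[str], schema_b: set[str]) -> float:
    """Jaccard overlap between two sets of column names."""
    if not schema_a or not schema_b:
        return 0.0
    return len(schema_a & schema_b) / len(schema_a | schema_b)

current = {"user_id", "signup_date", "last_seen"}
known = {
    "retention_base": {"user_id", "signup_date", "last_active"},
    "orders": {"order_id", "amount", "created_at"},
}
# Pick the stored expert query whose schema matches the analyst's tables best.
best = max(known, key=lambda name: schema_similarity(current, known[name]))
print(best)
```

A production system would also weigh column types and value distributions, but name overlap already illustrates the routing.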

Deploy Cognee

Use Cognee Cloud for a fully managed experience, or self-host with one of the 1-click deployment configurations below.

| Platform | Best For | Command |
| --- | --- | --- |
| Cognee Cloud | Managed service, no infrastructure to maintain | Sign up or `await cognee.serve()` |
| Modal | Serverless, auto-scaling, GPU workloads | `bash distributed/deploy/modal-deploy.sh` |
| Railway | Simplest PaaS, native Postgres | `railway init && railway up` |
| Fly.io | Edge deployment, persistent volumes | `bash distributed/deploy/fly-deploy.sh` |
| Render | Simple PaaS with managed Postgres | Deploy to Render button |
| Daytona | Cloud sandboxes (SDK or CLI) | See `distributed/deploy/daytona_sandbox.py` |

See the distributed/ folder for deploy scripts, worker configurations, and additional details.

Latest News

Watch Demo

Community & Support

Contributing

We welcome contributions from the community! Your input helps make Cognee better for everyone. See CONTRIBUTING.md to get started.

Code of Conduct

We're committed to fostering an inclusive and respectful community. Read our Code of Conduct for guidelines.

Research & Citation

We recently published a research paper on optimizing knowledge graphs for LLM reasoning:

@misc{markovic2025optimizinginterfaceknowledgegraphs,
      title={Optimizing the Interface Between Knowledge Graphs and LLMs for Complex Reasoning},
      author={Vasilije Markovic and Lazar Obradovic and Laszlo Hajdu and Jovan Pavlovic},
      year={2025},
      eprint={2505.24478},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2505.24478},
}

Release History

  • v1.0.1 (2026-04-18, urgency: High): fixes front-end file/directory naming so the UI builds correctly on case-sensitive filesystems, updates dependency versions and lockfiles, and documents the new Claude Code memory plugin in the README. Focuses on cross-platform robustness while keeping runtime behavior stable.
  • v1.0.1.dev1 (2026-04-15, urgency: High): introduces per-user ontology storage so custom ontologies can be added, managed, and persisted outside the application package; improves reliability around ontology loading and includes small bug fixes and internal cleanups.
  • v1.0.0 (2026-04-11, urgency: High): the project's first stable release; enables caching by default for better out-of-the-box performance and includes test and packaging cleanups for reliable, reproducible installs.
  • v0.5.8rc1 (2026-04-08, urgency: High): release candidate introducing the Sales Benchmark Demo (preview) plus relevancy, ingestion, and stability improvements to make memory-driven workflows easier to test and evaluate.


Similar Packages

  • honcho: memory library for building stateful agents (main@2026-04-21)
  • PageIndex: 📑 document index for vectorless, reasoning-based RAG (main@2026-04-10)
  • shodh-memory: cognitive memory for AI agents that learns from use, forgets what's irrelevant, and strengthens what matters; single binary, fully offline (v0.2.0)
  • agentic-rag: 📄 smart document and data search with AI-powered chat, vector search, and SQL querying across multiple file formats (main@2026-04-21)
  • memind: self-evolving cognitive memory and context engine for AI agents in Java, powering 24/7 proactive agents like OpenClaw (main@2026-04-21)