
Vibe-Skills


Description

Vibe-Skills is an all-in-one AI skills package. It seamlessly integrates expert-level capabilities and context management into a general-purpose skills package, enabling any AI agent to instantly upgrade its functionality—eliminating the friction of fragmented tools and complex harnesses.

README





More than a skill collection — your personal AI operating system

If your AI supports skills, VibeSkills works. 340+ skills spanning coding, research, data science & creative work.

  This is a new breed of "super skill" that essentially operates as a full-fledged Agent system.

  Packaged as individual Skills, it offers plug-and-play installation and on-demand execution.

  Backed by a highly customizable framework, it connects effortlessly to your exclusive workflows.





🧠 Planning · 🛠️ Engineering · 🤖 AI · 🔬 Research · 🎨 Creation



Install  →  vibe | vibe-want | vibe-how | vibe-do  →  Smart Routing  →  M / L / XL Execution  →  Governance Verification  →  ✅ Delivery

🔑 New here? Quick glossary of key terms (click to expand)
| Term | Plain-English Meaning |
| --- | --- |
| VibeSkills / VCO | This project. VCO = Vibe Code Orchestrator — the runtime engine behind the skills. |
| Skill | A focused capability module (e.g., tdd-guide, code-review). Think of skills as expert assistants the system calls on demand. |
| Governed runtime | When you invoke vibe, the system follows a structured process — clarify → plan → execute → verify — instead of diving in blindly. The public discoverable wrapper set is vibe, vibe-want, vibe-how, and vibe-do; hosts may render them as labels like "Vibe: What Do I Want?", but they still resolve to the same canonical runtime authority. |
| Canonical Router | The internal logic that decides which skill to activate for your task. Just invoke /vibe and let it route automatically. |
| M / L / XL grade | Task complexity level. M = quick focused task, L = multi-step task, XL = large task with parallel work. Automatically selected. Public overrides are limited to --l and --xl; they are execution preferences, not separate entrypoints. |
| Frozen requirement | Once you confirm the plan, it is "frozen" — the system will not silently change scope mid-task. |
| Root / Child lane | In XL tasks, there is a "root" coordinator and "child" worker agents. Prevents conflicting outputs from parallel agents. |
| Proof bundle | Evidence that a task was actually completed correctly — test results, output, verification logs. |

Important

🎯 Core Vision

VibeSkills evolves with the times, staying genuinely useful while dramatically lowering the barrier to cutting-edge vibecoding technology and removing the cognitive anxiety and steep learning curve that come with new AI tools.

Whether or not you have a programming background, you can directly harness the most advanced AI capabilities with minimal effort. Productivity gains from AI should be available to everyone.



✨ What makes it different?

Traditional skill repos answer: "What tools do I have?" VibeSkills tackles the core pain point of heavy AI users: "How do I manage and invoke large numbers of Skills, and get work done efficiently and reliably?"


| ❌ Traditional Pain Points (you've probably felt these) | ✅ VibeSkills Solutions (what we've built) |
| --- | --- |
| **Skills never activate:** Hundreds of capabilities in the repo, but AI rarely remembers to use them — activation rate is extremely low. | 🧠 **Intelligent Routing:** The system automatically routes to the right skill based on context — no need to memorize a skill list. |
| **Blind execution:** AI dives in without clarifying requirements — fast but off-target, projects gradually become black boxes. | 🧭 **Governed Workflow:** Clarify → Verify → Trace is enforced in a unified process; every step is auditable. |
| **Conflicting tools:** Lack of coordination between plugins and workflows leads to environment pollution or infinite loops. | 🧩 **Global Governance:** 129 contract rules define safety boundaries and fallback mechanisms for long-term stability. |
| **Messy workspace:** After extended use, repos become cluttered; new Agents miss project details when taking over, causing handoff gaps. | 📁 **Semantic Directory Governance:** Fixed-architecture file storage so any new AI conversation instantly understands the project context. |
| **AI bad habits:** Deletes main files while clearing backups; writes silent fallbacks then confidently claims "it's done". | 🛡️ **Built-in Safety Rules:** Governed execution blocks dangerous bulk deletion and blind recursive wipes by default; fallback mechanisms must always show explicit warnings. |
| **Manual workflow discipline:** Users must maintain their own AI collaboration process from experience — high learning cost. | 🚦 **Framework-guided end-to-end:** Requirements → Plan → Multi-agent execution → Automated test iteration — fully managed. |
| **Skill dispatch chaos in multi-agent runs:** Hard to assign the right skills to each agent for different tasks. | 🤖 **Automatic Skill Dispatch:** Multi-agent workflows automatically assign the corresponding Skills to each Agent's task. |


👥 Who is it for?

Which of those pain points hit home? Find where you fit below; the rest of this README will make more sense once you do.

Is this for you? Click to expand
| Audience | Description |
| --- | --- |
| 🎯 Users who need reliable delivery | Want AI to be a dependable partner, not a runaway horse |
| Power users heavily relying on AI/Agents | Need a unified foundation to support large-scale workflows |
| 🏢 Small teams with high standardization needs | Want AI workflows to be more maintainable and transferable |
| 😩 Practitioners exhausted by skill sprawl | Already tired of tool hunting — just want a ready-to-use solution |

If you're looking for a single small script, this may be overkill. But if you want to use AI more reliably, smoothly, and sustainably — this is your indispensable foundation.



🔀 Intelligent Routing: How 340+ Skills Collaborate Without Conflict

The core point is simple: 340+ skills do not all compete at once. vibe is the governed coordinator, and other skills are routed in only when a specific phase or work unit actually needs them.

| Common worry | What actually happens |
| --- | --- |
| Similar skills will fight each other | The router picks one primary route first. Specialist skills stay scoped to a phase or bounded work unit. |
| Some skills look similar, so why keep both? | They usually exist for different phases, domains, or execution intensity. They are not meant to all fire on the same step. |
| XL means multiple agents can pull in anything | No. XL first splits the job into bounded units, then assigns skills per unit under coordinator approval. |

How routing works in practice

  • Start with one primary route: Most complex tasks enter through vibe, which stays responsible for the overall governed flow.
  • Bring in specialists only when needed: Requirement, planning, execution, and verification can each pull in different supporting skills, but only for that phase.
  • Keep the workflow stable: The path is still Clarify ➔ Plan ➔ Execute ➔ Verify, so more available skills do not mean a looser process.

Why similar skills can coexist

  • They are not all active at once. Routing chooses the skill that fits the current task or current step.
  • Some overlap on the surface but serve different roles: clarify vs plan, execute vs verify, or narrow work vs higher-risk work.
  • Governance rules, priority ordering, and exclusion rules keep same-role skills from colliding and provide fallback when the preferred one is unavailable.

M / L / XL Execution Levels

After selecting the primary route, the runtime also chooses the execution grade based on task complexity:

| Level | Use Case | Characteristics |
| --- | --- | --- |
| M | Narrow-scope work with clear boundaries | Single-agent, token-efficient, fast response |
| L | Medium complexity requiring design, planning, and review | Governed multi-step execution, usually in planned serial order |
| XL | Large tasks with independent parts worth splitting | The coordinator breaks work into bounded units and can run independent units in parallel waves |

Even in XL, this is not a free-for-all. The system decides the main route first, then assigns skills to each bounded unit under the same governed coordinator.


🔍 Expand: wrapper entrypoints, grade overrides, and routing notes
  • Public wrapper entries are vibe, vibe-want, vibe-how, and vibe-do. Hosts may render them as Vibe, Vibe: What Do I Want?, Vibe: How Do We Do It?, and Vibe: Do It, but they still enter the same governed runtime.
  • vibe runs the full governed flow.
  • vibe-want stops after the requirement is clarified and frozen.
  • vibe-how stops after the requirement and plan are frozen.
  • vibe-do runs the full governed flow without skipping requirement or plan.
  • The only lightweight public grade overrides are --l and --xl. Aliases like vibe-l, vibe-xl, or vibe-how-xl are intentionally unsupported.
  • When specialist skills such as tdd-guide or code-review are called, they assist a phase or a bounded unit. They do not take over global coordination.
  • In XL multi-agent work, worker lanes can suggest specialist help, but the coordinator approves the final assignment.


🧠 Memory System: Resume Context Across the Same Workspace

Routing decides which skill should lead. Memory decides whether the next session has to start from zero.

VibeSkills memory is built to solve three practical problems:

  • resume confirmed project context inside the same workspace
  • keep long tasks resumable after interruption or handoff
  • preserve decisions, handoff notes, and related evidence without dumping full history back into every prompt

It does not mean "save everything forever." By default, memory is scoped and layered: session state, project conventions, task-relevant retrieval, and controlled long-term knowledge all have different boundaries.


| What users usually ask | Default behavior |
| --- | --- |
| Do I need to re-explain project context in every new session? | No. Confirmed project context can be resumed inside the same workspace. |
| What if a long task gets interrupted? | Key progress can be folded into resumable working, tool, and evidence memory. |
| Will unrelated history flood the prompt? | No. Retrieval stays bounded and task-relevant. |
| Will one project leak into another? | No. Different workspaces stay isolated by default. |
| Does it write everything automatically? | No. Durable writes stay governed, and some writes require explicit confirmation. |

What the workspace-shared memory upgrade changes in practice

You can read the current behavior like this:

  • Same workspace can resume: codex, claude-code, and other supported hosts can reconnect to the same project memory inside one workspace.
  • Different workspaces stay isolated: even if two workspaces point at the same backend root, memory does not bleed across repos.
  • Only related memory comes back: generic scaffold terms such as $vibe, plan, or continuity are filtered out so recall depends on task-relevant content instead of noisy keywords.
  • Long tasks are easier to continue: the runtime keeps key decisions, handoff cards, and evidence anchors so a later turn or a new agent can continue from the useful parts.
  • Failure is explicit: if the workspace broker is unavailable, the runtime fails openly instead of pretending that memory continuity still exists.

What this system actually remembers

You can think of it as four memory categories rather than one giant "long-term memory":

  • Session memory

    • Keeps current progress, intermediate results, and temporary state
    • Useful for finishing the work happening right now
  • Project memory

    • Keeps confirmed project conventions, architecture decisions, and durable working agreements
    • Useful when you come back later and do not want to restate the same background
  • Task-semantic memory

    • Keeps the relevant fragments of long-running tasks easy to retrieve
    • Useful when the context gets large and earlier details would otherwise disappear
  • Long-term knowledge memory

    • Keeps durable relations, knowledge links, and information worth retaining across sessions
    • Useful when something should be preserved beyond a single task
📐 Expand: memory layers, write boundaries, and how the memory skills fit together

This part explains three things:

  1. which memory category is responsible for which job
  2. why several memory-related components exist at the same time
  3. which writes are automatic, which require confirmation, and which are optional extensions

Four memory categories and their primary owners

| Memory Category | Primary Owner | Default Scope | What It Keeps |
| --- | --- | --- | --- |
| Session memory | state_store | Current session | Execution progress, temporary state, intermediate results |
| Project memory | Serena | Current workspace / project | Confirmed architecture decisions, conventions, durable project rules |
| Task-semantic memory | ruflo | Intra-session / long-task retrieval | Relevant context fragments for long-running tasks |
| Long-term knowledge memory | Cognee | Controlled cross-session | Knowledge entities, relations, and durable knowledge links |

Optional extensions: mem0 can be used as a personal preference backend, and Letta can provide memory-block mapping vocabulary. Neither replaces the canonical memory roles above.

Why several memory layers coexist

They are not duplicate systems. They cover different responsibilities:

  • session memory helps finish the current task
  • project memory helps a later session reconnect to the same project
  • task-semantic memory helps long tasks recover the right context without replaying everything
  • long-term knowledge memory keeps the things worth retaining beyond a single task

If you removed any one of these layers, a different part of the workflow would get worse. Session memory alone cannot survive a later return, and long-term memory alone is too coarse to replace current-task state.
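The division of labor between the four layers can be sketched as a simple lookup. The owner names come from the table above, but the keyword-based routing here is a hypothetical simplification, not how the runtime actually decides.

```python
# Illustrative sketch of the four-layer split. Owner names follow the
# table above; the keyword lookup itself is hypothetical.

LAYERS = {
    "session":   {"owner": "state_store", "scope": "current session"},
    "project":   {"owner": "Serena",      "scope": "current workspace"},
    "task":      {"owner": "ruflo",       "scope": "long-task retrieval"},
    "knowledge": {"owner": "Cognee",      "scope": "cross-session"},
}

def layer_for(need: str) -> str:
    """Route a memory need to the layer responsible for it (simplified)."""
    if "progress" in need or "intermediate" in need:
        return "session"    # finish the work happening right now
    if "convention" in need or "architecture" in need:
        return "project"    # reconnect to the same project later
    if "earlier step" in need:
        return "task"       # recover long-task context without full replay
    return "knowledge"      # durable facts worth keeping beyond one task

print(layer_for("current progress"))          # session
print(layer_for("architecture conventions"))  # project
```

Each layer answers a different kind of question, which is why removing any one of them would degrade a different part of the workflow.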

How the memory skills fit into that model

These skills are not a second, competing memory system. They are common entrypoints or helpers around the layers above:

  • knowledge-steward

    • Best when a prompt, bug lesson, or insight is worth preserving on purpose
    • Think of it as "store this in the right long-term place"
  • digital-brain

    • Best when you want a more structured personal knowledge base
    • Think of it as a long-term knowledge organization entrypoint
  • deepagent-memory-fold

    • Best when a long task is getting too large and needs a clean handoff
    • Think of it as a continuity tool for long-running work

Write boundaries and governance

The important part is the boundary model, not just the feature names:

  • not everything becomes durable memory
  • project-level decision writes stay governed, and Serena requires confirmation before writing durable project truth
  • retrieval returns only bounded, relevant capsules instead of replaying the whole store
  • episodic-memory stays disabled
  • mem0 is limited to personal preferences rather than project truth or routing authority
  • every external backend can be disabled with a kill switch
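The boundary model above can be sketched as a small write gate. The backend names follow the text, but the gating logic itself is a hypothetical illustration of the rules, not the governed runtime's actual code.

```python
# Illustrative write-boundary sketch. Backend names follow the text above;
# the gating logic itself is hypothetical.

KILL_SWITCHES = {"serena": False, "cognee": False, "mem0": False}
# Any backend missing from the map (e.g. episodic memory) stays disabled.

def durable_write(backend: str, kind: str, confirmed: bool) -> str:
    if KILL_SWITCHES.get(backend, True):
        return "rejected: backend disabled"
    if backend == "mem0" and kind != "preference":
        return "rejected: mem0 holds personal preferences only"
    if backend == "serena" and not confirmed:
        return "rejected: durable project truth needs explicit confirmation"
    return "written"

print(durable_write("serena", "decision", confirmed=False))  # rejected
print(durable_write("serena", "decision", confirmed=True))   # written
print(durable_write("episodic", "note", confirmed=True))     # rejected
```

The point of the sketch: durability is opt-in, confirmation gates project truth, and unknown or disabled backends are rejected by default rather than written to.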

If you remember one thing

The goal is not to make AI remember everything about you. The goal is to resume the right project context, preserve the right task state, and keep durable knowledge in controlled places.

See workspace memory plane design for the technical contract and quantitative Codex memory simulation for the benchmark coverage.


✦ Full Capability Map: Your All-in-One Workbench

This section is not a full inventory of skill IDs. It is a practical map of the kinds of work VibeSkills can cover.

If you only want to judge whether VibeSkills fits your task, the table below is the fastest way to read it.


| Work Area | What It Helps With | Representative Engines |
| --- | --- | --- |
| 💡 Requirements, Planning & Product Work | Clarify vague ideas, write specs, and break work into executable plans and tasks | brainstorming, writing-plans, speckit-specify |
| 🏗️ Engineering, Architecture & Governed Execution | Design systems, implement changes, and coordinate multi-step governed workflows | aios-architect, autonomous-builder, vibe |
| 🔧 Debugging, Testing & Quality Control | Investigate failures, add tests, review code, and verify changes before completion | systematic-debugging, verification-before-completion, code-review |
| 📊 Data Analysis & Statistical Modeling | Clean data, run statistical analysis, explore patterns, and explain results | statistical-analysis, performing-regression-analysis, data-exploration-visualization |
| 🤖 Machine Learning & AI Engineering | Train, evaluate, explain, and iterate on model-driven workflows | senior-ml-engineer, training-machine-learning-models, evaluating-machine-learning-models |
| 🔬 Research, Literature & Life Sciences | Review papers, support scientific workflows, and handle bioinformatics-heavy tasks | literature-review, research-lookup, scanpy |
| 📐 Scientific Computing & Mathematical Modeling | Handle symbolic math, probabilistic modeling, simulation, and optimization | sympy, pymc-bayesian-modeling, pymoo |
| 🎨 Documentation, Visualization & Output | Turn work into readable docs, charts, figures, slides, and other deliverables | docs-write, plotly, scientific-visualization |
| 🔌 External Integrations, Automation & Delivery | Work with browsers, web content, external services, CI/CD, and deployment surfaces | playwright, scrapling, aios-devops |

👉 Expand if needed: detailed categories, usage scenarios, and why similar skills coexist

This section explains the full coverage in plain language. It is meant to answer three practical questions:

  1. When would this category be used?
  2. Why do several similar skills exist at the same time?
  3. Which entries are the representative starting points?

The names below are representative, not a full inventory dump. The point of this section is to explain roles and boundaries, not to turn the README into a warehouse list.


🧠 Requirements, Planning & Product Management

When this gets used: when the task is still fuzzy and the first job is to decide what problem is actually being solved before anyone starts coding.

Why similar skills coexist: they handle different stages of the same path. One clarifies the ask, another writes the spec, another turns that spec into a plan, and another breaks the plan into tasks.

How you usually meet them: early in a project, before a large change, or whenever a request is too vague to execute safely.

Representative entries: brainstorming, speckit-clarify, writing-plans, speckit-specify


🛠️ Software Engineering & Architecture

When this gets used: when the problem is clear enough to design system boundaries, make code changes, or coordinate a multi-step implementation.

Why similar skills coexist: some focus on architecture, some on implementation, and some on governed execution across several steps or agents. They are adjacent, but they are not doing the same job.

How you usually meet them: after planning is done, when a change touches several files, several layers, or several execution phases.

Representative entries: aios-architect, architecture-patterns, autonomous-builder, vibe


🔧 Debugging, Testing & Quality Assurance

When this gets used: when something is broken, risky, hard to trust, or ready for review.

Why similar skills coexist: debugging, testing, review, and final verification are separate actions. A quick bug-fix entrypoint is not the same thing as a disciplined debugging workflow, and neither replaces review or regression checks.

How you usually meet them: after a failure, before a PR, or whenever a change needs evidence instead of guesswork.

Representative entries: systematic-debugging, error-resolver, verification-before-completion, code-review


📊 Data Analysis & Statistical Modeling

When this gets used: when the main task is to understand data, clean it, test assumptions, or explain findings.

Why similar skills coexist: some are for cleaning and exploration, some for statistical testing, some for visualization, and some for specific data types or pipelines. They support one another, rather than duplicating one another.

How you usually meet them: before modeling, during experiment analysis, or anytime the question is "what does this data actually say?"

Representative entries: statistical-analysis, performing-regression-analysis, detecting-data-anomalies, data-exploration-visualization


🤖 Machine Learning & AI Engineering

When this gets used: when the task is no longer just data understanding, but model building, evaluation, iteration, and explanation.

Why similar skills coexist: training, evaluation, explainability, and experiment tracking are different parts of a model workflow. A model-training skill should not be expected to cover data analysis, and an explainability skill should not be expected to replace training infrastructure.

How you usually meet them: after data prep is done, when you need to train something, compare results, or understand why a model behaves a certain way.

Representative entries: senior-ml-engineer, training-machine-learning-models, evaluating-machine-learning-models, explaining-machine-learning-models


🧬 Research, Literature & Life Sciences

When this gets used: when the work itself is research-heavy, especially in literature review, scientific support, life sciences, or bioinformatics.

Why similar skills coexist: research workflows are naturally multi-step. One skill helps find papers, another structures evidence, another handles scientific analysis, and another focuses on life-science-specific toolchains.

How you usually meet them: when the request is about papers, experiments, scientific evidence, single-cell workflows, genomics, or drug-related analysis.

Representative entries: literature-review, research-lookup, biopython, scanpy


🔬 Scientific Computing & Mathematical Logic

When this gets used: when the hard part of the task is mathematical reasoning, symbolic work, formal modeling, simulation, or optimization.

Why similar skills coexist: some focus on symbolic derivation, some on probabilistic models, some on simulation, and some on optimization or formal logic. They may sit near each other, but they solve different kinds of mathematical work.

How you usually meet them: in research-heavy tasks, quantitative modeling, or workflows where natural-language reasoning is not precise enough.

Representative entries: sympy, pymc-bayesian-modeling, pymoo, qiskit


🎨 Multimedia, Visualization & Documentation

When this gets used: when the job is to turn work into something another person can read, present, review, or publish.

Why similar skills coexist: a chart generator, a documentation writer, a slide tool, and an image tool are all output layers, but they serve different formats and audiences. They belong in the same family because they are delivery surfaces, not because they are interchangeable.

How you usually meet them: near the end of a workflow, once results need to become reports, figures, slides, diagrams, or polished documentation.

Representative entries: docs-write, plotly, scientific-visualization, generate-image


🔌 External Integrations, Automation & Deployment

When this gets used: when the task depends on browsers, web content, design surfaces, external services, CI, or deployment.

Why similar skills coexist: browser interaction, content extraction, external service adapters, and deployment automation are related, but they solve different surface-level problems. playwright and scrapling, for example, both touch the web, but one is better for browser behavior and the other for fetching or extracting content efficiently.

How you usually meet them: when the work cannot stay inside the model alone and needs to touch the outside world.

Representative entries: playwright, scrapling, mcp-integration, aios-devops


Taken together, these categories are meant to cover different task types, different workflow stages, and different output surfaces. Similar skills usually coexist for predictable reasons: stage differences, domain specialization, host adaptation, or format-specific delivery.



📊 Why is it powerful?

Now for the numbers. This isn't a demo project — it's a running system.

The runtime core behind VibeSkills is VCO. It is not a single-point tool or a "code completion" script — it is a deeply integrated, governed network of capabilities:


| 🧩 Skill Modules | 🌍 Ecosystem | ⚖️ Governance Rules |
| --- | --- | --- |
| **340+** directly callable Skills, covering the full chain from requirements to delivery | **19+** absorbed high-value upstream open-source projects and best practices | **129** policy rules and contracts, ensuring stable, traceable, divergence-free execution |


⚙️ Installation & Skills Management

You do not need to learn the whole architecture before you install VibeSkills.

Default install path

  1. Decide which app you are installing into: codex, claude-code, cursor, windsurf, openclaw, or opencode
  2. If this is your first install and you have no special constraint, choose install + full
  3. Open the main install guide: Prompt-based install (recommended)
  4. Copy the prompt that matches your app and version, then paste it into that AI app
  5. Finish the install, then continue with Getting Started

full or minimal?

  • Choose full if you want the recommended setup and the simplest default path
  • Choose minimal only if you deliberately want the smaller framework-only install

When should you open the other install docs?

🔧 Advanced install details

Only read this part if you need manual configuration, troubleshooting, or advanced customization.

If a guide asks you to edit something manually, these are the real file paths

  • Codex: ~/.codex/settings.json
  • Claude Code: ~/.claude/settings.json
  • Cursor: ~/.cursor/settings.json
  • OpenCode: ~/.config/opencode/opencode.json
  • Windsurf / OpenClaw sidecar state: <target-root>/.vibeskills/host-settings.json

What stays visible after install

  • public runtime entry: <target-root>/skills/vibe
  • internal bundled corpus: <target-root>/skills/vibe/bundled/skills/*
  • compatibility helper files: only when a host explicitly needs them

The .vibeskills folders are split on purpose:

  • host-sidecar: <target-root>/.vibeskills/host-settings.json, host-closure.json, install-ledger.json, bin/*
  • workspace-sidecar: <workspace-root>/.vibeskills/project.json, .vibeskills/docs/requirements/*, .vibeskills/docs/plans/*, .vibeskills/outputs/runtime/vibe-sessions/*
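If you want to check what an install actually left behind, the sidecar layout above can be inspected with a small helper. The file names come from the layout just listed; the helper itself is hypothetical, not a shipped tool.

```python
# Illustrative helper that reports which sidecar entries exist under a root.
# File names follow the layout above; the helper itself is hypothetical.
import tempfile
from pathlib import Path

HOST_SIDECAR = ["host-settings.json", "host-closure.json", "install-ledger.json"]
WORKSPACE_SIDECAR = ["project.json", "docs/requirements", "docs/plans",
                     "outputs/runtime/vibe-sessions"]

def sidecar_status(root: Path, names: list[str]) -> dict[str, bool]:
    """Report which expected entries exist under <root>/.vibeskills."""
    base = root / ".vibeskills"
    return {name: (base / name).exists() for name in names}

# Demo against a throwaway directory, so no real install is required:
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / ".vibeskills").mkdir()
    (root / ".vibeskills" / "project.json").write_text("{}")
    status = sidecar_status(root, WORKSPACE_SIDECAR)

print(status["project.json"])       # True
print(status["docs/requirements"])  # False
```

Point it at a real host root or workspace root (with the matching name list) to see which sidecar files a given install actually created.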

What has been verified after install

| Host | Verified areas after install |
| --- | --- |
| codex | planning, debug, governed execution, memory continuity |
| claude-code | planning, debug, governed execution, memory continuity |
| openclaw | planning, debug, governed execution, memory continuity |
| opencode | planning, debug, governed execution, memory continuity |

These checks confirm that the installed runtime still controls routing, still writes its governance and cleanup records, and still preserves memory continuity. They do not mean that every host-specific invocation surface was exercised in the exact same way.

Uninstall and custom skills

📦 Standing on the Shoulders of Giants

These capabilities were not built in isolation. VibeSkills draws on existing open-source projects, patterns, and tools, then adapts them into one governed runtime.

VibeSkills does not claim to replace or fully reproduce every upstream project listed below. The practical goal is narrower: reuse proven ideas where they fit, connect them through one runtime and governance layer, and make them easier to activate together in day-to-day work.

🙏 Acknowledgements

This project references, adapts, or integrates ideas, workflows, or tooling from projects such as:

superpower · claude-scientific-skills · get-shit-done · aios-core · OpenSpec · ralph-claude-code · SuperClaude_Framework · spec-kit · Agent-S · mem0 · scrapling · claude-flow · serena · everything-claude-code · DeepAgent and more

We try to attribute upstream work carefully. If we missed a source or described a dependency inaccurately, please open an Issue and we will correct it.



🚀 Getting Started

If VibeSkills is already installed, start with one invocation.

⚠️ Invocation note: VibeSkills uses a Skills-format runtime. Invoke it through your host's Skills entrypoint, not as a standalone CLI program.


| Host Environment | Invocation | Example |
| --- | --- | --- |
| Claude Code | /vibe | Plan this task /vibe |
| Codex | $vibe | Plan this task $vibe |
| OpenCode | /vibe | Plan this task with vibe. |
| OpenClaw | Skills entry | Refer to the host docs |
| Cursor / Windsurf | Skills entry | Refer to each platform's Skills docs |

  • First try a small request such as planning, clarifying, or breaking down a task.
  • If you want later turns to stay inside the governed workflow, append $vibe or /vibe to each message.
  • If VibeSkills is not installed yet, start with Prompt-based install (recommended).

MCP note: $vibe or /vibe only enters the governed runtime. It is not MCP completion, and it does not by itself prove that MCP is installed in the host's native MCP surface.

Public host status: codex and claude-code are the clearest install-and-use paths today. cursor, windsurf, openclaw, and opencode are available too, but some of those paths are still preview-oriented or host-specific.



📚 Documentation & Installation Guides (click to expand)

Start here

Open only if needed


🤝 Join the Community · Build Together

Give it a try! If you have questions, ideas, or suggestions, feel free to open an issue — I'll take every piece of feedback seriously and make improvements.


This project is fully open source. All contributions are welcome!

Whether it's fixing bugs, improving performance, adding features, or improving documentation — every PR is deeply appreciated.

Fork → Modify → Pull Request → Merge ✅

⭐ If this project helps you, a Star is the greatest support you can give! Its underlying philosophy has been well-received; however, the current codebase carries some technical debt, and certain features still require refinement. We welcome you to point out any such issues in the Issues section. Your support is the enriched uranium that fuels this nuclear-powered donkey 🫏


Thank you to the LinuxDo community for your support!

LinuxDo: tech discussions, AI frontiers, AI experience sharing — all at LinuxDo!



Star History Chart

Transform the parts of real work most prone to going off the rails into a system that is more callable, more governable, and more maintainable over time.


Made with ❤️  ·  GitHub  ·  中文

Release History

v3.0.4 · 2026-04-19 · Urgency: High · base commit `eeb09f3` · previous public release `v3.0.3` (`48743c5`)

  • Refreshed `v3.0.4` from a later maintained source so the Windows verification-gate repair does not remain stranded behind the original `2026-04-18` cut. This refresh packages the current maintained source at `eeb09f3`.
  • Replaced the odd "open a Codex child session to ask a specialist" behavior with direct in-session specialist routing.

v3.0.3 · 2026-04-15 · Urgency: High · base commit `537db3f` · previous public release `v3.0.2` (`235e6e1`)

  • Promoted the latest governed repository state after `v3.0.2` instead of leaving the newer runtime, install, and bootstrap work stranded behind the public line. This cut packages the current maintained source at `537db3f`.
  • Added explicit host-global bootstrap lifecycle support for supported instruction-file hosts (Codex, Claude Code, …)

v3.0.2 · 2026-04-13 · Urgency: Medium · base commit `c2f98b5` · previous public release `v3.0.1` (`d94b772`)

  • Published the current `main` line instead of leaving recent work stranded behind the `v3.0.1` side-release branch. This cut packages the latest governed repository state at `c2f98b5` as the next public release.
  • Added a more explicit upgrade path for existing installations. The repo now ships the `vibe-upgrade` skill surface, version …

v3.0.1 · 2026-04-09 · Urgency: Medium · base commit `a5befb8` · previous remote release `v3.0.0`

  • Restored install-time generated nested compatibility for managed installs. The installer now materializes the nested `bundled/skills/vibe` compatibility surface again, and the materializer no longer deletes the source tree when the source root already points at that nested skills directory.
  • Hardened the governed release operator instead of relying on a fragile happy path. …

v3.0.0 · 2026-04-07 · Urgency: Medium · base commit `9bea31e` · previous remote release `v2.3.55` · unpublished absorbed baseline `v2.3.56`

  • Promoted the unpublished `v2.3.56` architecture-closure baseline into the official major-release line instead of leaving that closure work stranded as a local-only release note. The repository now advances from the last public `v2.3.55` tag to a new `v3.0.0` baseline in one truthful step.
  • Hardened install and uninstall behavior around …

v2.3.55 · 2026-03-30 · Urgency: Medium · base commit `f3ab6e5` · previous release `v2.3.54`

  • Promoted the unified owned-only uninstall surface into the stable release line. `uninstall.sh` / `uninstall.ps1` now route supported hosts through the same adapter-driven contract, default to direct uninstall, and remove only content that the install ledger, host closure, or conservative legacy rules can prove belongs to Vibe.
  • Realigned host installs, checks, and runtime framing …

v2.3.54 · 2026-03-30 · Urgency: Medium · base commit `1a049f1` · previous release `v2.3.53`

  • Closed the release-surface truth gap left by `v2.3.53`: `release-cut.ps1` is now the authoritative path for version governance, changelog / ledger writes, `docs/releases/README.md`, dist manifest `source_release` updates, and bundled / nested bundled sync during release apply.
  • Promoted runtime contract debt work into a documented baseline instead of scattered implicit structure. …
v2.3.54# VCO Release v2.3.54 - Date: 2026-03-30 - Commit(base): 1a049f1 - Previous release: `v2.3.53` ## Highlights - Closed the release-surface truth gap left by `v2.3.53`: `release-cut.ps1` is now the authoritative path for version governance, changelog / ledger writes, `docs/releases/README.md`, dist manifest `source_release` updates, and bundled / nested bundled sync during release apply. - Promoted runtime contract debt work into a documented baseline instead of scattered implicit structure. RuMedium3/30/2026
v2.3.53# VCO Release v2.3.53 - Date: 2026-03-30 - Commit(base): 4f5676f - Previous release: `v2.3.52` ## Highlights - Closed the governed specialist-dispatch gap with explicit custom-admission handling and restored the missing delegated-lane / host-adapter metadata handoff. Router admission, confirm UI, runtime input packets, delegated lane payloads, and specialist execution now carry the same bounded-dispatch truth instead of relying on weaker implied behavior. - Hardened Windows PowerShell host reMedium3/29/2026
v2.3.52# VCO Release v2.3.52 - Date: 2026-03-29 - Commit(base): 870fd20 - Previous release: `v2.3.51` ## Highlights - Landed stage-aware memory activation inside the normal six-stage `vibe` governed runtime. Governed runs now emit `memory-activation-report.json` and `memory-activation-report.md`, carry bounded memory-context injection into requirement/plan artifacts, and keep the activation path visible in per-stage receipts. - Added real governed backend adapter calls for `Serena`, `ruflo`, and `CoMedium3/29/2026
v2.3.51# VCO Release v2.3.51 - Date: 2026-03-28 - Commit(base): 18e6b9c - Previous release: `v2.3.50` ## Highlights - Moved downstream delivery acceptance from an external verification layer into the normal `vibe` governed runtime main chain. Governed runs now freeze product acceptance criteria, manual spot checks, completion-language policy, and a delivery-truth contract directly in the main requirement surface. - Extended the governed plan and closure path so `xl_plan` records a delivery-acceptancMedium3/28/2026
v2.3.50# VCO Release v2.3.50 - Date: 2026-03-26 - Commit(base): d5da111 ## Highlights - Added a router AI connectivity probe for the governance advice path, including a PowerShell gate, a runtime-neutral Python probe, structured status artifacts, and install-entry quick checks that distinguish local install completion from online governance readiness. - Hardened the LLM acceleration overlay’s optional-field handling so verification and fallback behavior stay stable when provider-side fields are missMedium3/26/2026
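The v2.3.50 notes describe a runtime-neutral connectivity probe that emits a structured status artifact separating "local install complete" from "online governance ready". As a rough illustration of that pattern only, here is a minimal Python sketch; the function name, endpoint URL, and status fields are hypothetical assumptions, not VibeSkills' actual probe contract.

```python
import json
import time
from urllib.error import URLError
from urllib.request import urlopen

# Hypothetical sketch of a connectivity probe with a structured status
# artifact. Field names and the health URL are illustrative assumptions.
def probe_governance(url, timeout=3.0, check=None):
    """Return a status dict that separates local install readiness
    from online governance reachability."""
    # The network check is injectable so the probe stays runtime-neutral
    # and testable offline; the default uses a plain HTTP fetch.
    check = check or (lambda u: urlopen(u, timeout=timeout))
    status = {"local_install": "complete", "checked_at": time.time()}
    try:
        check(url)
        status["online_governance"] = "ready"
    except (URLError, OSError):
        # Install can be complete even when governance is unreachable.
        status["online_governance"] = "unreachable"
    return status

if __name__ == "__main__":
    # Stubbed check so the sketch runs without network access.
    report = probe_governance("https://example.invalid/health",
                              check=lambda u: True)
    print(json.dumps(report, indent=2))
```

Writing the resulting dict to a JSON file would give the kind of "structured status artifact" the release notes mention, letting install-entry quick checks read one file instead of re-probing the network.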


Similar Packages

| Package | Description | Latest |
| --- | --- | --- |
| MeowKit | Production ready. AI Agent Workflow System for Claude Code | v2.6.0 |
| yao-meta-skill | YAO = Yielding AI Outcomes. A lightweight but rigorous system for creating, evaluating, packaging, and governing reusable agent skills. | main@2026-04-19 |
| shipped-by-agents | The building blocks of an enterprise adoption framework for agentic coding — technical training, adoption playbooks, governance policies, industry analysis, proposal templates, and practical workflo… | v0.1 |
| awesome-top-skills | 🤖 Discover top AI agent skills with our curated collection, featuring automated updates and precise classification from the GitHub ecosystem. | main@2026-04-21 |
| planning-with-files | 📄 Transform your workflow with persistent markdown files for planning, tracking progress, and storing knowledge like a pro. | master@2026-04-21 |