If your AI supports skills, VibeSkills works. 340+ skills spanning coding, research, data science & creative work.
This is a new breed of "super skill" that essentially operates as a full-fledged Agent system.
Packaged as individual Skills, it offers plug-and-play installation and on-demand execution.
Backed by a highly customizable framework, it connects effortlessly to your own workflows.
🧠 Planning · 🛠️ Engineering · 🤖 AI · 🔬 Research · 🎨 Creation
Install → vibe | vibe-want | vibe-how | vibe-do → Smart Routing → M / L / XL Execution → Governance Verification → ✅ Delivery
- What makes it different
- Who is it for
- Intelligent Routing
- Memory System
- Full Capability Map
- Installation & Management
- Getting Started
🔑 New here? Quick glossary of key terms (click to expand)
| Term | Plain-English Meaning |
|---|---|
| VibeSkills / VCO | This project. VCO = Vibe Code Orchestrator — the runtime engine behind the skills. |
| Skill | A focused capability module (e.g., tdd-guide, code-review). Think of skills as expert assistants the system calls on demand. |
| Governed runtime | When you invoke vibe, the system follows a structured process — clarify → plan → execute → verify — instead of diving in blindly. The public discoverable wrapper set is vibe, vibe-want, vibe-how, and vibe-do; hosts may render them as labels like Vibe: What Do I Want?, but they still resolve to the same canonical runtime authority. |
| Canonical Router | The internal logic that decides which skill to activate for your task. Just invoke /vibe and let it route automatically. |
| M / L / XL grade | Task complexity level. M = quick focused task, L = multi-step task, XL = large task with parallel work. Automatically selected. Public overrides are limited to --l and --xl; they are execution preferences, not separate entrypoints. |
| Frozen requirement | Once you confirm the plan, it is "frozen" — the system will not silently change scope mid-task. |
| Root / Child lane | In XL tasks, there is a "root" coordinator and "child" worker agents. Prevents conflicting outputs from parallel agents. |
| Proof bundle | Evidence that a task was actually completed correctly — test results, output, verification logs. |
Important
VibeSkills evolves with the times, staying genuinely useful while dramatically lowering the barrier to cutting-edge vibecoding and removing the cognitive anxiety and steep learning curve that come with new AI tools.
Whether or not you have a programming background, you can directly harness the most advanced AI capabilities with minimal effort. Productivity gains from AI should be available to everyone.
Traditional skill repos answer: "What tools do I have?" VibeSkills tackles the core pain point of heavy AI users: "How do I manage and invoke large numbers of Skills, and get work done efficiently and reliably?"
| ❌ Traditional Pain Points (you've probably felt these) | ✅ VibeSkills Solutions (what we've built) |
|---|---|
| Skills never activate: Hundreds of capabilities in the repo, but AI rarely remembers to use them — activation rate is extremely low. | 🧠 Intelligent Routing: The system automatically routes to the right skill based on context — no need to memorize a skill list. |
| Blind execution: AI dives in without clarifying requirements — fast but off-target, projects gradually become black boxes. | 🧭 Governed Workflow: Clarify → Verify → Trace is enforced in a unified process; every step is auditable. |
| Conflicting tools: Lack of coordination between plugins and workflows leads to environment pollution or infinite loops. | 🧩 Global Governance: 129 contract rules define safety boundaries and fallback mechanisms for long-term stability. |
| Messy workspace: After extended use, repos become cluttered; new Agents miss project details when taking over, causing handoff gaps. | 📁 Semantic Directory Governance: Fixed-architecture file storage so any new AI conversation instantly understands the project context. |
| AI bad habits: Deletes main files while clearing backups; writes silent fallbacks then confidently claims "it's done". | 🛡️ Built-in Safety Rules: Governed execution blocks dangerous bulk deletion and blind recursive wipes by default; fallback mechanisms must always show explicit warnings. |
| Manual workflow discipline: Users must maintain their own AI collaboration process from experience — high learning cost. | 🚦 Framework-guided end-to-end: Requirements → Plan → Multi-agent execution → Automated test iteration — fully managed. |
| Skill dispatch chaos in multi-agent runs: Hard to assign the right skills to each agent for different tasks. | 🤖 Automatic Skill Dispatch: Multi-agent workflows automatically assign the corresponding Skills to each Agent's task. |
Which of those pain points hit home? Knowing where you stand will make what comes next land harder.
Is this for you? Click to expand
| Audience | Description |
|---|---|
| 🎯 Users who need reliable delivery | Want AI to be a dependable partner, not a runaway horse |
| ⚡ Power users heavily relying on AI/Agents | Need a unified foundation to support large-scale workflows |
| 🏢 Small teams with high standardization needs | Want AI workflows to be more maintainable and transferable |
| 😩 Practitioners exhausted by skill sprawl | Already tired of tool hunting — just want a ready-to-use solution |
If you're looking for a single small script, this may be overkill. But if you want to use AI more reliably, smoothly, and sustainably — this is your indispensable foundation.
The core point is simple: 340+ skills do not all compete at once. vibe is the governed coordinator, and other skills are routed in only when a specific phase or work unit actually needs them.
| Common worry | What actually happens |
|---|---|
| Similar skills will fight each other | The router picks one primary route first. Specialist skills stay scoped to a phase or bounded work unit. |
| Some skills look similar, so why keep both? | They usually exist for different phases, domains, or execution intensity. They are not meant to all fire on the same step. |
| XL means multiple agents can pull in anything | No. XL first splits the job into bounded units, then assigns skills per unit under coordinator approval. |
- Start with one primary route: Most complex tasks enter through `vibe`, which stays responsible for the overall governed flow.
- Bring in specialists only when needed: Requirement, planning, execution, and verification can each pull in different supporting skills, but only for that phase.
- Keep the workflow stable: The path is still Clarify ➔ Plan ➔ Execute ➔ Verify, so more available skills do not mean a looser process.
- They are not all active at once. Routing chooses the skill that fits the current task or current step.
- Some overlap on the surface but serve different roles: clarify vs plan, execute vs verify, or narrow work vs higher-risk work.
- Governance rules, priority ordering, and exclusion rules keep same-role skills from colliding and provide fallback when the preferred one is unavailable.
After selecting the primary route, the runtime also chooses the execution grade based on task complexity:
| Level | Use Case | Characteristics |
|---|---|---|
| M | Narrow-scope work with clear boundaries | Single-agent, token-efficient, fast response |
| L | Medium complexity requiring design, planning, and review | Governed multi-step execution, usually in planned serial order |
| XL | Large tasks with independent parts worth splitting | The coordinator breaks work into bounded units and can run independent units in parallel waves |
Even in XL, this is not a free-for-all. The system decides the main route first, then assigns skills to each bounded unit under the same governed coordinator.
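If you want to bias the grade instead of letting it auto-select, the two public overrides ride along with a normal invocation. A minimal sketch, assuming a Claude Code-style `/vibe` entry and hypothetical task text, with the flag simply appended to the message (exact flag placement may vary by host):

```text
Tighten the CLI error messages /vibe --l
Refactor the payment module and add regression tests /vibe --xl
```

Either way, the overrides remain execution preferences; routing and governance still run the same way.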
🔍 Expand: wrapper entrypoints, grade overrides, and routing notes
- Public wrapper entries are `vibe`, `vibe-want`, `vibe-how`, and `vibe-do`. Hosts may render them as `Vibe`, `Vibe: What Do I Want?`, `Vibe: How Do We Do It?`, and `Vibe: Do It`, but they still enter the same governed runtime (see the examples after this list).
- `vibe` runs the full governed flow. `vibe-want` stops after the requirement is clarified and frozen. `vibe-how` stops after the requirement and plan are frozen. `vibe-do` runs the full governed flow without skipping requirement or plan.
- The only lightweight public grade overrides are `--l` and `--xl`. Aliases like `vibe-l`, `vibe-xl`, or `vibe-how-xl` are intentionally unsupported.
- When specialist skills such as `tdd-guide` or `code-review` are called, they assist a phase or a bounded unit. They do not take over global coordination.
- In XL multi-agent work, worker lanes can suggest specialist help, but the coordinator approves the final assignment.
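To make the wrapper boundaries concrete, here is an illustrative set of invocations on a Codex-style host (the task text is hypothetical; each wrapper stops exactly where the list above says it does):

```text
Add rate limiting to the upload API $vibe        # full governed flow
Add rate limiting to the upload API $vibe-want   # stops once the requirement is frozen
Add rate limiting to the upload API $vibe-how    # stops once requirement + plan are frozen
Add rate limiting to the upload API $vibe-do     # full flow; requirement and plan still run
```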
Routing decides which skill should lead. Memory decides whether the next session has to start from zero.
VibeSkills memory is built to solve three practical problems:
- resume confirmed project context inside the same workspace
- keep long tasks resumable after interruption or handoff
- preserve decisions, handoff notes, and related evidence without dumping full history back into every prompt
It does not mean "save everything forever." By default, memory is scoped and layered: session state, project conventions, task-relevant retrieval, and controlled long-term knowledge all have different boundaries.
| What users usually ask | Default behavior |
|---|---|
| Do I need to re-explain project context in every new session? | No. Confirmed project context can be resumed inside the same workspace. |
| What if a long task gets interrupted? | Key progress can be folded into resumable working, tool, and evidence memory. |
| Will unrelated history flood the prompt? | No. Retrieval stays bounded and task-relevant. |
| Will one project leak into another? | No. Different workspaces stay isolated by default. |
| Does it write everything automatically? | No. Durable writes stay governed, and some writes require explicit confirmation. |
You can read the current behavior like this:
- Same workspace can resume: `codex`, `claude-code`, and other supported hosts can reconnect to the same project memory inside one workspace.
- Different workspaces stay isolated: even if two workspaces point at the same backend root, memory does not bleed across repos.
- Only related memory comes back: generic scaffold terms such as `$vibe`, `plan`, or `continuity` are filtered out, so recall depends on task-relevant content instead of noisy keywords.
- Long tasks are easier to continue: the runtime keeps key decisions, handoff cards, and evidence anchors so a later turn or a new agent can continue from the useful parts.
- Failure is explicit: if the workspace broker is unavailable, the runtime fails openly instead of pretending that memory continuity still exists.
You can think of it as four memory categories rather than one giant "long-term memory":
- `Session memory`
  - Keeps current progress, intermediate results, and temporary state
  - Useful for finishing the work happening right now
- `Project memory`
  - Keeps confirmed project conventions, architecture decisions, and durable working agreements
  - Useful when you come back later and do not want to restate the same background
- `Task-semantic memory`
  - Keeps the relevant fragments of long-running tasks easy to retrieve
  - Useful when the context gets large and earlier details would otherwise disappear
- `Long-term knowledge memory`
  - Keeps durable relations, knowledge links, and information worth retaining across sessions
  - Useful when something should be preserved beyond a single task
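As an illustration of the four categories above, here is where a few facts might land during one long task (the facts are invented for the example):

```text
"Migration script is half done; resume from step 3"    → session memory
"This repo uses pytest and conventional commits"       → project memory
"Earlier in this task we picked schema v2 over v1"     → task-semantic memory
"Service A depends on library B's deprecated API"      → long-term knowledge memory
```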
📐 Expand: memory layers, write boundaries, and how the memory skills fit together
This part explains three things:
- which memory category is responsible for which job
- why several memory-related components exist at the same time
- which writes are automatic, which require confirmation, and which are optional extensions
| Memory Category | Primary Owner | Default Scope | What It Keeps |
|---|---|---|---|
| Session memory | `state_store` | Current session | execution progress, temporary state, intermediate results |
| Project memory | `Serena` | Current workspace / project | confirmed architecture decisions, conventions, durable project rules |
| Task-semantic memory | `ruflo` | Intra-session / long-task retrieval | relevant context fragments for long-running tasks |
| Long-term knowledge memory | `Cognee` | Controlled cross-session knowledge | entities, relations, and durable knowledge links |

> Optional extensions: `mem0` can be used as a personal preference backend, and `Letta` can provide memory-block mapping vocabulary. Neither replaces the canonical memory roles above.
They are not duplicate systems. They cover different responsibilities:
- session memory helps finish the current task
- project memory helps a later session reconnect to the same project
- task-semantic memory helps long tasks recover the right context without replaying everything
- long-term knowledge memory keeps the things worth retaining beyond a single task
If you removed any one of these layers, a different part of the workflow would get worse. Session memory alone cannot survive a later return, and long-term memory alone is too coarse to replace current-task state.
These skills are not a second, competing memory system. They are common entrypoints or helpers around the layers above:
- `knowledge-steward`
  - Best when a prompt, bug lesson, or insight is worth preserving on purpose
  - Think of it as "store this in the right long-term place"
- `digital-brain`
  - Best when you want a more structured personal knowledge base
  - Think of it as a long-term knowledge organization entrypoint
- `deepagent-memory-fold`
  - Best when a long task is getting too large and needs a clean handoff
  - Think of it as a continuity tool for long-running work
The important part is the boundary model, not just the feature names:
- not everything becomes durable memory
- project-level decision writes stay governed, and `Serena` requires confirmation before writing durable project truth
- retrieval returns only bounded, relevant capsules instead of replaying the whole store
- `episodic-memory` stays disabled
- `mem0` is limited to personal preferences rather than project truth or routing authority
- every external backend can be disabled with a kill switch
The goal is not to make AI remember everything about you. The goal is to resume the right project context, preserve the right task state, and keep durable knowledge in controlled places.
See workspace memory plane design for the technical contract and quantitative Codex memory simulation for the benchmark coverage.
This section is not a full inventory of skill IDs. It is a practical map of the kinds of work VibeSkills can cover.
If you only want to judge whether VibeSkills fits your task, the table below is the fastest way to read it.
| Work Area | What It Helps With | Representative Engines |
|---|---|---|
| 💡 Requirements, Planning & Product Work | Clarify vague ideas, write specs, and break work into executable plans and tasks | brainstorming, writing-plans, speckit-specify |
| 🏗️ Engineering, Architecture & Governed Execution | Design systems, implement changes, and coordinate multi-step governed workflows | aios-architect, autonomous-builder, vibe |
| 🔧 Debugging, Testing & Quality Control | Investigate failures, add tests, review code, and verify changes before completion | systematic-debugging, verification-before-completion, code-review |
| 📊 Data Analysis & Statistical Modeling | Clean data, run statistical analysis, explore patterns, and explain results | statistical-analysis, performing-regression-analysis, data-exploration-visualization |
| 🤖 Machine Learning & AI Engineering | Train, evaluate, explain, and iterate on model-driven workflows | senior-ml-engineer, training-machine-learning-models, evaluating-machine-learning-models |
| 🔬 Research, Literature & Life Sciences | Review papers, support scientific workflows, and handle bioinformatics-heavy tasks | literature-review, research-lookup, scanpy |
| 📐 Scientific Computing & Mathematical Modeling | Handle symbolic math, probabilistic modeling, simulation, and optimization | sympy, pymc-bayesian-modeling, pymoo |
| 🎨 Documentation, Visualization & Output | Turn work into readable docs, charts, figures, slides, and other deliverables | docs-write, plotly, scientific-visualization |
| 🔌 External Integrations, Automation & Delivery | Work with browsers, web content, external services, CI/CD, and deployment surfaces | playwright, scrapling, aios-devops |
👉 Expand if needed: detailed categories, usage scenarios, and why similar skills coexist
This section explains the full coverage in plain language. It is meant to answer three practical questions:
- When would this category be used?
- Why do several similar skills exist at the same time?
- Which entries are the representative starting points?
The names below are representative, not a full inventory dump. The point of this section is to explain roles and boundaries, not to turn the README into a warehouse list.
When this gets used: when the task is still fuzzy and the first job is to decide what problem is actually being solved before anyone starts coding.
Why similar skills coexist: they handle different stages of the same path. One clarifies the ask, another writes the spec, another turns that spec into a plan, and another breaks the plan into tasks.
How you usually meet them: early in a project, before a large change, or whenever a request is too vague to execute safely.
Representative entries: brainstorming, speckit-clarify, writing-plans, speckit-specify
When this gets used: when the problem is clear enough to design system boundaries, make code changes, or coordinate a multi-step implementation.
Why similar skills coexist: some focus on architecture, some on implementation, and some on governed execution across several steps or agents. They are adjacent, but they are not doing the same job.
How you usually meet them: after planning is done, when a change touches several files, several layers, or several execution phases.
Representative entries: aios-architect, architecture-patterns, autonomous-builder, vibe
When this gets used: when something is broken, risky, hard to trust, or ready for review.
Why similar skills coexist: debugging, testing, review, and final verification are separate actions. A quick bug-fix entrypoint is not the same thing as a disciplined debugging workflow, and neither replaces review or regression checks.
How you usually meet them: after a failure, before a PR, or whenever a change needs evidence instead of guesswork.
Representative entries: systematic-debugging, error-resolver, verification-before-completion, code-review
When this gets used: when the main task is to understand data, clean it, test assumptions, or explain findings.
Why similar skills coexist: some are for cleaning and exploration, some for statistical testing, some for visualization, and some for specific data types or pipelines. They support one another, rather than duplicating one another.
How you usually meet them: before modeling, during experiment analysis, or anytime the question is "what does this data actually say?"
Representative entries: statistical-analysis, performing-regression-analysis, detecting-data-anomalies, data-exploration-visualization
When this gets used: when the task is no longer just data understanding, but model building, evaluation, iteration, and explanation.
Why similar skills coexist: training, evaluation, explainability, and experiment tracking are different parts of a model workflow. A model-training skill should not be expected to cover data analysis, and an explainability skill should not be expected to replace training infrastructure.
How you usually meet them: after data prep is done, when you need to train something, compare results, or understand why a model behaves a certain way.
Representative entries: senior-ml-engineer, training-machine-learning-models, evaluating-machine-learning-models, explaining-machine-learning-models
When this gets used: when the work itself is research-heavy, especially in literature review, scientific support, life sciences, or bioinformatics.
Why similar skills coexist: research workflows are naturally multi-step. One skill helps find papers, another structures evidence, another handles scientific analysis, and another focuses on life-science-specific toolchains.
How you usually meet them: when the request is about papers, experiments, scientific evidence, single-cell workflows, genomics, or drug-related analysis.
Representative entries: literature-review, research-lookup, biopython, scanpy
When this gets used: when the hard part of the task is mathematical reasoning, symbolic work, formal modeling, simulation, or optimization.
Why similar skills coexist: some focus on symbolic derivation, some on probabilistic models, some on simulation, and some on optimization or formal logic. They may sit near each other, but they solve different kinds of mathematical work.
How you usually meet them: in research-heavy tasks, quantitative modeling, or workflows where natural-language reasoning is not precise enough.
Representative entries: sympy, pymc-bayesian-modeling, pymoo, qiskit
When this gets used: when the job is to turn work into something another person can read, present, review, or publish.
Why similar skills coexist: a chart generator, a documentation writer, a slide tool, and an image tool are all output layers, but they serve different formats and audiences. They belong in the same family because they are delivery surfaces, not because they are interchangeable.
How you usually meet them: near the end of a workflow, once results need to become reports, figures, slides, diagrams, or polished documentation.
Representative entries: docs-write, plotly, scientific-visualization, generate-image
When this gets used: when the task depends on browsers, web content, design surfaces, external services, CI, or deployment.
Why similar skills coexist: browser interaction, content extraction, external service adapters, and deployment automation are related, but they solve different surface-level problems. playwright and scrapling, for example, both touch the web, but one is better for browser behavior and the other for fetching or extracting content efficiently.
How you usually meet them: when the work cannot stay inside the model alone and needs to touch the outside world.
Representative entries: playwright, scrapling, mcp-integration, aios-devops
Taken together, these categories are meant to cover different task types, different workflow stages, and different output surfaces. Similar skills usually coexist for predictable reasons: stage differences, domain specialization, host adaptation, or format-specific delivery.
Now for the numbers. This isn't a demo project — it's a running system.
The runtime core behind VibeSkills is VCO. This is not a single-point tool or a "code completion" script; it is a deeply integrated, governed super-capability network:
| 🧩 Skill Modules | 🌍 Ecosystem | ⚖️ Governance Rules |
|---|---|---|
| Directly callable Skills covering the full chain from requirements to delivery | Absorbed high-value upstream open-source projects and best practices | Policy rules and contracts ensuring stable, traceable, divergence-free execution |
You do not need to learn the whole architecture before you install VibeSkills.
- Decide which app you are installing into: `codex`, `claude-code`, `cursor`, `windsurf`, `openclaw`, or `opencode`
- If this is your first install and you have no special constraint, choose `install + full`
- Open the main install guide: Prompt-based install (recommended)
- Copy the prompt that matches your app and version, then paste it into that AI app
- Finish the install, then continue with Getting Started
- Choose `full` if you want the recommended setup and the simplest default path
- Choose `minimal` only if you deliberately want the smaller framework-only install
- If you are not sure which host path matches your app, start with the cold-start host matrix
- If you want the longer step-by-step command path, use the multi-host command reference
- If you need host-specific notes for OpenClaw or OpenCode, open the OpenClaw host guide or the OpenCode host guide
- If you need an offline or manual copy path, open the manual install guide
🔧 Advanced install details
Only read this part if you need manual configuration, troubleshooting, or advanced customization.
If a guide asks you to edit something manually, these are the real file paths
- Codex: `~/.codex/settings.json`
- Claude Code: `~/.claude/settings.json`
- Cursor: `~/.cursor/settings.json`
- OpenCode: `~/.config/opencode/opencode.json`
- Windsurf / OpenClaw sidecar state: `<target-root>/.vibeskills/host-settings.json`
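If you are not sure which of these files exists on your machine, a quick shell check helps (a convenience sketch, not part of the installer; the paths are the ones listed above):

```bash
# List which host settings files are present on this machine.
for f in ~/.codex/settings.json \
         ~/.claude/settings.json \
         ~/.cursor/settings.json \
         ~/.config/opencode/opencode.json; do
  [ -f "$f" ] && echo "found:   $f" || echo "missing: $f"
done
```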
What stays visible after install
- public runtime entry: `<target-root>/skills/vibe`
- internal bundled corpus: `<target-root>/skills/vibe/bundled/skills/*`
- compatibility helper files: only when a host explicitly needs them
The `.vibeskills` folders are split on purpose:

- host-sidecar: `<target-root>/.vibeskills/host-settings.json`, `host-closure.json`, `install-ledger.json`, `bin/*`
- workspace-sidecar: `<workspace-root>/.vibeskills/project.json`, `.vibeskills/docs/requirements/*`, `.vibeskills/docs/plans/*`, `.vibeskills/outputs/runtime/vibe-sessions/*`
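Laid out as a tree (illustrative; only the paths listed above are load-bearing):

```text
<target-root>/.vibeskills/            # host-sidecar
├── host-settings.json
├── host-closure.json
├── install-ledger.json
└── bin/

<workspace-root>/.vibeskills/         # workspace-sidecar
├── project.json
├── docs/
│   ├── requirements/
│   └── plans/
└── outputs/runtime/vibe-sessions/
```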
What has been verified after install
| Host | Verified areas after install |
|---|---|
| `codex` | planning, debug, governed execution, memory continuity |
| `claude-code` | planning, debug, governed execution, memory continuity |
| `openclaw` | planning, debug, governed execution, memory continuity |
| `opencode` | planning, debug, governed execution, memory continuity |
These checks confirm that the installed runtime still controls routing, still writes its governance and cleanup records, and still preserves memory continuity. They do not mean that every host-specific invocation surface was exercised in the exact same way.
Uninstall and custom skills
- uninstall paths: `uninstall.ps1 -HostId <host>` and `uninstall.sh --host <host>`
- uninstall governance notes: `docs/uninstall-governance.md`
- custom skill onboarding: custom workflow & skill onboarding guide
These capabilities were not built in isolation. VibeSkills draws on existing open-source projects, patterns, and tools, then adapts them into one governed runtime.
VibeSkills does not claim to replace or fully reproduce every upstream project listed below. The practical goal is narrower: reuse proven ideas where they fit, connect them through one runtime and governance layer, and make them easier to activate together in day-to-day work.
🙏 Acknowledgements
This project references, adapts, or integrates ideas, workflows, or tooling from projects such as:
> `superpower` · `claude-scientific-skills` · `get-shit-done` · `aios-core` · `OpenSpec` · `ralph-claude-code` · `SuperClaude_Framework` · `spec-kit` · `Agent-S` · `mem0` · `scrapling` · `claude-flow` · `serena` · `everything-claude-code` · `DeepAgent` and more.
>
> We try to attribute upstream work carefully. If we missed a source or described a dependency inaccurately, please open an Issue and we will correct it.
If VibeSkills is already installed, start with one invocation.
⚠️ Invocation note: VibeSkills uses a Skills-format runtime. Invoke it through your host's Skills entrypoint, not as a standalone CLI program.
| Host Environment | Invocation | Example |
|---|---|---|
| Claude Code | `/vibe` | `Plan this task /vibe` |
| Codex | `$vibe` | `Plan this task $vibe` |
| OpenCode | `/vibe` | `Plan this task with vibe.` |
| OpenClaw | Skills entry | Refer to the host docs |
| Cursor / Windsurf | Skills entry | Refer to each platform's Skills docs |
- First try a small request such as planning, clarifying, or breaking down a task.
- If you want later turns to stay inside the governed workflow, append `$vibe` or `/vibe` to each message.
- If VibeSkills is not installed yet, start with Prompt-based install (recommended).
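Putting that together, a typical opening exchange on a Codex-style host might look like this (the task text is only an example):

```text
Turn 1: Plan the database migration $vibe
Turn 2: Now execute step 1 of that plan $vibe
```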
> MCP note: `$vibe` or `/vibe` only enters the governed runtime. It is not MCP completion, and it does not by itself prove that MCP is installed in the host's native MCP surface.
Public host status: `codex` and `claude-code` are the clearest install-and-use paths today. `cursor`, `windsurf`, `openclaw`, and `opencode` are available too, but some of those paths are still preview-oriented or host-specific.
📚 Documentation & Installation Guides (click to expand)
Start here
Open only if needed
Give it a try! If you have questions, ideas, or suggestions, feel free to open an issue — I'll take every piece of feedback seriously and make improvements.
This project is fully open source. All contributions are welcome!
Whether it's fixing bugs, improving performance, adding features, or improving documentation — every PR is deeply appreciated.
Fork → Modify → Pull Request → Merge ✅
⭐ If this project helps you, a Star is the greatest support you can give! Its underlying philosophy has been well-received; however, the current codebase carries some technical debt, and certain features still require refinement. We welcome you to point out any such issues in the Issues section. Your support is the enriched uranium that fuels this nuclear-powered donkey 🫏
Thank you to the LinuxDo community for your support!
Tech discussions, AI frontiers, and AI experience sharing, all at LinuxDo!
