
vurb.ts

Vurb.ts — The TypeScript Framework for MCP Servers. Type-safe tools, structured AI perception, and built-in security. Deploy once — every AI assistant connects instantly.


README

Vurb.ts

The Express.js for MCP Servers.
Type-safe tools · Presenters that control what the LLM sees · Built-in PII redaction · Deploy once — every AI assistant connects.


Documentation · Quick Start · API Reference · llms.txt


Get Started in 5 Seconds

vurb create my-server

Open it in Cursor, Claude Code, or GitHub Copilot and prompt:

💬 Tell your AI agent:

"Build an MCP server for patient records with Prisma. Redact SSN and diagnosis from LLM output. Add an FSM that gates discharge tools until attending physician signs off."

▶ Open in Claude · ▶ Open in ChatGPT

The agent reads the SKILL.md (or the llms.txt) and writes the entire server. First pass — no corrections.

One command. Your MCP server is live on Vinkius Edge, Vercel Functions, or Cloudflare Workers.

vurb deploy

A production-ready MCP server with file-based routing, Presenters, middleware, tests, and pre-configured connections for Cursor, Claude Desktop, Claude Code, Windsurf, Cline, and VS Code + GitHub Copilot.




Zero Learning Curve — Ship a SKILL.md, Not a Tutorial

Every framework you've adopted followed the same loop: read the docs, study the conventions, hit an edge case, search GitHub issues, re-read the docs. Weeks before your first production PR. Your AI coding agent does the same โ€” it hallucinates Express patterns into your Hono project because it has no formal contract to work from.

Vurb.ts ships a SKILL.md โ€” a machine-readable architectural contract that your AI agent ingests before generating a single line. Not a tutorial. Not a "getting started guide" the LLM will paraphrase loosely. A typed specification: every Fluent API method, every builder chain, every Presenter composition rule, every middleware signature, every file-based routing convention. The agent doesn't approximate โ€” it compiles against the spec.

The agent reads SKILL.md and produces:

// src/tools/patients/discharge.ts — generated by your AI agent
import { createPresenter, f, t } from '@vurb/core'; // import assumed; the example elides it
const PatientPresenter = createPresenter('Patient')
    .schema({ id: t.string, name: t.string, ssn: t.string, diagnosis: t.string })
    .redactPII(['ssn', 'diagnosis'])
    .rules(['HIPAA: diagnosis visible in UI blocks but REDACTED in LLM output']);

const gate = f.fsm({
    id: 'discharge', initial: 'admitted',
    states: {
        admitted:   { on: { SIGN_OFF: 'cleared' } },
        cleared:    { on: { DISCHARGE: 'discharged' } },
        discharged: { type: 'final' },
    },
});

export default f.mutation('patients.discharge')
    .describe('Discharge a patient')
    .bindState('cleared', 'DISCHARGE')
    .returns(PatientPresenter)
    .handle(async (input, ctx) => ctx.db.patients.update({
        where: { id: input.id }, data: { status: 'discharged' },
    }));

Correct Presenter with .redactPII(). FSM gating that makes patients.discharge invisible until sign-off. File-based routing. Typed handler. First pass — no corrections.

This works on Cursor, Claude Code, GitHub Copilot, Windsurf, Cline — any agent that can read a file. The SKILL.md is the single source of truth: the agent doesn't need to have been trained on Vurb.ts, it just needs to read the spec.

You don't learn Vurb.ts. You don't teach your agent Vurb.ts. You hand it a 400-line contract. It writes the server. You review the PR.

🤖 Don't have Cursor? Try it right now — zero install

Click one of these links. The AI will read the Vurb.ts architecture and generate production-ready code in seconds:

The "super prompt" behind these links forces the AI to read vurb.vinkius.com/llms.txt before writing code — guaranteeing correct MVA patterns, not hallucinated syntax.


Scaffold Options

vurb create my-server
  Project name?  › my-server
  Transport?     › http
  Vector?        › vanilla

  ● Scaffolding project — 14 files (6ms)
  ● Installing dependencies...
  ✔ Done — vurb dev to start

Choose a vector to scaffold exactly the project you need:

Vector    What it scaffolds
vanilla   autoDiscover() file-based routing. Zero external deps
prisma    Prisma schema + CRUD tools with field-level security
n8n       n8n workflow bridge — auto-discover webhooks as tools
openapi   OpenAPI 3.x / Swagger 2.0 → full MVA tool generation
oauth     RFC 8628 Device Flow authentication

Deploy Targets

Choose where your server runs with --target:

Target              Runtime              Deploy with
vinkius (default)   Vinkius Edge         vurb deploy
vercel              Vercel Functions     vercel deploy
cloudflare          Cloudflare Workers   wrangler deploy

# Vinkius Edge (default) — deploy with vurb deploy
vurb create my-server --yes

# Vercel Functions — Next.js App Router + @vurb/vercel adapter
vurb create my-server --target vercel --yes

# Cloudflare Workers — wrangler + @vurb/cloudflare adapter
vurb create my-server --target cloudflare --yes

Each target scaffolds the correct project structure, adapter imports, config files (next.config.ts, wrangler.toml), and deploy instructions. Same Fluent API, same Presenters, same middleware — only the transport layer changes.

# Database-driven server with Presenter egress firewall
vurb create my-api --vector prisma --transport http --yes

# Bridge your n8n workflows to any MCP client
vurb create ops-bridge --vector n8n --yes

# REST API โ†’ MCP in one command
vurb create petstore --vector openapi --yes

Drop a file in src/tools/, restart — it's a live MCP tool. No central import file, no merge conflicts:

src/tools/
├── billing/
│   ├── get_invoice.ts  → billing.get_invoice
│   └── pay.ts          → billing.pay
├── users/
│   ├── list.ts         → users.list
│   └── ban.ts          → users.ban
└── system/
    └── health.ts       → system.health
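As a rough sketch of the convention, a file path under src/tools/ maps to a dotted tool name. The helper name below is hypothetical (not a @vurb/core export); autoDiscover() presumably also loads and registers the modules, while this shows only the naming rule:

```typescript
// Hypothetical helper illustrating the file-to-tool-name convention:
// drop the extension, then replace path separators with dots.
function deriveToolName(relativePath: string): string {
  return relativePath
    .replace(/\.(ts|js)$/, '') // billing/get_invoice.ts -> billing/get_invoice
    .split('/')
    .join('.');                // billing/get_invoice -> billing.get_invoice
}

console.log(deriveToolName('billing/get_invoice.ts')); // billing.get_invoice
console.log(deriveToolName('system/health.ts'));       // system.health
```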

Why Vurb.ts Exists

Every raw MCP server does the same thing: JSON.stringify() the database result and ship it to the LLM. Three catastrophic consequences:

// What every MCP tutorial teaches
server.setRequestHandler(CallToolRequestSchema, async (request) => {
    const { name, arguments: args } = request.params;
    if (name === 'get_invoice') {
        const invoice = await db.invoices.findUnique(args.id);
        return { content: [{ type: 'text', text: JSON.stringify(invoice) }] };
        // AI receives: { password_hash, internal_margin, customer_ssn, ... }
    }
    // ...50 more if/else branches
});

🔴 Data exfiltration. JSON.stringify(invoice) sends password_hash, internal_margin, customer_ssn — every column — straight to the LLM provider. One field = one GDPR violation.

🔴 Token explosion. Every tool schema is sent on every turn, even when irrelevant. System prompt rules for every domain entity are sent globally, bloating context with wasted tokens.

🔴 Context DDoS. An unbounded findMany() can dump thousands of rows into the context window. The LLM hallucinates. Your API bill explodes.

Raw MCP SDK vs. Vurb.ts

                      Raw SDK                               Vurb.ts
Data leakage          🔴 JSON.stringify() — every column    🟢 Presenter schema — allowlist only
PII protection        🔴 Manual, error-prone                🟢 .redactPII() — zero-leak guarantee
Tool routing          🔴 Giant if/else chains               🟢 File-based autoDiscover()
Context bloat         🔴 Unbounded findMany()               🟢 .limit() + TOON encoding
Hallucination guard   🔴 None                               🟢 8 anti-hallucination mechanisms
Temporal safety       🔴 LLM calls anything anytime         🟢 FSM State Gate — tools disappear
Governance            🔴 None                               🟢 Lockfile + SHA-256 attestation
Multi-agent           🔴 Manual HTTP wiring                 🟢 @vurb/swarm FHP — zero-trust B2BUA
Lines of code         🔴 ~200 per tool                      🟢 ~15 per tool
AI agent setup        🔴 Days of learning                   🟢 Reads SKILL.md — first pass correct

The MVA Solution

Vurb.ts replaces JSON.stringify() with a Presenter โ€” a deterministic perception layer that controls exactly what the agent sees, knows, and can do next.

Handler (Model)          Presenter (View)              Agent (LLM)
───────────────          ────────────────              ───────────
Raw DB data        →     Zod-validated schema      →   Structured
{ amount_cents,          + System rules                perception
  password_hash,         + UI blocks (charts)          package
  internal_margin,       + Suggested next actions
  ssn, ... }             + PII redaction
                         + Cognitive guardrails
                         - password_hash  ← STRIPPED
                         - internal_margin ← STRIPPED
                         - ssn ← REDACTED

The result is not JSON — it's a Perception Package:

Block 1 — DATA:    {"id":"INV-001","amount_cents":45000,"status":"pending"}
Block 2 — UI:      [ECharts gauge chart config]
Block 3 — RULES:   "amount_cents is in CENTS. Divide by 100 for display."
Block 4 — ACTIONS: → billing.pay: "Invoice is pending — process payment"
Block 5 — EMBEDS:  [Client Presenter + LineItem Presenter composed]

No guessing. Undeclared fields rejected. Domain rules travel with data — not in the system prompt. Next actions computed from data state.
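The block structure can be sketched in plain TypeScript. This is an illustration of the idea only, not the @vurb/core implementation; `present` and the `Block` type are invented for the example:

```typescript
// Illustrative sketch: a perception package bundles validated data with
// the rules and state-computed next actions that travel alongside it.
type Block = { kind: 'DATA' | 'RULES' | 'ACTIONS'; body: string };

interface Invoice { id: string; amount_cents: number; status: 'paid' | 'pending' }

function present(inv: Invoice): Block[] {
  // Next actions are computed from data state, never hardcoded per call.
  const actions =
    inv.status === 'pending'
      ? ['billing.pay: "Invoice is pending: process payment"']
      : ['billing.archive: "Invoice settled: archive it"'];
  return [
    { kind: 'DATA', body: JSON.stringify(inv) },
    { kind: 'RULES', body: 'amount_cents is in CENTS. Divide by 100 for display.' },
    { kind: 'ACTIONS', body: actions.join('\n') },
  ];
}

const pkg = present({ id: 'INV-001', amount_cents: 45000, status: 'pending' });
```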


Before vs. After

🔴 DANGER ZONE — raw MCP:

case 'get_invoice':
    const invoice = await db.invoices.findUnique(args.id);
    return { content: [{ type: 'text', text: JSON.stringify(invoice) }] };
    // Leaks internal columns. No rules. No guidance.

🟢 SAFE ZONE — Vurb.ts with MVA:

import { createPresenter, f, suggest, ui, t } from '@vurb/core'; // f assumed exported alongside the rest

const InvoicePresenter = createPresenter('Invoice')
    .schema({
        id:           t.string,
        amount_cents: t.number.describe('Amount in cents — divide by 100'),
        status:       t.enum('paid', 'pending', 'overdue'),
    })
    .rules(['CRITICAL: amount_cents is in CENTS. Divide by 100 for display.'])
    .redactPII(['*.customer_ssn', '*.credit_card'])
    .ui((inv) => [
        ui.echarts({
            series: [{ type: 'gauge', data: [{ value: inv.amount_cents / 100 }] }],
        }),
    ])
    .suggest((inv) =>
        inv.status === 'pending'
            ? [suggest('billing.pay', 'Invoice pending — process payment')]
            : [suggest('billing.archive', 'Invoice settled — archive it')]
    )
    .embed('client', ClientPresenter)
    .embed('line_items', LineItemPresenter)
    .limit(50);

export default f.query('billing.get_invoice')
    .describe('Get an invoice by ID')
    .withString('id', 'Invoice ID')
    .returns(InvoicePresenter)
    .handle(async (input, ctx) => ctx.db.invoices.findUnique({
        where: { id: input.id },
        include: { client: true, line_items: true },
    }));

The handler returns raw data. The Presenter shapes absolutely everything the agent perceives.

๐Ÿ—๏ธ Architect's Checklist โ€” when reviewing AI-generated Vurb code, verify:

  1. .schema() only declares fields the LLM needs โ€” undeclared columns are stripped.
  2. .redactPII() is called on the Presenter, not the handler โ€” Late Guillotine pattern.
  3. .rules() travel with data, not in the system prompt โ€” contextual, not global.
  4. .suggest() computes next actions from data state โ€” not hardcoded.

Architecture

Egress Firewall — Schema as Security Boundary

The Presenter's Zod schema acts as a whitelist. Only declared fields pass through. A database migration that adds customer_ssn doesn't change what the agent sees โ€” the new column is invisible unless you explicitly declare it in the schema.

const UserPresenter = createPresenter('User')
    .schema({ id: t.string, name: t.string, email: t.string });
// password_hash, tenant_id, internal_flags → STRIPPED at RAM level
// A developer CANNOT accidentally expose a new column
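The stripping itself can be sketched in a few lines of plain TypeScript; `egress` is a hypothetical name for the copy-by-allowlist step, not a framework API (the real framework additionally validates with Zod):

```typescript
// Conceptual sketch of schema-as-allowlist: only declared keys are copied,
// so undeclared columns never leave the process.
function egress(
  row: Record<string, unknown>,
  declared: readonly string[],
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const key of declared) {
    if (key in row) out[key] = row[key];
  }
  return out; // password_hash, tenant_id, ... simply do not exist here
}

const row = { id: 'u1', name: 'Ada', email: 'ada@example.com', password_hash: 'x' };
const safe = egress(row, ['id', 'name', 'email']);
```

A new database column is invisible by default: nothing passes until it is added to the declared list.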

💬 Tell your AI agent:

"Add an Egress Firewall to the User Presenter — only expose id, name, and email. Strip password_hash and tenant_id at RAM level."

▶ Open in Claude · ▶ Open in ChatGPT

DLP Compliance Engine — PII Redaction

GDPR / LGPD / HIPAA compliance built into the framework. .redactPII() compiles a V8-optimized redaction function via fast-redact that masks sensitive fields after UI blocks and rules have been computed (Late Guillotine Pattern) — the LLM receives [REDACTED] instead of real values.

const PatientPresenter = createPresenter('Patient')
    .schema({ name: t.string, ssn: t.string, diagnosis: t.string })
    .redactPII(['ssn', 'diagnosis'])
    .ui((patient) => [
        ui.markdown(`**Patient:** ${patient.name}`),
        // patient.ssn available for UI logic — but LLM sees [REDACTED]
    ]);

Custom censors, wildcard paths ('*.email', 'patients[*].diagnosis'), and centralized PII field lists. Zero-leak guarantee — the developer cannot accidentally bypass redaction.

๐Ÿ—๏ธ Architect's Check: Always verify that .redactPII() runs on the Presenter, not in the handler. The Late Guillotine pattern ensures UI blocks can use real values for logic, but the LLM never sees them.

💬 Tell your AI agent:

"Add PII redaction to the PatientPresenter — mask ssn and diagnosis. Use the Late Guillotine pattern so UI blocks can reference real values but the LLM sees [REDACTED]."

▶ Open in Claude · ▶ Open in ChatGPT

8 Anti-Hallucination Mechanisms

① Action Consolidation  → groups operations behind fewer tools   → ↓ tokens
② TOON Encoding         → pipe-delimited compact descriptions    → ↓ tokens
③ Zod .strict()         → rejects hallucinated params at build   → ↓ retries
④ Self-Healing Errors   → directed correction prompts            → ↓ retries
⑤ Cognitive Guardrails  → .limit() truncates before LLM sees it  → ↓ tokens
⑥ Agentic Affordances   → HATEOAS next-action hints from data    → ↓ retries
⑦ JIT Context Rules     → rules travel with data, not globally   → ↓ tokens
⑧ State Sync            → RFC 7234 cache-control for agents      → ↓ requests

Each mechanism compounds. Fewer tokens in context, fewer requests per task, less hallucination, lower cost.
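For the TOON Encoding mechanism above, a pipe-delimited layout can be sketched like this. The exact TOON format Vurb.ts emits may differ; `encodeRows` is illustrative, and the saving comes from stating the keys once instead of once per record:

```typescript
// Hypothetical compact row encoding: one header line, one line per record.
function encodeRows(rows: Record<string, unknown>[]): string {
  if (rows.length === 0) return '';
  const keys = Object.keys(rows[0]);
  const header = keys.join('|');
  const body = rows.map((r) => keys.map((k) => String(r[k])).join('|'));
  return [header, ...body].join('\n');
}

const rows = [
  { id: 'INV-001', status: 'pending', amount_cents: 45000 },
  { id: 'INV-002', status: 'paid', amount_cents: 1200 },
];
const toon = encodeRows(rows);
// toon repeats no key names, so it is shorter than JSON.stringify(rows)
```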

FSM State Gate — Temporal Anti-Hallucination

The first framework where it is physically impossible for an AI to execute tools out of order.

LLMs are chaotic — even with HATEOAS suggestions, a model can ignore them and call cart.pay with an empty cart. The FSM State Gate makes temporal hallucination structurally impossible: if the workflow state is empty, the cart.pay tool doesn't exist in tools/list. The LLM literally cannot call it.

const gate = f.fsm({
    id: 'checkout',
    initial: 'empty',
    states: {
        empty:     { on: { ADD_ITEM: 'has_items' } },
        has_items: { on: { CHECKOUT: 'payment', CLEAR: 'empty' } },
        payment:   { on: { PAY: 'confirmed', CANCEL: 'has_items' } },
        confirmed: { type: 'final' },
    },
});

const pay = f.mutation('cart.pay')
    .describe('Process payment')
    .bindState('payment', 'PAY')  // Visible ONLY in 'payment' state
    .handle(async (input, ctx) => ctx.db.payments.process(input.method));

State       Visible Tools
empty       cart.add_item, cart.view
has_items   cart.add_item, cart.checkout, cart.view
payment     cart.pay, cart.view
confirmed   cart.view

Three complementary layers: Format (Zod validates shape), Guidance (HATEOAS suggests the next tool), Gate (FSM physically removes wrong tools). XState v5 powered, serverless-ready with fsmStore.
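The gate layer reduces to one idea: tool visibility is a function of the current FSM state. The shapes below are invented for illustration (this is not the f.fsm() or bindState internals), but they show why an out-of-order call is structurally impossible, since the tool is absent from the list:

```typescript
// Sketch: derive the visible tool list from the current workflow state.
type State = 'empty' | 'has_items' | 'payment' | 'confirmed';

// Assumed visibility map, matching the table above.
const visibility: Record<string, State[]> = {
  'cart.view':     ['empty', 'has_items', 'payment', 'confirmed'],
  'cart.add_item': ['empty', 'has_items'],
  'cart.checkout': ['has_items'],
  'cart.pay':      ['payment'],
};

function visibleTools(state: State): string[] {
  return Object.keys(visibility).filter((tool) => visibility[tool].includes(state));
}

console.log(visibleTools('empty'));   // cart.pay is simply not in the list
console.log(visibleTools('payment')); // now cart.pay exists
```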

💬 Tell your AI agent:

"Add an FSM State Gate to the checkout flow — cart.pay is only visible in the 'payment' state. Use bindState to physically remove tools from tools/list."

▶ Open in Claude · ▶ Open in ChatGPT

Zero-Trust Sandbox — Computation Delegation

The LLM sends JavaScript logic to your data instead of shipping data to the LLM. Code runs inside a sealed V8 isolate — zero access to process, require, fs, net, fetch, Buffer. Timeout kill, memory cap, output limit, automatic isolate recovery, and AbortSignal kill-switch (Connection Watchdog).

export default f.query('analytics.compute')
    .describe('Run a computation on server-side data')
    .sandboxed({ timeout: 3000, memoryLimit: 64 })
    .handle(async (input, ctx) => {
        const data = await ctx.db.records.findMany();
        const engine = f.sandbox({ timeout: 3000, memoryLimit: 64 });
        try {
            const result = await engine.execute(input.expression, data);
            if (!result.ok) return f.error('VALIDATION_ERROR', result.error)
                .suggest('Fix the JavaScript expression and retry.');
            return result.value;
        } finally { engine.dispose(); }
    });

.sandboxed() auto-injects HATEOAS instructions into the tool description — the LLM knows exactly how to format its code. Prototype pollution contained. constructor.constructor escape blocked. One isolate per engine, new pristine context per call.
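A rough approximation of the "send logic to the data" shape is possible with Node's built-in vm module. Note the limits of the sketch: node:vm enforces a timeout but is not a hardened security boundary, so the sealed-isolate guarantees described above would require something stronger (such as a dedicated isolate library); `runExpression` is an invented helper:

```typescript
// Sketch only: demonstrates delegated computation with a timeout.
// node:vm is NOT the sealed V8 isolate the framework describes.
import * as vm from 'node:vm';

function runExpression(expression: string, data: unknown[], timeoutMs = 3000): unknown {
  // The context exposes only `data`: no process, require, fs, or fetch.
  const context = vm.createContext({ data });
  return vm.runInContext(expression, context, { timeout: timeoutMs });
}

const total = runExpression('data.reduce((s, n) => s + n, 0)', [1, 2, 3]);
console.log(total); // 6
```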

💬 Tell your AI agent:

"Add a sandboxed computation tool that lets the LLM send JavaScript to run on server-side data inside a sealed V8 isolate. Timeout 3s, memory 64MB."

▶ Open in Claude · ▶ Open in ChatGPT

State Sync — Temporal Awareness for Agents

LLMs have no sense of time. After sprints.list then sprints.create, the agent still believes the list is unchanged. Vurb.ts injects RFC 7234-inspired cache-control signals:

const listSprints = f.query('sprints.list')
    .stale()                              // no-store — always re-fetch
    .handle(async (input, ctx) => ctx.db.sprints.findMany());

const createSprint = f.action('sprints.create')
    .invalidates('sprints.*', 'tasks.*')  // causal cross-domain invalidation
    .withString('name', 'Sprint name')
    .handle(async (input, ctx) => ctx.db.sprints.create(input));
// After mutation: [System: Cache invalidated for sprints.*, tasks.* — caused by sprints.create]
// Failed mutations emit nothing — state didn't change.

Registry-level policies with f.stateSync(), glob patterns (*, **), policy overlap detection, observability hooks, and MCP notifications/resources/updated emission.
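The glob matching behind .invalidates() could look roughly like this. A single `*` matching exactly one dotted segment is an assumption for the sketch, and `**` (which the framework also supports) is omitted; `matchesGlob` is an invented helper:

```typescript
// Sketch: match invalidation globs like 'sprints.*' against tool names.
function matchesGlob(pattern: string, toolName: string): boolean {
  const source = pattern
    .split('.')
    .map((seg) =>
      seg === '*'
        ? '[^.]+' // one dotted segment
        : seg.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'), // escape literal chars
    )
    .join('\\.');
  return new RegExp(`^${source}$`).test(toolName);
}

console.log(matchesGlob('sprints.*', 'sprints.list'));  // true
console.log(matchesGlob('sprints.*', 'tasks.create'));  // false
```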

💬 Tell your AI agent:

"Mark 'sprints.list' as stale (no-store) and configure 'sprints.create' to invalidate sprints.* and tasks.* on mutation. Use RFC 7234 cache-control signals."

▶ Open in Claude · ▶ Open in ChatGPT

Prompt Engine — Server-Side Templates

MCP Prompts as executable server-side templates with the same Fluent API as tools. Middleware, hydration timeout, schema-informed coercion, interceptors, multi-modal messages, and the Presenter bridge:

const IncidentAnalysis = f.prompt('incident_analysis')
    .title('Incident Analysis')
    .describe('Structured analysis of a production incident')
    .tags('engineering', 'ops')
    .input({
        incident_id: { type: 'string', description: 'Incident ticket ID' },
        severity: { enum: ['sev1', 'sev2', 'sev3'] as const },
    })
    .use(requireAuth, requireRole('engineer'))
    .timeout(10_000)
    .handler(async (ctx, { incident_id, severity }) => {
        const incident = await ctx.db.incidents.findUnique({ where: { id: incident_id } });
        return {
            messages: [
                PromptMessage.system(`You are a Senior SRE. Severity: ${severity.toUpperCase()}.`),
                ...PromptMessage.fromView(IncidentPresenter.make(incident, ctx)),
                PromptMessage.user('Begin root cause analysis.'),
            ],
        };
    });

PromptMessage.fromView() decomposes any Presenter into prompt messages โ€” same schema, same rules, same affordances in both tools and prompts. Multi-modal with .image(), .audio(), .resource(). Interceptors inject compliance footers after every handler. PromptRegistry with filtering, pagination, and lifecycle sync.
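The decomposition idea can be sketched without the framework. The `Msg` shape and `fromView` body below are illustrative inventions, not the PromptMessage API; they only show how one presenter view can yield both rule messages and a data message so tools and prompts share a single perception:

```typescript
// Sketch: decompose a presenter-like view into ordered prompt messages.
type Msg = { role: 'system' | 'user'; text: string };

function fromView(view: { data: string; rules: string[] }): Msg[] {
  return [
    // Rules travel as system messages, alongside the data they govern.
    ...view.rules.map((rule): Msg => ({ role: 'system', text: rule })),
    // The validated data itself becomes a user-visible message.
    { role: 'user', text: view.data },
  ];
}

const msgs = fromView({
  data: '{"id":"INC-42","severity":"sev1"}',
  rules: ['severity is an enum: sev1 | sev2 | sev3'],
});
```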

💬 Tell your AI agent:

"Create a prompt called 'incident_analysis' with auth middleware, severity enum input, and PromptMessage.fromView() that decomposes the IncidentPresenter into structured messages."

▶ Open in Claude · ▶ Open in ChatGPT

Release History

Version   Date        Urgency
v3.17.1   4/21/2026   High
  Fixed
  - @vurb/yaml — Fixed `ReferenceError: crypto is not defined` in `BasicToolExecutor` and `YamlMcpServer` by importing `randomUUID` from `node:crypto` instead of using bare `globalThis.crypto`.
  Added
  - @vurb/yaml — Exported reusable MCP handler helpers: `buildToolsList()`, `buildResourcesList()`, `buildPromptsList()`, `readResourceContent()`.
  - @vurb/core — Added `./cli` export path (`@vurb/core/cli`) exposing `readVurbRc()`, `writeVurbRc()`, `loadEnv()`.

v3.15.2   4/11/2026   High
  Added
  - @vurb/core — `vurb deploy --no-marketplace`: new flag extending the deploy command to bypass reading and sending vurb.marketplace.json, allowing developers to deploy new code bundles, schemas, and introspected tool contracts without updating listing metadata like titles, descriptions, and FAQs.

v3.15.1   4/10/2026   High
  Fixed
  - @vurb/core — Unified brand-based ToolResponse detection: five fixes eliminating latent architectural fragilities discovered during a deep audit of the core framework.
  - `PostProcessor.isToolResponse()` — shape-based heuristic replaced with `TOOL_RESPONSE_BRAND` symbol detection, eliminating false positives from domain objects coincidentally matching the ToolResponse shape.
  - `ResponseBuilder.build()` — now stamps `TOOL_RESPONSE_BRAND` (was missing, causing MVA layer corruption).

Similar Packages

photon — Define intent once. Photon turns a single TypeScript file into CLI tools, MCP servers, and web interfaces. (v1.23.1)
multi-postgres-mcp-server — 🚀 Manage multiple PostgreSQL databases with one MCP server, offering hot reload, access control, and read-only query safety in a single config file. (master@2026-04-21)
google-workspace-mcp-with-script — No description. (main@2026-04-21)
cc-skills — Claude Code Skills Marketplace: plugins, skills for ADR-driven development, DevOps automation, ClickHouse management, semantic versioning, and productivity workflows. (v14.0.0)
claude-code-statusline — ⚡ Real-time token, context & agent dashboard for Claude Code — zero polling, pure stdin. (v1.1.0)