
vectra

Vectra is a local vector database for Node.js with features similar to Pinecone, but built using local files.


README

Vectra: a local vector database


Vectra is a local, file-backed, in-memory vector database with an optional gRPC server for cross-language access. Each index is a folder on disk — queries use MongoDB-style metadata filtering and cosine similarity ranking, with sub-millisecond latency for small indexes.
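To make the ranking step concrete, here is a minimal, self-contained sketch of cosine-similarity top-k ranking as described above. This is an illustration of the technique, not Vectra's actual source; the `Item` type and function names are invented for the example.

```typescript
// Illustrative sketch of cosine-similarity ranking: score every stored
// vector against the query vector and return the k best matches.
type Item = { id: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function topK(items: Item[], query: number[], k: number): { id: string; score: number }[] {
  return items
    .map((it) => ({ id: it.id, score: cosine(it.vector, query) }))
    .sort((x, y) => y.score - x.score) // highest similarity first
    .slice(0, k);
}
```

For the small indexes the README mentions, a full linear scan like this is what makes sub-millisecond latency plausible: no index structure is needed when every vector fits in memory.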

What's New in Vectra 0.14

  • Browser & Electron support — vectra/browser entry point with IndexedDBStorage and TransformersEmbeddings
  • Local embeddings — LocalEmbeddings and TransformersEmbeddings run HuggingFace models with no API key
  • Protocol Buffers — opt-in binary format, 40-50% smaller files
  • gRPC server — vectra serve exposes 19 RPCs for cross-language access
  • FolderWatcher — auto-sync directories into a document index
  • Language bindings — vectra generate scaffolds clients for 6 languages

See the Changelog for breaking changes and migration details.
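The MongoDB-style metadata filtering mentioned above can be sketched with a small matcher. This is a simplified illustration of the filter semantics (operators such as $eq, $gt, $in, with an implicit AND across fields), not Vectra's actual implementation; the types and function names are invented for the example.

```typescript
// Simplified sketch of a MongoDB-style metadata filter: each field in the
// filter must match (implicit AND); a field's condition is either a bare
// value (shorthand for $eq) or an object of operators.
type Metadata = Record<string, string | number | boolean>;
type Filter = Record<string, unknown>;

function matches(meta: Metadata, filter: Filter): boolean {
  return Object.entries(filter).every(([field, cond]) => {
    const value = meta[field];
    if (cond !== null && typeof cond === "object") {
      return Object.entries(cond as Record<string, unknown>).every(([op, arg]) => {
        switch (op) {
          case "$eq":  return value === arg;
          case "$ne":  return value !== arg;
          case "$gt":  return typeof value === "number" && value > (arg as number);
          case "$gte": return typeof value === "number" && value >= (arg as number);
          case "$lt":  return typeof value === "number" && value < (arg as number);
          case "$in":  return Array.isArray(arg) && arg.includes(value);
          default:     return false; // unknown operator: no match
        }
      });
    }
    return value === cond; // bare value is shorthand for $eq
  });
}
```

A filter like `{ lang: "md", size: { $gt: 5 } }` would then select items whose metadata has `lang === "md"` and a numeric `size` above 5.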

Install

npm install vectra

Quick Example

import { LocalDocumentIndex, OpenAIEmbeddings } from 'vectra';

// Each index is a folder on disk.
const docs = new LocalDocumentIndex({
  folderPath: './my-index',
  embeddings: new OpenAIEmbeddings({
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'text-embedding-3-small',
    maxTokens: 8000,
  }),
});

// Create the index folder on first run.
if (!(await docs.isIndexCreated())) {
  await docs.createIndex({ version: 1 });
}

// Upsert a markdown document keyed by URI.
await docs.upsertDocument('doc://readme', 'Vectra is a local vector database...', 'md');

// Semantic query; render the best match's top sections.
const results = await docs.queryDocuments('What is Vectra?', { maxDocuments: 5 });
if (results.length > 0) {
  const sections = await results[0].renderSections(2000, 1, true);
  console.log(sections[0].text);
}

Documentation

Full docs at stevenic.github.io/vectra:

  • Getting Started — Install, requirements, quick start with both index types
  • Core Concepts — Index types, metadata filtering, on-disk layout
  • Embeddings Guide — Choose and configure an embeddings provider
  • Document Indexing — Chunking, retrieval, hybrid search, FolderWatcher
  • CLI Reference — All CLI commands, flags, and provider config
  • API Reference — TypeScript API overview
  • Best Practices — Performance tuning, troubleshooting
  • Storage — Pluggable backends, browser/IndexedDB, serialization formats
  • gRPC Server — Cross-language access and language bindings
  • Changelog — Breaking changes and migration guides
  • Tutorials — RAG pipeline, browser app, gRPC, custom storage, folder sync
  • Samples — Runnable examples: quickstart, RAG, browser, SQLite storage, gRPC, folder watcher

Agent Ready

Vectra ships an llms.txt file that gives coding agents everything they need to integrate Vectra into your project. Point your agent at it and let it do the work:

Read the llms.txt file at https://raw.githubusercontent.com/Stevenic/vectra/main/llms.txt
and then add Vectra support to this project. Use LocalDocumentIndex for document
storage and retrieval.

The llms.txt file covers all exports, index types, CLI commands, gRPC bindings, and on-disk format — enough for any coding agent to scaffold a working integration without browsing docs.

License

MIT License. See LICENSE.

Contributing

See CONTRIBUTING.md for guidelines. Please review our Code of Conduct.

Release History

v0.14.0 — Urgency: High — Date: 4/3/2026

Breaking Changes
  • fetch() replaces axios — All HTTP requests now use the built-in fetch() API. Projects relying on axios interceptors or custom axios config need to switch to the requestConfig option (a standard RequestInit object) on OpenAIEmbeddings.
  • Node.js 22.x minimum — Minimum Node.js version is now 22.x (up from 20.x), driven by undici@8.0.0 requiring node >=22.19.0.

New Features
  • Browser & Electron support — Full browser and Electron …
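As a rough sketch of the axios-to-fetch migration described in the release notes: axios-specific configuration moves to a requestConfig option holding a standard RequestInit object. Only the requestConfig option itself comes from the release notes; the header name and value below are invented for illustration.

```typescript
// Hypothetical example: a custom header that previously lived in an axios
// interceptor becomes part of a plain RequestInit-shaped object.
const requestConfig = {
  headers: {
    "X-Example-Gateway-Token": "example-token", // hypothetical header, for illustration only
  },
};

// Per the release notes, this would be supplied to OpenAIEmbeddings, e.g.:
// new OpenAIEmbeddings({ apiKey, model: 'text-embedding-3-small', requestConfig });
```

Anything expressible in a RequestInit (headers, signal, cache mode, and so on) fits here; axios-only concepts such as interceptors have no direct equivalent and need restructuring.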


Similar Packages

  • openclaw-engram (v9.3.142) — Local-first memory plugin for OpenClaw AI agents. LLM-powered extraction, plain markdown storage, hybrid search via QMD. Gives agents persistent long-term memory across conversations.
  • remnic (v9.3.142) — Local-first memory plugin for OpenClaw AI agents. LLM-powered extraction, plain markdown storage, hybrid search via QMD. Gives agents persistent long-term memory across conversations.
  • mem9 (main@2026-04-21) — Enable AI agents to retain memory across sessions using persistent storage designed for continuous context retention.
  • Helix (main@2026-04-21) — Transform Claude into a local AI assistant for Mac that controls apps, manages tasks, and remembers context across sessions.
  • CodeRAG (main@2026-04-21) — Build semantic vector databases from code and docs to enable AI agents to understand and navigate your entire codebase effectively.