
obsidian-local-llm-hub

All-in-one local AI hub for Obsidian: LLM chat with vault tools, MCP servers, RAG, workflow automation, encryption, and edit history. Fully private, no cloud required.


README

Local LLM Hub for Obsidian

Your company's security policy blocks cloud APIs. But you refuse to give up AI-powered note automation.

Local LLM Hub brings the full power of Gemini Helper's workflow automation, RAG, MCP integration, and agent skills to a completely local environment. Run it against Ollama, LM Studio, vLLM, or AnythingLLM; your data never leaves your machine.

Workflow Execution


Why Local?

Every byte stays on your machine. No API keys sent to the cloud. No vault contents uploaded anywhere. This isn't a privacy "option"; it's the architecture.

| What | Where it stays |
|---|---|
| Chat history | Markdown files in your vault |
| RAG index | Local embeddings in the workspace folder |
| LLM requests | localhost only (Ollama / LM Studio / vLLM / AnythingLLM) |
| MCP servers | Local child processes via stdio |
| Encrypted files | Encrypted/decrypted locally |
| Edit history | In-memory (cleared on restart) |

If you use Gemini Helper at home but need something for work, this is it. Same workflow engine, same UX, zero cloud dependency.


Workflow Automation: The Core Feature

Describe what you want in plain language. The AI builds the workflow. No YAML knowledge required.

Create Workflows & Skills with AI

Create Workflow with AI

  1. Open the Workflow tab → select + New (AI)
  2. Describe what you want, e.g. "Convert the current page into an infographic and save it"
  3. Check "Create as agent skill" to generate an agent skill instead of a standalone workflow
  4. Click Generate, and you're done

Don't have a powerful local model? Click Copy Prompt, paste into Claude/GPT/Gemini, paste the response back, and click Apply.

Create Skill with External LLM

Modify with AI

Load any workflow, click AI Modify, describe the change. Reference execution history to debug failures.

Modify Workflow with AI

Visual Node Editor

23 node types across 12 categories:

| Category | Nodes |
|---|---|
| Variables | variable, set |
| Control | if, while |
| LLM | command |
| Data | http, json |
| Notes | note, note-read, note-search, note-list, folder-list, open |
| Files | file-explorer, file-save |
| Prompts | prompt-file, prompt-selection, dialog |
| Composition | workflow (sub-workflows) |
| RAG | rag-sync |
| Script | script (sandboxed JavaScript) |
| External | obsidian-command |
| Utility | sleep |

Workflow Panel

Event Triggers & Hotkeys

  • Event triggers: auto-run workflows on file create / modify / delete / rename / open
  • Hotkey support: assign keyboard shortcuts to any named workflow
  • Execution history: review past runs with step-by-step details

See WORKFLOW_NODES.md for the complete node reference.


AI Chat

Streaming chat with your local LLM. Thinking display, file attachments, @ mentions for vault notes, multiple sessions.

Chat with RAG

Vault Tools (Function Calling)

Models with function-calling support (Qwen, Llama 3.1+, Mistral) can interact directly with your vault:

read_note · create_note · update_note · rename_note · create_folder · search_notes · list_notes · list_folders · get_active_note · propose_edit · execute_javascript

Three modes (All, No Search, Off) are selectable from the input area.
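As a sketch of what function calling looks like on the wire, here is an OpenAI-compatible request body carrying one vault tool. The JSON schema for read_note and the model name are illustrative assumptions, not the plugin's exact definitions:

```python
import json

# Hypothetical schema for the read_note vault tool; the plugin's actual
# parameter definitions may differ.
read_note_tool = {
    "type": "function",
    "function": {
        "name": "read_note",
        "description": "Read the contents of a note in the vault",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "Vault-relative note path"},
            },
            "required": ["path"],
        },
    },
}

# Vault tools and MCP tools are merged into the same `tools` array.
payload = {
    "model": "qwen2.5",  # any function-calling-capable local model
    "messages": [{"role": "user", "content": "Summarize Projects/Plan.md"}],
    "tools": [read_note_tool],
}
body = json.dumps(payload)
```

The server answers either with plain text or with a tool call naming `read_note` and its arguments, which the client executes against the vault before continuing the conversation.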

Tool Settings

MCP Servers

Connect local MCP servers to extend the AI with external tools. MCP tools are merged with vault tools and routed via function calling, all running as local child processes.

Chat with MCP

RAG (Local Embeddings)

Index your vault with a local embedding model (e.g. nomic-embed-text). Relevant notes and PDFs are automatically included as context. PDF text is extracted via PDF.js and chunked alongside Markdown files. Everything computed and stored locally.
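To illustrate the indexing step, here is a minimal chunker of the kind a local RAG pipeline runs before embedding each piece; the size and overlap values are illustrative, not the plugin's actual settings:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split a note into overlapping chunks for embedding.

    Each chunk would then be POSTed to the local embedding server, e.g.
    Ollama's /api/embeddings with {"model": "nomic-embed-text", "prompt": chunk}.
    Size/overlap here are illustrative defaults.
    """
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
    return chunks
```

Adjacent chunks share `overlap` characters, so a sentence split across a boundary still appears whole in at least one chunk.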

RAG Search

A dedicated search interface for semantic vector search with keyword filtering, chunk editing, and AI-powered refinement.

RAG Search

  • Keyword filter: narrow semantic search results by text or file path
  • Chunk editor: edit result text, load adjacent chunks with automatic overlap removal
  • AI refine: automatically expand context and clean up text using your local LLM
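The "automatic overlap removal" in the chunk editor can be sketched as joining two adjacent chunks while dropping the shared region; this is a conceptual illustration, not the plugin's implementation:

```python
def merge_adjacent(prev: str, nxt: str) -> str:
    """Concatenate two adjacent chunks, removing the longest suffix of
    `prev` that is also a prefix of `nxt` (the shared overlap)."""
    for k in range(min(len(prev), len(nxt)), 0, -1):
        if prev.endswith(nxt[:k]):
            return prev + nxt[k:]
    return prev + nxt
```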

See RAG_SEARCH.md for details.

Agent Skills

Inject reusable instructions into the system prompt via SKILL.md files. Activate per conversation. Skills can also expose workflows that the AI can invoke as tools during chat.

Create skills the same way as workflows โ€” select + New (AI), check "Create as agent skill", and describe what you want. The AI generates both the SKILL.md instructions and the workflow.

Agent Skills

See SKILLS.md for details.

Slash Commands & Compact History

  • Custom prompt templates triggered by /
  • /compact to compress long conversations while preserving context

File Encryption

Password-protect sensitive notes. Encrypted files are invisible to AI chat tools but remain accessible to workflows via a password prompt, making them ideal for storing API keys or credentials.

Edit History

Automatic tracking of AI-made changes with diff view and one-click restore.
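Conceptually, the diff view resembles a standard unified diff of the note before and after an AI edit; a minimal sketch using Python's difflib, not the plugin's actual renderer:

```python
import difflib

def diff_lines(before: str, after: str) -> list[str]:
    """Unified diff between the pre-edit and post-edit note text."""
    return list(difflib.unified_diff(
        before.splitlines(), after.splitlines(),
        fromfile="before", tofile="after", lineterm=""))
```

One-click restore then amounts to writing the stored "before" text back to the file.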


Setup

Requirements

Quick Start

  1. Install and start your LLM server
  2. Open plugin settings → select framework (Ollama / LM Studio / vLLM / AnythingLLM)
  3. Set the server URL (defaults pre-filled)
  4. Fetch and select your chat model
  5. Click Verify connection

LLM Settings
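Step 5's connection check boils down to an HTTP round trip to the local server. Here is a sketch for Ollama using its public /api/tags endpoint to list installed models; the port defaults are the tools' usual out-of-the-box values (assumptions to verify in each tool's docs), which the plugin pre-fills:

```python
import json
import urllib.request

# Usual default ports for each framework (assumed; verify in each tool's docs).
DEFAULT_URLS = {
    "ollama": "http://localhost:11434",
    "lmstudio": "http://localhost:1234",
    "vllm": "http://localhost:8000",
    "anythingllm": "http://localhost:3001",
}

def list_ollama_models(base_url: str = DEFAULT_URLS["ollama"]) -> list[str]:
    """Return installed model names, or [] if the server is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=3) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (OSError, ValueError):
        return []
```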

RAG Setup

  1. Enable RAG in settings
  2. Fetch and select the embedding model
  3. Configure target folders (optional; defaults to the entire vault)
  4. Click Sync to build the index

RAG Settings

MCP Server Setup

  1. Settings → MCP servers → Add server
  2. Configure: name, command (e.g. npx), arguments, optional env vars
  3. Toggle on; the server connects automatically via stdio

MCP & Encryption Settings
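Under the hood, "connects automatically via stdio" means spawning the configured command as a child process and exchanging newline-delimited JSON-RPC messages over its stdin/stdout. A sketch of the first message a client sends; the shape follows the MCP specification, while the clientInfo values are illustrative:

```python
import json

initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # an MCP protocol revision
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.0.0"},
    },
}

# A client would write this line to the stdin of a child process spawned
# from the configured command, e.g. ["npx", "<some-mcp-server>"].
line = json.dumps(initialize) + "\n"
```

After the initialize handshake, the client lists the server's tools and merges them into the chat model's tool set.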

Workspace Settings

Workspace Settings

Supported Frameworks

| Framework | Chat Endpoint | Streaming | Thinking | Function Calling |
|---|---|---|---|---|
| Ollama | /api/chat (native) | Real-time | message.thinking field | tools parameter |
| LM Studio (OpenAI compatible) | /v1/chat/completions | SSE | `<think>` tags | tools parameter |
| vLLM | /v1/chat/completions | SSE | `<think>` tags | tools parameter |
| AnythingLLM | /v1/openai/chat/completions | SSE | `<think>` tags | tools parameter |
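The two streaming formats in the table differ in framing: Ollama emits newline-delimited JSON, while the OpenAI-compatible servers emit SSE `data:` events. A sketch of per-line parsing (field paths per the public API docs; error handling omitted):

```python
import json

def parse_stream_line(framework: str, line: str) -> str:
    """Extract the text delta from one streaming line, or '' if none."""
    if framework == "ollama":
        # Newline-delimited JSON: {"message": {"content": "..."}, "done": false}
        return json.loads(line)["message"].get("content", "")
    # OpenAI-compatible SSE: data: {"choices": [{"delta": {"content": "..."}}]}
    if line.startswith("data: ") and line.strip() != "data: [DONE]":
        event = json.loads(line[len("data: "):])
        return event["choices"][0]["delta"].get("content", "")
    return ""
```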

Using Cloud LLMs (OpenAI, Gemini, etc.)

The "LM Studio (OpenAI compatible)" framework works with any OpenAI-compatible API endpoint, including cloud services:

| Service | Base URL | API Key |
|---|---|---|
| OpenAI | https://api.openai.com | Your OpenAI API key |
| Google Gemini | https://generativelanguage.googleapis.com/v1beta/openai | Your Gemini API key |

RAG with cloud LLMs: Cloud LLMs cannot use local embedding models directly. To use RAG, configure the Embedding server URL in RAG settings to point to a local Ollama instance (e.g. http://localhost:11434) and select an embedding model like nomic-embed-text.
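As a sketch of why a cloud base URL works the same as a local one, here is an OpenAI-compatible request built against Gemini's OpenAI-compatibility endpoint. The URL and Bearer-token header follow Google's compatibility docs; the model name is illustrative:

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build (but do not send) an OpenAI-compatible chat completion request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )

req = build_chat_request(
    "https://generativelanguage.googleapis.com/v1beta/openai",
    "YOUR_API_KEY", "gemini-2.0-flash", "Hello")
```

Only the base URL and API key change between a localhost server and a cloud service; the request body is identical.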


Installation

BRAT (Recommended)

  1. Install BRAT plugin
  2. Open BRAT settings โ†’ "Add Beta plugin"
  3. Enter: https://github.com/takeshy/obsidian-local-llm-hub
  4. Enable the plugin in Community plugins settings

Manual

  1. Download main.js, manifest.json, styles.css from releases
  2. Create local-llm-hub folder in .obsidian/plugins/
  3. Copy files and enable in Obsidian settings

From Source

git clone https://github.com/takeshy/obsidian-local-llm-hub
cd obsidian-local-llm-hub
npm install
npm run build

Relationship to Gemini Helper

This plugin is the local-only sibling of obsidian-gemini-helper. Same workflow engine, same UX patterns, but designed for environments where cloud APIs are not an option.

| | Gemini Helper | Local LLM Hub |
|---|---|---|
| LLM backend | Google Gemini API / CLI | Ollama / LM Studio / vLLM / AnythingLLM / OpenAI-compatible APIs |
| Data destination | Google servers | localhost only |
| Workflow engine | ✅ | ✅ (same architecture) |
| RAG | Google File Search | Local embeddings |
| MCP | ✅ | ✅ (stdio only) |
| Agent Skills | ✅ | ✅ |
| Image generation | ✅ (Gemini) | Not available |
| Web search | ✅ (Google) | Not available |
| Cost | Free / Pay-per-use | Free forever (your hardware) |

Choose Gemini Helper when you want cutting-edge cloud models. Choose Local LLM Hub when privacy is non-negotiable.

Release History

- 0.12.2 (4/16/2026): New features. Clickable tool tags in chat: tool names shown under "Tools used" (read_note, create_note, update_note, propose_edit, rename_note, get_active_note) are now clickable and open the referenced note directly. Tools that don't target a specific note (search_notes, list_notes, MCP tools, etc.) show their argument details in a Notice on click.
- 0.12.1 (4/15/2026): 🐛 Bug fixes. @-mention resolution now handles all vault file paths. Fixed a class of bugs where mentioning a note via an @ path in Chat or the AI Workflow modal silently failed or produced wrong results: paths with spaces (@My Notes/Daily.md resolved only My and left the rest as literal text), Unicode paths (@メモ/日本語.md often failed to match depending on surrounding characters), and regex-special characters (paths containing (, ), +, ., [, etc. were silently trunca…)
- 0.10.2 (4/13/2026): ✨ Confirm modal overhaul. Rich diff view: unified/split view toggle, word-level highlighting (changed parts are highlighted per word), and per-line comments (click any line to attach a comment; comments are sent to the LLM as structured feedback such as `Line 12 (+): <content>\nComment: ...` when you click "Request changes"). Skill-modify visibility: the SKILL.md instructions diff is now shown alongside the YAML diff when modifying a skill. Previously only th…
- 0.9.5 (4/11/2026): Chat view restores the previous thread. Reopening the Chat view within the same Obsidian session now automatically restores the last active conversation. No more losing your place when switching between tabs.
- 0.9.1 (4/8/2026): RAG Search: multi-field keyword filter with AI suggestion. The keyword filter now supports multiple fields: within a field, space-separated terms use OR logic (any term matches); between fields, AND logic applies (all fields must match). Click + AND to add a filter field, ✕ to remove one. Each filter field has a ✦ (sparkle) button that uses the configured AI refine model to expand keywords with synonyms and rela…

