Your company's security policy blocks cloud APIs, but you don't want to give up AI-powered note automation.
Local LLM Hub brings the full power of Gemini Helper's workflow automation, RAG, MCP integration, and agent skills to a completely local environment. Ollama, LM Studio, vLLM, or AnythingLLM: your data never leaves your machine.
Every byte stays on your machine. No API keys sent to the cloud. No vault contents uploaded anywhere. This isn't a privacy "option"; it's the architecture.
| What | Where it stays |
|---|---|
| Chat history | Markdown files in your vault |
| RAG index | Local embeddings in workspace folder |
| LLM requests | localhost only (Ollama / LM Studio / vLLM / AnythingLLM) |
| MCP servers | Local child processes via stdio |
| Encrypted files | Encrypted/decrypted locally |
| Edit history | In-memory (cleared on restart) |
If you use Gemini Helper at home but need something for work, this is it. Same workflow engine, same UX, zero cloud dependency.
Describe what you want in plain language. The AI builds the workflow. No YAML knowledge required.
- Open the Workflow tab → select + New (AI)
- Describe: "Convert the current page into an infographic and save it"
- Check "Create as agent skill" to generate an agent skill instead of a standalone workflow
- Click Generate → done
Don't have a powerful local model? Click Copy Prompt, paste into Claude/GPT/Gemini, paste the response back, and click Apply.
Load any workflow, click AI Modify, describe the change. Reference execution history to debug failures.
23 node types across 12 categories:
| Category | Nodes |
|---|---|
| Variables | variable, set |
| Control | if, while |
| LLM | command |
| Data | http, json |
| Notes | note, note-read, note-search, note-list, folder-list, open |
| Files | file-explorer, file-save |
| Prompts | prompt-file, prompt-selection, dialog |
| Composition | workflow (sub-workflows) |
| RAG | rag-sync |
| Script | script (sandboxed JavaScript) |
| External | obsidian-command |
| Utility | sleep |
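The `script` node in the table above runs sandboxed JavaScript. As a rough sketch of the general technique (not this plugin's actual implementation), Node's built-in `vm` module can evaluate user code against an isolated set of variables:

```typescript
import vm from "node:vm";

// Run user-supplied JavaScript in an isolated context: the script sees
// only the variables we pass in, not the host's globals.
function runSandboxed(code: string, vars: Record<string, unknown>): unknown {
  const context = vm.createContext({ ...vars }); // fresh global scope
  return vm.runInContext(code, context, { timeout: 1000 }); // 1s CPU limit
}

// Example: a workflow variable exposed to the script as `input`
const result = runSandboxed("input.toUpperCase()", { input: "hello" }) as string;
```

Note that `node:vm` alone is not a hard security boundary; real sandboxes layer stricter isolation on top.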
- Event triggers: auto-run workflows on file create / modify / delete / rename / open
- Hotkey support: assign keyboard shortcuts to any named workflow
- Execution history: review past runs with step-by-step details
See WORKFLOW_NODES.md for the complete node reference.
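The event-trigger feature above amounts to a registry mapping file events to workflow names. A minimal sketch (the class, method names, and event strings are illustrative, not the plugin's API):

```typescript
// Hypothetical dispatcher: vault file events -> workflows to run.
type VaultEvent = "create" | "modify" | "delete" | "rename" | "open";

class TriggerRegistry {
  private triggers = new Map<VaultEvent, string[]>(); // event -> workflow names

  // Register a workflow to auto-run on a given event.
  on(event: VaultEvent, workflow: string): void {
    const list = this.triggers.get(event) ?? [];
    list.push(workflow);
    this.triggers.set(event, list);
  }

  // Return the workflows that should run for this event.
  fire(event: VaultEvent): string[] {
    return this.triggers.get(event) ?? [];
  }
}
```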
Streaming chat with your local LLM. Thinking display, file attachments, @ mentions for vault notes, multiple sessions.
Models with function calling support (Qwen, Llama 3.1+, Mistral) can directly interact with your vault:
read_note · create_note · update_note · rename_note · create_folder · search_notes · list_notes · list_folders · get_active_note · propose_edit · execute_javascript
Three modes (All, No Search, Off) are selectable from the input area.
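With function calling, each vault tool is advertised to the model as a declaration. A hypothetical declaration for `read_note` in the OpenAI-style `tools` format (the description and parameter names here are assumptions, not the plugin's real schema):

```typescript
// Illustrative function-calling declaration for one vault tool.
const readNoteTool = {
  type: "function",
  function: {
    name: "read_note",
    description: "Read the contents of a note in the vault by path",
    parameters: {
      type: "object",
      properties: {
        path: { type: "string", description: "Vault-relative note path" },
      },
      required: ["path"],
    },
  },
};
```

An array of such declarations is what gets passed as the `tools` parameter in each chat request.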
Connect local MCP servers to extend the AI with external tools. MCP tools are merged with vault tools and routed via function calling, all running as local child processes.
Index your vault with a local embedding model (e.g. nomic-embed-text). Relevant notes and PDFs are automatically included as context. PDF text is extracted via PDF.js and chunked alongside Markdown files. Everything computed and stored locally.
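The indexing side of this can be pictured as two small pieces: chunking and similarity ranking. A minimal sketch, in which the chunk size, overlap, and function names are illustrative assumptions rather than the plugin's actual code:

```typescript
// Split text into fixed-size chunks with overlap, the common pattern
// behind local RAG indexes.
function chunkText(text: string, size = 500, overlap = 50): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached
  }
  return chunks;
}

// Cosine similarity between two embedding vectors, used to rank stored
// chunks against the query embedding.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```

In practice each chunk is sent to the local embedding model and the resulting vectors are stored in the workspace folder, so both indexing and retrieval stay on your machine.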
A dedicated search interface for semantic vector search with keyword filtering, chunk editing, and AI-powered refinement.
- Keyword filter: narrow semantic search results by text or file path
- Chunk editor: edit result text, load adjacent chunks with automatic overlap removal
- AI refine: automatically expand context and clean up text using your local LLM
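The automatic overlap removal when loading adjacent chunks comes down to finding the longest shared boundary between two chunks. A minimal sketch of that idea, not the plugin's exact algorithm:

```typescript
// Append an adjacent chunk while dropping the region it shares with the
// current text: find the longest suffix of `left` that prefixes `right`.
function mergeWithoutOverlap(left: string, right: string): string {
  const max = Math.min(left.length, right.length);
  for (let n = max; n > 0; n--) {
    if (left.endsWith(right.slice(0, n))) return left + right.slice(n);
  }
  return left + right; // no overlap found
}
```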
See RAG_SEARCH.md for details.
Inject reusable instructions into the system prompt via SKILL.md files. Activate per conversation. Skills can also expose workflows that the AI can invoke as tools during chat.
Create skills the same way as workflows: select + New (AI), check "Create as agent skill", and describe what you want. The AI generates both the SKILL.md instructions and the workflow.
See SKILLS.md for details.
- Custom prompt templates triggered by `/`
- `/compact` to compress long conversations while preserving context
Password-protect sensitive notes. Encrypted files are invisible to AI chat tools but accessible to workflows via a password prompt, ideal for storing API keys or credentials.
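Local, password-based encryption typically follows the same pattern everywhere: derive a key from the password, then authenticate-and-encrypt. A generic AES-256-GCM sketch using Node's `crypto` module, not the plugin's actual on-disk format:

```typescript
import { scryptSync, randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Encrypt: random salt + scrypt key derivation + AES-256-GCM.
// Output packs salt | iv | auth tag | ciphertext as base64.
function encrypt(plaintext: string, password: string): string {
  const salt = randomBytes(16);
  const key = scryptSync(password, salt, 32);
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([salt, iv, cipher.getAuthTag(), data]).toString("base64");
}

// Decrypt: unpack the fields, re-derive the key, verify the auth tag.
function decrypt(blob: string, password: string): string {
  const buf = Buffer.from(blob, "base64");
  const salt = buf.subarray(0, 16), iv = buf.subarray(16, 28), tag = buf.subarray(28, 44);
  const key = scryptSync(password, salt, 32);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(buf.subarray(44)), decipher.final()]).toString("utf8");
}
```

The GCM auth tag means a wrong password or tampered file fails loudly instead of decrypting to garbage.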
Automatic tracking of AI-made changes with diff view and one-click restore.
- Ollama, LM Studio, vLLM, or AnythingLLM
- A chat model (e.g. `ollama pull qwen3.5:4b`)
- For RAG: an embedding model (e.g. `ollama pull nomic-embed-text`)
- Install and start your LLM server
- Open plugin settings → select framework (Ollama / LM Studio / vLLM / AnythingLLM)
- Set the server URL (defaults pre-filled)
- Fetch and select your chat model
- Click Verify connection
- Enable RAG in settings
- Fetch and select the embedding model
- Configure target folders (optional; defaults to entire vault)
- Click Sync to build the index
- Settings → MCP servers → Add server
- Configure: name, command (e.g. `npx`), arguments, optional env vars
- Toggle on → connects automatically via stdio
| Framework | Chat Endpoint | Streaming | Thinking | Function Calling |
|---|---|---|---|---|
| Ollama | `/api/chat` (native) | Real-time | `message.thinking` field | `tools` parameter |
| LM Studio (OpenAI compatible) | `/v1/chat/completions` | SSE | `<think>` tags | `tools` parameter |
| vLLM | `/v1/chat/completions` | SSE | `<think>` tags | `tools` parameter |
| AnythingLLM | `/v1/openai/chat/completions` | SSE | `<think>` tags | `tools` parameter |
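Since the frameworks differ mainly in endpoint path, a client only needs a small mapping to support all four. A sketch of that idea (function and type names are my own):

```typescript
// Map each supported framework to its chat endpoint path, per the table above.
type Framework = "ollama" | "lmstudio" | "vllm" | "anythingllm";

function chatEndpoint(baseUrl: string, framework: Framework): string {
  const paths: Record<Framework, string> = {
    ollama: "/api/chat",                        // native Ollama API
    lmstudio: "/v1/chat/completions",           // OpenAI-compatible
    vllm: "/v1/chat/completions",               // OpenAI-compatible
    anythingllm: "/v1/openai/chat/completions", // OpenAI compatibility layer
  };
  return baseUrl.replace(/\/$/, "") + paths[framework]; // trim trailing slash
}
```

The same mapping is why the "LM Studio (OpenAI compatible)" setting generalizes to any OpenAI-compatible endpoint, as described next.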
The "LM Studio (OpenAI compatible)" framework works with any OpenAI-compatible API endpoint, including cloud services:
| Service | Base URL | API Key |
|---|---|---|
| OpenAI | `https://api.openai.com` | Your OpenAI API key |
| Google Gemini | `https://generativelanguage.googleapis.com/v1beta/openai` | Your Gemini API key |
RAG with cloud LLMs: Cloud LLMs cannot use local embedding models directly. To use RAG, configure the Embedding server URL in RAG settings to point to a local Ollama instance (e.g. `http://localhost:11434`) and select an embedding model like `nomic-embed-text`.
- Install BRAT plugin
- Open BRAT settings → "Add Beta plugin"
- Enter: `https://github.com/takeshy/obsidian-local-llm-hub`
- Enable the plugin in Community plugins settings
- Download `main.js`, `manifest.json`, `styles.css` from releases
- Create a `local-llm-hub` folder in `.obsidian/plugins/`
- Copy files and enable in Obsidian settings
```bash
git clone https://github.com/takeshy/obsidian-local-llm-hub
cd obsidian-local-llm-hub
npm install
npm run build
```

This plugin is the local-only sibling of obsidian-gemini-helper. Same workflow engine, same UX patterns, but designed for environments where cloud APIs are not an option.
| | Gemini Helper | Local LLM Hub |
|---|---|---|
| LLM Backend | Google Gemini API / CLI | Ollama / LM Studio / vLLM / AnythingLLM / OpenAI-compatible APIs |
| Data destination | Google servers | localhost only |
| Workflow engine | ✅ | ✅ (same architecture) |
| RAG | Google File Search | Local embeddings |
| MCP | ✅ | ✅ (stdio only) |
| Agent Skills | ✅ | ✅ |
| Image generation | ✅ (Gemini) | ❌ |
| Web search | ✅ (Google) | ❌ |
| Cost | Free / Pay-per-use | Free forever (your hardware) |
Choose Gemini Helper when you want cutting-edge cloud models. Choose Local LLM Hub when privacy is non-negotiable.