# Hermes UI
A sleek, glassmorphic web interface for Hermes Agent – your self-hosted AI assistant.

Built as a single-file HTML application with React 18, Hermes UI provides a full-featured chat interface, real-time log streaming, file browsing, memory inspection, and more – all through a lightweight Python proxy server.
## Quick Start

```sh
# Clone the repo
git clone https://github.com/pyrate-llama/hermes-ui.git
cd hermes-ui

# Start the proxy server
python3 serve.py

# Or specify a custom port
python3 serve.py 8080
```
That's it – no npm install, no build step, no dependencies beyond Python's standard library.
## Configuration
The proxy server connects to Hermes at `http://127.0.0.1:8642` by default. To change this, edit the `HERMES` variable at the top of `serve.py`.
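For reference, the line in question presumably looks something like this (an illustrative sketch – check serve.py itself for the exact name and default):

```python
# Near the top of serve.py (illustrative sketch, not the verbatim file):
HERMES = "http://127.0.0.1:8642"  # base URL of the Hermes agent API
```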
For image analysis (paste/drop images in chat), add your Gemini API key in the Settings modal within the UI.
### Using OpenRouter or Custom Inference Endpoints
Hermes supports any OpenAI-compatible API endpoint, which means you can use OpenRouter to access Claude, GPT-4, Llama, Mistral, and dozens of other models through a single API key.
In your `~/.hermes/config.yaml`, set your inference endpoint and API key:
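The config schema isn't reproduced here, so the following is only a sketch of what an OpenRouter setup might look like; the actual key names in config.yaml may differ:

```yaml
# Illustrative sketch -- verify key names against Hermes' own config docs
inference:
  base_url: https://openrouter.ai/api/v1   # OpenRouter's OpenAI-compatible endpoint
  api_key: sk-or-xxxxxxxx                  # your OpenRouter API key (placeholder)
  model: anthropic/claude-3.5-sonnet       # any model ID OpenRouter exposes
```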
This also works with other compatible providers like LiteLLM (self-hosted proxy), Ollama (`http://localhost:11434/v1`), or any endpoint that speaks the OpenAI chat completions format.
## Remote Access (Tailscale)
Access Hermes UI from your phone, tablet, or any device using Tailscale – a zero-config mesh VPN built on WireGuard. No ports exposed to the internet, no DNS to configure, all traffic encrypted end-to-end.
1. **Install Tailscale on your server** (the machine running Hermes):

```sh
brew install tailscale   # macOS
# or: curl -fsSL https://tailscale.com/install.sh | sh   # Linux
tailscale up
```

2. **Install Tailscale on your phone/other devices** – download the app (iOS/Android) and sign in with the same account.

3. **Connect** – find your server's Tailscale IP (`tailscale ip`) and open:

```
http://100.x.x.x:3333/hermes-ui.html
```
Optional: **HTTPS via Tailscale Serve** – get a real certificate and clean URL:

```sh
tailscale serve --bg 3333
# Accessible at https://your-machine.tail1234.ts.net
```
A built-in setup guide is also available in the app under Settings > Remote Access.
## Project Structure

- **hermes-ui.html** – The entire frontend in a single file: React components, CSS, and markup. Uses Babel Standalone for JSX compilation in the browser.
- **serve.py** – A lightweight Python proxy (stdlib only, no pip dependencies) that serves static files, proxies API calls to Hermes, streams logs via SSE, provides shell/Claude CLI execution, and enables file browsing/editing within `~/.hermes`.
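serve.py's internals aren't shown here, but the stdlib-only proxying it describes can be sketched roughly as follows (the handler and helper names are illustrative assumptions, not the actual serve.py code):

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

HERMES = "http://127.0.0.1:8642"  # upstream Hermes agent (default from serve.py)

def build_target_url(base: str, path: str) -> str:
    """Join the upstream base URL with an incoming request path."""
    return base.rstrip("/") + "/" + path.lstrip("/")

class ProxyHandler(BaseHTTPRequestHandler):
    """Minimal GET proxy: forward the request to Hermes, relay the body back."""

    def do_GET(self):
        with urllib.request.urlopen(build_target_url(HERMES, self.path)) as upstream:
            body = upstream.read()
        self.send_response(upstream.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("127.0.0.1", 3333), ProxyHandler).serve_forever()
```

The real serve.py also handles POST bodies, SSE streaming, and file access; this only illustrates the forwarding pattern.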
## CDN Dependencies
All loaded at runtime from cdnjs.cloudflare.com (fonts from Google Fonts):
| Library | Version | Purpose |
|---|---|---|
| React | 18.2.0 | UI framework |
| React DOM | 18.2.0 | DOM rendering |
| Babel Standalone | 7.23.9 | JSX compilation |
| marked | 11.1.1 | Markdown parsing |
| highlight.js | 11.9.0 | Code syntax highlighting |
| Inter | – | UI typography (Google Fonts) |
| JetBrains Mono | – | Code/terminal typography (Google Fonts) |
## Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| `Enter` | Send message |
| `Shift+Enter` | New line in input |
| `?` | Show keyboard shortcuts |
| `Ctrl/Cmd+K` | Focus search |
| `Ctrl/Cmd+N` | New chat |
| `Ctrl/Cmd+\` | Toggle sidebar |
| `Ctrl/Cmd+E` | Export chat as markdown |
| `Escape` | Close modals / dismiss |
## Themes
Hermes UI ships with three built-in themes, accessible via the theme switcher in the header:
- **Midnight** (default) – Deep indigo/purple glassmorphism with ambient purple and green glow
- **Twilight** – Warm amber/gold tones with copper accents
- **Dawn** – Soft light theme with blue-gray tones for daytime use
## Troubleshooting
### Hermes stops responding / hangs after a few messages
If Hermes responds once or twice then goes silent, check your `~/.hermes/config.yaml` for this bug in the context compression config:

```yaml
compression:
  summary_base_url: null   # – this causes a 404 and hangs the agent
```
Fix it by setting `summary_base_url` to match your inference provider's base URL. For MiniMax:
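The original fix snippet is not reproduced above; as a hedged sketch (the exact MiniMax base URL is an assumption – copy it from your provider's documentation):

```yaml
compression:
  summary_base_url: https://api.minimax.io/v1   # assumption -- use your provider's actual base URL
```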
### Chat is stuck in streaming state / can't send a second message
This happens if you're running Hermes v0.7.0+ with an older version of hermes-ui. The v0.7.0 API uses `/v1/chat/completions` (OpenAI-compatible) instead of the old `/api/sessions` routes. Make sure you're on the latest hermes-ui by pulling the repo (`git pull`) and reloading the page.
## What's New in v2.0

A major update with new features, reliability fixes, and a significantly improved chat experience.

### ✨ New Features

**Concurrent Multi-Chat Streaming**
Work in multiple chats simultaneously – start a prompt in one chat, switch to another, and both run in parallel. Each chat has its own independent pause/stop controls.

**Pause & Resume Streaming**
Pause a response mid-stream to interject context, then resume. Smart resume includes the last 300 characters so Hermes picks up where it left off.