
Open Computer Use


MCP server that gives any LLM its own computer: managed Docker workspaces with live browser, terminal, code execution, document skills, and autonomous sub-agents. Self-hosted, open-source, pluggable into any model.

Online demo: chat.yambr.com – Open WebUI with Computer Use already set up; sign in with GitHub or Google. (More ways to try it below.)

Demo: AI reads GitHub README and creates a landing page

What is this?

An MCP server that gives any LLM a fully equipped Ubuntu sandbox running in isolated Docker containers. Think of it as your AI's computer: it can do everything a developer can do:

  • Execute code β€” bash, Python, Node.js, Java in isolated containers
  • Create documents β€” Word, Excel, PowerPoint, PDF with professional styling via skills
  • Browse the web β€” Playwright + live CDP browser streaming (you see what AI sees in real-time)
  • Run Claude Code β€” autonomous sub-agent with interactive terminal, MCP servers auto-configured
  • Use 13+ skills β€” battle-tested workflows for document creation, web testing, design, and more

Built for production multi-user deployments and tested with 1,000+ MAU. Each chat session runs in its own isolated Docker container: the AI can install packages, create files, and run servers, and nothing leaks between users. It works seamlessly across MCP clients: start with Open WebUI today, switch to Claude Desktop or n8n tomorrow – same backend, no migration.

Key differentiators

| Feature | Open Computer Use | Claude.ai (Claude Code web) | open-terminal | OpenAI Operator |
|---|---|---|---|---|
| Self-hosted | Yes | No | Yes | No |
| Any LLM | Yes (OpenAI-compatible) | Claude only | Any (via Open WebUI) | GPT only |
| Code execution | Full Linux sandbox | Sandbox (Claude Code web) | Sandbox / bare metal | No |
| Live browser | CDP streaming (shared, interactive) | Screenshot-based | No | Screenshot-based |
| Terminal + Claude Code | ttyd + tmux + Claude Code CLI | Claude Code web (built-in) | PTY + WebSocket | N/A |
| Skills system | 13 built-in (auto-injected) + custom | Built-in skills + custom instructions | Open WebUI native (text-only) | N/A |
| Container isolation | Docker (runc), per chat | Docker (gVisor) | Shared container (OS-level users) | N/A |

Works with any MCP-compatible client: Open WebUI, Claude Desktop, LiteLLM, n8n, or your own integration. See docs/COMPARISON.md for a detailed comparison with alternatives.

Live browser streaming

Browser Viewer

File preview with skills

File Preview

Claude Code – an interactive terminal in the cloud

Claude Code Terminal

Sub-agent dashboard – monitor and control

Sub-Agent Dashboard

See docs/FEATURES.md for architecture details and docs/SCREENSHOTS.md for all screenshots.

Pro tip: Create skills with Claude Code in the terminal, then use them with any model in the chat. Skills are model-agnostic: write once, use everywhere.

Architecture

Architecture

Ways to try it

| Path | URL | What you need | Best for |
|---|---|---|---|
| Free online demo (Open WebUI + Computer Use, models included) | chat.yambr.com | GitHub or Google sign-in | Trying it end-to-end in 30 seconds |
| Hosted MCP endpoint (tools only, bring your own LLM) | Key at app.yambr.com → connect to https://api.yambr.com/mcp/computer_use | GitHub/Google sign-in; your own OpenAI / Anthropic / OpenRouter key | Plugging Computer Use into Claude Desktop, n8n, or the OpenAI Agents SDK |
| Self-host | Quick Start below | Docker, ~15 min first build | Full control, air-gapped setups, heavy use |

OAuth only – no email/password, no SMS. On chat.yambr.com, models are bundled as a free convenience; the hosted API is tools-only. Canonical cloud docs: docs.yambr.com. Repo-side orientation: docs/CLOUD.md.

Quick Start

git clone https://github.com/Yambr/open-computer-use.git
cd open-computer-use
cp .env.example .env
# Edit .env: set OPENAI_API_KEY (or any OpenAI-compatible provider)

# 1. Start Computer Use Server (builds workspace image on first run, ~15 min)
docker compose up --build

# 2. Start Open WebUI (in another terminal)
docker compose -f docker-compose.webui.yml up --build

Open http://localhost:3000 – Open WebUI with Computer Use ready to go.

Note: There are two separate docker-compose files: docker-compose.yml (Computer Use Server) and docker-compose.webui.yml (Open WebUI). They communicate via localhost:8081, which mirrors real deployments where the server and UI run on different hosts.

Model Settings (important!)

After adding a model in Open WebUI, go to Model Settings and set:

| Setting | Value | Why |
|---|---|---|
| Function Calling | Native | Required for Computer Use tools to work |
| Stream Chat Response | On | Enables real-time output streaming |

Without Function Calling set to Native, the model won't invoke Computer Use tools.

What's Inside the Sandbox

Sandbox Contents

| Category | Tools |
|---|---|
| Languages | Python 3.12, Node.js 22, Java 21, Bun |
| Documents | LibreOffice, Pandoc, python-docx, python-pptx, openpyxl |
| PDF | pypdf, pdf-lib, reportlab, tabula-py, ghostscript |
| Images | Pillow, OpenCV, ImageMagick, sharp, librsvg |
| Web | Playwright (Chromium), Mermaid CLI |
| AI | Claude Code CLI, Playwright MCP |
| OCR | Tesseract (configurable languages) |
| Media | FFmpeg |
| Diagrams | Graphviz, Mermaid |
| Dev | TypeScript, tsx, git |

Skills

13 built-in public skills + 14 examples:

| Skill | Description |
|---|---|
| pptx | Create/edit PowerPoint presentations with html2pptx |
| docx | Create/edit Word documents with tracked changes |
| xlsx | Create/edit Excel spreadsheets with formulas |
| pdf | Create, fill forms, extract, merge PDFs |
| sub-agent | Delegate complex tasks to Claude Code |
| playwright-cli | Browser automation and web scraping |
| describe-image | Vision API image analysis |
| frontend-design | Build production-grade UIs |
| webapp-testing | Test web applications with Playwright |
| doc-coauthoring | Structured document co-authoring workflow |
| test-driven-development | TDD methodology enforcement |
| skill-creator | Create custom skills |
| gitlab-explorer | Explore GitLab repositories |

14 example skills: web-artifacts-builder, copy-editing, social-content, canvas-design, algorithmic-art, theme-factory, mcp-builder, and more.

See docs/SKILLS.md for details.

MCP Integration

The server speaks standard MCP over Streamable HTTP. Point any MCP client at it, hosted or self-hosted.

  • Hosted: https://api.yambr.com/mcp/computer_use with Authorization: Bearer <key from app.yambr.com>. Client configs and full reference live on docs.yambr.com.

  • Self-hosted: http://localhost:8081/mcp. Quick sanity check:

    curl -X POST http://localhost:8081/mcp \
      -H "Content-Type: application/json" \
      -H "X-Chat-Id: test" \
      -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}'

    Full self-host integration guide (LiteLLM, Claude Desktop, custom clients): docs/MCP.md. The per-chat system prompt rides six redundant MCP-native channels (tool descriptions, /home/assistant/README.md in the sandbox, InitializeResult.instructions, resources/list for uploaded files, plus an HTTP /system-prompt endpoint for legacy integrations); the full map is in docs/system-prompt.md.
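As a follow-up to the initialize call above, a tools/list request goes to the same endpoint. This is a sketch, not the canonical flow from docs/MCP.md: some MCP servers also expect a notifications/initialized message first, so verify against those docs.

```shell
# Sketch: list the server's tools after initialize (assumes the self-hosted
# server from the Quick Start is running on localhost:8081).
REQ='{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}'

# Validate the JSON-RPC payload locally before sending it
echo "$REQ" | python3 -c 'import json,sys; json.load(sys.stdin)' && echo "payload ok"

# Send it (uncomment once the server is up):
# curl -X POST http://localhost:8081/mcp \
#   -H "Content-Type: application/json" \
#   -H "X-Chat-Id: test" \
#   -d "$REQ"
```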

Configuration

All settings via .env:

| Variable | Default | Description |
|---|---|---|
| OPENAI_API_KEY | – | LLM API key (any OpenAI-compatible) |
| OPENAI_API_BASE_URL | – | Custom API base URL (OpenRouter, etc.) |
| MCP_API_KEY | – | Bearer token for the MCP endpoint |
| DOCKER_IMAGE | open-computer-use:latest | Sandbox container image |
| COMMAND_TIMEOUT | 120 | Bash tool timeout (seconds) |
| SUB_AGENT_TIMEOUT | 3600 | Sub-agent timeout (seconds) |
| SINGLE_USER_MODE | – | true = one container, no chat ID needed; false = require X-Chat-Id; unset = lenient |
| PUBLIC_BASE_URL | http://computer-use-server:8081 | Browser-reachable URL of the Computer Use server. Baked into /system-prompt and returned to the Open WebUI filter in the X-Public-Base-URL response header; the single source of truth for the public URL. See the Open WebUI filter URL requirements. |
| CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES, ORCHESTRATOR_URL, TOOL_RESULT_MAX_CHARS, TOOL_RESULT_PREVIEW_CHARS, build-arg COMPUTER_USE_SERVER_URL | – | Settings on the open-webui container (not the CU server). Required when embedding; see "Required setup when embedding Open WebUI" below. |
| POSTGRES_PASSWORD | openwebui | PostgreSQL password |
| VISION_API_KEY | – | Vision API key (for describe-image) |
| ANTHROPIC_AUTH_TOKEN | – | Anthropic key (for the Claude Code sub-agent) |
| MCP_TOKENS_URL | – | Settings Wrapper URL (optional, see below) |
| MCP_TOKENS_API_KEY | – | Settings Wrapper auth key |
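A minimal .env sketch for the variables above (placeholder values, not defaults from the repo; only OPENAI_API_KEY is strictly required for the Quick Start):

```shell
# Minimal .env sketch - placeholder values, adjust for your deployment
OPENAI_API_KEY=sk-REPLACE_ME                       # any OpenAI-compatible key
OPENAI_API_BASE_URL=https://openrouter.ai/api/v1   # optional: non-OpenAI provider
MCP_API_KEY=change-me-strong-random                # protect port 8081 in production
COMMAND_TIMEOUT=120                                # bash tool timeout, seconds
SUB_AGENT_TIMEOUT=3600                             # sub-agent timeout, seconds
PUBLIC_BASE_URL=http://localhost:8081              # browser-reachable server URL
POSTGRES_PASSWORD=change-me                        # don't ship the default
```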

Custom Skills & Token Management (optional)

By default, all 13 built-in skills are available to everyone. For per-user skill access and custom skills, deploy the Settings Wrapper; see settings-wrapper/README.md.

Personal Access Tokens (PATs): The settings wrapper can also store encrypted per-user PATs for external services (GitLab, Confluence, Jira, etc.). The server fetches them by user email and injects them into the sandbox, so each user's AI has access to their repos and docs without sharing credentials. The server-side code for token injection is implemented (docker_manager.py), but the Open WebUI tool doesn't pass the required headers yet. This is on the roadmap; if you need PAT management, open an issue.

MCP Client Integrations

The Computer Use Server speaks standard MCP over Streamable HTTP, so any MCP-compatible client can connect. Open WebUI is the primary tested frontend, but not the only option.

| Client | Self-hosted URL | Hosted URL | Status |
|---|---|---|---|
| Open WebUI | Docker Compose stack included, auto-configured | n/a – use chat.yambr.com directly (pointing your own Open WebUI at the hosted API isn't a documented path) | Tested in production |
| Claude Desktop | http://localhost:8081/mcp (see docs/MCP.md) | https://api.yambr.com/mcp/computer_use (see docs/CLOUD.md) | Works |
| n8n | MCP Tool node → http://computer-use-server:8081/mcp | MCP Tool node → https://api.yambr.com/mcp/computer_use | Works |
| LiteLLM | MCP proxy config (see docs/MCP.md) | MCP proxy → https://api.yambr.com/mcp/computer_use | Works |
| Custom client | Any HTTP client with MCP JSON-RPC (see the curl examples in docs/MCP.md) | Same, with Authorization: Bearer sk-... (key from app.yambr.com) | Works |
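For Claude Desktop, one common pattern for remote Streamable HTTP servers is the mcp-remote bridge in claude_desktop_config.json. This is an illustrative sketch only (the server name "computer-use" and the placeholder key are made up); the canonical configs live in docs/MCP.md and on docs.yambr.com:

```json
{
  "mcpServers": {
    "computer-use": {
      "command": "npx",
      "args": [
        "-y", "mcp-remote",
        "https://api.yambr.com/mcp/computer_use",
        "--header", "Authorization: Bearer sk-REPLACE_ME"
      ]
    }
  }
}
```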

Open WebUI Integration

Open WebUI is an extensible, self-hosted AI interface. We use it as the primary frontend because it supports tool calling, function filters, and artifacts – everything needed for Computer Use.

Compatibility: Tested with Open WebUI v0.8.11–0.8.12. Set OPENWEBUI_VERSION in .env to pin a specific version.

Why not a fork? We intentionally did not fork Open WebUI. Everything is bolted on via the official plugin API (tools + functions) plus build-time patches for missing features, so you can use stock Open WebUI v0.8.11–0.8.12 (tested): just install the tool and filter. The patches are applied at Docker build time and are strongly recommended, since four of them affect user-visible UX (artifacts panel, preview iframe, error banners, large tool-result handling). Pulling ghcr.io/open-webui/open-webui directly skips all of them; see "Required setup when embedding Open WebUI" for the full checklist.

Running Claude Code through a corporate gateway (LiteLLM, Azure, Bedrock)? See docs/claude-code-gateway.md for the three-path operator recipe.

The openwebui/ directory contains:

  • tools/ β€” MCP client tool (thin proxy to Computer Use Server). Required β€” this is the bridge between Open WebUI and the sandbox.
  • functions/ β€” System prompt injector + file link rewriter + archive button. Required β€” without it the model doesn't know about skills and file URLs.
  • patches/ β€” Build-time fixes for artifacts, error handling, file preview. Optional but recommended β€” improves UX significantly.
  • init.sh β€” Auto-installs tool + filter on first startup. Optional β€” you can install manually via Workspace UI instead.
  • Dockerfile β€” Builds a patched Open WebUI image with auto-init. Optional β€” use stock Open WebUI + manual setup if you prefer.

How auto-init works

On first docker compose up, the init script automatically:

  1. Creates an admin user (admin@open-computer-use.dev / admin)
  2. Installs the Computer Use tool via POST /api/v1/tools/create
  3. Installs the Computer Use filter via POST /api/v1/functions/create
  4. Configures the tool and filter valves (ORCHESTRATOR_URL=http://computer-use-server:8081, the internal URL for server-to-server calls, seeded into both Valves)
  5. Marks the tool public-read (access grants for both group:* and user:* wildcards) so non-admin users see the tool in their workspace
  6. Marks the filter both active and global (two separate toggles: /toggle and /toggle/global); active-but-not-global is silently inert and a common manual-setup mistake
  7. Merges {function_calling: "native", stream_response: true} into DEFAULT_MODEL_PARAMS via POST /api/v1/configs/models, so every model gets the right defaults without per-model Advanced Params clicks

A marker file (.computer-use-initialized) prevents re-running on subsequent starts.

Note: Open WebUI doesn't support pre-installed tools from the filesystem; they must be loaded via the REST API. The init script automates this so you don't have to do it manually.

Manual setup (if not using docker-compose)

If you run Open WebUI separately, you need to do the following manually:

  1. Go to Workspace > Tools → Create new tool → paste the contents of openwebui/tools/computer_use_tools.py
  2. Set the Tool ID to ai_computer_use (required for the filter to work)
  3. Configure Valves: ORCHESTRATOR_URL = the internal URL of your Computer Use Server (http://computer-use-server:8081 for Docker Compose)
  4. Open the tool's ⋯ → Share menu and set access to Public (grants read to both group:* and user:* wildcards); otherwise only your admin account sees the tool, and non-admin users get an empty tool list with no error
  5. Go to Workspace > Functions → Create new function → paste openwebui/functions/computer_link_filter.py
  6. Enable the filter: toggle Active and toggle Global in the Functions list. These are two separate switches; active-but-not-global means the filter loads but is never applied to chats.
  7. In your model settings, set Function Calling = Native and Stream Chat Response = On, or set them globally once in Admin → Settings → Models → Advanced Params (function_calling: native, stream_response: true); that becomes DEFAULT_MODEL_PARAMS for every model.

The docker-compose stack handles all of this automatically.

Required setup when embedding Open WebUI into your own stack

If you run Open WebUI outside the stock docker-compose.webui.yml (your own compose file, Kubernetes, Portainer, or a downstream repo), there are several traps that will silently break Computer Use. All of them hit us in production. Check them in this order.

Step 1 – Build the image from openwebui/Dockerfile, don't pull upstream

Pulling ghcr.io/open-webui/open-webui:vX.Y.Z gives you a stock image without any of this repo's patches. Four of them are critical for UX:

| Patch | Without it |
|---|---|
| fix_artifacts_auto_show | HTML/iframe renders as raw text in the chat body instead of the artifacts panel |
| fix_preview_url_detection | The preview iframe is never auto-inserted after file links |
| fix_tool_loop_errors | Raw exceptions instead of banners; MCP call failed: Session terminated appears unwrapped |
| fix_large_tool_results | TOOL_RESULT_MAX_CHARS stops truncating and the large-result upload path (via ORCHESTRATOR_URL) becomes a no-op; large outputs wreck the model context |

Only CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES keeps working on an upstream image (it's a stock Open WebUI env var), which creates a false "everything is configured" feeling.

Use build: in your downstream compose, mirroring docker-compose.webui.yml:11-15:

services:
  open-webui:
    build:
      context: ./openwebui   # path into this repo
      dockerfile: Dockerfile
      args:
        OPENWEBUI_VERSION: "0.8.12"
        COMPUTER_USE_SERVER_URL: "cu.your-domain.com"   # see Step 2 - NOT an internal hostname
    image: open-webui-with-cu-patches:latest   # local tag, do not pull

Verify the patches are baked into the running container:

docker exec open-webui bash -c \
  'grep -l "bn.set(!0),Jr.set(!0)" /app/build/_app/immutable/chunks/*.js >/dev/null \
   && echo "patches applied" || echo "MISSING - you are on upstream image"'

The bn.set(!0),Jr.set(!0) marker is injected by fix_artifacts_auto_show into the minified Svelte chunks at build time. Empty output = stock upstream image, not ours.

Step 2 – Set the COMPUTER_USE_SERVER_URL build-arg to the PUBLIC domain (counterintuitive)

This is the most confusing trap. COMPUTER_USE_SERVER_URL is a build argument in openwebui/Dockerfile:16-17 that, despite the name, is not a network endpoint. It is compiled into a regex inside the minified Svelte chunks by openwebui/patches/fix_preview_url_detection.py:54. The regex searches assistant messages for links of the form {COMPUTER_USE_SERVER_URL}/(files|preview)/... and triggers the preview iframe.

The model writes whatever URL the Computer Use Server injected into the system prompt, i.e. the server's PUBLIC_BASE_URL, which is your public domain. So the regex must match that public domain, not the internal Docker service name.

| Environment | Correct value |
|---|---|
| Production with a domain | cu.your-domain.com (no scheme; the regex wraps it) |
| Local dev (Docker Desktop) | localhost:8081 (the default) |

⚠️ If you change this after an initial build, you must rebuild the image (docker compose up -d --build open-webui): the value is compiled into the chunks, not read at runtime.

Verify:

docker exec open-webui bash -c \
  'grep -oE "[a-z0-9.:-]+\\\\/\\(files\\|preview" /app/build/_app/immutable/chunks/*.js | head -1'
# Should contain your public domain (e.g. cu.your-domain.com), NOT computer-use-server:8081
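To see why the baked-in value must be the public domain, here is a local sketch of the match the patched frontend performs (hypothetical message text and domains; the real check runs in minified JS, not shell):

```shell
# Illustration of the Step 2 regex: the frontend looks for
# "<COMPUTER_USE_SERVER_URL>/(files|preview)/..." in assistant text.
BAKED="cu.your-domain.com"   # what was compiled in at build time
MSG='Report ready: https://cu.your-domain.com/files/chat-1/report.pdf'

if echo "$MSG" | grep -Eq "${BAKED}/(files|preview)/"; then
  echo "match - preview iframe would be inserted"
else
  echo "no match - link stays plain text"
fi

# With the internal service name baked in instead, the same message never matches:
BAKED="computer-use-server:8081"
echo "$MSG" | grep -Eq "${BAKED}/(files|preview)/" || echo "no match with internal name"
```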

Step 3 – Three URL settings, two roles (public vs internal)

v4.0.0: the old "three FILE_SERVER_URL places that must match" footgun is gone. There are now only three places and two distinct roles: public (browser-reachable) vs internal (Docker-local).

| Where | Role | Who reads it | Prod (with domain) | Local dev (Docker Desktop) |
|---|---|---|---|---|
| PUBLIC_BASE_URL env on the computer-use-server container (docker-compose.yml / .env) | PUBLIC: baked into /system-prompt links and returned to the filter via the X-Public-Base-URL response header | Server (single source of truth for the public URL) | https://cu.your-domain.com | http://localhost:8081 |
| Build-arg COMPUTER_USE_SERVER_URL (docker-compose build.args for open-webui) | PUBLIC: compiled into the Svelte regex by fix_preview_url_detection; must match what the model emits | Open WebUI (text match in assistant messages) | cu.your-domain.com (no scheme) | localhost:8081 |
| Filter + Tool Valves ORCHESTRATOR_URL (seeded by init.sh from the ORCHESTRATOR_URL env on the open-webui container) | INTERNAL: server-to-server fetch of /system-prompt; MCP tools/call forwarding | Filter and tool (Docker network) | http://computer-use-server:8081 | http://computer-use-server:8081 |

⚠️ Do NOT point ORCHESTRATOR_URL at your public domain. It technically works, but every MCP request then goes browser → CDN → Traefik → container. Any hiccup in that chain kills the stream mid-tool-call and the user sees MCP call failed: Session terminated. Stay inside the Docker network.

⚠️ Do NOT set the build-arg to the internal service name. The regex will then look for computer-use-server:8081/files/... in assistant text, but the model writes whatever is in the server's PUBLIC_BASE_URL, i.e. your public domain. On a mismatch, the patched frontend won't auto-promote the preview link into the artifact panel; the markdown link stays plain clickable text. (Filter v4.1.0 dropped the artifact/both PREVIEW_MODE values, so the raw-<iframe>-in-chat symptom that #43 described is no longer possible.)

The filter no longer has a public-URL Valve at all: it reads the public URL from the server's X-Public-Base-URL response header and caches it alongside the prompt. One public knob, one internal knob.
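The header handoff can be pictured with a canned response (illustrative only; the real filter does this in Python against a live /system-prompt response):

```shell
# How a client could pick the public URL out of the X-Public-Base-URL
# response header, using canned headers so no live server is needed.
HEADERS='HTTP/1.1 200 OK
Content-Type: text/plain
X-Public-Base-URL: https://cu.your-domain.com'

PUBLIC_URL=$(printf '%s\n' "$HEADERS" | awk -F': ' 'tolower($1)=="x-public-base-url"{print $2}')
echo "$PUBLIC_URL"   # https://cu.your-domain.com
```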

See also docs/openwebui-filter.md.

Step 4 – Four env vars on the open-webui container

Copy-paste into your downstream compose environment: block:

services:
  open-webui:
    environment:
      # --- Computer Use required env vars (read by build-time patches) ---
      - CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES=200
      - TOOL_RESULT_MAX_CHARS=50000
      - TOOL_RESULT_PREVIEW_CHARS=2000
      # Internal URL of the Computer Use server β€” seeded by init.sh into both
      # Tool and Filter Valves, and read by the fix_large_tool_results patch.
      # Same Docker network: use the service DNS name.
      - ORCHESTRATOR_URL=http://computer-use-server:8081

| Variable | Default if unset | Effect when correctly set |
|---|---|---|
| CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES | 30 (upstream) | Tool-call cap per turn. 30 cuts Computer Use multi-step tasks short; the stock repo uses 200. |
| TOOL_RESULT_MAX_CHARS | 50000 (patch built-in) | Threshold above which a tool result is truncated or uploaded. 0 disables. |
| TOOL_RESULT_PREVIEW_CHARS | 2000 (patch built-in) | Preview size the model sees after truncation or upload. |
| ORCHESTRATOR_URL | empty | Seeded into both Tool and Filter Valves by init.sh, and read by the fix_large_tool_results patch as the upload target. If empty, oversized results are silently truncated and the model loses the data. |

Note: the last three are no-ops if the image is the upstream ghcr.io one; they need fix_large_tool_results from Step 1.

Step 5 – Filter must be global, tool must be public-read

Open WebUI has two separate switches for each function (is_active and is_global) and two required grants for each tool (group:* + user:*). The stock init.sh does this for you; manual / custom deployments commonly miss one side and then spend hours wondering why "everything is installed but nothing works."

| Resource | What to flip | UI path | Endpoint | Why |
|---|---|---|---|---|
| Filter computer_use_filter | is_active = true AND is_global = true | Admin → Functions → computer_use_filter → toggle Active + toggle Global | POST /api/v1/functions/id/computer_use_filter/toggle + .../toggle/global | is_active only loads the function; is_global actually applies it to every chat. Active-but-not-global is silently inert with no log line. |
| Tool ai_computer_use | access_grants for group:* AND user:*, permission: read | Workspace → Tools → ai_computer_use → ⋯ → Share → Public | POST /api/v1/tools/id/ai_computer_use/access/update with {"access_grants":[{"principal_type":"group","principal_id":"*","permission":"read"},{"principal_type":"user","principal_id":"*","permission":"read"}]} | Without grants, only the admin account that created the tool sees it; non-admin users get an empty tool list and no error. The UI "Public" toggle writes both wildcards; writing only one leaves the tool visible to some users and invisible to others depending on the Open WebUI version. |

Verify against the database (Postgres used by the stock stack; see docker-compose.webui.yml:53):

# Filter flags - expect (t, t):
docker exec <postgres-container> psql -U openwebui -d openwebui -c \
  "SELECT is_active, is_global FROM function WHERE id='computer_use_filter';"

# Tool grants - expect TWO rows (group|* and user|*, both 'read'):
docker exec <postgres-container> psql -U openwebui -d openwebui -c \
  "SELECT principal_type, principal_id, permission FROM access_grant WHERE resource_id='ai_computer_use';"

For SQLite-backed Open WebUI deployments, swap psql for sqlite3 /app/backend/data/webui.db with the same SQL.

Step 6 – Verify everything at once

# 1. Image has patches:
docker exec open-webui bash -c \
  'grep -l "bn.set(!0),Jr.set(!0)" /app/build/_app/immutable/chunks/*.js >/dev/null \
   && echo OK || echo MISSING'

# 2. Build-arg baked into the regex matches your public domain:
docker exec open-webui bash -c \
  'grep -oE "[a-z0-9.:-]+\\\\/\\(files\\|preview" /app/build/_app/immutable/chunks/*.js | head -1'

# 3. Env vars reached the container:
docker exec open-webui env | grep -E 'CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES|TOOL_RESULT_|ORCHESTRATOR_URL'

# 4. Tool + Filter Valves (the Session-terminated trap) - the Admin UI is simplest:
#    Workspace -> Tools -> ai_computer_use -> Valves -> ORCHESTRATOR_URL
#    Admin -> Functions -> computer_link_filter -> Valves -> ORCHESTRATOR_URL
#    Both must be http://computer-use-server:8081 (internal URL, Docker service DNS),
#    NOT your public domain.

# 5. Server env (baked into the system prompt AND returned to the filter via header):
docker exec computer-use-server env | grep ^PUBLIC_BASE_URL=
#    Must equal your public URL (matches the build-arg from #2).

# 6. Filter is ACTIVE *and* GLOBAL (see Step 5):
docker exec <postgres-container> psql -U openwebui -d openwebui -c \
  "SELECT is_active, is_global FROM function WHERE id='computer_use_filter';"
#    Expect (t, t). Two 't's, not one.

# 7. Tool is public-read with both wildcards (see Step 5):
docker exec <postgres-container> psql -U openwebui -d openwebui -c \
  "SELECT principal_type, principal_id, permission FROM access_grant WHERE resource_id='ai_computer_use';"
#    Expect TWO rows: (group, *, read) and (user, *, read).

After rebuilding the image, do a hard reload in the browser (Cmd+Shift+R / Ctrl+Shift+R). Otherwise it keeps the old cached JS chunks and you'll think the fix didn't work.

Symptom → which step is wrong

| Symptom | Step |
|---|---|
| HTML artifact renders as raw <iframe ...> text in chat | 1 (upstream image); if not, 2 (build-arg wrong) |
| Preview iframe auto-insertion doesn't happen for file links | 2 (build-arg mismatched with what the model emits) |
| MCP call failed: Session terminated on every tool call | 3 (tool Valve points at the public domain) |
| Tool loop cuts off at ~30 calls; "Model temporarily unavailable" banner | 4 (CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES not set) |
| Large tool outputs silently ...(truncated); model makes wrong decisions | 4 (ORCHESTRATOR_URL not set or unreachable) OR 1 (fix_large_tool_results missing) |
| Tool-loop errors show a raw Python exception | 1 (fix_tool_loop_errors missing) |
| Tool list is empty for non-admin users (admin sees it) | 5 (tool missing access_grants; not public-read) |
| Filter looks "Active" in the UI but the preview iframe / archive button never appear | 5 (filter is_global=false; only is_active=true was flipped) |
| File links in chat go to 404 / white screen | PUBLIC_BASE_URL on the server doesn't match what the browser can reach; see docs/openwebui-filter.md |
| New behavior didn't appear even after a rebuild | Browser cached old JS; hard reload |

Security Notes

Production tested with 1000+ users on Open WebUI in a self-hosted environment. For public-facing deployments, see the hardening roadmap below.

Current model

  • Docker socket: The server needs Docker socket access to manage sandbox containers. This grants significant host access β€” run in a trusted environment only.
  • MCP_API_KEY: Set a strong random key in production. Without it, anyone with network access to port 8081 can execute arbitrary commands in containers.
  • Sandbox isolation: Each chat session runs in a separate container with resource limits (2GB RAM, 1 CPU). Containers use standard Docker runtime (runc), not gVisor β€” they share the host kernel. For stronger isolation, consider switching to gVisor runtime (see roadmap). Containers have network access by default.
  • POSTGRES_PASSWORD: Change the default password in .env for production.

Known limitations

  • Unauthenticated file/preview endpoints: /files/{chat_id}/, /api/outputs/{chat_id}, /browser/{chat_id}/, /terminal/{chat_id}/ β€” accessible to anyone who knows the chat ID. Chat IDs are UUIDs (hard to guess but not a real security boundary).
  • No per-user auth on server: The MCP server trusts whoever sends a valid MCP_API_KEY. User identity (X-User-Email) is passed by the client but not verified server-side.
  • Credentials in HTTP headers: API keys (GitLab, Anthropic, MCP tokens) are passed as HTTP headers from client to server. Safe within Docker network, but use HTTPS if exposing externally.
  • Default admin credentials: admin@open-computer-use.dev / admin β€” change immediately in multi-user setups.

Security roadmap

We plan to address these in future releases:

  • Per-session signed tokens for file/preview/terminal endpoints (replace chat ID as auth)
  • Server-side user verification via Open WebUI JWT validation
  • HTTPS support with automatic TLS certificates
  • Audit logging for all tool calls and file access
  • Network policies for sandbox containers (restrict egress by default)
  • Secret management β€” move credentials from headers to encrypted server-side storage
  • gVisor (runsc) runtime β€” optional container sandboxing for stronger isolation (like Claude.ai)

Ideas? Open a GitHub Issue. Want to contribute? See CONTRIBUTING.md or reach out on Telegram @yambrcom.

Development

# Build workspace image locally
docker build --platform linux/amd64 -t open-computer-use:latest .

# Run tests
./tests/test-docker-image.sh open-computer-use:latest
./tests/test-no-corporate.sh
./tests/test-project-structure.sh

# Build and run full stack
docker compose up --build

Contributing

See CONTRIBUTING.md. PRs welcome!

License

This project uses a multi-license model:

  • Core (computer-use-server/, openwebui/, settings-wrapper/, Docker configs): Business Source License 1.1 β€” free for production use, modification, and self-hosting. Converts to Apache 2.0 on the Change Date. Offering as a managed/hosted service requires a commercial agreement.
  • Our skills (skills/public/describe-image, skills/public/sub-agent): MIT
  • Third-party skills: see individual LICENSE.txt files or original sources.

Attribution required: include "Open Computer Use" and a link to this repository.

See NOTICE for details.

Release History

Each release ships the same two Docker images, tagged per version:

```bash
# Sandbox (AI workspace)
docker pull ghcr.io/Yambr/open-computer-use:<version>
# Computer Use Server (MCP orchestrator)
docker pull ghcr.io/Yambr/open-computer-use-server:<version>
```

| Version | Changes | Urgency | Date |
|---|---|---|---|
| v0.8.12.8 | 57cc871 chore: release v0.8.12.8; 8cd426d refactor: maximum MCP-native system-prompt surface (6 tiers… | High | 4/19/2026 |
| v0.8.12.7 | 6f4e8f0 chore: expand v0.8.12.7 release notes; d0dea68 docs: document the two FILE_SERVER_URL setting… | Medium | 4/13/2026 |
| v0.8.12.6 | 57c16f1 chore: release v0.8.12.6; 3145819 fix: duplicate default-container warning to server logs; 7e3… | Medium | 4/4/2026 |
| v0.8.12.4 | 5168b07 chore: release v0.8.12.4; f0df0de chore: release v0.8.13.0; 00efef3 fix: Pillow 12 API compat | Medium | 4/2/2026 |
| v0.8.12.3 | 33dbf34 feat: MCP tools best practices + large tool result truncation patch; 94e472f fix: add securit… | Medium | 4/1/2026 |
| v0.8.12.2 | e0e92c2 Improve playwright-cli skill discovery for weaker models + demo GIF; efbba13 Bump flask from… | Medium | 3/31/2026 |
| v0.8.12.1 | 27bcc75 Fix Docker tag patterns for 4-segment versioning (0.8.12.x); ef0eadb Rename repo to open-comp… | Medium | 3/30/2026 |
