A self-hosted AI workspace with chat, code execution, parallel multi-agent orchestration, cross-machine remote agents, Auto (AI create architecture), per-project agent overrides, full chat logs with agent reasoning, and a skill marketplace. Runs on macOS and Windows. Everything executes inside a secure Ubuntu sandbox, with no Docker required.
AI-generated code and shell commands cannot escape the sandbox or touch your files without permission. Mix different AI providers in the same agent team: OpenAI-compatible APIs, Claude Code CLI, and Codex CLI. Delegate tasks to remote TigrimOS instances running on other machines; the orchestrator chooses the right agent based on persona and responsibility. Auto mode lets the AI analyze your prompt, design a custom multi-agent architecture (YAML), and boot all agents automatically, with no manual configuration needed. Each project can now override the global agent mode with its own sub-agent configuration, and every chat session records a complete log of user messages, tool calls, and sub-agent reasoning. Connect external MCP servers to extend the AI's toolbox. Built with 16 built-in tools and designed for long-running sessions with smart context compression and checkpoint recovery.
- Per-Project Agent Mode Override: each project can override the global sub-agent mode (Auto Spawn, Auto Create, Manual, Realtime, Auto Swarm) and pick its own YAML config, architecture type, agent count, and connection protocols. The active override is shown as a clickable purple tag in the project header and chat banner.
- Auto Architecture with AI-Decided Settings: a new "Auto (AI decides)" option for architecture type and agent count (3-8 default). Connection protocols are now multi-select toggle buttons instead of a single dropdown.
- Full Chat Log with Agent Reasoning: every chat session records a complete log file capturing user messages, tool calls (with arguments), sub-agent reasoning text, and final responses. A new Log button next to Activity opens a live-updating panel, and a new Export button downloads the log as `.txt`.
- Finished Tasks History: the Tasks page now shows the last 100 completed/cancelled/errored tasks with status, duration, agents used, and tools called. An Open Chat button on each finished task jumps directly to that session (and, for project tasks, into the correct project chat).
- Project List Sorting: sort projects A-Z or by Recent (most recently updated). The sort preference persists across reloads.
- Sub-Agent Reasoning in Chat Log: orchestrators and worker agents stream their intermediate thinking text to the chat log between tool calls, giving full visibility into the decision-making chain.
- AsyncLocalStorage Settings Override: project agent overrides now propagate correctly through every async call in the backend, ensuring `getSettings()` returns the project-scoped configuration throughout the entire chat lifecycle.
Security first: Everything runs inside a real Ubuntu sandbox. Your host file system is completely invisible to the AI unless you explicitly share a folder.
AI Chat with tool-calling: generates React/Recharts visualizations rendered in the output panel.
Visual Agent Editor: drag-and-drop multi-agent design with mesh networking and YAML export.
Minecraft Task Monitor: live pixel-art agents with speech bubbles, walking animations, and inter-agent interactions.
Download from the latest release:
| Platform | Download | Sandbox Technology |
|---|---|---|
| macOS Apple Silicon (M1/M2/M3/M4) | TigrimOS-v1.2.1-macOS-AppleSilicon.zip | Apple Virtualization.framework |
| macOS Apple Silicon (macOS 26 Tahoe) | TigrimOS-v1.2.1-macOS-Tahoe-AppleSilicon.zip | Apple Virtualization.framework |
| macOS Intel | TigrimOS-v1.2.1-macOS-Intel.zip | Apple Virtualization.framework |
| Windows 10/11 | TigrimOS-v1.2.1-Windows.zip | WSL2 (Windows Subsystem for Linux) |
- macOS 13.0 (Ventura) or later
- Homebrew with `qemu` (Intel only: `brew install qemu`)
- 4 GB RAM available for the VM
- ~5 GB disk space (Ubuntu image + TigrimOS)
- Windows 10 version 2004+ or Windows 11
- WSL2 support (enabled automatically by the installer)
- 4 GB RAM available for the WSL2 instance
- ~5 GB disk space (Ubuntu + TigrimOS)
- Install Homebrew if you don't have it: `/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"`
- Intel Macs only, install qemu (needed to convert the disk image): `brew install qemu`
- Download the release zip for your Mac:
- Apple Silicon (M1/M2/M3/M4): TigrimOS-v1.2.1-macOS-AppleSilicon.zip
- Intel: TigrimOS-v1.2.1-macOS-Intel.zip
- Unzip: you get `TigrimOS.app` (or `TigrimOS_i.app`) and a `tiger_cowork/` folder
- Keep both in the same directory (the app needs `tiger_cowork/` next to it)
- Double-click the `.app` to launch
- First launch: if macOS blocks it, right-click → Open, or go to System Settings → Privacy & Security → Open Anyway
- Wait ~5-10 minutes for the Ubuntu sandbox to provision on first run
That's it. Subsequent launches start in under a minute.
- Download and unzip TigrimOS-v1.2.1-Windows.zip
- Double-click `TigrimOSInstaller.bat`
- The graphical installer will guide you through:
- Enabling WSL2 (may require a one-time restart)
- Installing Ubuntu 22.04 as a dedicated "TigrimOS" WSL2 distribution
- Installing Node.js 20 + Python 3 inside the sandbox
- Optionally connecting a shared folder (can also be done later from the app)
- Cloning, building, and starting TigrimOS
- TigrimOS opens as a standalone desktop window (Edge app mode: no browser tabs or address bar)
- A desktop shortcut named TigrimOS is created automatically
After installation, use `TigrimOSStart.bat` (or the desktop shortcut) to launch and `TigrimOSStop.bat` to stop.
If you prefer to install from source instead of downloading the release zip:
macOS:

```bash
git clone https://github.com/Sompote/TigrimOS.git
cd TigrimOS
xattr -cr TigrimOS.app    # Apple Silicon (M1/M2/M3/M4)
open TigrimOS.app
# or
xattr -cr TigrimOS_i.app  # Intel
open TigrimOS_i.app
```

Windows:

```powershell
git clone https://github.com/Sompote/TigrimOS.git
cd TigrimOS
powershell -ExecutionPolicy Bypass -File install_windows.ps1
```

Note (macOS): Run the app from inside the cloned folder; `tiger_cowork/` must be next to the `.app` for the VM to find it.
- Launch TigrimOS
  - macOS: Open the app; the setup wizard runs on first launch
  - Windows: Double-click `TigrimOSStart.bat` or the desktop shortcut; TigrimOS opens as a standalone app window
- Wait for the Ubuntu sandbox to provision (~5-10 minutes on first launch)
- Open Settings → enter your API Key, API URL, and Model
- Click Test Connection to verify
- Start chatting: the AI can search the web, run code, generate charts, and more
Subsequent launches start in under a minute (no re-download).
TigrimOS can use AI models running on your host machine; no cloud API key is needed.
The server must listen on 0.0.0.0 (all interfaces), not 127.0.0.1. The sandbox connects through a network bridge, so localhost-only servers are unreachable.
llama.cpp / llama-server:

```bash
llama-server -hf LiquidAI/LFM2.5-1.2B-Instruct-GGUF -c 4096 --port 8080 --host 0.0.0.0
```

Ollama:

```bash
OLLAMA_HOST=0.0.0.0 ollama serve
```

LM Studio: in LM Studio settings → Server → set host to 0.0.0.0, then start the server.
In the TigrimOS web UI, go to Settings → AI Provider:
| Field | llama.cpp | Ollama | LM Studio |
|---|---|---|---|
| Provider | OpenAI-Compatible (Local) | Ollama (Local) | LM Studio (Local) |
| API URL | `http://host.local:8080/v1` | `http://host.local:11434/v1` | `http://host.local:1234/v1` |
| Model | Your model name (e.g. `LiquidAI/LFM2.5-1.2B-Instruct-GGUF`) | `llama3.2`, `mistral`, etc. | `local-model` |
| API Key | `local` (any text) | `local` (any text) | `local` (any text) |
macOS: `host.local` is a special hostname inside the VM that routes to your Mac. It's set up automatically during provisioning.

Windows: `host.local` resolves to your Windows host via WSL2 networking. If it doesn't work, use your PC's local IP address (e.g. `192.168.1.x`).
Click Test Connection in Settings. If it succeeds, you're ready to chat.
| Problem | Solution |
|---|---|
| "fetch failed" | Make sure the server is running with `--host 0.0.0.0` |
| "Connection error" | Check that the port number matches your server |
| "host.local not found" | macOS: click Reset VM in the toolbar and restart the app. Windows: use your PC's IP instead |
| Server works in browser but not in TigrimOS | Your server is bound to 127.0.0.1; restart it with 0.0.0.0 |
- AI Chat with 16 Built-in Tools: web search, Python, React, shell, files, skills, sub-agents
- Mix Any Model per Agent: assign a different AI provider to each agent (API, Claude Code CLI, Codex CLI)
- Parallel Multi-Agent System: 7 orchestration topologies (hierarchical, mesh, hybrid, P2P, P2P + orchestrator, pipeline, broadcast), 4 communication protocols, P2P swarm governance with blackboard bidding
- Swarm Communication Protocols: TCP (private 1-on-1 channels), Bus (broadcast to all), Blackboard (P2P auction: propose → bid → award → execute), Mesh (any agent can talk to any other)
- Remote Agents: delegate tasks to TigrimOS instances on other machines over the network; the orchestrator auto-selects agents by persona and responsibility; fully peer-to-peer (any machine can be orchestrator or worker)
- Built-in Terminal: full xterm.js terminal with root access to the Ubuntu sandbox (install packages, manage services, run CLI tools)
- Minecraft Task Monitor: live pixel-art characters with speech bubbles showing agent activity and remote progress
- Long-Running Session Stability: sliding-window compression, smart tool-result handling, checkpoint recovery
- MCP Integration: connect any Model Context Protocol server (Stdio, SSE, StreamableHTTP)
- Output Panel: renders React components, charts, HTML, PDF, Word, Excel, images, and Markdown
- Skills & ClawHub: install AI skills from the marketplace or build your own
- Projects: dedicated workspaces with memory, skill selection, and a file browser
- Cross-Platform: native macOS app + Windows WSL2 installer
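The sliding-window compression mentioned above can be illustrated with a toy sketch (this is not TigrimOS's actual algorithm): keep the system prompt and the most recent turns verbatim, and collapse older turns into a summary stub.

```javascript
// Illustrative sliding-window context compression.
// `summarize` stands in for whatever produces the compressed summary.
function compressHistory(messages, windowSize, summarize) {
  if (messages.length <= windowSize + 1) return messages; // nothing to compress
  const [system, ...rest] = messages;
  const old = rest.slice(0, rest.length - windowSize); // turns to collapse
  const recent = rest.slice(-windowSize);              // turns kept verbatim
  return [system, { role: "system", content: summarize(old) }, ...recent];
}
```

With a window of 4, a 1-system + 10-turn history compresses to 6 messages: the system prompt, one summary stub, and the 4 newest turns.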
TigrimOS includes a built-in terminal (Settings → Terminal) that gives you root access to the Ubuntu sandbox. It runs a real PTY with full color, tab completion, and cursor support via xterm.js.
Use the terminal to install additional tools, manage services, or debug the sandbox environment.
- Go to Settings → Terminal → Open Terminal
- Install and log in:

  ```bash
  npm i -g @anthropic-ai/claude-code
  ln -sf /root/.local/bin/claude /usr/local/bin/claude
  claude login
  ```

  A URL will appear; open it in your browser and authorize. That's it.

  Or use an API key instead:

  ```bash
  echo 'export ANTHROPIC_API_KEY=sk-ant-...' >> /root/.bashrc && source /root/.bashrc
  ```
- Go to Settings → Terminal → Open Terminal
- Install and log in:

  ```bash
  npm i -g @openai/codex
  codex login --device-auth
  ```

  A URL and code will appear; open the URL in your browser and enter the code. That's it.

  Or use an API key instead:

  ```bash
  echo 'export OPENAI_API_KEY=sk-...' >> /root/.bashrc && source /root/.bashrc
  ```

Important: Use `codex login --device-auth` (not `codex login`); standard OAuth uses a localhost callback that can't reach the sandbox.
Once installed and logged in, you can assign Claude Code or Codex as the AI model for any agent in the Agent Editor:
- Go to Settings → Agent Editor (or the Agents page)
- Create or edit an agent
- Set the Model field to one of:

  | Model value | What it uses |
  |---|---|
  | `claude-code` | Claude Code CLI (default model) |
  | `claude-code:sonnet` | Claude Code CLI with the Sonnet model |
  | `claude-code:opus` | Claude Code CLI with the Opus model |
  | `codex` | Codex CLI (default model) |
  | `codex:o3` | Codex CLI with the o3 model |
  | `codex:o4-mini` | Codex CLI with the o4-mini model |

- Save the agent configuration
These agents run as autonomous coders: they have their own tool loop with file reading, editing, shell commands, and code execution. They work independently within the sandbox, reading and writing files, running tests, and iterating on code.
You can mix them in a multi-agent swarm, for example one agent using `claude-code:opus` for architecture decisions and another using `codex:o3` for implementation, coordinated by the swarm orchestrator.
Note: All CLI tools run inside the sandbox; they cannot access your host system. API keys and credentials are isolated from your host environment.
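Such a mixed swarm could be declared along these lines. The exact schema here is a sketch; treat the Agent Editor's exported YAML as the source of truth.

```yaml
# Illustrative agent config mixing CLI-backed models (schema is a sketch)
agents:
  - id: architect
    model: claude-code:opus   # Claude Code CLI, Opus model
    persona: Senior software architect
    responsibility: Architecture decisions and code review
  - id: implementer
    model: codex:o3           # Codex CLI, o3 model
    persona: Pragmatic implementer
    responsibility: Writing code and running tests
```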
TigrimOS instances can delegate tasks to each other across machines. Any TigrimOS instance can be both an orchestrator and a remote worker; the system is fully peer-to-peer.
```
Machine A (Home Mac)                  Machine B (Cloud PC)
────────────────────                  ────────────────────
TigrimOS running                      TigrimOS running

Settings → Remote Instances:          Settings → Remote Instances:
  - cloud-pc → http://B:3001            - home-mac → http://A:3001

Agent Editor YAML:                    Agent Editor YAML:
  - id: cloud-researcher                - id: home-coder
    type: remote                          type: remote
    remote_instance: cloud-pc             remote_instance: home-mac
```
- Both machines run TigrimOS (same codebase, same app)
- On Machine A, go to Settings → Remote Instances and add Machine B's URL and bridge token
- On Machine B, go to Settings → Remote Bridge Tokens, create a token, and share it with Machine A
- In the Agent Editor, add an agent with type Remote and select the saved instance from the dropdown
- Set Persona and Responsibility on the remote agent; the orchestrator uses these to decide which agent gets which task
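Putting the pieces together, a remote agent entry follows the shape shown in the diagram above; the persona and responsibility values here are examples, and the exact schema may differ from the Agent Editor's output.

```yaml
# Illustrative remote-agent entry
- id: cloud-researcher
  type: remote
  remote_instance: cloud-pc      # saved under Settings → Remote Instances
  persona: Research specialist with web access
  responsibility: Literature search and data gathering
```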
- The orchestrator reads each agent's responsibility (what tasks it handles) and persona (expertise/skills) to choose the right agent
- Remote tasks are sent via HTTP polling with configurable timeouts:

  | Setting | Default | Description |
  |---|---|---|
  | Poll Interval | 2s | How often to check for remote agent progress |
  | Idle Timeout | 60s | Abort if no progress for this long |
  | Max Timeout | 1800s | Hard cap regardless of activity |

- Configure timeouts in Settings → Remote Agent Timeouts
- Remote agent progress appears live in the Minecraft Task Monitor with speech bubbles
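The polling behavior described above can be sketched as follows. This is an illustrative helper, not the actual implementation; `checkProgress` stands in for the HTTP status call.

```javascript
// Poll a remote task: check every pollInterval, abort after idleTimeout
// with no progress, and always stop at maxTimeout (all in milliseconds).
async function pollRemoteTask(checkProgress, { pollInterval = 2000, idleTimeout = 60000, maxTimeout = 1800000 } = {}) {
  const start = Date.now();
  let lastProgress = start;
  for (;;) {
    const status = await checkProgress();
    if (status.done) return status.result;
    if (status.progressed) lastProgress = Date.now();
    if (Date.now() - lastProgress > idleTimeout) throw new Error("idle timeout");
    if (Date.now() - start > maxTimeout) throw new Error("max timeout");
    await new Promise((r) => setTimeout(r, pollInterval));
  }
}
```

The idle timeout resets on every progress report, so a slow but active remote agent is never aborted until the hard cap.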
TigrimOS offers five ways to organize your AI agents. Choose a mode in Settings → Sub-Agent Mode:
| Mode | Description |
|---|---|
| Auto Spawn | The AI freely spawns sub-agents as needed; no configuration required. Best for simple tasks where you don't need a specific team structure. |
| Auto (AI create architecture) | The AI analyzes your prompt, designs a custom multi-agent architecture (YAML), saves it, and boots all agents in realtime mode, fully automatically. A "View Architecture" button appears in chat so you can inspect, edit, and save the generated YAML for reuse. |
| Spawn Agent (YAML config) | You provide a YAML file defining your agent team. The orchestrator spawns agents one-at-a-time by agentId. Each agent runs a single LLM call and returns a result. |
| Realtime Agent (YAML config) | All agents defined in your YAML boot at session start and stay alive. Tasks are delegated via send_task/wait_result for true parallel execution with inter-agent communication (TCP, Bus, Mesh, Blackboard). |
| Auto Choose Swarm (AI picks config) | The AI reviews all your saved YAML architectures and selects the best one for the current task. After selection, agents boot in realtime mode. |
Tip: Use Auto (AI create architecture) when you want the AI to build the right team for you. The generated YAML is saved to `data/agents/`; click the purple button in chat to open it in the Agent Editor, where you can refine and save it for future use.
| Mode | Tool | How |
|---|---|---|
| Spawn Agent (YAML) | `spawn_subagent` | Define remote agents in YAML; the orchestrator spawns them by agentId. Each agent runs as a one-shot LLM call and returns a result. |
| Live Session (YAML) | `send_task` / `wait_result` | Persistent agent sessions connected via Socket.io. Agents stay alive and can communicate using protocol tools (TCP, Bus, Mesh, Blackboard). Supports parallel execution. |
| Direct | `remote_task` | The AI picks a remote instance directly from the available list; no YAML config needed. |
When using Live Session mode, agents can communicate with each other using these protocols:
| Protocol | Tool | Description |
|---|---|---|
| TCP | `proto_tcp_send` / `proto_tcp_read` | Private 1-on-1 channel between two agents. Use for direct messages, data exchange, and coordination. |
| Bus | `proto_bus_publish` / `proto_bus_subscribe` | Broadcast channel: all bus-connected agents see messages. Use for announcements, status updates, and shared state. |
| Blackboard | `bb_propose` / `bb_bid` / `bb_award` | P2P auction system: propose a task, agents bid based on confidence, the orchestrator awards the winner, then `send_task` to execute. |
| Mesh | `send_task` (any → any) | Any mesh-enabled agent can delegate tasks to any other agent directly, without going through the orchestrator. |
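The Blackboard flow reduces to a simple auction. This toy model (illustrative only; the real protocol runs over inter-agent messaging) shows the selection rule: the highest-confidence bid wins.

```javascript
// Toy blackboard: propose → bid → award. Real agents would call the
// bb_propose / bb_bid / bb_award tools over the swarm's message layer.
function createBlackboard() {
  const bids = new Map(); // taskId -> [{ agentId, confidence }]
  return {
    propose: (taskId) => bids.set(taskId, []),
    bid: (taskId, agentId, confidence) => bids.get(taskId).push({ agentId, confidence }),
    award: (taskId) =>
      bids.get(taskId).reduce((best, b) => (b.confidence > best.confidence ? b : best)).agentId,
  };
}
```

After the award, the winning agent receives the task via `send_task` and executes it.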
The orchestrator chooses which agent to delegate to based on:
- Responsibility: what tasks the agent is designed to handle (checked first)
- Persona: the agent's expertise, skills, and personality (fallback)
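A naive version of that responsibility-first, persona-fallback rule looks like this. It is purely illustrative: the real orchestrator reasons with the LLM rather than keyword matching.

```javascript
// Pick an agent: match the task against responsibility first,
// fall back to persona, then to the first agent.
function selectAgent(task, agents) {
  const matches = (text) => text && text.toLowerCase().includes(task.toLowerCase());
  return (
    agents.find((a) => matches(a.responsibility)) ??
    agents.find((a) => matches(a.persona)) ??
    agents[0]
  );
}
```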
TigrimOS runs inside a full sandbox on both platforms:
| Layer | macOS | Windows |
|---|---|---|
| Sandbox | Ubuntu 22.04 VM via Virtualization.framework | Ubuntu 22.04 via WSL2 |
| File System | Host files invisible by default | Host files invisible by default |
| Shared Folders | VirtioFS opt-in, read-only default | Symlink opt-in via installer or app UI |
| Write Access | Requires explicit per-folder toggle | Read & write by default (Windows folder permissions apply) |
| Network | NAT; VM isolated from host network | WSL2 NAT; isolated from host network |
| Process Isolation | VM processes cannot see host processes | WSL2 processes isolated from Windows |
By default the VM has zero access to your Mac's files. To share a folder:
- Click the Folders tab in TigrimOS
- Click Add Folder and select a macOS folder
- Default: read-only (the VM can read but not modify)
- Toggle to Read & Write if needed (requires a VM restart)
- Shared folders appear inside the VM at `/mnt/shared/<name>`
There are two ways to connect Windows folders to the sandbox:
From the app (recommended):
- Open the Files page in TigrimOS
- Click Connect Folder
- Enter the Windows path (e.g. `C:\Users\YOU\Documents`)
- Optionally give it a display name
- Click Connect; the folder appears under `shared/` in the file browser
To disconnect: navigate to `shared/` and click the x on the linked folder.
During installation:
The installer optionally lets you pick a shared folder. It is linked into the sandbox automatically.
Manual (command line):

```bash
wsl -d TigrimOS -u root -- bash -c "mkdir -p /opt/TigrimOS/tiger_cowork/shared && ln -sf /mnt/c/Users/YOU/Documents /opt/TigrimOS/tiger_cowork/shared/docs"
```

```
┌────────────────────────────────────────────────────┐
│  TigrimOS.app (macOS)                              │
│                                                    │
│  ┌──────────────────────────────────────────────┐  │
│  │  SwiftUI + WKWebView (port 3001)             │  │
│  └────────────────┬─────────────────────────────┘  │
│                   │                                │
│  ┌────────────────▼─────────────────────────────┐  │
│  │  Apple Virtualization.framework              │  │
│  │                                              │  │
│  │  ┌────────────────────────────────────────┐  │  │
│  │  │  Ubuntu 22.04 VM                       │  │  │
│  │  │                                        │  │  │
│  │  │  TigrimOS v1.2.1                       │  │  │
│  │  │  ├── Fastify server :3001              │  │  │
│  │  │  ├── Node.js 20                        │  │  │
│  │  │  ├── Python 3 + numpy/pandas/...       │  │  │
│  │  │  └── 16 built-in AI tools              │  │  │
│  │  │                                        │  │  │
│  │  │  /mnt/shared/ ← VirtioFS (opt-in)      │  │  │
│  │  └────────────────────────────────────────┘  │  │
│  └──────────────────────────────────────────────┘  │
│                                                    │
│  ~/TigrimOS_Shared/ (user-controlled, optional)    │
└────────────────────────────────────────────────────┘
```
```
┌────────────────────────────────────────────────────┐
│  TigrimOSStart.bat (Windows)                       │
│                                                    │
│  ┌──────────────────────────────────────────────┐  │
│  │  Edge App Window → http://localhost:3001     │  │
│  └────────────────┬─────────────────────────────┘  │
│                   │                                │
│  ┌────────────────▼─────────────────────────────┐  │
│  │  WSL2 (Windows Subsystem for Linux)          │  │
│  │                                              │  │
│  │  ┌────────────────────────────────────────┐  │  │
│  │  │  Ubuntu 22.04 "TigrimOS" distro        │  │  │
│  │  │                                        │  │  │
│  │  │  TigrimOS v1.2.1                       │  │  │
│  │  │  ├── Fastify server :3001              │  │  │
│  │  │  ├── Node.js 20                        │  │  │
│  │  │  ├── Python 3 + numpy/pandas/...       │  │  │
│  │  │  └── 16 built-in AI tools              │  │  │
│  │  │                                        │  │  │
│  │  │  shared/ ← symlinks to Windows (opt-in)│  │  │
│  │  └────────────────────────────────────────┘  │  │
│  └──────────────────────────────────────────────┘  │
│                                                    │
│  C:\Users\YOU\Documents (connected via app UI)     │
└────────────────────────────────────────────────────┘
```
| Tab | Description |
|---|---|
| App | TigrimOS web UI embedded in the app |
| Console | VM boot log, provisioning output, service status |
| Folders | Manage which Mac folders the VM can access |
| Button | Action |
|---|---|
| Start | Boot the Ubuntu VM and start TigrimOS |
| Stop | Gracefully shut down the VM |
| Reset VM | Wipe and re-provision from scratch |
| Script | Action |
|---|---|
| TigrimOSStart.bat | Start the WSL2 server and open as a standalone app window |
| TigrimOSStop.bat | Stop the TigrimOS server |
| TigrimOSInstaller.bat | Re-run installer (update or repair) |
| In-App Feature | Description |
|---|---|
| Files ā Connect Folder | Link a Windows folder into the sandbox for reading/writing |
| Files ā shared/ | Browse and manage all connected Windows folders |
"App cannot be opened" on first launch: right-click → Open, or go to System Settings → Privacy & Security → Open Anyway.
VM starts but TigrimOS doesn't load Check the Console tab for errors. Common causes:
- First-run provisioning is still in progress (wait 5-10 minutes)
- Port 3001 is in use by another app; stop it first
- `qemu` is not installed; run `brew install qemu`
How to reset everything In the app: click Reset VM in the toolbar.
Or manually:

```bash
rm -rf ~/Library/Application\ Support/TigrimOS/
```

Where is the VM data stored?

```
~/Library/Application Support/TigrimOS/
├── ubuntu-cloud.qcow2     # Downloaded Ubuntu image (cached)
├── ubuntu-raw.img         # Converted raw disk
├── vmlinuz                # Linux kernel
├── initrd                 # Initial ramdisk
├── seed.img               # Cloud-init config
└── shared_folders.json    # Your shared folder settings
```
"WSL2 is not installed or not enabled"
Run `TigrimOSInstaller.bat`; it enables WSL2 automatically. You may need to restart your PC after the first run.
Installer says "restart required" WSL2 requires a one-time Windows restart after enabling. Restart and run the installer again.
Installer fails with PowerShell errors The installer requires PowerShell 5.1+ (included with Windows 10). If you see parse errors, make sure you are running the latest Windows updates.
Server doesn't start: check the log inside WSL:

```bash
wsl -d TigrimOS -u root -- cat /tmp/tigrimos.log
```

App window doesn't open (but server is running)
TigrimOS opens as an Edge app-mode window. If Edge is not installed, it falls back to your default browser. You can always access TigrimOS at http://localhost:3001.
Connected folder not visible in file browser
Connected folders appear under shared/ in the file browser. Navigate to the shared directory to see linked Windows folders.
How to reset everything (Windows)
```bash
wsl --unregister TigrimOS
wsl --unregister Ubuntu-22.04
```

Then run `TigrimOSInstaller.bat` again.
Where is WSL data stored?
```
%LOCALAPPDATA%\TigrimOS\WSL\    # WSL2 virtual disk
```
```
TigrimOS/
├── TigrimOS.app             # macOS Apple Silicon app (ready to run)
├── TigrimOS_i.app           # macOS Intel app (ready to run)
├── TigrimOSInstaller.bat    # Windows installer launcher
├── TigrimOSStart.bat        # Windows start script
├── TigrimOSStop.bat         # Windows stop script
├── install_windows.ps1      # Windows WPF installer (WSL2-based)
├── src/                     # macOS native app source
│   ├── Package.swift
│   ├── TigrimOS/
│   │   ├── TigrimOSApp.swift
│   │   ├── VM/
│   │   │   ├── VMConfig.swift
│   │   │   └── VMManager.swift
│   │   ├── Views/
│   │   │   ├── ContentView.swift
│   │   │   ├── TigrimOSWebView.swift
│   │   │   ├── ConsoleView.swift
│   │   │   ├── SharedFoldersView.swift
│   │   │   ├── SettingsView.swift
│   │   │   └── SetupView.swift
│   │   ├── Security/
│   │   │   ├── SandboxManager.swift
│   │   │   └── FileAccessControl.swift
│   │   └── Resources/
│   │       ├── AppIcon.icns
│   │       ├── provision.sh
│   │       └── cloud-init.yaml
│   └── Scripts/
│       ├── build.sh
│       ├── create-dmg.sh
│       └── setup-vm.sh
└── tiger_cowork/            # AI workspace engine (runs inside sandbox)
```
| Document | Description |
|---|---|
| Platform Architecture | How TigrimOS runs across macOS Apple Silicon, macOS Intel, and Windows: VM boot, provisioning, file sharing, security |
| Agent & Tools Docs | Agent system, tools, protocols, MCP setup, API endpoints |
| Changelog | Full version history and release notes |
This project is licensed under the MIT License.