Play Red Alert with AI agents. LLMs, scripted bots, or RL — your agent commands armies in the classic RTS through a Python API.
Website • Leaderboard • HuggingFace • Docs • Issues
```sh
pip install openra-rl
openra-rl play
```

On first run, an interactive wizard helps you configure your LLM provider (OpenRouter, Ollama, or LM Studio). The CLI pulls the game server Docker image and starts everything automatically.
```sh
# Cloud (OpenRouter)
openra-rl play --provider openrouter --api-key sk-or-... --model anthropic/claude-sonnet-4-20250514

# Local (Ollama — free, no API key)
openra-rl play --provider ollama --model qwen3:32b

# Developer mode (skip Docker, run server locally)
openra-rl play --local --provider ollama --model qwen3:32b

# Reconfigure later
openra-rl config
```

- Docker — the game server runs in a container
- Python 3.10+
- An LLM endpoint (cloud API key or local model server)
```
openra-rl play          Run the LLM agent (wizard on first use)
openra-rl config        Re-run the setup wizard
openra-rl server        start | stop | status | logs
openra-rl replay        watch | list | copy | stop
openra-rl bench submit  Upload results to the leaderboard
openra-rl mcp-server    Start MCP stdio server (for OpenClaw / Claude Desktop)
openra-rl doctor        Check system prerequisites
openra-rl version       Print version
```
OpenRA-RL exposes all 48 game tools as a standard MCP server:
```sh
openra-rl mcp-server
```

Add to your MCP client config (e.g. `~/.openclaw/openclaw.json`):
```json
{
  "mcpServers": {
    "openra-rl": {
      "command": "openra-rl",
      "args": ["mcp-server"]
    }
  }
}
```

Then chat: "Start a game of Red Alert on easy difficulty, build a base, and defeat the enemy."
| Component | Language | Role |
|---|---|---|
| OpenRA-RL | Python | Environment wrapper, agents, HTTP/WebSocket API |
| OpenRA (submodule) | C# | Modified game engine with embedded gRPC server |
| OpenEnv (pip dep) | Python | Standardized Gymnasium-style environment interface |
Data flow: Agent <-> FastAPI (port 8000) <-> gRPC bridge (port 9999) <-> OpenRA game engine
The game runs at ~25 ticks/sec independent of agent speed. Observations use a DropOldest channel so the agent always sees the latest game state, even if it's slower than real time.
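The drop-oldest behavior can be sketched as a size-1 queue that evicts the stale item before publishing. This is a minimal illustration of the queueing discipline described above, not the project's actual implementation; the class name is hypothetical.

```python
import queue

class DropOldestChannel:
    """Size-1 channel: a slow consumer always sees the freshest observation."""

    def __init__(self):
        self._q = queue.Queue(maxsize=1)

    def publish(self, obs):
        # Evict the stale observation instead of blocking the game loop.
        try:
            self._q.get_nowait()
        except queue.Empty:
            pass
        self._q.put_nowait(obs)

    def latest(self):
        # Blocks until at least one observation has been published.
        return self._q.get()

ch = DropOldestChannel()
ch.publish({"tick": 1})
ch.publish({"tick": 2})  # a slow agent never sees tick 1
assert ch.latest()["tick"] == 2
```

The real server streams observations over WebSocket; this only models why a slow agent falls behind in game time rather than in state freshness.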
A hardcoded state-machine bot that demonstrates all action types. Deploys MCV, builds a base, trains infantry, and attacks.
```sh
python examples/scripted_bot.py --url http://localhost:8000 --verbose --max-steps 2000
```

A planning-aware bot that uses game knowledge tools (tech tree lookups, faction briefings, map analysis) to formulate strategy before playing.
```sh
python examples/mcp_bot.py --url http://localhost:8000 --verbose --max-turns 3000
```

An AI agent powered by any OpenAI-compatible model. Supports cloud APIs (OpenRouter, OpenAI) and local model servers (Ollama, LM Studio).
```sh
python examples/llm_agent.py \
  --config examples/config-openrouter.yaml \
  --api-key sk-or-... \
  --verbose \
  --log-file game.log
```

CLI flags override config file values. See `python examples/llm_agent.py --help` for all options.
OpenRA-RL uses a unified YAML config system. Settings are resolved with this precedence:
CLI flags > Environment variables > Config file > Built-in defaults
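The precedence chain amounts to "the first source that sets a key wins". Below is a minimal sketch of that rule, not necessarily how `openra_env/config.py` implements it; `resolve` and its arguments are illustrative names.

```python
def resolve(key, cli=None, env=None, file_cfg=None, defaults=None):
    """Return `key` from the highest-precedence source that sets it."""
    for source in (cli, env, file_cfg, defaults):
        if source and source.get(key) is not None:
            return source[key]
    return None

# An environment variable beats the config file; a CLI flag would beat both.
model = resolve(
    "model",
    cli={},                                       # --model not passed
    env={"model": "qwen3:32b"},                   # e.g. LLM_MODEL
    file_cfg={"model": "qwen/qwen3-coder-next"},  # my-config.yaml
    defaults={"model": "qwen/qwen3-coder-next"},
)
assert model == "qwen3:32b"
```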
Copy and edit the default config:
```sh
cp config.yaml my-config.yaml
# Edit my-config.yaml, then:
python examples/llm_agent.py --config my-config.yaml
```

Key sections:
```yaml
game:
  openra_path: "/opt/openra"    # Path to OpenRA installation
  map_name: "singles.oramap"    # Map to play
  headless: true                # No GPU rendering
  record_replays: false         # Save .orarep replay files

opponent:
  bot_type: "normal"            # AI difficulty: easy, normal, hard
  ai_slot: "Multi0"             # AI player slot

planning:
  enabled: true                 # Pre-game planning phase
  max_turns: 10                 # Max planning turns
  max_time_s: 60.0              # Planning time limit

llm:
  base_url: "https://openrouter.ai/api/v1/chat/completions"
  model: "qwen/qwen3-coder-next"
  max_tokens: 1500
  temperature: null             # null = provider default

tools:
  categories:                   # Toggle tool groups on/off
    read: true
    knowledge: true
    movement: true
    production: true
    # ... see config.yaml for all categories
  disabled: []                  # Disable specific tools by name

alerts:
  under_attack: true
  low_power: true
  idle_production: true
  no_scouting: true
  # ... see config.yaml for all alerts
```

| File | Use case |
|---|---|
| `examples/config-openrouter.yaml` | Cloud LLM via OpenRouter (Claude, GPT, etc.) |
| `examples/config-ollama.yaml` | Local LLM via Ollama |
| `examples/config-lmstudio.yaml` | Local LLM via LM Studio |
| `examples/config-minimal.yaml` | Reduced tool set for limited-context models |
| Variable | Config path | Description |
|---|---|---|
| `OPENROUTER_API_KEY` | `llm.api_key` | API key for OpenRouter |
| `LLM_API_KEY` | `llm.api_key` | Generic LLM API key (overrides OpenRouter key) |
| `LLM_BASE_URL` | `llm.base_url` | LLM endpoint URL |
| `LLM_MODEL` | `llm.model` | Model identifier |
| `BOT_TYPE` | `opponent.bot_type` | AI difficulty: easy, normal, hard |
| `OPENRA_PATH` | `game.openra_path` | Path to OpenRA installation |
| `RECORD_REPLAYS` | `game.record_replays` | Save replay files (true/false) |
| `PLANNING_ENABLED` | `planning.enabled` | Enable planning phase (true/false) |
```sh
# Pull a model with tool-calling support
ollama pull qwen3:32b

# For models that need more context (default is often 2048-4096 tokens):
cat > /tmp/Modelfile <<EOF
FROM qwen3:32b
PARAMETER num_ctx 32768
EOF
ollama create qwen3-32k -f /tmp/Modelfile

# Run
openra-rl play --provider ollama --model qwen3-32k
```

Note: Not all Ollama models support tool calling. Check with `ollama show <model>` — the template must include a `tools` block. Models known to work: `qwen3:32b`, `qwen3:4b`.
- Load a model in LM Studio and start the local server (default port 1234)
- Run:
```sh
openra-rl play --provider lmstudio --model <model-name>
```

```sh
openra-rl server start              # Start game server container
openra-rl server start --port 9000  # Custom port
openra-rl server status             # Check if running
openra-rl server logs --follow      # Tail logs
openra-rl server stop               # Stop container
```

| Service | Command | Description |
|---|---|---|
| `openra-rl` | `docker compose up openra-rl` | Headless game server (ports 8000, 9999) |
| `agent` | `docker compose up agent` | LLM agent (requires `OPENROUTER_API_KEY`) |
| `mcp-bot` | `docker compose run mcp-bot` | MCP bot |
```sh
# LLM agent via Docker Compose
OPENROUTER_API_KEY=sk-or-... docker compose up agent
```

After each game, replays are automatically copied to `~/.openra-rl/replays/`. Watch them in your browser:
```sh
openra-rl replay watch        # Watch the latest replay (opens browser via VNC)
openra-rl replay watch <file> # Watch a specific .orarep file
openra-rl replay list         # List replays (Docker + local)
openra-rl replay copy         # Copy replays from Docker to local
openra-rl replay stop         # Stop the replay viewer
```

The replay viewer runs inside Docker using the same engine that recorded the game, so replays always play back correctly. The browser connects via noVNC — no local game install needed.
Version tracking: Each replay records which Docker image version was used. When you upgrade, old replays are still viewable using their original engine version.
For running the game server natively (macOS/Linux):
`OpenRA/` is a git submodule — clone it alongside the repo:

```sh
git clone --recurse-submodules https://github.com/yxc20089/OpenRA-RL.git
```

Or if you already cloned without submodules:

```sh
git submodule update --init --recursive
```

Create and activate a dedicated environment (Python 3.10 recommended):
Conda:
```sh
conda create --name openra python=3.10
conda activate openra
```

venv:

```sh
python3.10 -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
```

Install the Python package in editable mode:

```sh
pip install -e ".[dev]"
```

macOS (Apple Silicon):
```sh
brew install dotnet@8
echo 'export PATH="/opt/homebrew/opt/dotnet@8/bin:$PATH"' >> ~/.profile
source ~/.profile
```

macOS (Intel): same, but replace `/opt/homebrew` with `/usr/local`.
Ubuntu/Debian:
```sh
sudo apt install dotnet-sdk-8.0
```

Other Linux: see https://learn.microsoft.com/dotnet/core/install/linux
macOS:
```sh
brew install sdl2 openal-soft freetype luajit
```

Ubuntu/Debian:

```sh
sudo apt install libsdl2-dev libopenal-dev libfreetype-dev libluajit-5.1-dev
```

Build the engine:

```sh
cd OpenRA && make && cd ..
```

Known build issue (macOS + .NET 8): If you see
`error CS0121: The call is ambiguous` for `CryptoUtil.SHA1Hash` in `OpenRA.Game/Map/Map.cs`, change line 286 from:

```csharp
return CryptoUtil.SHA1Hash([]);
```

to:

```csharp
return CryptoUtil.SHA1Hash(Array.Empty<byte>());
```

Then re-run `make`.
```sh
cp $(brew --prefix sdl2)/lib/libSDL2.dylib OpenRA/bin/SDL2.dylib
cp $(brew --prefix openal-soft)/lib/libopenal.dylib OpenRA/bin/soft_oal.dylib
cp $(brew --prefix freetype)/lib/libfreetype.dylib OpenRA/bin/freetype6.dylib
cp $(brew --prefix luajit)/lib/libluajit-5.1.dylib OpenRA/bin/lua51.dylib
```

On Linux this step is not needed.
Set `game.openra_path` to the absolute path of the `OpenRA/` directory:

```yaml
game:
  openra_path: "/path/to/OpenRA-RL/OpenRA"  # use $(pwd)/OpenRA
  headless: true   # true = no window, runs anywhere (recommended for agents)
                   # false = opens a real OpenRA window so you can watch live
```

Use `headless: true` for agent training and CI. Use `headless: false` only if you want to watch the game in a live OpenRA window (requires a display and SDL2).
Run the LLM agent against the local server:

```sh
openra-rl play --local --provider ollama --model qwen3:32b
```

Start the FastAPI server directly:

```sh
python openra_env/server/app.py
```

Run the test suite:

```sh
pytest
```

Each tick, the agent receives structured game state:
| Field | Description |
|---|---|
| `tick` | Current game tick |
| `cash`, `ore`, `power_provided`, `power_drained` | Economy |
| `units` | Own units with position, health, type, facing, stance, speed, attack range |
| `buildings` | Own buildings with production queues, power, rally points |
| `visible_enemies`, `visible_enemy_buildings` | Fog-of-war limited enemy intel |
| `spatial_map` | 9-channel spatial tensor (terrain, height, resources, passability, fog, own buildings, own units, enemy buildings, enemy units) |
| `military` | Kill/death costs, asset value, experience, order count |
| `available_production` | What can currently be built |
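A custom agent can derive simple signals from these fields. The dict below is illustrative and uses the field names from the table; the concrete observation object returned by the environment may wrap them differently.

```python
def power_surplus(obs):
    """Spare power; a negative value means the base is under-powered."""
    return obs["power_provided"] - obs["power_drained"]

def needs_power_plant(obs, margin=20):
    # `margin` is an arbitrary illustrative threshold.
    return power_surplus(obs) < margin

# Illustrative observation snapshot using the documented field names.
obs = {
    "tick": 1500,
    "cash": 4200,
    "ore": 300,
    "power_provided": 200,
    "power_drained": 190,
    "units": [],
    "buildings": [],
}

assert power_surplus(obs) == 10
assert needs_power_plant(obs)  # surplus 10 < margin 20
```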
18 action types available through the command API:
| Category | Actions |
|---|---|
| Movement | move, attack_move, attack, stop |
| Production | produce, cancel_production |
| Building | place_building, sell, repair, power_down, set_rally_point, set_primary |
| Unit control | deploy, guard, set_stance, enter_transport, unload, harvest |
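As a sketch, an agent can validate outgoing commands against this list before sending them. The payload shape below is hypothetical (the authoritative schema lives in the server's Pydantic models), but the 18 action-type names come from the table above.

```python
# All 18 documented action types, grouped as in the table above.
ACTION_TYPES = frozenset({
    "move", "attack_move", "attack", "stop",            # movement
    "produce", "cancel_production",                     # production
    "place_building", "sell", "repair", "power_down",   # building
    "set_rally_point", "set_primary",
    "deploy", "guard", "set_stance", "enter_transport", # unit control
    "unload", "harvest",
})

def make_action(action_type, **params):
    """Build an action payload, rejecting unknown types early."""
    if action_type not in ACTION_TYPES:
        raise ValueError(f"unknown action type: {action_type!r}")
    return {"type": action_type, **params}

order = make_action("attack_move", unit_ids=[12, 13], x=40, y=55)
assert order["type"] == "attack_move"
```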
The LLM agent interacts through 48 MCP (Model Context Protocol) tools organized into categories:
| Category | Tools | Purpose |
|---|---|---|
| Read | `get_game_state`, `get_economy`, `get_units`, `get_buildings`, `get_enemies`, `get_production`, `get_map_info`, `get_exploration_status` | Query current game state |
| Knowledge | `lookup_unit`, `lookup_building`, `lookup_tech_tree`, `lookup_faction` | Static game data reference |
| Bulk Knowledge | `get_faction_briefing`, `get_map_analysis`, `batch_lookup` | Efficient batch queries |
| Planning | `start_planning_phase`, `end_planning_phase`, `get_opponent_intel`, `get_planning_status` | Pre-game strategy planning |
| Game Control | `advance` | Advance game ticks |
| Movement | `move_units`, `attack_move`, `attack_target`, `stop_units` | Unit movement commands |
| Production | `build_unit`, `build_structure`, `build_and_place` | Build units and structures |
| Building Actions | `place_building`, `cancel_production`, `deploy_unit`, `sell_building`, `repair_building`, `set_rally_point`, `guard_target`, `set_stance`, `harvest`, `power_down`, `set_primary` | Building and unit management |
| Placement | `get_valid_placements` | Query valid building locations |
| Unit Groups | `assign_group`, `add_to_group`, `get_groups`, `command_group` | Group management |
| Compound | `batch`, `plan` | Multi-action sequences |
| Utility | `get_replay_path`, `surrender` | Misc |
| Terrain | `get_terrain_at` | Terrain queries |
Tools can be toggled per-category or individually via config.yaml.
Game results are automatically submitted to the OpenRA-Bench leaderboard after each game. Disable with `BENCH_UPLOAD=false` or `bench_upload: false` in config.
Customize how your agent appears on the leaderboard:
```sh
# Environment variables
AGENT_NAME="DeathBot-9000" AGENT_TYPE="RL" openra-rl play
```

```yaml
# Or in config.yaml
agent:
  agent_name: "DeathBot-9000"
  agent_type: "RL"
  agent_url: "https://github.com/user/deathbot"  # shown as link on leaderboard
```

| Variable | Config path | Description |
|---|---|---|
| `AGENT_NAME` | `agent.agent_name` | Display name (default: model name) |
| `AGENT_TYPE` | `agent.agent_type` | Scripted / LLM / RL (default: auto-detect) |
| `AGENT_URL` | `agent.agent_url` | GitHub/project URL shown on leaderboard |
| `BENCH_UPLOAD` | `agent.bench_upload` | Auto-upload after each game (default: true) |
| `BENCH_URL` | `agent.bench_url` | Leaderboard URL |
Upload a saved result (with optional replay file):
```sh
openra-rl bench submit result.json
openra-rl bench submit result.json --replay game.orarep --agent-name "MyBot"
```

If you're building your own agent (RL, CNN, multi-agent, etc.) that doesn't use the built-in LLM agent, use `build_bench_export()` to create a leaderboard submission from a final observation:
```python
from openra_env.bench_export import build_bench_export

# obs = final observation from env.step()
export = build_bench_export(
    obs,
    agent_name="DeathBot-9000",
    agent_type="RL",
    opponent="Normal",
    agent_url="https://github.com/user/deathbot",
    replay_path="/path/to/replay.orarep",
)
# Saves JSON to ~/.openra-rl/bench-exports/ and returns dict with "path" key
```

Then submit:
```sh
openra-rl bench submit ~/.openra-rl/bench-exports/bench-DeathBot-9000-*.json --replay game.orarep
```

```
OpenRA-RL/
├── OpenRA/                       # Game engine (git submodule, C#)
├── openra_env/                   # Python package
│   ├── cli/                      # CLI entry point (openra-rl command)
│   ├── mcp_server.py             # Standard MCP server (stdio transport)
│   ├── client.py                 # WebSocket client
│   ├── config.py                 # Unified YAML configuration
│   ├── models.py                 # Pydantic data models
│   ├── game_data.py              # Unit/building stats, tech tree
│   ├── reward.py                 # Multi-component reward function
│   ├── bench_export.py           # Build leaderboard submissions from observations
│   ├── bench_submit.py           # Upload results to OpenRA-Bench leaderboard
│   ├── opponent_intel.py         # AI opponent profiles
│   ├── mcp_ws_client.py          # MCP WebSocket client
│   ├── server/
│   │   ├── app.py                # FastAPI application
│   │   ├── openra_environment.py # OpenEnv environment (reset/step/state)
│   │   ├── bridge_client.py      # Async gRPC client
│   │   └── openra_process.py     # OpenRA subprocess manager
│   └── generated/                # Auto-generated protobuf stubs
├── examples/
│   ├── scripted_bot.py           # Hardcoded strategy bot
│   ├── mcp_bot.py                # MCP tool-based bot
│   ├── llm_agent.py              # LLM-powered agent
│   └── config-*.yaml             # Example configs (ollama, lmstudio, openrouter, minimal)
├── skill/                        # OpenClaw skill definition
├── proto/                        # Protobuf definitions (rl_bridge.proto)
├── tests/                        # Test suite
├── .github/workflows/            # CI, Docker publish, PyPI publish
├── config.yaml                   # Default configuration
├── docker-compose.yaml           # Service orchestration
├── Dockerfile                    # Game server image
└── Dockerfile.agent              # Lightweight agent image
```
| Repository | Description |
|---|---|
| OpenRA-RL | Python environment, agents, MCP server (this repo) |
| OpenRA | Modified C# game engine with gRPC bridge |
| OpenRA-Bench | Leaderboard & benchmark (live) |
| OpenRA-RL-Util | Shared utilities ā reward vectors, damage matrices, rubrics |
| OpenRA-RL-Training | Scenario system, curriculum, GRPO training engine |
| OpenRA-RL-Website | Documentation site (openra-rl.dev) |
| OpenEnv | Gymnasium-style environment framework |