
Project_Infinity

Project Infinity leverages MCP and Graph RAG to turn an LLM into a professional D&D 5e Game Master, governed by a dedicated dice server and a persistent player database for a truly consistent adventure.


README

Project Infinity: A Dynamic, Text-Based RPG World Engine

Project Infinity is a sophisticated, procedural world-generation engine and AI agent architecture. It transforms a general-purpose Large Language Model (LLM) into a specialized Game Master by combining a codified agent protocol with an external mechanical authority, ensuring a consistent, fair, and deep RPG experience.

[Screenshot: Project Infinity TUI]


🎮 How to Play: The Authoritative Experience

Project Infinity uses an external Model Context Protocol (MCP) server as the absolute authority for game mechanics. By offloading logic to a dedicated server, it eliminates "LLM luck" and hallucinations around stats and dice rolls.

The MCP Advantage:

  • Verified Dice: All rolls are performed externally and returned to the AI.
  • State Authority: Player progress is tracked in a real-time SQLite database, preventing "memory drift."
  • Fairness: Every mechanical result is mathematically accurate and transparent.

Requirements:

  • Python 3.8+
  • Ollama installed and running.
  • Supported models: qwen3.5:cloud, qwen3.5:397b-cloud (Recommended), or gemma4:31b-cloud.

Quick Start:

  1. Install dependencies:
    pip install -r requirements.txt
  2. Launch the game:
    python3 play.py
  3. Select your model and world file (.wwf).

🔬 Technical Architecture

Project Infinity ensures game consistency through these authoritative systems:

The Roll Engine

To ensure fairness, the engine splits mechanical outcomes into two distinct layers:

  • Complexity Checks (The d20): Uses perform_check to determine binary success or failure for both players and NPCs against a Difficulty Class (DC).
  • Magnitude & Damage (The Multi-Dice): Uses roll_dice to determine the impact of actions for all participants (players and creatures), including damage, healing, and quantity.
  • Verification: All rolls MUST be output in a transparent formula: {actor} {notation}: {total} ({rolls} + {mod}).
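A minimal sketch of how these two layers might fit together. The names perform_check and roll_dice come from the text above, but the signatures, notation parsing, and return shapes here are illustrative assumptions, not the project's actual code:

```python
import random
import re

def roll_dice(actor: str, notation: str, mod: int = 0, rng=random):
    """Magnitude layer: roll standard dice notation (e.g. '2d6') and
    return the total plus the transparent verification formula:
    {actor} {notation}: {total} ({rolls} + {mod})."""
    count, sides = (int(x) for x in re.fullmatch(r"(\d+)d(\d+)", notation).groups())
    rolls = [rng.randint(1, sides) for _ in range(count)]
    total = sum(rolls) + mod
    return total, f"{actor} {notation}: {total} ({rolls} + {mod})"

def perform_check(actor: str, dc: int, mod: int = 0, rng=random):
    """Complexity layer: a single d20 plus modifier against a
    Difficulty Class, returning binary success or failure."""
    total, formula = roll_dice(actor, "1d20", mod, rng)
    return total >= dc, formula
```

Because every roll is computed in ordinary code and echoed back in the fixed formula, the LLM narrates results it cannot alter, which is the point of the external authority.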

State Authority

To solve the problem of LLM "forgetfulness," the engine implements a dynamic state-tracking system:

  • In-Memory SQLite Engine: Upon boot, the MCP server initializes a queryable database from the player file.
  • Real-Time Synchronization: The Game Master updates the player database via MCP tools immediately as changes occur in the narrative.
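The state-authority idea can be sketched as follows. The key/value schema and the function names are assumptions for illustration; the real server's schema, built from the .player file, may differ:

```python
import sqlite3

def boot_state_db(player: dict) -> sqlite3.Connection:
    """On boot, initialize an in-memory SQLite database from parsed
    player data so later queries reflect authoritative state."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE player (key TEXT PRIMARY KEY, value TEXT)")
    db.executemany("INSERT INTO player VALUES (?, ?)",
                   [(k, str(v)) for k, v in player.items()])
    db.commit()
    return db

def update_state(db: sqlite3.Connection, key: str, value) -> None:
    """Called (via an MCP tool) the moment the narrative changes
    state, so the database never drifts from the story."""
    db.execute("INSERT OR REPLACE INTO player VALUES (?, ?)", (key, str(value)))
    db.commit()

def get_state(db: sqlite3.Connection, key: str) -> str:
    """Query the single source of truth for a player attribute."""
    return db.execute("SELECT value FROM player WHERE key = ?", (key,)).fetchone()[0]
```

The design choice is that the LLM never holds state in its context window alone: it reads and writes through tools, so "forgetting" is recoverable by a fresh query.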

🛠 The World Forge

Use the World Forge to create a world tailored to your character.

Run the forge:

python3 main.py

The Forge guides you through character creation and procedurally generates a world knowledge graph (.wwf file) and a corresponding character state file (.player) in the output/ directory. Together, these files serve as the complete source of truth for your adventure.

When you launch play.py, the system feeds the GameMaster_MCP.md protocol and the .wwf file to the LLM to set the stage. Simultaneously, play.py initializes dice_server.py using the .player file to boot the SQLite database.
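That wiring might look roughly like this. The file names come from the README; the prompt layout and the build_gm_context helper are assumptions, not the project's actual launch code:

```python
from pathlib import Path

def build_gm_context(protocol_path: str, world_path: str) -> str:
    """Concatenate the agent protocol (GameMaster_MCP.md) and the
    world knowledge graph (.wwf) into the system context handed to
    the LLM at boot; the .player file goes to the dice server instead."""
    protocol = Path(protocol_path).read_text()
    world = Path(world_path).read_text()
    return f"{protocol}\n\n--- WORLD FILE ---\n{world}"
```

Keeping the protocol and world file in the prompt while routing the mutable character state through the MCP server is what separates narrative knowledge (static) from mechanical state (live).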


🌟 The Game Master's Codex

  • Model Selection: Larger models generally produce richer narratives and better adhere to the complex MCP protocols.
  • Model Performance Note: While gemma4:31b-cloud is supported, it may struggle with the complexity of the protocol, often truncating narratives or failing to report dice rolls in the chat despite using the MCP tools correctly. For the best experience and full adherence to the mechanical transparency rules, the qwen3.5 models (especially qwen3.5:397b-cloud) are strongly recommended.
  • Debug Mode: Use the --verbose or -v flag when launching play.py to see detailed MCP tool calls and responses.
  • Note on Model Behavior: The GameMaster may occasionally be forgetful about awarding XP, gold, or syncing the database. If you notice this, simply remind the GameMaster, and it will update the state accordingly.

🛠 Technology Stack

Core Dependencies:

  • mcp: Model Context Protocol for external tool integration.
  • ollama: Local LLM orchestration.
  • rich: High-fidelity Terminal User Interface (TUI).
  • pydantic: Data validation and settings management.
  • numpy: Procedural generation logic.
  • pyyaml: Protocol and schema configuration.

Infrastructure:

  • Python 3
  • SQLite (In-memory engine)
  • Graph RAG architecture

Release History

Version           Changes                              Urgency   Date
main@2026-04-21   Latest activity on main branch       High      4/21/2026
0.0.0             No release found — using repo HEAD   High      4/11/2026

