The project is a multi-platform chatbot application with AI/LLM integration. It features a central server and multiple client options (Desktop, Web, Android). It supports various LLM providers (OpenAI, Ollama) and has a plugin system for MCP tools.
- Server Module: Ktor-based backend with SQLite database, user authentication, and LLM provider integration
- Desktop Client: Compose Multiplatform desktop application (Linux, Windows, macOS)
- Web Client: WASM-based web application (planned)
- Android Client: Android application (planned)
- Common Module: Shared business logic and models
- Language: Kotlin 2.3.10
- UI Framework: Compose Multiplatform 1.10.2 (with Material 3 1.9.0)
- Server: Ktor 3.4.1
- Database: SQLite with Exposed ORM 1.1.1 (Server) and SQLDelight 2.2.1 (App)
- Dependency Injection: Koin 4.1.1
- Functional Programming: Arrow 2.2.2
- Serialization: kotlinx.serialization
- Build Tool: Gradle 9.4.0
- User authentication and session management
- Chat sessions with message threading
- LLM provider configuration and model selection. Use your own API keys with any OpenAI-compatible provider (e.g. OpenAI, Gemini, OpenRouter), or run local models via Ollama.
- Tool execution with local (stdio) MCP server integration. Remote (http) MCP server integration is planned.
- Agentic LLM responses with tool calling. Every tool call requires user approval before execution; automatic approval can be configured per tool. Multiple LLM responses (from different chat sessions) can stream in parallel.
- File attachment and reference in messages
- (WIP) Multi-platform support (Desktop, Web, Android)
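As context for the "OpenAI-compatible" point above: such providers all accept the same chat-completions request shape, so only the base URL and API key change between them. A minimal Python sketch of that wire format (illustration only, not project code; the provider URL and model name are just examples):

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, messages: list) -> tuple:
    """Build the URL, headers, and JSON body for an OpenAI-compatible
    chat-completions call. Only the base URL differs between providers."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages})
    return url, headers, body

# The same code targets different providers by swapping the base URL,
# e.g. "https://api.openai.com" for OpenAI:
url, headers, body = build_chat_request(
    "https://api.openai.com", "sk-...", "gpt-4o-mini",
    [{"role": "user", "content": "Hello!"}],
)
```

Ollama exposes the same OpenAI-compatible API shape locally, which is why the chatbot can treat it as just another provider.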
The project is in active development. The server and desktop client are feature-complete in terms of their core functionality; the desktop client is currently the most stable and usable option. The web client is partially usable, though MCP server integration is not yet functional. The Android client is the least usable: its layout is optimized for landscape rather than portrait orientation, and some elements require mouse hover to become visible, making them inaccessible on touchscreens.
Pre-built packages are available from the Releases page:
- Server: Available for all platforms. Use Docker (recommended) or JDK 21+ to run it locally.
- Desktop Client: Available for Windows and Linux.
- Web Client: Available as a static web app. It must be served over HTTP(S).
```shell
# Linux/Mac
<install-path>/start-server.sh

# Windows
<install-path>/start-server.bat
```

Docker quick start:

```shell
docker run -d \
  --name chatbot-server \
  -p 8080:8080 \
  -v chatbot-config:/app/config \
  -v chatbot-data:/app/data \
  -v chatbot-logs:/app/logs \
  -e SERVER_HOST=0.0.0.0 \
  -e SERVER_CONNECTOR_TYPE=HTTP \
  --restart unless-stopped \
  ghcr.io/torvian-eu/chatbot-server:latest
```

For full deployment options and configuration details (including Docker Compose + Caddy), see deploy/README.md.
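After starting the container, you can confirm the server is reachable on the published port before configuring any clients. A small Python sketch (the root path `/` is an assumption; any HTTP status, even a 404, proves the server process is accepting connections):

```python
import time
import urllib.request
import urllib.error

def wait_for_http(url: str, timeout: float = 30.0) -> bool:
    """Poll `url` until it answers any HTTP response, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2):
                return True
        except urllib.error.HTTPError:
            return True  # got an HTTP status code: server is up
        except OSError:
            time.sleep(0.5)  # connection refused or timed out: keep waiting
    return False

# After `docker run`, check the published port:
# wait_for_http("http://localhost:8080/")
```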
```shell
# Windows
<install-path>/Chatbot-with-logs.bat

# Linux/Mac (if building from source, see below)
<install-path>/Chatbot-with-logs.sh
```

The web client is a static web application and must be served over HTTP(S). Opening index.html directly via file:// will not work, because browsers block loading WASM and related assets from local files.
For local testing, you can use any simple static file server.
Example using Python:
```shell
cd <web-client-dist-path>
python -m http.server 4000
```

Then open your browser and navigate to:

```
http://localhost:4000
```
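Recent Python versions serve `.wasm` files with the correct `application/wasm` MIME type out of the box. If yours does not, a minimal sketch of a WASM-aware static server (assumption: only the MIME mapping needs fixing; everything else matches `python -m http.server`):

```python
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

class WasmAwareHandler(SimpleHTTPRequestHandler):
    # Extend the default mapping so WASM and ES-module files get the
    # MIME types browsers require.
    extensions_map = {
        **SimpleHTTPRequestHandler.extensions_map,
        ".wasm": "application/wasm",
        ".mjs": "text/javascript",
    }

def serve(directory: str, port: int = 4000) -> None:
    """Serve `directory` on all interfaces at `port`."""
    handler = partial(WasmAwareHandler, directory=directory)
    ThreadingHTTPServer(("0.0.0.0", port), handler).serve_forever()

# serve("<web-client-dist-path>", 4000)  # then browse to http://localhost:4000
```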
For production or VPS deployments, serve the same files using a regular web server such as Caddy or nginx.
Login with username admin and password admin123. You will be asked to change the password on first login.
```shell
cd <parent-path>
git clone https://github.com/Torvian-eu/chatbot.git
```

Build the server distribution:

```shell
./gradlew server:installDist
```

The files will be installed to server/build/install/server/. You can run the server using the scripts in that folder.
Build the desktop client:

```shell
./gradlew app:createDistributable
```

The files will be installed to app/build/compose/binaries/main/app/Chatbot. You can run the desktop client using the scripts in that folder.
Build the web client:

```shell
./gradlew app:wasmJsBrowserDistribution
```

The files will be installed to app/build/dist/wasmJs/productionExecutable. You can serve the web client using any static file server, as described in the "Serve the web client" section above.
These guides provide information on how to configure and use specific features of the chatbot.
- LLM configuration guide - How to configure LLM providers, models, and model settings, and how to use them in the chatbot.
- MCP server configuration guide - How to configure and use MCP servers.
For information on deploying the chatbot application to a VPS or server environment, please refer to the Deployment Guide. This includes:
- Quick Docker server startup
- Docker setup with Caddy reverse proxy
- VPS deployment with Docker Compose
- Configuration via environment variables
Additional deployment-related documentation:
- VPS Deployment Guide - Comprehensive guide for VPS deployment
- VPS Basics Guide - Linux basics for VPS management
- Docker Installation Guide - How to install Docker on a VPS
```
chatbot/
├── app/          # Desktop client module
├── server/       # Server module
├── common/       # Shared code
├── build-logic/  # Gradle convention plugins
├── docs/         # Documentation
└── gradle/       # Gradle wrapper and dependencies
```
This project is licensed under the MIT License.
We welcome community contributions to the Torvian Chatbot! Your feedback, bug reports, feature suggestions, and code contributions are highly valued.
Please see our comprehensive Contributing Guide for detailed information on how to get involved.
For support or general questions, please post a message in the GitHub discussion forum.


