
chatbot

Description

Torvian Chatbot is a self-hosted chatbot platform with a Ktor backend and Compose Multiplatform clients, supporting OpenAI-compatible APIs, Ollama local models, and MCP tool calling with per-call user approval.

README

Torvian Chatbot

Project Purpose

The project is a multi-platform chatbot application with AI/LLM integration. It features a central server and multiple client options (Desktop, Web, Android). It supports various LLM providers (OpenAI, Ollama) and has a plugin system for MCP tools.

  • Server Module: Ktor-based backend with SQLite database, user authentication, and LLM provider integration
  • Desktop Client: Compose Multiplatform desktop application (Linux, Windows, macOS)
  • Web Client: WASM-based web application (planned)
  • Android Client: Android application (planned)
  • Common Module: Shared business logic and models

Tech Stack

  • Languages: Kotlin 2.3.10
  • UI Framework: Compose Multiplatform 1.10.2 (with Material 3 1.9.0)
  • Server: Ktor 3.4.1
  • Database: SQLite with Exposed ORM 1.1.1 (Server) and SQLDelight 2.2.1 (App)
  • Dependency Injection: Koin 4.1.1
  • Functional Programming: Arrow 2.2.2
  • Serialization: kotlinx.serialization
  • Build Tool: Gradle 9.4.0

Features

  • User authentication and session management
  • Chat sessions with message threading
  • LLM provider configuration and model selection: bring your own API keys for any OpenAI-compatible provider (e.g. OpenAI, Gemini, OpenRouter), or use Ollama for local models.
  • Tool execution with local (stdio) MCP server integration. Remote (http) MCP server integration is planned.
  • Agentic LLM responses with tool calling. Every tool call must be approved by the user before execution; automatic approval can be configured on a per-tool basis. Multiple LLM responses can stream in parallel (from different chat sessions).
  • File attachment and reference in messages
  • (WIP) Multi-platform support (Desktop, Web, Android)
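The per-tool approval flow described above can be sketched roughly as follows. This is a hypothetical illustration in Python, not the project's actual implementation (which is Kotlin); the names `ApprovalPolicy` and `handle_tool_call` are invented here.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of per-tool call approval (not the project's actual code).
# A tool call is auto-approved if the user opted in for that specific tool;
# otherwise it is held until the user explicitly approves or rejects it.

@dataclass
class ApprovalPolicy:
    auto_approved_tools: set = field(default_factory=set)

    def allow_automatically(self, tool_name: str) -> bool:
        return tool_name in self.auto_approved_tools

def handle_tool_call(policy: ApprovalPolicy, tool_name: str, ask_user) -> bool:
    """Return True if the tool call may execute."""
    if policy.allow_automatically(tool_name):
        return True
    return ask_user(tool_name)  # blocks until the user decides

# Example: auto-approve a read-only tool, ask for everything else.
policy = ApprovalPolicy(auto_approved_tools={"read_file"})
print(handle_tool_call(policy, "read_file", ask_user=lambda t: False))   # True
print(handle_tool_call(policy, "delete_file", ask_user=lambda t: False)) # False
```

The point of the per-tool granularity is that safe, frequently used tools stop interrupting the conversation while anything unlisted still requires an explicit decision.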

Project Status

The project is in active development. The server and desktop client are feature-complete in their core functionality, and the desktop client is currently the most stable and usable option. The web client is partially usable, though MCP server integration is not yet functional. The Android client is the least usable: its layout is optimized for landscape rather than portrait, and some elements only appear on mouse hover, making them inaccessible on touchscreens.

Getting Started

Use pre-built packages

Pre-built packages are available from the Releases page:

  • Server: Available for all platforms. Use Docker (recommended) or JDK 21+ to run it locally.
  • Desktop Client: Available for Windows and Linux.
  • Web Client: Available as a static web app. It must be served over HTTP(S).

Run the server

# Linux/Mac
<install-path>/start-server.sh
# Windows
<install-path>/start-server.bat

Docker quick start:

docker run -d \
  --name chatbot-server \
  -p 8080:8080 \
  -v chatbot-config:/app/config \
  -v chatbot-data:/app/data \
  -v chatbot-logs:/app/logs \
  -e SERVER_HOST=0.0.0.0 \
  -e SERVER_CONNECTOR_TYPE=HTTP \
  --restart unless-stopped \
  ghcr.io/torvian-eu/chatbot-server:latest

For full deployment options and configuration details (including Docker Compose + Caddy), see deploy/README.md.
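The same quick-start flags translate naturally into a Docker Compose service, for example (a sketch only, mirroring the docker run command above; the official compose files, including the Caddy variant, live in deploy/README.md):

```yaml
services:
  chatbot-server:
    image: ghcr.io/torvian-eu/chatbot-server:latest
    container_name: chatbot-server
    ports:
      - "8080:8080"
    volumes:
      - chatbot-config:/app/config
      - chatbot-data:/app/data
      - chatbot-logs:/app/logs
    environment:
      SERVER_HOST: 0.0.0.0
      SERVER_CONNECTOR_TYPE: HTTP
    restart: unless-stopped

volumes:
  chatbot-config:
  chatbot-data:
  chatbot-logs:
```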

Run the desktop application

# Windows
<install-path>/Chatbot-with-logs.bat
# Linux/Mac (if building from source, see below)
<install-path>/Chatbot-with-logs.sh

Serve the web client

The web client is a static web application and must be served over HTTP(S). Opening index.html directly via file:// will not work, because browsers block loading WASM and related assets from local files.

For local testing, you can use any simple static file server.

Example using Python:

cd <web-client-dist-path>
python -m http.server 4000

Then open your browser and navigate to:

http://localhost:4000

For production or VPS deployments, serve the same files using a regular web server such as Caddy or nginx.
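With Caddy, a minimal site definition is enough, since file_server handles static assets (a sketch; chat.example.com and the root path are placeholders for your own domain and the web client's dist directory):

```caddyfile
chat.example.com {
    root * /var/www/chatbot-web
    encode gzip
    file_server
}
```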

Login

Login with username admin and password admin123. You will be asked to change the password on first login.

Build from Source

Prerequisites

  • Git
  • JDK 21 or higher
  • Gradle 9.x (included via wrapper)

Clone the repository

cd <parent-path>
git clone https://github.com/Torvian-eu/chatbot.git

Build & Install Server application

./gradlew server:installDist

The files will be installed to server/build/install/server/. You can run the server using the scripts in that folder.

Build & Install Desktop application

./gradlew app:createDistributable

The files will be installed to app/build/compose/binaries/main/app/Chatbot. You can run the desktop client using the scripts in that folder.

Build & Install Web application

./gradlew app:wasmJsBrowserDistribution

The files will be installed to app/build/dist/wasmJs/productionExecutable. You can serve the web client using any static file server, as described in the "Serve the web client" section above.

Guides

These guides provide information on how to configure and use specific features of the chatbot.

Deployment

For information on deploying the chatbot application to a VPS or server environment, please refer to the Deployment Guide. This includes:

  • Quick Docker server startup
  • Docker setup with Caddy reverse proxy
  • VPS deployment with Docker Compose
  • Configuration via environment variables

Project Structure

chatbot/
ā”œā”€ā”€ app/                    # Desktop client module
ā”œā”€ā”€ server/                 # Server module
ā”œā”€ā”€ common/                 # Shared code
ā”œā”€ā”€ build-logic/            # Gradle convention plugins
ā”œā”€ā”€ docs/                   # Documentation
└── gradle/                 # Gradle wrapper and dependencies

License

This project is licensed under the MIT License.

Contributing

We welcome community contributions to the Torvian Chatbot! Your feedback, bug reports, feature suggestions, and code contributions are highly valued.

Please see our comprehensive Contributing Guide for detailed information on how to get involved.

Support

For support or general questions, please post a message in the GitHub discussion forum.

Screenshots

  • Desktop app GUI
  • MCP server configuration

Release History

v0.2.0 (4/5/2026, urgency: High)

What is new in plain terms:

  • You can now test provider connections before saving and list available models directly from settings.
  • OpenRouter support has been added to model discovery.
  • Self-hosting got much easier with Docker + VPS deployment guides, example compose files, and install scripts.
  • Server configuration is safer and clearer with explicit CORS settings and configurable SSL SANs.
  • Packaging and startup behavior were improved […]
