
LMForge-End-to-End-LLMOps-Platform-for-Multi-Model-Agents


Description

AI Agent Development Platform - Supports multiple models (OpenAI/DeepSeek/Wenxin/Tongyi), knowledge base management, workflow automation, and enterprise-grade security. Built with Flask + Vue3 + LangChain, featuring one-click Docker deployment.

README

LLMOps - End-to-End LLM Operations Platform


Overview

LLMOps is a full-stack platform for building and operating AI agent applications. The repository contains:

  • A Flask backend with LangChain and LangGraph orchestration
  • A Vue 3 frontend for agent, workflow, dataset, tool, and conversation management
  • Celery workers for background jobs
  • PostgreSQL, Redis, Weaviate, and Nginx in the Docker stack

The current codebase focuses on:

  • Multi-provider LLM integration
  • Workflow authoring and execution
  • Conversation management and search
  • Public app and workflow publishing
  • Knowledge base, document, and dataset management
  • Built-in tools and notification pipelines
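
To illustrate the workflow-execution idea, here is a loose, dependency-free sketch (not the project's actual LangGraph code): a workflow modeled as a sequence of nodes that each transform a shared state dict. The node names `retrieve` and `answer` are hypothetical.

```python
from typing import Callable

# Each node takes the shared state dict and returns the updated state,
# loosely mirroring how LangGraph-style graphs thread state through nodes.
Node = Callable[[dict], dict]

def run_workflow(nodes: list[Node], state: dict) -> dict:
    """Run nodes sequentially, passing the evolving state along."""
    for node in nodes:
        state = node(state)
    return state

# Illustrative nodes (names are assumptions, not from this repo).
def retrieve(state: dict) -> dict:
    state["context"] = f"docs about {state['question']}"
    return state

def answer(state: dict) -> dict:
    state["answer"] = f"Based on {state['context']}, ..."
    return state

result = run_workflow([retrieve, answer], {"question": "pricing"})
```

Real workflow graphs add branching and error handling on top of this basic state-threading pattern.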

Project Layout

.
β”œβ”€β”€ api/        # Flask backend, services, handlers, tasks, tests
β”œβ”€β”€ ui/         # Vue 3 frontend, components, views, tests
β”œβ”€β”€ docker/     # Docker Compose stack and deployment config
β”œβ”€β”€ docs/       # High-level documentation index and deployment guides
└── README.md   # Project overview

Features

Backend

  • REST APIs built with Flask
  • Service-oriented backend structure
  • JWT, OAuth, and role-based account flows
  • SSE- and WebSocket-driven real-time notifications
  • Celery-based background tasks

Frontend

  • Vue 3 + Vite + TypeScript
  • Workflow editor and app management views
  • Conversation history, search, and publishing pages
  • Notification components and live UI updates

Infrastructure

  • Docker Compose deployment
  • PostgreSQL for persistence
  • Redis for cache and task queue
  • Weaviate for vector search
  • Nginx as reverse proxy
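
A stack like this is typically wired together in a Compose file. The fragment below is an illustrative sketch only (image tags, service names, and credentials are assumptions; the actual definitions live in docker/):

```yaml
# Illustrative docker-compose.yml fragment -- not the project's real file.
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: change-me
  redis:
    image: redis:7
  weaviate:
    image: semitechnologies/weaviate:1.24.1
  nginx:
    image: nginx:1.25
    ports:
      - "80:80"
```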

Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                         Frontend (Vue 3)                     β”‚
β”‚  Agent Builder  Workflow Editor  Dataset Manager  Tools UI   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                            ↕ HTTP / SSE / WebSocket
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                      Backend (Flask + Celery)                β”‚
β”‚  API Layer   Services   LangChain / LangGraph   Background   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                            ↕
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                    Infrastructure                            β”‚
β”‚  PostgreSQL   Redis   Weaviate   Nginx                       β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Quick Start

Prerequisites

  • Docker 20.10+
  • Docker Compose 2.0+
  • 8GB+ RAM recommended for the full stack

Start with Docker

git clone https://github.com/Haohao-end/LMForge-End-to-End-LLMOps-Platform-for-Multi-Model-Agents.git
cd LMForge-End-to-End-LLMOps-Platform-for-Multi-Model-Agents

cp api/.env.example api/.env
# edit api/.env and fill in the required API keys

cd docker
docker compose up -d --build

Service URLs

Service    URL                      Notes
Frontend   http://localhost:3000    Vue 3 web UI
API        http://localhost:5001    Flask REST API
Nginx      http://localhost         Reverse proxy

For more deployment details, see DOCKER_QUICKSTART.md and docker/README.md.


Local Development

Backend

cd api
pip install -r requirements.txt
flask run --port 5001

Run tests:

cd api
pytest

Frontend

cd ui
npm install
npm run serve

Vite serves the frontend on port 5173 by default and proxies /api to http://localhost:5001.
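
Dev-server proxying like this usually comes from a `server.proxy` entry in the Vite config. The fragment below is a sketch of that pattern; the project's actual vite.config.ts may differ:

```typescript
// vite.config.ts (illustrative sketch, not the project's actual config)
import { defineConfig } from 'vite'

export default defineConfig({
  server: {
    port: 5173,
    proxy: {
      // Forward /api requests to the Flask backend during development.
      '/api': {
        target: 'http://localhost:5001',
        changeOrigin: true,
      },
    },
  },
})
```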

Useful frontend commands:

cd ui
npm run type-check
npm run lint
npm run build
npm run test:unit -- --run

Configuration

Backend environment

Copy api/.env.example to api/.env and set at least one LLM provider key plus the required database and Redis settings.
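
The authoritative variable list is in api/.env.example; the fragment below only illustrates the general shape, and every key name in it is an assumption:

```
# api/.env (hypothetical sketch -- check api/.env.example for real names)
OPENAI_API_KEY=sk-your-key-here
DATABASE_URL=postgresql://user:password@localhost:5432/llmops
REDIS_URL=redis://localhost:6379/0
```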

Docker environment

If you need to customize ports, container passwords, or other infrastructure settings, use the Docker configuration documented in docker/README.md.

Testing

The repository includes automated test suites for both the backend and the frontend.

  • Backend: cd api && pytest
  • Frontend: cd ui && npm run test:unit -- --run

License

License terms are not currently declared in a root LICENSE file. Add one to make the licensing explicit.

Release History

v1.1.0 (urgency: High, released 4/12/2026)

  OpenAgent v1.1.0. This release focuses on public-agent orchestration, natural-language AI app creation, security hardening, deployment reliability, and a complete README/documentation refresh.

  Highlights:

    • Home Assistant is now a stronger AI entry point for two core workflows:
        - Route user questions to the most relevant published public agents through A2A
        - Turn natural-language requirements into new AI app / agent creation flows
    • Improved public agent routing, indexing, a…

v1.0.0 (urgency: Low, released 12/30/2025)


Similar Packages

  • TV-Show-Recommender-AI: Recommend TV shows by matching favorites, averaging embeddings, and finding similar titles using fuzzy search and vector similarity. (main@2026-04-21)
  • Auto-Pentest-LLM: Automate penetration testing with an intelligent agent that organizes security assessments, leveraging local LLMs and Kali Linux for effective exploitation. (main@2026-04-21)
  • mcp-audit: Track token consumption in real-time with MCP Audit. Diagnose context bloat and unexpected spikes across MCP servers and tools efficiently. (main@2026-04-21)
  • DeepAnalyze: Empower data scientists with DeepAnalyze, a tool that leverages large language models for automated data analysis and insights generation. (main@2026-04-21)
  • agent-telegram-bot: Provide a local, privacy-focused AI assistant for Telegram that runs fully on your machine without sending data to the cloud. (main@2026-04-21)