If you would like to see the current state of the UI and the apps currently available, check out the preview server (it may take up to five minutes to load).
To get started, clone the repository and enter the directory:

git clone https://github.com/Oxygen-Low/LocalLLM.git
cd LocalLLM

Then install the dependencies, optionally run the tests, and start the server:

npm install
npm test
npm start
By default, the server listens on all network interfaces, so you can access it from other devices on your local network (e.g. a phone or tablet on the same Wi-Fi). Open https://<your-ip>:3000 from the other device, where <your-ip> is the local IP address of the machine running the server (e.g. 192.168.1.100).
To restrict the server to localhost only (no LAN access), run:
npm run start:no-lan
When you start the server for the first time, you will receive your admin credentials; log in with them to access the admin panel/settings.
If you lose the admin password, stop the server and run "npm run resetadmin" to receive a new one.
If you want the LLM to act like one of your favourite characters, create a universe and that character in the admin panel.
Local models are supported via Hugging Face Transformers. Download models directly from the Hugging Face Hub through the admin panel; no C/C++ build tools are required.
Additionally, certain external apps are automatically detected if they are installed on the server (note: you need to enable them in the admin settings):
To move your data, you can either manually copy the data folder to the new device, or copy the data to a directory on the host machine via the admin panel/settings. You can also set this up to save/load automatically on start/stop, which is ideal for syncing to and from the cloud.
Create an issue if you want direct support from me, or ask a community member on our Discord server.
See LICENSE for details.
