freshcrate

Search results for "mlx"

9 results found (Python)
npcpy 📍 v1.4.21 🌳 Mature ⭐ 1,287

The Python library for research and development in NLP, multimodal LLMs, Agents, ML, Knowledge Graphs, and more.

unsloth-buddy 📍 main@2026-04-15 🌿 Growing ⭐ 212

Zero-friction LLM fine-tuning skill for Claude Code, Gemini CLI & any ACP agent. Unsloth on NVIDIA · TRL+MPS/MLX on Apple Silicon. Automates env setup, LoRA training (SFT, DPO, GRPO, vision), post-hoc…

vllm-mlx 📍 v0.2.8 🌿 Growing ⭐ 798

OpenAI- and Anthropic-compatible server for Apple Silicon. Run LLMs and vision-language models (Llama, Qwen-VL, LLaVA) with continuous batching, MCP tool calling, and multimodal support. Native MLX backend…
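An OpenAI-compatible server like this one is driven through the standard `/v1/chat/completions` route. A minimal stdlib sketch of such a request follows; the host, port, and model name are illustrative assumptions, not taken from this listing, and the actual flags vllm-mlx uses to expose the endpoint may differ:

```python
import json
import urllib.request

# Assumed local endpoint: an OpenAI-compatible server is typically reached at
# http://<host>:<port>/v1/chat/completions; the values below are hypothetical.
BASE_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a standard OpenAI-style chat-completions POST request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# The model name here is an example identifier, not one advertised by the project.
req = build_chat_request("mlx-community/Llama-3.2-3B-Instruct-4bit", "Hello!")

# Sending it requires the server to actually be running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the request shape is the stock OpenAI schema, the official `openai` client (pointed at a local `base_url`) works the same way; the stdlib version above just avoids the extra dependency.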

vmlx 📍 v1.3.34 🌿 Growing ⭐ 198

vMLX - Home of JANG_Q - Continuous batching, prefix caching, paged attention, KV-cache quantization, vision-language - Powers MLX Studio. Image gen/edit, OpenAI/Anthropic…

server-nexe 📍 v1.0.0-beta 🌱 Seedling ⭐ 9

Local AI server with persistent memory, RAG, and multi-backend inference (MLX / llama.cpp / Ollama). Runs entirely on your machine — zero data sent to external services.

aura 📍 main@2026-04-21 🌱 Seedling ⭐ 47

A sovereign cognitive architecture with IIT 4.0 integrated information, residual-stream affective steering (CAA), Global Workspace Theory, active inference, and 72 consciousness modules — running locally…

llm_context_benchmarks 📍 0.0.0 🌱 Seedling ⭐ 59

📊 LLM Context Benchmarks - A comprehensive benchmarking tool for testing LLMs with varying context sizes using Ollama. Features dual benchmark modes (API/CLI), automatic hardware detection (optimized…

vikramaditya 📍 main@2026-04-20 🌱 Seedling ⭐ 5

Autonomous VAPT platform. Give it a target (FQDN, IP, CIDR) — it hunts, it reports. Inspired by the Obsidian Order.