
minutes

Every meeting, every idea, every voice note — searchable by your AI. Open-source, privacy-first conversation memory layer.


License: MIT

Open-source conversation memory. useminutes.app

Agents have run logs. Humans have conversations. minutes captures the human side — the decisions, the intent, the context that agents need but can't observe — and makes it queryable.

Record a meeting. Capture a voice memo on a walk. Ask Claude "what did I promise Sarah?" — and get an answer. Your AI remembers every conversation you've had.

minutes demo — record, dictate, phone sync, AI recall

Works with

Claude Code • Codex • Gemini CLI • Claude Desktop • Mistral Vibe • Obsidian • Logseq • Phone Voice Memos • Any MCP client

Quick start

# macOS — Desktop app (menu bar, recording UI, AI assistant)
brew install --cask silverstein/tap/minutes

# macOS — CLI only
brew tap silverstein/tap && brew install minutes

# Any platform — from source (requires Rust + cmake; Windows also needs LLVM)
cargo install minutes-cli                          # macOS/Linux
cargo install minutes-cli --no-default-features    # Windows (see install notes below)

# MCP server only — no Rust needed (Claude Code, Codex, Gemini CLI, Claude Desktop, etc.)
npx minutes-mcp
minutes setup --model small   # Download whisper model (466MB, recommended)
minutes record                # Start recording
minutes stop                  # Stop and transcribe

How it works

Audio → Transcribe → Diarize → Summarize → Structured Markdown → Relationship Graph
         (local)     (local)     (LLM)       (decisions,            (people, commitments,
        whisper.cpp  pyannote-rs Claude/       action items,          topics, scores)
        /parakeet    (native)    Ollama/       people, entities)      SQLite index
                                Mistral/OpenAI

Everything runs locally. Your audio never leaves your machine (unless you opt into cloud LLM summarization). Speakers are identified via native diarization. The relationship graph indexes people, commitments, and topics across all meetings for instant queries.

Features

Record meetings

minutes record                                    # Record from mic
minutes record --title "Standup" --context "Sprint 4 blockers"  # With context
minutes record --language ur                      # Force Urdu (ISO 639-1 code)
minutes record --device "AirPods Pro"             # Use specific audio device
minutes stop                                      # Stop from another terminal

Take notes during meetings

minutes note "Alex wants monthly billing not annual billing"          # Timestamped, feeds into summary
minutes note "Logan agreed"                       # LLM weights your notes heavily

Process voice memos

minutes process ~/Downloads/voice-memo.m4a        # Any audio format
minutes watch                                     # Auto-process new files in inbox

Search everything

minutes search "pricing"                          # Full-text search
minutes search "onboarding" -t memo               # Filter by type
minutes actions                                   # Open action items across all meetings
minutes actions --assignee sarah                   # Filter by person
minutes list                                      # Recent recordings

Relationship intelligence

"What did I promise Sarah?" — the query nobody else can answer.

minutes people                                     # Who you talk to, how often, about what
minutes people --rebuild                           # Rebuild the relationship index
minutes commitments                                # All open + overdue commitments
minutes commitments --person alex                   # What did I promise Alex?

Tracks people, commitments, topics, and relationship health across every meeting. Detects when you're losing touch with someone. Suggests duplicate contacts ("Sarah Chen" ↔ "Sarah"). Powered by a SQLite index rebuilt from your markdown in <50ms.
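The index layout itself isn't documented here, but a hypothetical sketch shows why this query is fast: once commitments are indexed in SQLite, "what did I promise Sarah?" reduces to a single indexed lookup (the table and column names below are illustrative, not minutes' actual schema):

```python
import sqlite3

# Hypothetical schema -- the real index layout is internal to minutes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE commitments (person TEXT, task TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO commitments VALUES (?, ?, ?)",
    [
        ("sarah", "Send pricing doc", "open"),
        ("sarah", "Intro to design lead", "done"),
        ("alex", "Review competitor grid", "open"),
    ],
)

# "What did I promise Sarah?" becomes a one-row-per-commitment lookup.
rows = conn.execute(
    "SELECT task FROM commitments WHERE person = ? AND status = 'open'",
    ("sarah",),
).fetchall()
print([task for (task,) in rows])  # ['Send pricing doc']
```

Because the index is rebuilt from the markdown source of truth, it can be thrown away and regenerated at any time (`minutes people --rebuild`).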

Cross-meeting intelligence

minutes research "pricing strategy"               # Search across all meetings
minutes person "Alex"                              # Build a profile from meeting history
minutes consistency                                # Flag contradicting decisions + stale commitments

Live transcript (real-time coaching)

minutes live                                     # Start real-time transcription
minutes stop                                     # Stop live session

Streams whisper transcription to a JSONL file in real time — any AI agent can read it mid-meeting for live coaching. The MCP read_live_transcript tool provides delta reads (by line cursor or wall-clock duration). Works with Claude Code, Codex, Gemini CLI, or any agent that reads files. The Tauri desktop app has a Live Mode toggle that starts this with one click.
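A delta read by line cursor is simple enough to sketch in a few lines. This is not minutes' implementation, and the JSONL field names (`t`, `text`) are assumptions for illustration; the point is that a cursor lets a polling agent fetch only lines appended since its last read:

```python
import json, os, tempfile

def read_delta(path, cursor=0):
    """Return (new_entries, new_cursor): JSONL lines after the cursor."""
    with open(path) as f:
        lines = f.read().splitlines()
    return [json.loads(line) for line in lines[cursor:]], len(lines)

# Simulate a live transcript being appended to mid-meeting.
fd, path = tempfile.mkstemp(suffix=".jsonl")
with os.fdopen(fd, "w") as f:
    f.write('{"t": 0.0, "text": "So about pricing..."}\n')

entries, cursor = read_delta(path)        # first poll sees the 1 existing entry
with open(path, "a") as f:
    f.write('{"t": 4.2, "text": "Monthly makes sense."}\n')
delta, cursor = read_delta(path, cursor)  # second poll sees only the new line
print(delta[0]["text"])  # Monthly makes sense.
os.remove(path)
```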

Dictation mode

minutes dictate                                  # Speak → text appears as you talk
minutes dictate --stdout                         # Output to stdout instead of clipboard

Text streams progressively as you speak (partial results every 2 seconds). By default it accumulates across pauses and writes the combined text to clipboard + daily note when dictation ends. Set [dictation] accumulate = false to keep the older per-pause behavior. Local whisper, no cloud.

Command palette (desktop app)

Press āŒ˜ā‡§K from anywhere on macOS to open a keyboard-first palette of every Minutes command. Start a recording, drop a note into the active session, jump to the latest meeting, search transcripts, or rename the meeting open in your assistant — all without leaving the keyboard. Backed by a single typed command registry in minutes-core, so visibility follows real backend state: stop-recording only appears while you're recording, mid-recording dictation rows are hidden, and the list re-fetches automatically when state changes.

Recents float to the top with their original payload intact (re-running a "Search transcripts: pricing" entry from history skips the retyping). The shortcut is enabled by default for both fresh installs and upgrades, with a one-time macOS notification on first launch announcing the binding. Disable it from the Settings overlay (Command Palette section) or by setting [palette] shortcut_enabled = false in ~/.config/minutes/config.toml. The Settings dropdown also offers āŒ˜ā‡§O and āŒ˜ā‡§U if āŒ˜ā‡§K collides with your IDE.

Try it without a mic

minutes demo --full                              # Seed 5 sample meetings (Snow Crash theme)
minutes demo --query                             # Cross-meeting intelligence demo
minutes demo --clean                             # Remove sample meetings

The interactive demo seeds interconnected meetings, then lets you pick a thread to explore. Two storylines, five meetings, zero setup.

System diagnostics

minutes health                                   # Check model, mic, calendar, disk
minutes demo                                     # Run a pipeline test (bundled audio, no mic)

Switching from Granola?

Import your meeting history into Minutes' conversation memory. Once imported, your meetings become searchable context for AI agents, feed the relationship graph for meeting prep, and surface action items and decision patterns across months of conversations.

minutes import granola --dry-run    # Preview what will be imported
minutes import granola              # Import all meetings to ~/meetings/

Reads from ~/.granola-archivist/output/. Meetings are converted to Minutes' markdown format with YAML frontmatter. Duplicates are skipped automatically. All your data stays local — no cloud, no $18/mo.

Want transcripts and AI summaries?

granola-to-minutes exports richer data using granola-cli, a community-built CLI tool (not affiliated with Granola Labs) that accesses Granola's internal API:

|  | minutes import granola | granola-to-minutes |
|---|---|---|
| Data source | Local export (~/.granola-archivist/output/) | Granola internal API via granola-cli |
| Notes & transcript | āœ“ | āœ“ |
| AI-enhanced summaries | — | āœ“ |
| Action items & decisions | — | āœ“ (extracted via Claude) |
| Speaker attribution | — | āœ“ (speaker_map in frontmatter) |
| Setup | Export from Granola desktop app | npm install -g granola-to-minutes |
| Works on free tier | āœ“ | āœ“ |
| API stability | N/A (local files) | Internal API — may change without notice |
npx granola-to-minutes export    # Export to ~/meetings/

Output format

Meetings save as markdown with structured YAML frontmatter:

---
title: Q2 Pricing Discussion with Alex
type: meeting
date: 2026-03-17T14:00:00
duration: 42m
context: "Discuss Q2 pricing, follow up on annual billing decision"
action_items:
  - assignee: mat
    task: Send pricing doc
    due: Friday
    status: open
  - assignee: sarah
    task: Review competitor grid
    due: March 21
    status: open
decisions:
  - text: Run pricing experiment at monthly billing with 10 advisors
    topic: pricing experiment
---

## Summary
- Alex proposed switching from annual billing to monthly billing
- Compromise: run experiment with 10 advisors at monthly billing

## Transcript
[SPEAKER_0 0:00] So let's talk about the pricing...
[SPEAKER_1 4:20] I think monthly billing makes more sense...

Works with Obsidian, grep, or any markdown tool. Action items and decisions are queryable via the CLI and MCP tools.
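Because the frontmatter is plain YAML, it is easy to query outside the CLI too. As a stdlib-only sketch (a real tool would use a proper YAML parser), here is one way to pull open action items out of a meeting file like the example above:

```python
import re

def open_action_items(markdown: str):
    """Pull action items with status: open out of YAML frontmatter.

    Minimal stdlib-only sketch; a real tool would use a YAML parser.
    """
    m = re.match(r"---\n(.*?)\n---", markdown, re.S)
    if not m:
        return []
    items, current, in_block = [], None, False
    for line in m.group(1).splitlines():
        if line.startswith("action_items:"):
            in_block = True
            continue
        if not in_block:
            continue
        if line and not line.startswith(" "):  # next top-level key ends the block
            break
        entry = line.strip()
        if entry.startswith("- "):             # each "- assignee: ..." starts an item
            current = {}
            items.append(current)
            entry = entry[2:]
        if current is not None and ":" in entry:
            key, _, val = entry.partition(":")
            current[key.strip()] = val.strip()
    return [i for i in items if i.get("status") == "open"]

doc = """---
title: Q2 Pricing Discussion with Alex
action_items:
  - assignee: mat
    task: Send pricing doc
    status: open
  - assignee: sarah
    task: Review competitor grid
    status: done
decisions:
  - text: Run pricing experiment
---
## Summary
"""
print([i["assignee"] for i in open_action_items(doc)])  # ['mat']
```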

Phone → desktop voice memo pipeline

No phone app needed. Record a thought on your phone, and it becomes searchable memory on your desktop. Claude even surfaces recent memos proactively — "you had a voice memo about pricing yesterday."

The watcher is folder-agnostic — it processes any audio file that lands in a watched folder. Pick the sync method that matches your setup:

| Phone | Desktop | Sync method |
|---|---|---|
| iPhone | Mac | iCloud Drive (built-in, ~5-30s) |
| iPhone | Windows/Linux | iCloud for Windows, or Dropbox/Google Drive |
| Android | Any | Dropbox, Google Drive, Syncthing, or any folder sync |
| Any | Any | AirDrop, USB, email — drop the file in the watched folder |

Setup (one-time)

Step 1: Create a sync folder — pick one that syncs between your phone and desktop:

# macOS + iPhone (iCloud Drive)
mkdir -p ~/Library/Mobile\ Documents/com~apple~CloudDocs/minutes-inbox

# Any platform (Dropbox)
mkdir -p ~/Dropbox/minutes-inbox

# Any platform (Google Drive)
mkdir -p ~/Google\ Drive/minutes-inbox

# Or just use the default inbox (manually drop files into it)
# ~/.minutes/inbox/  ← already exists

Step 2: Add the sync folder to your watch config in ~/.config/minutes/config.toml:

[watch]
paths = [
  "~/.minutes/inbox",
  # Add your sync folder here — uncomment one:
  # "~/Library/Mobile Documents/com~apple~CloudDocs/minutes-inbox",  # iCloud
  # "~/Dropbox/minutes-inbox",                                       # Dropbox
  # "~/Google Drive/minutes-inbox",                                  # Google Drive
]

Step 3: Set up your phone

iPhone (Apple Shortcuts)
  1. Open the Shortcuts app on your iPhone
  2. Tap + → Add Action → search "Save File"
  3. Set destination to iCloud Drive/minutes-inbox/ (or your Dropbox/Google Drive folder)
  4. Turn OFF "Ask Where to Save"
  5. Tap the (i) info button → enable Share Sheet → set to accept Audio
  6. Name it "Save to Minutes"

Now: Voice Memos → Share → Save to Minutes → done.

Android

Use any voice recorder app + your cloud sync of choice:

  • Dropbox: Record with any app → Share → Save to Dropbox → minutes-inbox/
  • Google Drive: Record → Share → Save to Drive → minutes-inbox/
  • Syncthing (no cloud): Set up a Syncthing share between phone and desktop pointing at your watched folder. Fully local, no cloud.
  • Tasker/Automate (power users): Auto-move new recordings from your recorder app to the sync folder.
Manual (any phone)

No sync setup needed — just get the audio file to your desktop's watched folder:

  • AirDrop (Apple): Share → AirDrop to Mac → move to ~/.minutes/inbox/
  • Email: Email the recording to yourself → save attachment to watched folder
  • USB: Transfer directly

Step 4: Start the watcher (or install as a background service):

minutes watch                  # Run in foreground
minutes service install        # Or install as background service (auto-starts on login, macOS)
minutes service restart        # Restart in place (e.g. after upgrading the binary)
minutes service status         # Check if it's running and which PID

Upgrading? macOS launchd holds the running watcher's binary in memory, so a fresh brew upgrade (or any other binary swap) leaves the old version running until you restart it. Run minutes service install again — it's idempotent and will reload launchd with the new binary path. Or use minutes service restart if the plist hasn't changed.

How it works

Phone (any)                   Desktop (any)
───────────                   ─────────────
Record voice memo        →    Cloud sync / manual transfer
Share to sync folder               │
                                   ā–¼
                            minutes watch detects file
                                   │
                            probe duration (<2 min?)
                              ā”œā”€ā”€ yes → memo pipeline (fast, no diarization)
                              └── no  → meeting pipeline (full)
                                   │
                            transcribe → save markdown
                                   │
                            ā”œā”€ā”€ event: VoiceMemoProcessed
                            ā”œā”€ā”€ daily note backlink
                            └── surfaces in next Claude session

Short voice memos (<2 minutes) automatically route through the fast memo pipeline — no diarization, no heavy summarization. Long recordings get the full meeting treatment. The threshold is configurable: dictation_threshold_secs = 120 in [watch].
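The routing rule itself is a one-liner. A sketch (not minutes' actual code) of how the watcher's split might be applied once the config has been parsed:

```python
def route(duration_secs, config):
    """Sketch of the watcher's memo/meeting split, driven by [watch] config."""
    threshold = config.get("watch", {}).get("dictation_threshold_secs", 120)
    return "memo" if duration_secs < threshold else "meeting"

cfg = {"watch": {"dictation_threshold_secs": 120}}
print(route(45, cfg), route(2400, cfg))  # memo meeting
```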

Optional: sidecar metadata

If your phone workflow also saves a .json file alongside the audio (same name, .json extension), Minutes reads it for enriched metadata:

{"device": "iPhone", "source": "voice-memos", "captured_at": "2026-03-24T08:41:00-07:00"}

This adds device and captured_at to the meeting's frontmatter. Works with any automation tool (Apple Shortcuts, Tasker, etc.).
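The sidecar convention is easy to reproduce from any automation. A sketch of the lookup side (hypothetical function, not minutes' internals): given `memo.m4a`, check for a sibling `memo.json` and lift out the two fields that land in frontmatter:

```python
import json
import tempfile
from pathlib import Path

def sidecar_metadata(audio_path):
    """If memo.m4a has a sibling memo.json, return its device/captured_at fields."""
    side = Path(audio_path).with_suffix(".json")
    if not side.exists():
        return {}
    data = json.loads(side.read_text())
    return {k: data[k] for k in ("device", "captured_at") if k in data}

with tempfile.TemporaryDirectory() as d:
    audio = Path(d) / "memo.m4a"
    audio.touch()
    (Path(d) / "memo.json").write_text(
        '{"device": "iPhone", "source": "voice-memos",'
        ' "captured_at": "2026-03-24T08:41:00-07:00"}'
    )
    meta = sidecar_metadata(str(audio))
print(meta["device"])  # iPhone
```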

Supports .m4a, .mp3, .wav, .ogg, .webm. Format conversion is automatic — uses ffmpeg when available (recommended for non-English audio), falls back to symphonia.
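The selection rule amounts to a PATH check. A sketch of that logic (illustrative only, not minutes' code), with the lookup injectable so it can be exercised without ffmpeg installed:

```python
import shutil

def pick_decoder(which=shutil.which):
    # Prefer ffmpeg when installed; otherwise fall back to the pure-Rust
    # symphonia decoder. Sketch of the selection rule described above.
    return "ffmpeg" if which("ffmpeg") else "symphonia"

print(pick_decoder(which=lambda _: None))               # symphonia
print(pick_decoder(which=lambda _: "/usr/bin/ffmpeg"))  # ffmpeg
```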

Vault sync (Obsidian / Logseq)

minutes vault setup              # Auto-detect vaults, configure sync
minutes vault status             # Check health
minutes vault sync               # Copy existing meetings to vault

Three strategies: symlink (zero-copy), copy (works with iCloud/Obsidian Sync), direct (write to vault). minutes vault setup detects your vault and recommends the right strategy automatically.

Claude integration

minutes is a native extension for the Claude ecosystem. No API keys needed — Claude summarizes your meetings when you ask, using your existing Claude subscription.

You: "Summarize my last meeting"
Claude: [calls get_meeting] → reads transcript → summarizes in conversation

You: "What did Alex say about pricing?"
Claude: [calls search_meetings] → finds matches → synthesizes answer

You: "Any open action items for me?"
Claude: [calls list_meetings] → scans frontmatter → reports open items

Any MCP client (Claude Code, Codex, Gemini CLI, Claude Desktop, or your own agent)

Minutes exposes a standard MCP server. Point any MCP-compatible client at it:

{
  "mcpServers": {
    "minutes": {
      "command": "npx",
      "args": ["minutes-mcp"]
    }
  }
}

26 tools: start_recording, stop_recording, get_status, list_processing_jobs, list_meetings, search_meetings, get_meeting, process_audio, add_note, consistency_report, get_person_profile, research_topic, qmd_collection_status, register_qmd_collection, start_dictation, stop_dictation, track_commitments, relationship_map, list_voices, confirm_speaker, get_meeting_insights, start_live_transcript, read_live_transcript, open_dashboard, ingest_meeting, knowledge_status

7 resources: minutes://meetings/recent, minutes://status, minutes://actions/open, minutes://events/recent, minutes://meetings/{slug}, minutes://ideas/recent, ui://minutes/dashboard
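On the wire, an MCP client drives these tools with JSON-RPC 2.0 messages over the server's stdio. A minimal sketch of the envelope a client would send to invoke search_meetings (the response shape depends on the server; this only shows the request side):

```python
import json

def mcp_request(method, params=None, req_id=1):
    # JSON-RPC 2.0 envelope, written newline-delimited to the server's stdin.
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

line = mcp_request(
    "tools/call",
    {"name": "search_meetings", "arguments": {"query": "pricing"}},
)
print(line)
```

In practice you rarely construct these by hand; the client config block above is all most setups need.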

Interactive dashboard (Claude Desktop): Tools render an inline interactive UI via MCP Apps — meeting list with filter/search, detail view with fullscreen + "Send to Claude" context injection, People tab with relationship cards and click-through profiles, consistency reports. Text-only clients see the same data as plain text.

Mistral Vibe

Add Minutes to your ~/.vibe/config.toml:

[[mcp_servers]]
name = "minutes"
transport = "stdio"
command = "npx"
args = ["minutes-mcp"]

All 26 tools are available in Vibe as minutes_* (e.g. minutes_start_recording, minutes_search_meetings).

Claude Code (Plugin)

Install the plugin from the marketplace:

# First-time install
claude plugin marketplace add silverstein/minutes
claude plugin install minutes
# Restart Claude Code to load skills, hooks, and the meeting-analyst agent

Upgrading? claude plugin marketplace add is a no-op when the marketplace is already on disk — it won't fetch new versions. To pick up new skills and hooks after a release, refresh the marketplace mirror first, then update the plugin:

claude plugin marketplace update minutes    # git pulls the local marketplace mirror
claude plugin update minutes@minutes        # installs the new version into the cache
# Restart Claude Code to apply

18 skills, 1 agent, 2 hooks:

ā”œā”€ā”€ Capture:      /minutes-record, note, list, recap, cleanup, verify, setup
ā”œā”€ā”€ Search:       /minutes-search
ā”œā”€ā”€ Lifecycle:    /minutes-brief, prep, debrief, weekly
ā”œā”€ā”€ Coaching:     /minutes-tag, mirror
ā”œā”€ā”€ Knowledge:    /minutes-ideas, lint, ingest
ā”œā”€ā”€ Intelligence: /minutes-graph
ā”œā”€ā”€ Agent:        meeting-analyst (cross-meeting intelligence)
└── Hooks:        SessionStart meeting briefings + PostToolUse recording alerts

Meeting lifecycle skills — inspired by gstack's interactive skill pattern:

/minutes-brief                      → fast one-pager (or fired automatically by hook 15 min before calls)
  ↓
/minutes-prep "call with Alex"      → deeper relationship brief + talking points + goal-setting
  ↓
minutes record → minutes stop       → hook alerts if decisions conflict with prior meetings
  ↓
/minutes-tag won|lost|stalled       → 5-second outcome label (unlocks mirror correlation)
  ↓
/minutes-debrief                    → "You wanted to resolve pricing. Did you?"
  ↓
/minutes-mirror                     → talk-time, hedging, what your winning meetings have in common
  ↓
/minutes-weekly                     → themes, decision arcs, stale items, Monday brief
  ↓
/minutes-graph "everyone who mentioned Stripe"  → cross-meeting entity queries

Minutes Desktop Assistant

The Tauri menu bar app includes a built-in AI Assistant window backed by the same local meeting artifacts. It runs as a singleton assistant session:

  • AI Assistant opens or focuses the persistent assistant window
  • Discuss with AI reuses that same assistant and switches its active meeting focus
  • Auto-updates from GitHub Releases with signed artifacts, never interrupting a recording

Cowork / Dispatch

MCP tools are automatically available in Cowork. From your phone via Dispatch: "Start recording" → Mac captures → Claude processes → summary on your phone.

Optional: automated summarization

# Use your existing Claude Code or Codex subscription (recommended)
[summarization]
engine = "agent"
agent_command = "claude"  # or "codex" for OpenAI Codex users

# Or use Mistral API (requires MISTRAL_API_KEY)
[summarization]
engine = "mistral"
mistral_model = "mistral-large-latest"

# Or use a free local LLM
[summarization]
engine = "ollama"
ollama_model = "llama3.2"

Optional: knowledge base integration

Maintain a living knowledge base from your conversations — person profiles, decision history, and a chronological log that compounds over time. Inspired by Karpathy's LLM Wiki pattern.

[knowledge]
enabled = true
path = "~/wiki"        # or your Obsidian vault, PARA system, etc.
adapter = "wiki"       # "wiki" (flat markdown), "para" (atomic facts), "obsidian" (wiki + [[links]])
engine = "none"        # "none" = structured YAML only (safest), "agent" = LLM extraction
min_confidence = "strong"

After each meeting, structured facts (decisions, action items, commitments) flow into person profiles automatically. Every fact carries provenance back to its source meeting.

minutes ingest --dry-run --all   # Preview what would be extracted
minutes ingest --all              # Backfill existing meetings
minutes ingest ~/meetings/call.md # Process a single meeting

Three output formats:

  • Wiki — people/{slug}.md with facts grouped by category
  • PARA — areas/people/{slug}/items.json with atomic facts (id, status, supersededBy)
  • Obsidian — Wiki format with [[wikilinks]] for cross-references

Safety: default engine = "none" extracts only from parsed YAML frontmatter. No LLM call, zero hallucination risk. Confidence thresholds filter speculative facts. Corrupt data is backed up, never silently destroyed.

Install

macOS

# Desktop app (menu bar, recording UI, AI assistant)
brew install --cask silverstein/tap/minutes

# CLI only (terminal recording, search, vault sync)
brew tap silverstein/tap
brew install minutes

# Or from source (requires Rust + cmake)
export CXXFLAGS="-I$(xcrun --show-sdk-path)/usr/include/c++/v1"
cargo install --path crates/cli

Windows

# Download pre-built binary from GitHub releases, or build from source:
# Requires: Rust, cmake, MSVC build tools, LLVM (for libclang)

# Install LLVM (needed by whisper-rs bindgen):
winget install LLVM.LLVM
[Environment]::SetEnvironmentVariable("LIBCLANG_PATH", "C:\Program Files\LLVM\bin", "User")
# Restart your terminal after setting LIBCLANG_PATH

# Full build (includes speaker diarization):
cargo install --path crates/cli

# Without speaker diarization:
cargo install --path crates/cli --no-default-features

Note: If diarization fails to compile on Windows, use --no-default-features. This is a known upstream issue with pyannote-rs's ONNX Runtime dependency. Everything except speaker labels works without it.

Linux

# Debian/Ubuntu — full dep list:
sudo apt-get install -y \
  build-essential cmake pkg-config \
  clang libclang-dev \
  libasound2-dev libpipewire-0.3-dev libspa-0.2-dev \
  ffmpeg

cargo install minutes-cli
# or, from a checkout:
cargo install --path crates/cli

Why each dep is needed:

  • build-essential, cmake — whisper.cpp build
  • clang, libclang-dev — bindgen (used by whisper-rs and pipewire-sys)
  • libasound2-dev — cpal's ALSA backend
  • libpipewire-0.3-dev, libspa-0.2-dev — cpal's PipeWire backend (compiled unconditionally on Linux)
  • ffmpeg — preferred audio decoder for .m4a/.mp3/.ogg (falls back to pure-Rust symphonia if absent)

Other distros (best-effort — Debian/Ubuntu is the validated path; please open an issue if any package name is wrong on your distro):

  • Fedora/RHEL: sudo dnf install -y gcc-c++ cmake pkgconf-pkg-config clang clang-devel alsa-lib-devel pipewire-devel ffmpeg-free
  • Arch: sudo pacman -S --needed base-devel cmake clang alsa-lib pipewire ffmpeg

GPU acceleration (optional)

Build with GPU support for significantly faster transcription:

| Backend | Platform | Feature flag | Prerequisites |
|---|---|---|---|
| Metal | macOS | metal | Xcode Command Line Tools |
| CoreML | macOS | coreml | Xcode Command Line Tools |
| CUDA | Windows/Linux | cuda | CUDA Toolkit |
| ROCm/HIP | Linux | hipblas | ROCm 6.1+ (hipcc, hipblas, rocblas) |
| Vulkan | Windows/Linux | vulkan | Vulkan SDK (+ vulkan-headers on Arch) |

Metal is the only backend that is exercised daily by the maintainer. CUDA, ROCm/HIP, and Vulkan should be considered experimental: they wire through to whisper.cpp via whisper-rs and are expected to work, but have not been validated in CI.

# Apple Metal (macOS)
cargo install --path crates/cli --features metal

# Apple CoreML (macOS Neural Engine)
cargo install --path crates/cli --features coreml

# NVIDIA GPU (Windows/Linux)
cargo install --path crates/cli --features cuda

# AMD GPU via ROCm (Linux — experimental)
cargo install --path crates/cli --features hipblas

# Vulkan (Windows/Linux — experimental)
cargo install --path crates/cli --features vulkan

Windows CUDA users: You may need to set environment variables before building:

$env:CUDA_PATH = "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4"
$env:CMAKE_CUDA_COMPILER = "$env:CUDA_PATH\bin\nvcc.exe"
$env:LIBCLANG_PATH = "C:\Program Files\LLVM\bin"
$env:CMAKE_GENERATOR = "NMake Makefiles"

The first CUDA build takes longer than usual (compiling GPU kernels) — this is a one-time cost.

ROCm/HIP users: The build expects ROCm installed at /opt/rocm. If your installation is elsewhere, set HIP_PATH before building:

export HIP_PATH=/path/to/rocm

Vulkan users: On Windows and macOS, set VULKAN_SDK to your SDK install root before building. On Linux, whisper-rs-sys links against the system libvulkan.

Setup (all platforms)

# Download whisper model (also downloads Silero VAD model for non-English audio)
minutes setup --model small   # Recommended (466MB, good accuracy)
minutes setup --model tiny    # Fastest (75MB, but misses quiet audio)
minutes setup --model base    # Middle ground (141MB)

# Install ffmpeg for best transcription quality (strongly recommended for non-English audio)
brew install ffmpeg           # macOS
# apt install ffmpeg          # Linux
# Without ffmpeg, symphonia handles m4a/mp3 decoding — works for English but may
# produce loops on non-English audio. ffmpeg is optional but recommended.

# Enable speaker diarization (optional, ~34MB ONNX models)
minutes setup --diarization

# Alternative: use Parakeet engine (opt-in, lower WER than Whisper)
# Requires parakeet.cpp installed: https://github.com/Frikallo/parakeet.cpp
minutes setup --parakeet                          # English model (tdt-ctc-110m, ~220MB)
minutes setup --parakeet --parakeet-model tdt-600m  # Multilingual (25 EU languages, ~1.2GB)

# Enroll your voice for automatic speaker identification
minutes enroll              # Records 10s of your voice
minutes voices              # View enrolled profiles

Speaker identification

Minutes maps anonymous speaker labels (SPEAKER_1, SPEAKER_2) to real names using four levels of confidence-aware attribution:

| Level | How | Confidence | Requires |
|---|---|---|---|
| 0 | Calendar attendees + identity.name → deterministic mapping for 1-on-1 meetings | Medium | Calendar access, [identity] name in config |
| 1 | LLM analyzes transcript context clues and maps speakers to attendees | Medium (capped) | Attendees known + summarization engine or agent CLI |
| 2 | Your enrolled voice is matched against speaker segments | High | minutes enroll (one-time 10s recording) |
| 3 | You confirm "SPEAKER_1 is Sarah" after a meeting | High | minutes confirm --meeting <path> |

Release History

- v0.13.2 (High, 4/18/2026): Adds the mic-mute toggle for passive attendance (webinars, all-hands, panels where you won't speak) and rolls up everything that shipped in the 0.13 series. If you installed Minutes before 0.13.0, this is the single version to move to; if you already updated to 0.13.0 or 0.13.1, you only get the mic-mute change on top of what you have.
- v0.12.2 (High, 4/15/2026): Live transcription that actually works on quiet audio. A 40-minute meeting produced 11 live-transcript fragments (mostly whisper placeholder tokens like `[typing]` and `[BLANK_AUDIO]`) while the post-recording batch pass cleanly recovered 2,259 words from the same WAV: same audio, two code paths, a 200x quality gap. One culprit was the recording sidecar's simple energy-threshold VAD.
- v0.12.0 (High, 4/14/2026): Parakeet is multilingual by default. If you choose Parakeet and don't pin a model explicitly, Minutes now defaults to `tdt-600m`, the multilingual v3 model, instead of the English-only `tdt-ctc-110m`: better support for bilingual meetings, better WER, a more honest default.
- v0.11.3 (High, 4/13/2026): Long meetings work the way they should. Long Parakeet runs are chunked before decode, and empty chunks are skipped instead of taking down the whole meeting or surfacing a vague parsing error.
- v0.11.2 (High, 4/10/2026): Your AI used to forget what happened after `minutes stop`. Now `minutes stop` nudges you toward the next valuable step, right after the meeting while decisions are fresh and follow-ups haven't slipped.
- plugin-v0.8.0 (High, 4/9/2026): Four new skills (brief, mirror, tag, graph) complete the meeting lifecycle, plus a proactive brief hook and bug fixes. Each skill ships with Python helper scripts that do the counting deterministically so an LLM doesn't have to guess.

