mm-ctx: fast, multimodal context for agents.
LLM-based agents handle text incredibly well, but struggle with images, videos, and PDFs that contain visual content. mm-ctx gives your CLI agent multimodal skills.
Try it interactively in Spaces: vlm-run/mm-ctx
Readme: https://vlm-run.github.io/mm/
PyPI: https://pypi.org/project/mm-ctx
SKILL.md: https://github.com/vlm-run/skills/blob/main/skills/mm-cli-skill/SKILL.md
mm-ctx is meant to feel familiar: the UNIX tools we already love (find/cat/grep/wc), rebuilt for file types LLMs can't read natively and designed to work with agents via the CLI.
- mm grep "invoice #1234" ~/Downloads searches across PDFs and returns line-numbered matches
- mm cat <document>.pdf returns a metadata description of the file
- mm cat <photo>.jpg returns a caption of the photo
- mm cat <video>.mp4 returns a caption of the video
A few things we obsessed over:
- Speed: Rust core for the hot paths
- Local-first, BYO model: works with any OpenAI-compatible endpoint (Ollama, vLLM/SGLang, LM Studio) and any multimodal LLM (Gemma4, Qwen3.5, GLM-4.6V).
- Composable: stdin + structured outputs
- Agent-ready: drops into any agent via mm-cli-skills: Claude Code, Codex, Gemini CLI, OpenClaw.
We'd love to hear your feedback, especially on the CLI and on which file types and workflows you'd like to see next.