About this app
Handles provider rate limits gracefully with an Ollama fallback: when a rate limit is hit, the app notifies you and offers to switch to a local model, asking for confirmation before continuing code tasks.
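A minimal sketch of that fallback flow, assuming a dependency-injected design; the names here (`call_with_fallback`, the callback parameters) are illustrative assumptions, not the app's actual API:

```python
class RateLimitError(Exception):
    """Raised when the cloud provider rejects a request for quota reasons."""


def call_with_fallback(prompt, cloud_call, local_call, notify, confirm,
                       is_code_task=False):
    """Try the cloud model first; on a rate limit, notify the user and
    fall back to a local (e.g. Ollama-served) model. For code tasks, the
    switch happens only after the user confirms."""
    try:
        return cloud_call(prompt)
    except RateLimitError:
        notify("Cloud rate limit hit; a local model is available.")
        if is_code_task and not confirm("Switch to the local model for this code task?"):
            raise  # user declined: surface the rate limit instead of switching
        return local_call(prompt)


# Usage with stubbed backends (no network calls):
def cloud(prompt):
    raise RateLimitError()

def local(prompt):
    return "local: " + prompt

notices = []
result = call_with_fallback("fix bug", cloud, local,
                            notify=notices.append,
                            confirm=lambda q: True,
                            is_code_task=True)
```

Injecting `notify` and `confirm` as callbacks keeps the rate-limit policy testable and independent of any particular UI.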
🔌
Requires ClawBox Hardware
Run LLM Supervisor locally on your own AI hardware: private, fast, no cloud.