About Axon

AI that works for you, not the other way around.

AI tools today are cloud-locked, memory-less, and vendor-dependent. Every conversation starts from scratch. Your data lives on someone else's servers. And when the API changes or the price goes up, you have no recourse.

Axon is the alternative. Self-hosted means your data stays on your machines. Open-source means you own the code. Local LLM support means you can own the inference too.

Axon is not a chatbot. It's a command center — an orchestrator that coordinates specialist advisors around your actual life and work. It remembers what matters, delegates each request to the right expert, and gets smarter as its memory grows.

What makes Axon different

Memory that persists

Neural memory trees store knowledge as Obsidian-compatible markdown files in local vaults. Your AI actually remembers — across conversations, across sessions, across time.
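For illustration, a note in such a vault could be an ordinary Obsidian-style markdown file with frontmatter and wiki-links. The file contents below are invented, not Axon's actual schema:

```markdown
---
created: 2025-01-12
topic: project-atlas
---
# Project Atlas kickoff

- Deadline moved to [[2025-Q2]].
- [[Dana]] owns the design review.
```

Because the notes are plain markdown, you can browse and edit the same vault directly in Obsidian.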

Specialists, not generalists

Organization templates give you curated advisor teams — a startup board, a family organizer, career coaches — each with their own persona, vault, and expertise. Axon routes your request to the right specialist.
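As a rough sketch of the routing idea: Axon's real routing is model-driven, but a toy keyword matcher shows the shape of "one orchestrator, many specialists." Every name and keyword set below is invented for illustration:

```python
# Hypothetical sketch: pick the specialist advisor whose expertise best
# matches a request. Not Axon's actual routing logic.
from dataclasses import dataclass, field


@dataclass
class Advisor:
    name: str
    keywords: set[str] = field(default_factory=set)

    def score(self, request: str) -> int:
        """Count how many of this advisor's keywords appear in the request."""
        return len(set(request.lower().split()) & self.keywords)


def route(request: str, advisors: list[Advisor]) -> Advisor:
    """Delegate to the advisor with the highest keyword overlap."""
    return max(advisors, key=lambda a: a.score(request))


board = [
    Advisor("cfo", {"budget", "runway", "pricing"}),
    Advisor("coach", {"career", "interview", "resume"}),
]
print(route("how should we set pricing for the new plan", board).name)
```

A real orchestrator would ask a model to classify the request instead of counting keywords, but the contract is the same: one entry point, the right specialist answers.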

Voice-first interface

Talk, don't type. Whisper handles speech-to-text, Piper handles text-to-speech, and each advisor can have their own voice. Conversation feels natural.

Zero cloud dependency

Run entirely on your hardware. Ollama handles local inference for memory operations, and you can use any model provider — or none at all — for reasoning. No API keys required.
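For context, Ollama serves a local HTTP API (by default on port 11434), so a memory operation is just a POST to `http://localhost:11434/api/generate` with a body like the one below. The model name and prompt are placeholders, not Axon's actual calls:

```json
{
  "model": "llama3.2",
  "prompt": "Summarize this conversation into a memory note.",
  "stream": false
}
```

Everything in that round trip stays on your machine — no key, no account, no network egress.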

Open source, by design

Axon is licensed under the GNU Affero General Public License (AGPL). This isn't MIT — and that's intentional. AGPL ensures that anyone who modifies Axon and offers it as a service must share their changes. The community benefits from every improvement.

Contributions are welcome. Whether it's a bug fix, a new advisor template, or a documentation improvement — every PR makes Axon better for everyone.