# Concepts
Deep dives into how ark-operator works under the hood — its architecture and runtime primitives, including agents, LLM providers, the task queue, MCP servers, observability, and more.
| Page | Description |
|---|---|
| How It Works | The reconcile loop, agent pods, task flow, and mental model |
| Providers | Anthropic, OpenAI, Ollama, and custom LLM providers |
| Task Queue | Redis Streams internals and pluggable backends |
| MCP Servers | Model Context Protocol tool server integration |
| Memory | In-context, Redis, and vector-store memory backends |
| Observability | OTel traces, metrics, and audit events |
| Scaling | Manual scaling, `kubectl scale`, and daily-token-budget scale-to-zero |
| Semantic Health Checks | LLM output validation via `/readyz` |