Getting Started
Requirements
- Kubernetes 1.31+
- kubectl configured against your cluster
- Helm 3+ (or kubectl apply for raw manifests)
- An LLM API key — OpenAI, Anthropic, or Ollama running in-cluster
1. Deploy Redis
ark-operator uses a pluggable task queue between the operator and agent pods. Redis is the default backend.
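The handoff can be pictured as a simple work queue: the operator enqueues one task per pipeline step, and agent pods pop tasks and execute them. The sketch below simulates that contract in Python with an in-memory deque; the task shape and function names are illustrative assumptions, not ark-operator's actual wire format (a real deployment pushes to the Redis backend configured by taskQueueURL).

```python
import json
from collections import deque

# In-memory stand-in for the shared queue between operator and agents.
# (A Redis backend would use LPUSH/BRPOP against the same logical list.)
queue = deque()

def enqueue_task(team, step, payload):
    """Operator side: serialize a pipeline step as a task and push it."""
    queue.appendleft(json.dumps({"team": team, "step": step, "input": payload}))

def pop_task():
    """Agent side: pop the oldest task (a real backend would block here)."""
    return json.loads(queue.pop()) if queue else None

enqueue_task("research-team", "research", {"topic": "controllers"})
task = pop_task()
print(task["step"])  # research
```

Because the queue is the only coupling point, the operator and agent pods scale independently — any agent can pick up any pending step.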
```shell
kubectl apply -f https://raw.githubusercontent.com/arkonis-dev/ark-operator/v0.11.1/config/prereqs/redis.yaml
kubectl rollout status statefulset/redis -n ark-system
```

Production note: The bundled redis.yaml is for evaluation only — no auth, no persistence, no HA. For production, point the operator at a managed Redis service (ElastiCache, Upstash, Redis Cloud) using the taskQueueURL Helm value:

```
rediss://:mypassword@my-redis.cache.amazonaws.com:6379/0
```

The rediss:// scheme enables TLS. See Task Queue for the full connection string format.
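The connection string follows the standard Redis URL scheme, so you can sanity-check one before handing it to the chart. A small Python sketch, using the example URL from above (not a real endpoint):

```python
from urllib.parse import urlparse

url = "rediss://:mypassword@my-redis.cache.amazonaws.com:6379/0"
parts = urlparse(url)

use_tls = parts.scheme == "rediss"       # rediss:// means TLS; redis:// is plaintext
host, port = parts.hostname, parts.port
password = parts.password                # empty username before the colon is allowed
db = int(parts.path.lstrip("/") or 0)    # trailing /0 selects logical database 0

print(use_tls, host, port, db)  # True my-redis.cache.amazonaws.com 6379 0
```

If the parse yields no port or the scheme is neither redis nor rediss, the URL will likely be rejected by the operator as well.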
2. Create your API keys Secret
Do not pass API keys via --set flags. They end up in shell history, the process table, and Helm release history in plaintext.
Create a Kubernetes Secret directly instead:
```shell
# OpenAI
kubectl create secret generic ark-api-keys \
  --from-literal=OPENAI_API_KEY=sk-... \
  --namespace ark-system

# Anthropic
kubectl create secret generic ark-api-keys \
  --from-literal=ANTHROPIC_API_KEY=sk-ant-... \
  --namespace ark-system

# Both providers
kubectl create secret generic ark-api-keys \
  --from-literal=OPENAI_API_KEY=sk-... \
  --from-literal=ANTHROPIC_API_KEY=sk-ant-... \
  --namespace ark-system
```
Reference this Secret in the Helm install with --set apiKeys.existingSecret=ark-api-keys. The chart will never store the key value itself.
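Under the hood, Kubernetes stores Secret values base64-encoded in the object's `data` field (encoding, not encryption — anyone with read access to the Secret can decode them). This sketch shows how the stored form relates to the literal values; the key value is a placeholder, not a real credential:

```python
import base64

def to_secret_data(literals):
    """Base64-encode values the way `kubectl create secret generic` does."""
    return {k: base64.b64encode(v.encode()).decode() for k, v in literals.items()}

# Placeholder value — a real key would be the sk-... literal from the commands above.
data = to_secret_data({"OPENAI_API_KEY": "sk-placeholder"})
print(data["OPENAI_API_KEY"])

# Round-trips: decoding .data recovers the original literal.
assert base64.b64decode(data["OPENAI_API_KEY"]).decode() == "sk-placeholder"
```

This is also why restricting RBAC read access on Secrets matters even though the values look scrambled in `kubectl get secret -o yaml` output.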
Using Ollama? No API key is needed. Skip this step and pass provider config directly via agentExtraEnv in the install command (Ollama exposes an OpenAI-compatible API, hence the openai provider and the dummy key value):

```shell
helm install ark-operator arkonis/ark-operator \
  --set taskQueueURL=redis.ark-system.svc.cluster.local:6379 \
  --set "agentExtraEnv[0].name=AGENT_PROVIDER" \
  --set "agentExtraEnv[0].value=openai" \
  --set "agentExtraEnv[1].name=OPENAI_BASE_URL" \
  --set "agentExtraEnv[1].value=http://ollama.<namespace>.svc.cluster.local:11434/v1" \
  --set "agentExtraEnv[2].name=OPENAI_API_KEY" \
  --set "agentExtraEnv[2].value=ollama"
```
3. Install the operator
Helm (recommended)
```shell
helm repo add arkonis https://charts.arkonis.dev
helm repo update

helm install ark-operator arkonis/ark-operator \
  --version 0.1.4 \
  --namespace ark-system \
  --create-namespace \
  --set taskQueueURL=redis.ark-system.svc.cluster.local:6379 \
  --set apiKeys.existingSecret=ark-api-keys
```

kubectl apply

```shell
kubectl apply -f https://raw.githubusercontent.com/arkonis-dev/ark-operator/v0.11.1/config/install.yaml
```

Configure API keys by injecting the Secret into the operator's environment:
```shell
kubectl set env deployment/ark-operator \
  --from=secret/ark-api-keys \
  --namespace ark-system
```
Verify:
```shell
kubectl rollout status deployment/ark-operator -n ark-system
# deployment "ark-operator" successfully rolled out
```
4. Deploy your first team
Keep agent workloads in a dedicated namespace — don’t put teams in ark-system:
```shell
kubectl create namespace ai-workloads
```
Save as my-team.yaml:
```yaml
apiVersion: arkonis.dev/v1alpha1
kind: ArkTeam
metadata:
  name: research-team
  namespace: ai-workloads
spec:
  output: "{{ .steps.summarize.output }}"
  roles:
    - name: research
      model: gpt-4o-mini
      systemPrompt: "You are a research assistant. Answer thoroughly."
    - name: summarize
      model: gpt-4o-mini
      systemPrompt: "Summarize the input as 3 concise bullet points."
  pipeline:
    - role: research
      inputs:
        prompt: "{{ .input.topic }}"
    - role: summarize
      dependsOn: [research]
      inputs:
        content: "{{ .steps.research.output }}"
```
```shell
kubectl apply -f my-team.yaml
kubectl get arkteam research-team -n ai-workloads
# NAME            PHASE   LAST RUN   AGE
# research-team   Ready   —          15s
```
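The pipeline's dependsOn edges determine execution order: a step runs only after every step it depends on has produced output. A hedged sketch of that scheduling logic, using Python's standard-library topological sorter (a simplification, not the operator's actual reconciler code):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Mirror of the my-team.yaml pipeline: step -> steps it depends on.
pipeline = {
    "research": [],            # no dependencies, runs first
    "summarize": ["research"], # waits for research's output
}

order = list(TopologicalSorter(pipeline).static_order())
print(order)  # ['research', 'summarize']
```

A cycle in dependsOn (two steps depending on each other) has no valid order — TopologicalSorter raises CycleError, and you should expect the operator to reject such a manifest too.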
5. Trigger a run
```shell
ark trigger research-team -n ai-workloads \
  --input '{"topic": "how Kubernetes controllers work"}'
```
Watch it complete:
```shell
kubectl get arkrun -n ai-workloads -l arkonis.dev/team=research-team -w
# NAME                       TEAM            PHASE       TOKENS   STARTED
# research-team-run-a1b2c3   research-team   Succeeded   1840     18s
```
Read the output:
```shell
RUN=$(kubectl get arkteam research-team -n ai-workloads \
  -o jsonpath='{.status.lastRunName}')
kubectl get arkrun "$RUN" -n ai-workloads \
  -o jsonpath='{.status.output}'
```
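The jsonpath expressions above are plain field traversals. If you script against the API instead of shelling out to kubectl, the same lookup is a dict walk; this sketch assumes the status shape shown in the commands above:

```python
def jsonpath_get(obj, path):
    """Resolve a dotted path like '.status.lastRunName' against a decoded object."""
    for key in path.strip(".").split("."):
        obj = obj[key]
    return obj

# Shape as returned by `kubectl get arkteam ... -o json` (fields assumed from above).
team = {"status": {"lastRunName": "research-team-run-a1b2c3"}}
print(jsonpath_get(team, ".status.lastRunName"))  # research-team-run-a1b2c3
```

Full JSONPath supports filters and wildcards this sketch does not; for simple status reads like these, a dotted walk is all that is happening.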
Teardown
```shell
helm uninstall ark-operator -n ark-system
kubectl delete namespace ark-system ai-workloads
```

Note: if the chart ships CRDs (ArkTeam, ArkRun), helm uninstall leaves them in place by design; delete them manually for a fully clean slate.
Next steps
- Core Concepts — how providers, task queues, and observability work
- Building a Pipeline — conditional steps, loops, typed outputs
- Local Development — iterate without a cluster using ark run
- Security — RBAC footprint, network policies, and API key management
- Helm Values — all chart configuration options