Web API
Stateless service with network-based wake and health check proxy.
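This service uses network-based wake: a hibernated replica resumes when a new client connection arrives. As a sketch, any request to the service should trigger a resume (the Service name `api-service` is an assumption; the manifest below only defines the Deployment):

```shell
# Any new TCP connection to the service should wake a hibernated replica.
# `api-service` as a Service DNS name is hypothetical; adjust to your Service.
kubectl run -it --rm curl --image=curlimages/curl --restart=Never -- \
  curl -s http://api-service:8080/
```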
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
spec:
  replicas: 10
  selector:
    matchLabels:
      app: api-service
  template:
    metadata:
      labels:
        app: api-service
      annotations:
        architect.loopholelabs.io/managed-containers: '["api"]'
        architect.loopholelabs.io/scaledown-durations: '{"api":"30s"}'
        architect.loopholelabs.io/network-monitor: '{"api":"connections"}'
        architect.loopholelabs.io/health-check-proxy: '{"mappings":[{"containerName":"api","appPort":8080,"shadowPort":9080}]}'
    spec:
      runtimeClassName: runc-architect
      containers:
        - name: api
          image: mycompany/api:v2.1
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"
            limits:
              memory: "2Gi"
              cpu: "1000m"

Microservices with Sidecar
Only the main container hibernates. Sidecars are excluded from the managed list.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 15
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
      annotations:
        architect.loopholelabs.io/managed-containers: '["order-service"]'
        architect.loopholelabs.io/scaledown-durations: '{"order-service":"60s"}'
    spec:
      runtimeClassName: runc-architect
      containers:
        - name: order-service
          image: mycompany/order-service:v1.5
          ports:
            - containerPort: 8080
        - name: logging-agent
          image: fluentd:latest

Development Environment
Short idle timeout for per-developer pods that are mostly idle.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-environment
  namespace: development
spec:
  replicas: 50
  selector:
    matchLabels:
      app: dev-environment
  template:
    metadata:
      labels:
        app: dev-environment
      annotations:
        architect.loopholelabs.io/managed-containers: '["dev-container"]'
        architect.loopholelabs.io/scaledown-durations: '{"dev-container":"5s"}'
    spec:
      runtimeClassName: runc-architect
      containers:
        - name: dev-container
          image: mycompany/dev-env:latest
          resources:
            requests:
              memory: "4Gi"
              cpu: "2000m"

gVisor with PersistentCheckpoint
runsc-architect creates checkpoints explicitly via PersistentCheckpoint
CRDs while the pod keeps running. Useful for golden images, backup snapshots,
or pre-migration checkpoints.
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: valkey-cache
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: valkey
  template:
    metadata:
      labels:
        app: valkey
      annotations:
        architect.loopholelabs.io/managed-pod: "true"
        # Restore from PersistentCheckpoint (same namespace; use namespace/name for cross-namespace)
        architect.loopholelabs.io/start-from-persistent-checkpoint: "persistent-checkpoint-demo"
    spec:
      runtimeClassName: runsc-architect
      containers:
        - name: valkey
          image: valkey/valkey:latest
          ports:
            - containerPort: 6379
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"

Creating a Checkpoint
apiVersion: architect.loopholelabs.io/v1
kind: PersistentCheckpoint
metadata:
  name: persistent-checkpoint-demo
  namespace: default
spec:
  podName: valkey-cache-7d7f78c4f7-5f6ss

kubectl apply -f persistentcheckpoint.yaml
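`spec.podName` must name a concrete running pod, not the Deployment. One way to look it up (assuming the `app: valkey` label from the Deployment above):

```shell
# Print the name of the first pod matching the label selector.
kubectl get pods -l app=valkey -o jsonpath='{.items[0].metadata.name}'
```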
# Verify
kubectl get persistentcheckpoint persistent-checkpoint-demo \
  -o jsonpath='{.spec.checkpoint}'

Restoring from a Checkpoint
New deployments can restore from an existing checkpoint (same namespace; use
namespace/name for cross-namespace):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: valkey-from-checkpoint
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: valkey-restored
  template:
    metadata:
      labels:
        app: valkey-restored
      annotations:
        architect.loopholelabs.io/managed-pod: "true"
        # Restore from PersistentCheckpoint (same namespace; use namespace/name for cross-namespace)
        architect.loopholelabs.io/start-from-persistent-checkpoint: "persistent-checkpoint-demo"
    spec:
      runtimeClassName: runsc-architect
      containers:
        - name: valkey
          image: valkey/valkey:latest
          ports:
            - containerPort: 6379

If the referenced PersistentCheckpoint doesn't exist or has no data, the pod starts fresh.
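For a cross-namespace restore, the annotation takes the `namespace/name` form, for example (assuming the checkpoint lives in the `default` namespace):

```yaml
annotations:
  architect.loopholelabs.io/managed-pod: "true"
  architect.loopholelabs.io/start-from-persistent-checkpoint: "default/persistent-checkpoint-demo"
```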
Deleting a Checkpoint
kubectl delete persistentcheckpoint persistent-checkpoint-demo

Checkpoint files are cleaned up automatically. PersistentCheckpoint CRDs persist until explicitly deleted (unlike Checkpoint CRDs, which are consumed by new pods).
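Because PersistentCheckpoint CRDs persist until deleted, they can be listed at any time to see what is available to restore from:

```shell
# List PersistentCheckpoint resources across all namespaces.
kubectl get persistentcheckpoint -A
```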