# Kubernetes Deployment
This guide covers deploying HeliosProxy on Kubernetes, including the Deployment pattern, sidecar pattern, ConfigMap, Service, health probes, and Prometheus ServiceMonitor.
## Deployment Patterns
HeliosProxy supports two Kubernetes deployment patterns:
| Pattern | Description | Best For |
|---|---|---|
| Deployment | Dedicated proxy pods behind a Service. Applications connect via the Service. | Shared proxy for multiple applications, centralized management. |
| Sidecar | Proxy container co-located in each application Pod. Applications connect via localhost. | Lowest latency, per-application isolation. |
## ConfigMap
The proxy configuration is stored in a ConfigMap and mounted as a volume into the proxy container.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: heliosproxy-config
  namespace: default
  labels:
    app.kubernetes.io/name: heliosproxy
    app.kubernetes.io/component: config
data:
  config.toml: |
    listen_address = "0.0.0.0:6432"
    admin_address = "0.0.0.0:9090"
    tr_enabled = true
    tr_mode = "session"
    write_timeout_secs = 30

    [pool_mode]
    mode = "transaction"
    max_pool_size = 100
    min_idle = 10
    idle_timeout_secs = 600
    max_lifetime_secs = 3600
    acquire_timeout_secs = 5
    reset_query = "DISCARD ALL"
    prepared_statement_mode = "track"

    [pool]
    min_connections = 5
    max_connections = 100
    idle_timeout_secs = 300
    max_lifetime_secs = 1800
    acquire_timeout_secs = 30
    test_on_acquire = true

    [load_balancer]
    read_strategy = "least_connections"
    read_write_split = true
    latency_threshold_ms = 50

    [health]
    check_interval_secs = 5
    check_timeout_secs = 3
    failure_threshold = 3
    success_threshold = 2
    check_query = "SELECT 1"

    [[nodes]]
    host = "heliosdb-primary.default.svc.cluster.local"
    port = 5432
    http_port = 8080
    role = "primary"
    weight = 100
    name = "primary"

    [[nodes]]
    host = "heliosdb-standby-0.default.svc.cluster.local"
    port = 5432
    http_port = 8080
    role = "standby"
    weight = 100
    name = "standby-0"

    [[nodes]]
    host = "heliosdb-standby-1.default.svc.cluster.local"
    port = 5432
    http_port = 8080
    role = "standby"
    weight = 100
    name = "standby-1"
```

## Deployment Pattern
### Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heliosproxy
  namespace: default
  labels:
    app.kubernetes.io/name: heliosproxy
    app.kubernetes.io/component: proxy
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: heliosproxy
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: heliosproxy
        app.kubernetes.io/component: proxy
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9090"
        prometheus.io/path: "/metrics/prometheus"
    spec:
      serviceAccountName: heliosproxy
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
        fsGroup: 65534
      containers:
        - name: heliosproxy
          image: heliosdb/proxy:latest
          args:
            - "--config"
            - "/etc/heliosproxy/config.toml"
          ports:
            - name: postgres
              containerPort: 6432
              protocol: TCP
            - name: admin
              containerPort: 9090
              protocol: TCP
          env:
            - name: RUST_LOG
              value: "heliosdb_proxy=info"
          volumeMounts:
            - name: config
              mountPath: /etc/heliosproxy
              readOnly: true
          resources:
            requests:
              cpu: 250m
              memory: 128Mi
            limits:
              cpu: "2"
              memory: 512Mi
          livenessProbe:
            httpGet:
              path: /health/live
              port: admin
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 3
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health/ready
              port: admin
            initialDelaySeconds: 2
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 3
          startupProbe:
            httpGet:
              path: /health
              port: admin
            initialDelaySeconds: 1
            periodSeconds: 2
            timeoutSeconds: 3
            failureThreshold: 15
      volumes:
        - name: config
          configMap:
            name: heliosproxy-config
      terminationGracePeriodSeconds: 30
```

### Service
```yaml
apiVersion: v1
kind: Service
metadata:
  name: heliosproxy
  namespace: default
  labels:
    app.kubernetes.io/name: heliosproxy
    app.kubernetes.io/component: proxy
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: heliosproxy
  ports:
    - name: postgres
      port: 6432
      targetPort: postgres
      protocol: TCP
    - name: admin
      port: 9090
      targetPort: admin
      protocol: TCP
```

### ServiceAccount
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heliosproxy
  namespace: default
  labels:
    app.kubernetes.io/name: heliosproxy
```

### Application Connection
Applications connect to the proxy via the Kubernetes Service DNS name:
```yaml
# In your application Deployment
env:
  - name: DATABASE_URL
    value: "postgres://appuser:password@heliosproxy.default.svc.cluster.local:6432/appdb"
```

## Sidecar Pattern
Deploy HeliosProxy as a sidecar container within each application Pod. The application connects to `localhost:6432`, eliminating the network hop to a shared proxy Service.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9090"
        prometheus.io/path: "/metrics/prometheus"
    spec:
      containers:
        # ── Application Container ─────────────────────────────
        - name: app
          image: myapp:latest
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              value: "postgres://appuser:password@localhost:6432/appdb"
          resources:
            requests:
              cpu: 500m
              memory: 256Mi

        # ── HeliosProxy Sidecar ───────────────────────────────
        - name: heliosproxy
          image: heliosdb/proxy:latest
          args:
            - "--config"
            - "/etc/heliosproxy/config.toml"
          ports:
            - name: postgres
              containerPort: 6432
            - name: admin
              containerPort: 9090
          env:
            - name: RUST_LOG
              value: "heliosdb_proxy=info"
          volumeMounts:
            - name: proxy-config
              mountPath: /etc/heliosproxy
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 64Mi
            limits:
              cpu: 500m
              memory: 256Mi
          livenessProbe:
            httpGet:
              path: /health/live
              port: admin
            initialDelaySeconds: 3
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health/ready
              port: admin
            initialDelaySeconds: 2
            periodSeconds: 5

      volumes:
        - name: proxy-config
          configMap:
            name: heliosproxy-config
```

### Sidecar Advantages
- Zero network latency between application and proxy (localhost).
- Per-application connection pool isolation.
- Application and proxy scale together.
- No shared proxy bottleneck.
### Sidecar Considerations
- Each Pod runs its own proxy instance, increasing resource usage.
- Configuration changes require a Pod restart (unless using a ConfigMap watcher).
- Connection pool sizes should be smaller (per-Pod rather than shared).
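Following the last point, a sidecar's `[pool]` section can be sized well below the shared-proxy values in the ConfigMap above. The numbers below are only illustrative, not recommended defaults; size them against your backend's `max_connections` budget:

```toml
# Per-Pod sidecar pool sizing (illustrative values).
# With 3 application replicas, the worst case is
# 3 Pods x max_connections = 60 backend connections,
# versus the single shared pool of 100 above.
[pool]
min_connections = 2
max_connections = 20
idle_timeout_secs = 300
max_lifetime_secs = 1800
acquire_timeout_secs = 30
test_on_acquire = true
```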
## Health Probes
HeliosProxy provides three health endpoints for Kubernetes probes:
| Probe | Endpoint | Purpose |
|---|---|---|
| Liveness | `GET /health/live` | Restart the Pod if the proxy process is unresponsive. |
| Readiness | `GET /health/ready` | Remove from Service endpoints if no healthy backends are available. |
| Startup | `GET /health` | Allow extra time for initial backend connection establishment. |
### Probe Configuration
```yaml
livenessProbe:
  httpGet:
    path: /health/live
    port: 9090
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 3
  failureThreshold: 3

readinessProbe:
  httpGet:
    path: /health/ready
    port: 9090
  initialDelaySeconds: 2
  periodSeconds: 5
  timeoutSeconds: 3
  failureThreshold: 3

startupProbe:
  httpGet:
    path: /health
    port: 9090
  initialDelaySeconds: 1
  periodSeconds: 2
  timeoutSeconds: 3
  failureThreshold: 15
```

The startup probe allows up to 30 seconds (15 failures at 2-second intervals) for the proxy to establish backend connections before the liveness probe begins.
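If backends are slow to come up (for example, a cold standby or a cross-region replica), widen the startup window by raising `failureThreshold` rather than `initialDelaySeconds`, so a fast start is still detected quickly. A sketch assuming a 60-second budget:

```yaml
startupProbe:
  httpGet:
    path: /health
    port: 9090
  periodSeconds: 2
  timeoutSeconds: 3
  failureThreshold: 30   # 30 failures x 2s = up to 60 seconds before liveness takes over
```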
## Prometheus ServiceMonitor
For Prometheus Operator-based monitoring, create a ServiceMonitor to scrape proxy metrics.
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: heliosproxy
  namespace: default
  labels:
    app.kubernetes.io/name: heliosproxy
    release: prometheus
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: heliosproxy
  endpoints:
    - port: admin
      path: /metrics/prometheus
      interval: 15s
      scrapeTimeout: 10s
```

### Available Metrics
| Metric | Type | Description |
|---|---|---|
| `heliosdb_proxy_connections_total` | Counter | Total client connections accepted. |
| `heliosdb_proxy_connections_closed` | Counter | Total client connections closed. |
| `heliosdb_proxy_queries_total` | Counter | Total queries processed. |
| `heliosdb_proxy_bytes_received_total` | Counter | Total bytes received from clients. |
| `heliosdb_proxy_bytes_sent_total` | Counter | Total bytes sent to clients. |
| `heliosdb_proxy_failovers_total` | Counter | Total failover events. |
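Assuming a Prometheus server is scraping the endpoint above, the counters translate into rates and alerts with queries along these lines (illustrative PromQL, not shipped with the project):

```promql
# Query throughput per proxy Pod over the last 5 minutes
sum by (pod) (rate(heliosdb_proxy_queries_total[5m]))

# Connection churn: accept rate vs. close rate
rate(heliosdb_proxy_connections_total[5m])
rate(heliosdb_proxy_connections_closed[5m])

# Alert-worthy: any failover event in the last 10 minutes
increase(heliosdb_proxy_failovers_total[10m]) > 0
```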
### Grafana Dashboard
A sample Grafana dashboard JSON can be imported from the project repository. Key panels include:
- Active connections over time
- Query throughput (read vs. write)
- Connection pool utilization per node
- Failover event markers
- Node health status
## Pod Disruption Budget
Ensure at least one proxy instance remains available during node maintenance or cluster upgrades.
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: heliosproxy-pdb
  namespace: default
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: heliosproxy
```

## Horizontal Pod Autoscaler
Scale the proxy deployment based on CPU utilization.
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: heliosproxy-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: heliosproxy
  minReplicas: 2
  maxReplicas: 8
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

## Network Policy
Restrict network access to the proxy:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: heliosproxy-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: heliosproxy
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Allow PostgreSQL connections from application Pods
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/part-of: myapp
      ports:
        - port: 6432
          protocol: TCP
    # Allow admin/metrics access from monitoring namespace
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring
      ports:
        - port: 9090
          protocol: TCP
  egress:
    # Allow connections to database backends
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: heliosdb
      ports:
        - port: 5432
          protocol: TCP
        - port: 8080
          protocol: TCP
    # Allow DNS resolution
    - to: []
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
```

## TLS with cert-manager
Generate TLS certificates for client-facing connections using cert-manager.
```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: heliosproxy-tls
  namespace: default
spec:
  secretName: heliosproxy-tls-secret
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - heliosproxy.default.svc.cluster.local
    - heliosproxy.example.com
```

Mount the TLS secret into the proxy container and reference it in the configuration:
```yaml
# In the Deployment spec
volumeMounts:
  - name: tls-certs
    mountPath: /etc/heliosproxy/tls
    readOnly: true
volumes:
  - name: tls-certs
    secret:
      secretName: heliosproxy-tls-secret
```

Update `config.toml`:
```toml
[tls]
enabled = true
cert_path = "/etc/heliosproxy/tls/tls.crt"
key_path = "/etc/heliosproxy/tls/tls.key"
```

## Complete Kustomization
For managing all resources together with Kustomize:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: default

resources:
  - serviceaccount.yaml
  - configmap.yaml
  - deployment.yaml
  - service.yaml
  - pdb.yaml
  - hpa.yaml
  - servicemonitor.yaml
  - networkpolicy.yaml

commonLabels:
  app.kubernetes.io/name: heliosproxy
  app.kubernetes.io/version: "0.3.0"
  app.kubernetes.io/managed-by: kustomize
```

Deploy:
```bash
kubectl apply -k .
```

## Troubleshooting
### Proxy Pod in CrashLoopBackOff
```bash
# Check container logs
kubectl logs deployment/heliosproxy -c heliosproxy

# Common causes:
# - Invalid configuration file (TOML parse error)
# - Backend nodes unreachable (DNS resolution failure)
# - Port conflict with another container
```

### Readiness Probe Failing
```bash
# Check if backends are reachable from the proxy Pod
kubectl exec deployment/heliosproxy -- curl -s http://localhost:9090/nodes

# Verify backend DNS resolution
kubectl exec deployment/heliosproxy -- nslookup heliosdb-primary.default.svc.cluster.local
```

### Connection Timeouts
```bash
# Check proxy metrics for pool exhaustion
kubectl exec deployment/heliosproxy -- curl -s http://localhost:9090/pools

# Increase pool size or switch to transaction pooling mode:
# edit the ConfigMap, then restart the Pods
kubectl rollout restart deployment/heliosproxy
```