Production Deployment: Advanced Deployment Scenarios
Part of: Production Deployment Guide
10.1 Hybrid Cloud Deployment
HeliosDB supports hybrid cloud deployments, allowing you to run workloads across on-premise infrastructure and public cloud providers simultaneously.
Architecture:
┌──────────────────────────────────────────────────────────────┐
│                    Hybrid Cloud Topology                     │
├──────────────────────────────────────────────────────────────┤
│                                                              │
│  ┌─────────────────┐              ┌─────────────────┐        │
│  │  On-Premise DC  │◄────VPN─────►│    AWS Cloud    │        │
│  │                 │              │                 │        │
│  │  Metadata (3)   │              │  Compute (5)    │        │
│  │  Storage (10)   │              │  Storage (5)    │        │
│  └─────────────────┘              └─────────────────┘        │
│          ▲                                ▲                  │
│          │                                │                  │
│          └────────Global Load Balancer────┘                  │
│                                                              │
└──────────────────────────────────────────────────────────────┘

Configuration:
[deployment.hybrid]
enabled = true
topology = "multi-cloud"

[deployment.hybrid.on_premise]
enabled = true
region = "dc-east-1"
metadata_nodes = 3
storage_nodes = 10
data_residency_rules = ["pii", "financial"]

[deployment.hybrid.aws]
enabled = true
region = "us-east-1"
compute_nodes = 5
storage_nodes = 5
workload_types = ["analytics", "ml"]

[deployment.hybrid.network]
vpn_type = "ipsec"
bandwidth_limit_mbps = 10000
encryption = "aes-256"
compression = true

VPN Setup:
# AWS VPN connection
aws ec2 create-vpn-connection \
  --type ipsec.1 \
  --customer-gateway-id cgw-xxx \
  --vpn-gateway-id vgw-xxx \
  --options TunnelOptions=[{TunnelInsideCidr=169.254.10.0/30,PreSharedKey=xxx}]

# Download VPN configuration
aws ec2 describe-vpn-connections \
  --vpn-connection-ids vpn-xxx \
  --output text > vpn-config.txt

10.2 Air-Gapped Environment
For highly secure environments that require complete isolation from the internet:
Preparation:
# Create offline package bundle
heliosdb-cli package create-offline \
  --version 6.0.0 \
  --include-dependencies \
  --include-images \
  --output heliosdb-offline-6.0.0.tar.gz

# Transfer to air-gapped environment (USB, dedicated transfer network, etc.)

# On air-gapped system, extract and install
tar xzf heliosdb-offline-6.0.0.tar.gz
cd heliosdb-offline-6.0.0
./install.sh --offline-mode

Registry Setup:
# Set up local Docker registry
docker run -d -p 5000:5000 \
  --restart=always \
  --name registry \
  -v /mnt/registry:/var/lib/registry \
  registry:2

# Load images
docker load < heliosdb-images.tar

# Tag and push to local registry
docker tag heliosdb/heliosdb:6.0.0 localhost:5000/heliosdb:6.0.0
docker push localhost:5000/heliosdb:6.0.0

# Update Kubernetes to use local registry
kubectl set image deployment/heliosdb-compute \
  compute=localhost:5000/heliosdb:6.0.0 -n heliosdb

Package Repository:

# Create local APT repository (Debian/Ubuntu)
mkdir -p /opt/heliosdb-repo
cp *.deb /opt/heliosdb-repo/
cd /opt/heliosdb-repo
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

# Configure APT
cat > /etc/apt/sources.list.d/heliosdb.list <<EOF
deb [trusted=yes] file:/opt/heliosdb-repo ./
EOF

apt update
apt install heliosdb

10.3 Multi-Tenancy Deployment
HeliosDB provides multi-tenancy with three selectable isolation levels:
Tenant Isolation Levels:
- Shared: Multiple tenants share resources (cost-effective)
- Isolated: Logical isolation with dedicated resources
- Strict: Physical isolation with separate hardware
Configuration:
[multi_tenancy]
enabled = true
isolation_level = "strict"
max_tenants = 1000

[multi_tenancy.tenant.acme_corp]
tenant_id = "tenant-001"
isolation_level = "strict"
dedicated_nodes = ["storage-10", "storage-11", "storage-12"]
storage_quota_gb = 500
compute_quota_cores = 16
memory_quota_gb = 64
max_connections = 100
replication_factor = 3
backup_enabled = true
backup_retention_days = 90

[multi_tenancy.tenant.beta_inc]
tenant_id = "tenant-002"
isolation_level = "isolated"
storage_quota_gb = 100
compute_quota_cores = 4
memory_quota_gb = 16

Tenant Provisioning:
# Create new tenant
heliosdb-cli tenant create \
  --name acme-corp \
  --isolation-level strict \
  --storage-quota 500GB \
  --compute-quota 16 \
  --admin-user admin@acme.com \
  --admin-password-stdin

# List tenants
heliosdb-cli tenant list

# Get tenant metrics
heliosdb-cli tenant metrics --tenant-id tenant-001

# Delete tenant (with data retention period)
heliosdb-cli tenant delete \
  --tenant-id tenant-001 \
  --retention-days 30 \
  --backup-data

10.4 Edge Computing Deployment
Deploy HeliosDB at the edge for low-latency access:
Edge Node Configuration:
[edge]
enabled = true
mode = "edge-gateway"  # Options: edge-gateway, edge-replica, edge-cache

[edge.gateway]
upstream_cluster = "heliosdb-central.example.com:5432"
local_cache_size_mb = 2048
sync_interval_sec = 30
conflict_resolution = "central-wins"
offline_mode = true
offline_max_duration_hours = 24

[edge.replication]
enabled = true
replication_lag_target_ms = 100
selective_replication = true
replication_filters = ["location = 'edge-1'", "priority = 'high'"]

Edge Deployment (ARM64/Raspberry Pi):
# Build for ARM64
docker buildx build \
  --platform linux/arm64 \
  --tag heliosdb/heliosdb:6.0.0-arm64 \
  --push .

# Deploy to edge device
docker run -d \
  --name heliosdb-edge \
  --restart always \
  -p 5432:5432 \
  -v /data/heliosdb:/data \
  -e EDGE_MODE=true \
  -e UPSTREAM_CLUSTER=central.example.com:5432 \
  heliosdb/heliosdb:6.0.0-arm64

10.5 Disaster Recovery Testing
Regular DR testing is critical for production readiness:
DR Test Plan:
#!/bin/bash
# dr-test.sh - Disaster Recovery Test Script
set -e
echo "Starting DR Test: $(date)"
# 1. Verify backup integrity
echo "Step 1: Verifying backup integrity..."
heliosdb-cli backup verify \
  --backup s3://heliosdb-backups-prod/latest \
  --checksum

# 2. Spin up DR environment
echo "Step 2: Creating DR environment..."
kubectl create namespace heliosdb-dr

# 3. Restore data
echo "Step 3: Restoring data..."
heliosdb-cli restore \
  --backup s3://heliosdb-backups-prod/latest \
  --namespace heliosdb-dr \
  --verify

# 4. Verify data integrity
echo "Step 4: Verifying data integrity..."
heliosdb-cli test data-integrity \
  --namespace heliosdb-dr \
  --sample-rate 0.1

# 5. Performance testing
echo "Step 5: Running performance tests..."
heliosdb-cli test performance \
  --namespace heliosdb-dr \
  --duration 5m \
  --rps 1000

# 6. Cleanup
echo "Step 6: Cleaning up DR environment..."
kubectl delete namespace heliosdb-dr

echo "DR Test Complete: $(date)"
echo "All tests passed successfully"

Chaos Engineering:
# Install Chaos Mesh
helm install chaos-mesh chaos-mesh/chaos-mesh \
  --namespace chaos-testing \
  --create-namespace

# Pod failure test
cat > pod-failure.yaml <<EOF
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: heliosdb-pod-failure
  namespace: chaos-testing
spec:
  action: pod-failure
  mode: one
  duration: "30s"
  selector:
    namespaces:
      - heliosdb
    labelSelectors:
      component: storage
EOF

kubectl apply -f pod-failure.yaml

# Network latency test
cat > network-delay.yaml <<EOF
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: heliosdb-network-delay
  namespace: chaos-testing
spec:
  action: delay
  mode: all
  selector:
    namespaces:
      - heliosdb
  delay:
    latency: "100ms"
    correlation: "100"
    jitter: "0ms"
  duration: "5m"
EOF

kubectl apply -f network-delay.yaml

10.6 Blue-Green Deployment
Zero-downtime deployment strategy:
Deployment Process:
# 1. Deploy green environment
kubectl apply -f green-deployment.yaml

# 2. Wait for green to be ready
kubectl wait --for=condition=available \
  deployment/heliosdb-compute-green -n heliosdb --timeout=300s

# 3. Run smoke tests
heliosdb-cli test smoke \
  --endpoint heliosdb-compute-green:5432

# 4. Switch traffic to green
kubectl patch service heliosdb-compute -n heliosdb \
  -p '{"spec":{"selector":{"version":"green"}}}'

# 5. Monitor for issues (5-10 minutes)
watch -n 5 'kubectl get pods -n heliosdb | grep green'

# 6. If successful, scale down blue
kubectl scale deployment/heliosdb-compute-blue -n heliosdb --replicas=0

# 7. If issues, rollback
kubectl patch service heliosdb-compute -n heliosdb \
  -p '{"spec":{"selector":{"version":"blue"}}}'

10.7 Canary Deployment
Roll out a new version gradually by splitting traffic between the stable and canary versions, for example using Istio, and widen the canary's share only as it proves healthy.
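As an illustrative sketch only (the host, subset names, and weights below are assumptions for this example, not part of the HeliosDB distribution), an Istio VirtualService can send a fixed fraction of traffic to a canary subset:

```shell
# Hypothetical example: Istio weighted traffic split, 90% stable / 10% canary.
# Assumes Istio is installed and a DestinationRule defines the
# "stable" and "canary" subsets for the heliosdb-compute service.
cat > canary-virtualservice.yaml <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: heliosdb-compute
  namespace: heliosdb
spec:
  hosts:
    - heliosdb-compute
  http:
    - route:
        - destination:
            host: heliosdb-compute
            subset: stable
          weight: 90
        - destination:
            host: heliosdb-compute
            subset: canary
          weight: 10
EOF
```

Applied with `kubectl apply -f canary-virtualservice.yaml`, the split can then be shifted stepwise (90/10 → 50/50 → 0/100) while monitoring error rates and latency, rolling back by restoring the original weights.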
10.8 Geographic Distribution
Serve global users with low latency by combining a geo-distributed cluster with GeoDNS, so each client is routed to the nearest region, and cross-region replication to keep regions in sync.
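One way to implement the GeoDNS half is latency-based routing in AWS Route 53. The record name, zone, and endpoint IPs below are placeholders for illustration, not values from this guide:

```shell
# Hypothetical example: Route 53 latency-based routing change batch.
# Each region gets an A record with the same name; Route 53 answers
# queries with the record whose region has the lowest measured latency
# to the client. Domain and IPs are placeholders.
cat > geo-routing.json <<EOF
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "db.example.com",
        "Type": "A",
        "SetIdentifier": "us-east-1",
        "Region": "us-east-1",
        "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "db.example.com",
        "Type": "A",
        "SetIdentifier": "eu-west-1",
        "Region": "eu-west-1",
        "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.20"}]
      }
    }
  ]
}
EOF
```

The batch would be submitted with `aws route53 change-resource-record-sets --hosted-zone-id <zone-id> --change-batch file://geo-routing.json`; clients in Europe then resolve `db.example.com` to the eu-west-1 endpoint, and US clients to us-east-1.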
Navigation
- Previous: Troubleshooting
- Next: Production Checklist
- Index: Production Deployment Guide