HeliosDB Nano v3.0.0 Production Deployment Guide
Version: 3.0.0 Status: Production Ready Last Updated: December 4, 2025
Table of Contents
- Architecture
- System Requirements
- Installation
- Configuration
- Docker Deployment
- Kubernetes Deployment
- Performance Tuning
- High Availability
- Monitoring & Operations
- Troubleshooting
- Upgrade Procedures
Architecture
Embedded Database Architecture
HeliosDB Nano is an embedded database that runs within your application process:
```
┌─────────────────────────────────────────┐
│ Your Application                        │
│  ┌────────────────────────────────────┐ │
│  │ HeliosDB Nano (Embedded)           │ │
│  │  ┌──────────────────────────────┐  │ │
│  │  │ SQL Query Processor          │  │ │
│  │  ├──────────────────────────────┤  │ │
│  │  │ Storage Engine               │  │ │
│  │  ├──────────────────────────────┤  │ │
│  │  │ Vector Index (HNSW)          │  │ │
│  │  ├──────────────────────────────┤  │ │
│  │  │ Compression (ALP/FSST)       │  │ │
│  │  ├──────────────────────────────┤  │ │
│  │  │ MVCC Transaction Manager     │  │ │
│  │  └──────────────────────────────┘  │ │
│  └────────────────────────────────────┘ │
└─────────────────────────────────────────┘
                    ↓
            Data Files (./data/)
```
Key Characteristics
- In-Process: No separate server process
- Zero Network Overhead: Direct memory access
- Shared Lifecycle: Database lifetime = application lifetime
- Single Writer: One application instance manages database
- Multiple Readers: Read-only replicas supported (via snapshots)
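The single-writer/multiple-reader model behaves much like a readers-writer lock over the data files. As a loose analogy in plain Rust (illustrative only, not HeliosDB code):

```rust
use std::sync::RwLock;

// Analogy for the embedded concurrency model: at most one writer
// (the owning process), any number of concurrent snapshot readers.
fn main() {
    let data = RwLock::new(vec![1, 2, 3]);
    {
        let mut writer = data.write().unwrap(); // exclusive, like the writing process
        writer.push(4);
    } // write guard dropped; readers may now proceed
    let r1 = data.read().unwrap(); // shared, like read-only replicas
    let r2 = data.read().unwrap();
    assert_eq!(*r1, vec![1, 2, 3, 4]);
    assert_eq!(r1.len(), r2.len());
    println!("readers see {} rows", r1.len());
}
```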
System Requirements
Minimum Requirements
| Component | Requirement |
|---|---|
| CPU | 2 cores minimum |
| Memory | 4 GB minimum |
| Disk | 10 GB minimum |
| OS | Linux, macOS, Windows |
| Rust | 1.70+ (for building from source) |
Recommended Production Setup
| Component | Recommendation |
|---|---|
| CPU | 8+ cores |
| Memory | 16+ GB |
| Disk | 100+ GB SSD |
| OS | Ubuntu 22.04 LTS / CentOS 8+ |
| Filesystem | ext4 or XFS with noatime |
| Network | Gigabit Ethernet |
Disk Space Calculation
```
Disk Space = (Data Size × Compression Factor) + WAL Log Space
           = (1 TB data × 0.3 compression) + 50 GB WAL
           = 350 GB
```
Installation
1. From Binary Release
```bash
# Download latest release
VERSION=3.0.0
wget https://github.com/dimensigon/HDB-HeliosDB-Nano/releases/download/v${VERSION}/heliosdb-nano-${VERSION}-x86_64-linux.tar.gz

# Extract
tar xzf heliosdb-nano-${VERSION}-x86_64-linux.tar.gz
sudo mv heliosdb-nano /usr/local/bin/

# Verify
heliosdb-nano --version
# Output: heliosdb-nano 3.0.0
```
2. From Rust Crates
```bash
cargo install heliosdb-nano --version 3.0.0
```
3. From Docker
```bash
docker pull heliosdb/heliosdb-nano:3.0.0
```
4. From Source
```bash
git clone https://github.com/dimensigon/HDB-HeliosDB-Nano.git
cd HDB-HeliosDB-Nano
git checkout v3.0.0

cargo build --release
sudo cp target/release/heliosdb-nano /usr/local/bin/
```
Configuration
1. Environment Variables
```bash
# Data directory
export HELIOSDB_DATA_DIR=/data/heliosdb
export HELIOSDB_LOG_LEVEL=info
export HELIOSDB_MAX_CONNECTIONS=1000
export HELIOSDB_QUERY_TIMEOUT=30000

# Performance tuning
export HELIOSDB_VECTOR_CACHE_SIZE=1000000
export HELIOSDB_COMPRESSION_ENABLED=true
export HELIOSDB_WAL_ENABLED=true
```
2. Configuration File
```yaml
database:
  data_dir: /data/heliosdb
  log_level: info

performance:
  max_connections: 1000
  query_timeout_ms: 30000
  vector_cache_size: 1000000
  compression_enabled: true
  compression_algorithm: alp  # or fsst

storage:
  wal_enabled: true
  wal_sync_mode: fsync  # or async
  checkpoint_interval_secs: 3600
  snapshot_interval_secs: 600

api:
  rate_limit_max_requests: 10000
  rate_limit_window_secs: 60
  rate_limit_burst_capacity: 100

security:
  tls_enabled: true
  tls_cert: /etc/heliosdb/cert.pem
  tls_key: /etc/heliosdb/key.pem
```
3. Runtime Configuration
```rust
use heliosdb_nano::{HeliosDB, Config};
use std::time::Duration;

let mut config = Config::default();
config.data_dir = "/data/heliosdb".into();
config.max_connections = 1000;
config.query_timeout = Duration::from_secs(30);
config.compression_enabled = true;
config.wal_enabled = true;

let db = HeliosDB::new_with_config(config)?;
```
Docker Deployment
1. Basic Docker Setup
```dockerfile
# Dockerfile
FROM heliosdb/heliosdb-nano:3.0.0

RUN apt-get update && apt-get install -y \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Create non-root user
RUN useradd -m -u 1000 heliosdb
WORKDIR /app

# Copy application code
COPY . .

# Build application
RUN cargo build --release

# Switch to non-root user
USER heliosdb:heliosdb

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8080/health || exit 1

EXPOSE 8080
CMD ["./target/release/app"]
```
2. Docker Compose
```yaml
version: '3.8'

services:
  heliosdb-app:
    build: .
    container_name: heliosdb-prod
    ports:
      - "8080:8080"
    volumes:
      - heliosdb_data:/data/heliosdb
      - ./config.yaml:/etc/heliosdb/config.yaml:ro
    environment:
      - HELIOSDB_DATA_DIR=/data/heliosdb
      - HELIOSDB_LOG_LEVEL=info
    restart: unless-stopped
    networks:
      - heliosdb_network

  # Optional: backup sidecar
  backup:
    image: heliosdb/heliosdb-nano:3.0.0
    container_name: heliosdb-backup
    volumes:
      - heliosdb_data:/data/heliosdb:ro
      - ./backups:/backups
    environment:
      - BACKUP_SCHEDULE=0 2 * * *  # Daily at 2 AM
    restart: unless-stopped
    networks:
      - heliosdb_network

volumes:
  heliosdb_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/data/heliosdb

networks:
  heliosdb_network:
    driver: bridge
```
Kubernetes Deployment
1. StatefulSet Deployment
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: heliosdb-config
  namespace: default
data:
  config.yaml: |
    database:
      data_dir: /data/heliosdb
      log_level: info
    performance:
      max_connections: 1000
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: heliosdb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 100Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: heliosdb
  namespace: default
spec:
  serviceName: heliosdb
  replicas: 1
  selector:
    matchLabels:
      app: heliosdb
  template:
    metadata:
      labels:
        app: heliosdb
    spec:
      serviceAccountName: heliosdb
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000
      containers:
        - name: heliosdb
          image: heliosdb/heliosdb-nano:3.0.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
              name: api
              protocol: TCP
          env:
            - name: HELIOSDB_DATA_DIR
              value: /data/heliosdb
            - name: HELIOSDB_LOG_LEVEL
              value: info
          volumeMounts:
            - name: data
              mountPath: /data/heliosdb
            - name: config
              mountPath: /etc/heliosdb
              readOnly: true
          resources:
            requests:
              cpu: 2
              memory: 4Gi
            limits:
              cpu: 8
              memory: 16Gi
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
      volumes:
        - name: config
          configMap:
            name: heliosdb-config
        - name: data
          persistentVolumeClaim:
            claimName: heliosdb-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: heliosdb
  namespace: default
spec:
  clusterIP: None
  selector:
    app: heliosdb
  ports:
    - port: 8080
      targetPort: 8080
      name: api
```
2. Horizontal Pod Autoscaling (Read Replicas)
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: heliosdb-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: heliosdb-reader  # Read replicas only
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```
Performance Tuning
1. Connection Pool Sizing
```rust
// Calculate optimal pool size
let cpu_cores = num_cpus::get();
let max_pool_size = cpu_cores * 4; // 4x cores rule of thumb

let db = HeliosDB::new_with_config(Config {
    max_connections: max_pool_size,
    ..Config::default()
})?;
```
2. Memory Configuration
```yaml
# JVM-like tuning for vector indexing
performance:
  # Cache for vector searches
  vector_cache_size: 10000000  # 10M vectors max

  # Buffer pool for storage
  buffer_pool_size_mb: 4096    # 4 GB buffer

  # Query optimizer budget
  max_query_complexity: 1000
```
3. Compression Tuning
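Chunk size trades compression ratio against read granularity: larger chunks compress better but force bigger reads. A back-of-the-envelope helper (a standalone sketch, not part of the HeliosDB API) shows how many chunks a column occupies at the 64 KiB `chunk_size` shown below:

```rust
/// Number of fixed-size chunks needed to hold `data_bytes`
/// (ceiling division; standalone helper, not a HeliosDB API).
fn chunk_count(data_bytes: u64, chunk_size: u64) -> u64 {
    data_bytes.div_ceil(chunk_size)
}

fn main() {
    // A 1 GiB column at a 64 KiB chunk size:
    assert_eq!(chunk_count(1 << 30, 64 * 1024), 16_384);
    // A partial chunk still occupies a whole chunk:
    assert_eq!(chunk_count(100_000, 64 * 1024), 2);
    println!("chunks for 1 GiB: {}", chunk_count(1 << 30, 64 * 1024));
}
```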
```rust
// Benchmark compression trade-off
let compression_config = CompressionConfig {
    enabled: true,
    algorithm: CompressionAlgorithm::ALP,
    compression_level: 6, // 1-9, higher = better ratio
    chunk_size: 65536,    // bytes
};
```
4. Storage Engine Tuning
```bash
# Filesystem optimization for database
# Add to /etc/fstab
/dev/nvme0n1p1 /data/heliosdb ext4 noatime,nobarrier,data=writeback 0 0

# Mount with performance options
mount -t ext4 -o noatime,nobarrier,data=writeback /dev/nvme0n1p1 /data/heliosdb
```
Caution: `nobarrier` and `data=writeback` trade crash durability for throughput; after a power loss the data files may be left inconsistent. Keep the ext4 defaults unless the storage has battery-backed write caching and you have tested recovery.
High Availability
1. Backup Strategy
```bash
#!/bin/bash
# Daily backup script
BACKUP_DIR=/backups/heliosdb
RETENTION_DAYS=30
TIMESTAMP=$(date +%Y%m%d_%H%M%S)

# Create backup
tar czf "$BACKUP_DIR/heliosdb_$TIMESTAMP.tar.gz" /data/heliosdb/

# Encrypt backup
gpg --symmetric --cipher-algo AES256 "$BACKUP_DIR/heliosdb_$TIMESTAMP.tar.gz"

# Remove old backups
find "$BACKUP_DIR" -name "heliosdb_*.tar.gz.gpg" -mtime +$RETENTION_DAYS -delete

# Verify backup integrity
tar tzf "$BACKUP_DIR/heliosdb_$TIMESTAMP.tar.gz" > /dev/null && echo "Backup OK"
```
2. Replication (Read Replicas)
```rust
// Primary instance (write)
let primary_db = HeliosDB::new("/data/heliosdb/primary")?;

// Create read-only snapshot for replica
let snapshot = primary_db.create_snapshot()?;
snapshot.save_to("/backups/snapshot-latest")?;

// Replica instance (read-only)
let replica_db = HeliosDB::open_readonly("/backups/snapshot-latest")?;
```
3. Disaster Recovery
```yaml
# RTO: Recovery Time Objective = 5 minutes
# RPO: Recovery Point Objective = 1 hour

recovery_procedure:
  step1:
    description: "Detect failure"
    alert: "heartbeat_missed"
    timeout: 30s
  step2:
    description: "Failover to replica"
    procedure: "promote_replica_read_write"
    timeout: 120s
  step3:
    description: "Restore from backup"
    location: "/backups/snapshot-latest"
    timeout: 300s

rto: 300s   # 5 minutes
rpo: 3600s  # 1 hour
```
Monitoring & Operations
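One operational check worth automating (a standalone sketch, not a HeliosDB feature): verify that the configured snapshot interval still satisfies the stated RPO, since worst-case data loss is roughly one snapshot interval.

```rust
/// An RPO holds only if snapshots are taken at least that often.
/// (Standalone helper, not part of the HeliosDB API.)
fn rpo_satisfied(snapshot_interval_secs: u64, rpo_secs: u64) -> bool {
    snapshot_interval_secs <= rpo_secs
}

fn main() {
    // Guide values: snapshot_interval_secs = 600, RPO = 3600 s.
    assert!(rpo_satisfied(600, 3600));
    // A 2-hour interval would silently violate the 1-hour RPO.
    assert!(!rpo_satisfied(7200, 3600));
    println!("RPO ok: {}", rpo_satisfied(600, 3600));
}
```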
1. Key Metrics to Monitor
```yaml
metrics:
  availability:
    - database_up: bool (0=down, 1=up)
    - api_health: bool (0=unhealthy, 1=healthy)

  performance:
    - query_latency_p50: milliseconds
    - query_latency_p99: milliseconds
    - queries_per_second: rate
    - connection_count: gauge

  storage:
    - disk_usage_bytes: gauge
    - disk_free_bytes: gauge
    - wal_size_bytes: gauge

  errors:
    - error_rate: rate
    - auth_failures: counter
    - rate_limit_exceeded: counter
```
2. Prometheus Scrape Config
```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'heliosdb'
    static_configs:
      - targets: ['localhost:8080']
    relabel_configs:
      - source_labels: [__address__]
        target_label: instance
```
3. Alerting Rules
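Byte-valued thresholds like the disk-space limit in the rules below are easy to mistype; deriving them from GiB (a standalone helper, not part of any alerting API) keeps them auditable:

```rust
/// Convert GiB to bytes for alert thresholds
/// (standalone helper, not part of any alerting API).
fn gib_to_bytes(gib: u64) -> u64 {
    gib * 1024 * 1024 * 1024
}

fn main() {
    // The LowDiskSpace threshold: 10 GiB.
    assert_eq!(gib_to_bytes(10), 10_737_418_240);
    println!("10 GiB = {} bytes", gib_to_bytes(10));
}
```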
```yaml
groups:
  - name: heliosdb_alerts
    rules:
      - alert: HeliosDBDown
        expr: database_up == 0
        for: 1m
        annotations:
          summary: "HeliosDB is down"

      - alert: HighErrorRate
        expr: error_rate > 0.05
        for: 5m
        annotations:
          summary: "High error rate: {{ $value }}"

      - alert: LowDiskSpace
        expr: disk_free_bytes < 10737418240  # 10 GiB
        for: 5m
        annotations:
          summary: "Low disk space: {{ $value }} bytes remaining"
```
Troubleshooting
Issue: High Memory Usage
```bash
# Check memory usage
ps aux | grep heliosdb

# Solution: Reduce vector_cache_size
export HELIOSDB_VECTOR_CACHE_SIZE=5000000

# Monitor memory
watch -n 1 'ps aux | grep heliosdb'
```
Issue: Slow Queries
```sql
-- Inspect the query log for the slowest statements
SELECT * FROM system.query_log ORDER BY execution_time DESC LIMIT 10;

-- Check index usage
SELECT table_name, index_name, usage_count FROM system.index_stats;

-- Create missing indexes
CREATE INDEX idx_user_id ON users(user_id);
```
Issue: Disk Space Growing Rapidly
```bash
# Check WAL size
du -sh /data/heliosdb/wal/

# Solution: Force checkpoint
heliosdb-nano --checkpoint /data/heliosdb

# Or restart to trigger checkpoint
systemctl restart heliosdb-app
```
Upgrade Procedures
1. In-Place Upgrade
```bash
# 1. Backup current database
cp -r /data/heliosdb /data/heliosdb-backup-3.0.0

# 2. Stop application
systemctl stop heliosdb-app

# 3. Update binary
wget https://github.com/dimensigon/HDB-HeliosDB-Nano/releases/download/v3.0.1/heliosdb-nano-x86_64-linux.tar.gz
tar xzf heliosdb-nano-x86_64-linux.tar.gz
sudo mv heliosdb-nano /usr/local/bin/

# 4. Verify version
heliosdb-nano --version

# 5. Start application
systemctl start heliosdb-app

# 6. Verify functionality
curl http://localhost:8080/health
```
2. Blue-Green Deployment
```yaml
# Blue (current)
- deployment: heliosdb-blue
  version: 3.0.0
  replicas: 3

# Green (new)
- deployment: heliosdb-green
  version: 3.0.1
  replicas: 3
```
```bash
# Switch traffic
kubectl patch service heliosdb -p '{"spec":{"selector":{"version":"3.0.1"}}}'
```
3. Canary Deployment
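The replica split below controls canary exposure: with uniform load balancing, the canary receives canary / (stable + canary) of the traffic. A quick sanity check (a standalone sketch, not a kubectl feature):

```rust
/// Fraction of traffic reaching the canary, assuming the Service
/// balances requests uniformly across ready pods (standalone helper).
fn canary_fraction(stable_replicas: u32, canary_replicas: u32) -> f64 {
    canary_replicas as f64 / (stable_replicas + canary_replicas) as f64
}

fn main() {
    // 9 stable + 1 canary → 10% of traffic on the new version.
    assert!((canary_fraction(9, 1) - 0.10).abs() < 1e-9);
    // The promotion step (8 + 2) doubles exposure to 20%.
    assert!((canary_fraction(8, 2) - 0.20).abs() < 1e-9);
    println!("canary share: {:.0}%", canary_fraction(9, 1) * 100.0);
}
```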
```yaml
- deployment: heliosdb-stable
  version: 3.0.0
  replicas: 9

- deployment: heliosdb-canary
  version: 3.0.1
  replicas: 1

# Monitor canary metrics
# If stable, promote: replicas 8 + 2
# If issues: rollback
```
Summary
| Aspect | Recommendation |
|---|---|
| Deployment | Kubernetes StatefulSet or Docker |
| Storage | SSD sized per the Disk Space Calculation (~0.35× raw data incl. WAL) |
| Backup | Daily encrypted backups, 30-day retention |
| Monitoring | Prometheus + Grafana |
| Alerting | CPU >80%, Memory >90%, Disk <10% |
| Upgrades | Blue-green or canary deployments |
| RTO | 5 minutes (failover + restore) |
| RPO | 1 hour (snapshot frequency) |
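To apply the Storage row to your own dataset, the disk-space formula from the System Requirements section can be expressed as a helper (a standalone sketch, not part of HeliosDB):

```rust
/// Estimated on-disk footprint: compressed data plus WAL headroom
/// (standalone sizing helper, mirrors the guide's formula).
fn disk_space_gb(raw_data_gb: f64, compression_factor: f64, wal_gb: f64) -> f64 {
    raw_data_gb * compression_factor + wal_gb
}

fn main() {
    // The guide's example: 1 TB (taken as 1000 GB) at 0.3x compression + 50 GB WAL.
    let needed = disk_space_gb(1000.0, 0.3, 50.0);
    assert!((needed - 350.0).abs() < 1e-9);
    println!("{needed:.0} GB");
}
```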