Performance Optimization Guide

HeliosDB Phase 3B Part 1 - Developer Reference

This guide provides comprehensive documentation for using the performance optimization features delivered in Phase 3B Part 1.


Table of Contents

  1. Query Execution Optimization
  2. Storage Cache Optimization
  3. Transaction Optimization
  4. Network Batching
  5. Memory Pool Management
  6. Parallel Execution
  7. CDC Stream Optimization
  8. Error Handling
  9. Logging and Tracing
  10. Benchmarking
  11. Best Practices
  12. Performance Tuning Guide
  13. Troubleshooting
  14. Migration Guide

Query Execution Optimization

Overview

The query execution optimizer provides intelligent plan caching and adaptive strategy selection.

Basic Usage

use heliosdb_query::execution_optimizer::{
    ExecutionOptimizer, ExecutionOptimizerConfig, ExecutionStrategy,
};

// Create optimizer with default config
let optimizer = ExecutionOptimizer::new(ExecutionOptimizerConfig::default());

// Optimize a query
let plan = optimizer.optimize_query(
    "SELECT * FROM users WHERE age > 25",
    10_000, // estimated rows
    50,     // estimated memory (MB)
);

println!("Strategy selected: {:?}", plan.strategy);
println!("Estimated cost: {}", plan.estimated_cost);

Configuration Options

use std::time::Duration;

// Low-latency configuration
let config = ExecutionOptimizerConfig::low_latency();

// Analytical workload configuration
let config = ExecutionOptimizerConfig::analytical();

// Custom configuration
let config = ExecutionOptimizerConfig {
    enable_plan_cache: true,
    max_cached_plans: 5000,
    enable_adaptive_strategy: true,
    enable_query_rewriting: true,
    parallel_threshold_rows: 10_000,
    streaming_threshold_mb: 512,
    plan_cache_ttl: Duration::from_secs(3600),
};

Monitoring Performance

// Get optimizer statistics
let stats = optimizer.stats();
println!("Plan cache hit rate: {:.2}%", stats.plan_cache_hit_rate() * 100.0);
println!("Avg time saved: {:.2}ms", stats.avg_time_saved_ms());

// Clear cache if needed
optimizer.clear_cache();

Storage Cache Optimization

Overview

Three-tier cache architecture with adaptive prefetching.

Basic Usage

use heliosdb_storage::engine::cache::{StorageCache, StorageCacheConfig};

// Create cache
let config = StorageCacheConfig::default();
let cache = StorageCache::<u64, Vec<u8>>::new(config);

// Insert data
cache.insert(1, vec![1, 2, 3], 100);

// Retrieve data
if let Some(data) = cache.get(&1) {
    println!("Cache hit! Data: {:?}", data);
}

// Get statistics
let stats = cache.stats();
println!("Hit rate: {:.2}%", stats.hit_rate() * 100.0);
println!("Prefetch effectiveness: {:.2}%", stats.prefetch_effectiveness() * 100.0);

Advanced Configuration

use std::time::Duration;

let config = StorageCacheConfig {
    hot_tier_max_bytes: 256 * 1024 * 1024,   // 256 MB
    warm_tier_max_bytes: 512 * 1024 * 1024,  // 512 MB
    cold_tier_max_bytes: 1024 * 1024 * 1024, // 1 GB
    enable_prefetch: true,
    prefetch_distance: 8,
    enable_auto_tiering: true,
    tier_management_interval: Duration::from_secs(60),
};

Tier Management

// Manual tier management
cache.manage_tiers();

// Get prefetch queue for custom prefetching
let to_prefetch = cache.get_prefetch_queue();
for key in to_prefetch {
    // Prefetch data for key
}

// Check cache size
println!("Total entries: {}", cache.total_entries());

Transaction Optimization

Overview

Lock-free transaction processing with optimistic concurrency control.

Basic Usage

use heliosdb_storage::transaction_optimizer::{
    TransactionCoordinator, IsolationLevel,
};
use std::sync::atomic::Ordering;

// Create coordinator
let coordinator = TransactionCoordinator::new();

// Begin transaction
let txn_id = coordinator.begin_transaction(IsolationLevel::ReadCommitted);

// Record operations
coordinator.record_read(txn_id, "key1".to_string(), 1)?;
coordinator.record_write(txn_id, "key1".to_string(), vec![1, 2, 3])?;

// Commit
coordinator.commit_transaction(txn_id)?;

// Get statistics
let stats = coordinator.stats();
println!("Commit rate: {:.2}%", stats.commit_rate() * 100.0);
println!("Lockfree reads: {}", stats.lockfree_reads.load(Ordering::Relaxed));

Isolation Levels

// Different isolation levels
let txn1 = coordinator.begin_transaction(IsolationLevel::ReadUncommitted);
let txn2 = coordinator.begin_transaction(IsolationLevel::ReadCommitted);
let txn3 = coordinator.begin_transaction(IsolationLevel::RepeatableRead);
let txn4 = coordinator.begin_transaction(IsolationLevel::Serializable);

Lock Management

use heliosdb_storage::transaction_optimizer::LockManager;

let lock_mgr = LockManager::new();

// Acquire shared lock
lock_mgr.acquire_shared(txn_id, "key1".to_string())?;

// Acquire exclusive lock
lock_mgr.acquire_exclusive(txn_id, "key2".to_string())?;

// Release all locks
lock_mgr.release_all(txn_id);

// Check for deadlock
if lock_mgr.check_deadlock(txn_id) {
    // Handle deadlock
}

Network Batching

Overview

Intelligent request batching with adaptive compression.

Basic Usage

use heliosdb_network::batching::{
    RequestBatcher, BatcherConfig, CompressionAlgorithm,
};

// Create batcher
let config = BatcherConfig::default();
let batcher = RequestBatcher::new(config);

// Add requests
batcher.add_request("request1".to_string());
batcher.add_request("request2".to_string());
batcher.add_request("request3".to_string());

// Flush batch
if let Some(batch) = batcher.flush() {
    println!("Batch size: {}", batch.len());
    println!("Compression: {:?}", batch.compression);
}

Configuration

// High-throughput configuration
let config = BatcherConfig::high_throughput();

// Low-latency configuration
let config = BatcherConfig::low_latency();

// Custom configuration
let config = BatcherConfig {
    max_batch_size: 100,
    max_batch_wait_ms: 10,
    max_batch_bytes: 1024 * 1024,
    enable_adaptive_batching: true,
    enable_compression: true,
    target_network_utilization: 0.8,
    max_concurrent_batches: 16,
};

Network Monitoring

use std::sync::atomic::Ordering;

// Record network measurement
batcher.record_network_measurement(5000, true); // 5ms RTT, success

// Get statistics
let stats = batcher.stats();
println!("Compression ratio: {:.2}", stats.compression_ratio());
println!("Avg batch size: {:.2}", stats.get_avg_batch_size());
println!("Round trips saved: {}", stats.round_trips_saved.load(Ordering::Relaxed));

Memory Pool Management

Overview

Arena allocators and object pools for efficient memory management.

Arena Allocator

use heliosdb_common::memory_pool::Arena;
// Create arena
let arena = Arena::new(); // 1MB default chunk size
// or
let arena = Arena::with_chunk_size(4 * 1024 * 1024); // 4MB chunks
// Allocate memory
let ptr = arena.allocate(1024, 8); // 1KB, 8-byte aligned
// Get statistics
println!("Total allocated: {} bytes", arena.total_allocated());
println!("Active allocations: {}", arena.active_allocations());
// Reset arena (invalidates all allocations!)
arena.reset();

Object Pool

use heliosdb_common::memory_pool::ObjectPool;
// Create pool
let pool = ObjectPool::new(
    100,                               // max size
    || Vec::<u8>::with_capacity(1024), // factory function
);
// Warm up pool
pool.warm_up(50);
// Acquire object
let mut obj = pool.acquire();
obj.push(1);
obj.push(2);
// Object returned to pool when dropped
// Get statistics
println!("Pool hit rate: {:.2}%", pool.hit_rate() * 100.0);
println!("Pool size: {}", pool.size());

Memory Pressure Monitoring

use heliosdb_common::memory_pool::{
MemoryPressureMonitor, MemoryPressureLevel
};
// Create monitor
let monitor = MemoryPressureMonitor::new(8 * 1024 * 1024 * 1024); // 8GB total
// Register callback
monitor.register_callback(|level| {
    match level {
        MemoryPressureLevel::Warning => println!("Memory pressure warning"),
        MemoryPressureLevel::Critical => println!("Critical memory pressure!"),
        MemoryPressureLevel::Emergency => println!("Emergency - OOM imminent!"),
        _ => {}
    }
});
// Record allocations/deallocations
monitor.record_allocation(1024 * 1024); // 1MB allocated
monitor.record_deallocation(512 * 1024); // 512KB freed
// Check current pressure
let level = monitor.pressure_level();
println!("Current usage: {} bytes", monitor.current_usage());

Parallel Execution

Overview

Work-stealing thread pool with priority-based task execution.

Basic Usage

use heliosdb_compute::parallel_executor::{
    ThreadPool, ThreadPoolConfig, TaskPriority,
};

// Create thread pool
let pool = ThreadPool::new(ThreadPoolConfig::default());

// Submit tasks
pool.submit(|| {
    println!("Task 1 executing");
});
pool.submit(|| {
    println!("Task 2 executing");
});

// Submit with priority
pool.submit_with_priority(
    || {
        println!("High priority task");
    },
    TaskPriority::High,
);

Configuration

// High-throughput configuration
let config = ThreadPoolConfig::high_throughput();
// Low-latency configuration
let config = ThreadPoolConfig::low_latency();
// Custom configuration
let config = ThreadPoolConfig {
    num_workers: 16,
    enable_work_stealing: true,
    enable_adaptive_sizing: true,
    max_queue_size: 10000,
};

Monitoring

use std::sync::atomic::Ordering;

// Get statistics
let stats = pool.stats();
println!("Total submitted: {}", stats.total_submitted.load(Ordering::Relaxed));
println!("Queue depth: {}", pool.queue_depth());

// Print detailed worker statistics
pool.print_stats();

// Shutdown pool
pool.shutdown();

CDC Stream Optimization

Overview

High-performance CDC event processing with intelligent buffering.

Basic Usage

use heliosdb_replication::cdc::stream_optimizer::{
    CdcStreamBuffer, StreamBufferConfig, CdcEvent, EventType,
};

// Create buffer
let config = StreamBufferConfig::default();
let buffer = CdcStreamBuffer::new(config);

// Add events
let event = CdcEvent::new(
    1,
    EventType::Insert,
    "users".to_string(),
    "public".to_string(),
    vec![1, 2, 3],
);
buffer.add_event(event)?;

// Check if should flush
if buffer.should_flush() {
    let events = buffer.flush();
    println!("Flushed {} events", events.len());
}

Configuration

// High-throughput configuration
let config = StreamBufferConfig::high_throughput();
// Low-latency configuration
let config = StreamBufferConfig::low_latency();
// Custom configuration
let config = StreamBufferConfig {
    max_buffer_size: 10_000,
    max_buffer_memory_bytes: 64 * 1024 * 1024,
    flush_interval_ms: 100,
    enable_adaptive_buffering: true,
    enable_event_coalescing: true,
    enable_priority_ordering: true,
    backpressure_threshold: 0.8,
    target_throughput: 10_000,
};

Monitoring

use std::sync::atomic::Ordering;

// Get statistics
let stats = buffer.stats();
println!("Coalescing rate: {:.2}%", stats.coalescing_rate() * 100.0);
println!("Avg events per flush: {:.2}", stats.avg_events_per_flush());
println!("Throughput: {} events/sec", stats.avg_throughput.load(Ordering::Relaxed));

// Check buffer state
println!("Buffer size: {}", buffer.buffer_size());
println!("Memory usage: {} bytes", buffer.buffer_memory_bytes());

Error Handling

Overview

Standardized error handling with context tracking.

Basic Usage

use heliosdb_common::error_utils::{
    HeliosError, ErrorCategory, Result, ResultExt, errors,
};

// Create errors
let error = errors::storage_error("Disk full");
let error = errors::query_error("Invalid SQL syntax");
let error = errors::transaction_error("Deadlock detected");

// Add context
let error = error
    .with_context("table", "users")
    .with_context("operation", "insert");

// Use Result extension
fn read_file(path: &str) -> Result<String> {
    std::fs::read_to_string(path)
        .context("file", path)
        .to_helios_error(ErrorCategory::Io)
}

Error Builder

use heliosdb_common::error_utils::ErrorBuilder;

let error = ErrorBuilder::new(ErrorCategory::Query)
    .message("Failed to execute query")
    .context("query_id", "12345")
    .context("table", "users")
    .build();

Error Logging

use heliosdb_common::error_utils::ErrorLogger;

// Log error
error.log_error();

// Log entire error chain
error.log_error_chain();

// Check error properties
if error.is_recoverable() {
    // Retry operation
}
if error.should_retry() {
    // Automatic retry
}
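The `is_recoverable()`/`should_retry()` checks above pair naturally with a retry loop. The following is a minimal, std-only sketch of that pattern, independent of the HeliosDB error types; `retry_with_backoff` is a hypothetical helper, not part of `error_utils`:

```rust
use std::thread;
use std::time::Duration;

/// Retry a fallible operation with exponential backoff, up to `max_attempts`.
/// The `retryable` predicate plays the role of `should_retry()` above.
fn retry_with_backoff<T, E>(
    max_attempts: u32,
    base_delay: Duration,
    retryable: impl Fn(&E) -> bool,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) if attempt + 1 < max_attempts && retryable(&e) => {
                // Exponential backoff: base, 2*base, 4*base, ...
                thread::sleep(base_delay * 2u32.pow(attempt));
                attempt += 1;
            }
            Err(e) => return Err(e),
        }
    }
}

fn main() {
    let mut calls = 0;
    let result: Result<u32, &str> = retry_with_backoff(
        3,
        Duration::from_millis(1),
        |_| true, // treat every error as retryable
        || {
            calls += 1;
            if calls < 3 { Err("transient failure") } else { Ok(42) }
        },
    );
    println!("succeeded after {} calls: {:?}", calls, result); // 3 calls, Ok(42)
}
```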

Logging and Tracing

Overview

Structured logging with distributed tracing support.

Initialization

use heliosdb_common::logging_standard::{
init_logging, LoggingConfig
};
// Development configuration
let config = LoggingConfig::development();
init_logging(config)?;
// Production configuration
let config = LoggingConfig::production();
init_logging(config)?;

Span Tracking

use heliosdb_common::logging_standard::SpanTracker;

let span = SpanTracker::new("database_operation")
    .with_attribute("user_id", "123")
    .with_attribute("operation", "select");

span.record_event("validation_complete");
span.record_event("query_execution_started");

// End explicitly, or let the span log automatically when dropped
span.end();

// Or use macro
let span = traced_span!("operation", "user_id" => 123, "table" => "users");

Performance Timing

use heliosdb_common::logging_standard::PerfTimer;

let timer = PerfTimer::start("expensive_operation");
// ... perform operation ...
timer.stop(); // Automatically logs duration

// Or use macro
let result = timed!("query_execution", {
    execute_query()
});

Metrics Collection

use heliosdb_common::logging_standard::metrics;
// Increment counter
metrics().increment("queries_executed");
// Record latency
metrics().record_latency("query_latency", 150); // 150µs
// Get metrics
let count = metrics().get_counter("queries_executed");
let avg_latency = metrics().get_avg_latency("query_latency");
let p95_latency = metrics().get_p95_latency("query_latency");
// Log all metrics
metrics().log_metrics();

Benchmarking

Overview

Comprehensive performance benchmark suite.

Running Benchmarks

use performance_baseline::{BenchmarkSuite, BenchmarkConfig};

// Create benchmark suite
let config = BenchmarkConfig {
    iterations: 10000,
    warmup_iterations: 1000,
    data_size: 1_000_000,
    concurrency: 32,
    verbose: false,
};
let mut suite = BenchmarkSuite::new(config);

// Run all benchmarks
suite.run_all();

// Export results
suite.export_json("benchmark_results.json")?;

Custom Benchmarks

let result = suite.benchmark("custom_operation", || {
    // Your operation here
    perform_operation();
});
result.print();

Best Practices

1. Query Optimization

  • Enable plan caching for frequently executed queries
  • Use appropriate isolation levels (ReadCommitted for most cases)
  • Monitor cache hit rates and adjust configuration
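The cache-hit guidance above comes down to tracking hits and misses per lookup. Here is a self-contained sketch of a plan cache with hit-rate monitoring, in plain std Rust; it illustrates the metric, not the actual ExecutionOptimizer internals:

```rust
use std::collections::HashMap;

/// Minimal plan cache: maps query text to a cached plan and tracks hit rate.
struct PlanCache {
    plans: HashMap<String, String>, // query text -> cached plan
    hits: u64,
    misses: u64,
}

impl PlanCache {
    fn new() -> Self {
        Self { plans: HashMap::new(), hits: 0, misses: 0 }
    }

    /// Return the cached plan, or compute and cache it on a miss.
    fn get_or_plan(&mut self, query: &str, plan: impl FnOnce() -> String) -> String {
        if let Some(p) = self.plans.get(query) {
            self.hits += 1;
            p.clone()
        } else {
            self.misses += 1;
            let p = plan();
            self.plans.insert(query.to_string(), p.clone());
            p
        }
    }

    /// Hit rate in [0, 1]; the signal to watch when sizing the cache.
    fn hit_rate(&self) -> f64 {
        let total = self.hits + self.misses;
        if total == 0 { 0.0 } else { self.hits as f64 / total as f64 }
    }
}

fn main() {
    let mut cache = PlanCache::new();
    // The same query four times: one miss, then three hits.
    for _ in 0..4 {
        cache.get_or_plan("SELECT * FROM users", || "seq_scan(users)".to_string());
    }
    println!("hit rate: {:.2}", cache.hit_rate()); // 0.75
}
```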

2. Storage Cache

  • Configure tier sizes based on working set
  • Enable prefetching for sequential access patterns
  • Monitor tier distribution and adjust thresholds

3. Transaction Processing

  • Use lock-free reads whenever possible
  • Batch related operations in a single transaction
  • Monitor lock contention and adjust strategy

4. Network Communication

  • Enable adaptive batching for variable workloads
  • Use compression for large payloads
  • Monitor network conditions and adjust batch sizes
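To make the adaptive-batching advice concrete, here is a std-only sketch of one possible policy: keep an exponential moving average of round-trip time and grow batches when round trips are expensive, shrink them when they are cheap. The thresholds and the EMA policy are assumptions for illustration, not the RequestBatcher's actual algorithm:

```rust
/// Adaptive batch sizing: amortize slow round trips with bigger batches,
/// favor latency with smaller ones when the network is fast.
struct AdaptiveBatchSize {
    ema_rtt_us: f64, // exponentially weighted moving average of RTT (µs)
    alpha: f64,      // smoothing factor for the EMA
    size: usize,
    min: usize,
    max: usize,
}

impl AdaptiveBatchSize {
    fn new(initial: usize, min: usize, max: usize) -> Self {
        Self { ema_rtt_us: 0.0, alpha: 0.2, size: initial, min, max }
    }

    fn record_rtt(&mut self, rtt_us: f64) {
        if self.ema_rtt_us == 0.0 {
            self.ema_rtt_us = rtt_us; // seed the EMA with the first sample
        } else {
            self.ema_rtt_us = self.alpha * rtt_us + (1.0 - self.alpha) * self.ema_rtt_us;
        }
        // Heuristic thresholds (assumed): >5ms -> grow, <1ms -> shrink.
        if self.ema_rtt_us > 5_000.0 {
            self.size = (self.size * 2).min(self.max);
        } else if self.ema_rtt_us < 1_000.0 {
            self.size = (self.size / 2).max(self.min);
        }
    }
}

fn main() {
    let mut b = AdaptiveBatchSize::new(10, 1, 100);
    b.record_rtt(10_000.0); // one slow round trip: batch size doubles
    println!("after slow RTT: {}", b.size); // 20
    for _ in 0..20 {
        b.record_rtt(100.0); // sustained fast RTTs drag the EMA down
    }
    println!("after fast RTTs: {}", b.size); // shrinks back to the minimum
}
```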

5. Memory Management

  • Use arena allocators for bulk allocations
  • Pool frequently allocated objects
  • Monitor memory pressure and adjust limits
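The pressure-monitoring advice above reduces to classifying usage against thresholds on a fixed budget. A std-only sketch follows; the level names mirror MemoryPressureLevel from the Memory Pool section, but the 70/85/95% cutoffs are assumptions for illustration, not HeliosDB's actual values:

```rust
/// Pressure levels, patterned on MemoryPressureLevel above.
#[derive(Debug, PartialEq)]
enum Pressure {
    Normal,
    Warning,   // >= 70% of budget (assumed cutoff)
    Critical,  // >= 85%
    Emergency, // >= 95%
}

/// Classify current usage against the total memory budget.
fn pressure_level(used: u64, total: u64) -> Pressure {
    let ratio = used as f64 / total as f64;
    if ratio >= 0.95 {
        Pressure::Emergency
    } else if ratio >= 0.85 {
        Pressure::Critical
    } else if ratio >= 0.70 {
        Pressure::Warning
    } else {
        Pressure::Normal
    }
}

fn main() {
    let total = 8u64 * 1024 * 1024 * 1024; // 8 GB budget
    println!("{:?}", pressure_level(total / 2, total));      // Normal
    println!("{:?}", pressure_level(total * 9 / 10, total)); // Critical
}
```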

6. Parallel Execution

  • Use appropriate task priorities
  • Enable work stealing for load balancing
  • Monitor queue depths and adjust concurrency

7. CDC Processing

  • Enable event coalescing for high-volume streams
  • Configure buffer sizes based on throughput
  • Monitor backpressure and adjust thresholds
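Event coalescing, recommended above, can be sketched in a few lines: successive events for the same key collapse to the newest value, so each flush ships one event per key. This is a std-only illustration of the `enable_event_coalescing` idea, not the CdcStreamBuffer internals:

```rust
use std::collections::HashMap;

/// Collapse a stream of (key, value) events so only the latest value per key
/// survives, preserving the first-seen order of keys.
fn coalesce(events: Vec<(&str, i64)>) -> Vec<(&str, i64)> {
    let mut latest: HashMap<&str, i64> = HashMap::new();
    let mut order: Vec<&str> = Vec::new(); // first-seen order of keys
    for (key, value) in events {
        if !latest.contains_key(key) {
            order.push(key);
        }
        latest.insert(key, value); // later events overwrite earlier ones
    }
    order.into_iter().map(|k| (k, latest[k])).collect()
}

fn main() {
    // Three updates to user:1 collapse into one event carrying the last value.
    let raw = vec![("user:1", 10), ("user:2", 5), ("user:1", 11), ("user:1", 12)];
    let out = coalesce(raw);
    println!("{:?}", out); // [("user:1", 12), ("user:2", 5)]
}
```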

Performance Tuning Guide

Latency-Sensitive Workloads

// Query optimizer
let query_config = ExecutionOptimizerConfig::low_latency();
// Thread pool
let pool_config = ThreadPoolConfig::low_latency();
// Batching
let batch_config = BatcherConfig::low_latency();
// CDC
let cdc_config = StreamBufferConfig::low_latency();

Throughput-Optimized Workloads

// Query optimizer
let query_config = ExecutionOptimizerConfig::analytical();
// Thread pool
let pool_config = ThreadPoolConfig::high_throughput();
// Batching
let batch_config = BatcherConfig::high_throughput();
// CDC
let cdc_config = StreamBufferConfig::high_throughput();

Troubleshooting

High Cache Miss Rate

  • Increase cache tier sizes
  • Enable prefetching
  • Review access patterns

Lock Contention

  • Use lower isolation levels
  • Enable lock-free reads
  • Batch operations
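One standard way to rule out deadlock while batching lock acquisitions is to take locks in a single global order, e.g. sorted key order, so two transactions can never wait on each other in a cycle. A std-only sketch of the ordering step (not the LockManager API):

```rust
/// Deduplicate a transaction's keys and return them in the global (sorted)
/// acquisition order. Locking in this order prevents lock-order cycles.
fn acquisition_order(mut keys: Vec<&str>) -> Vec<&str> {
    keys.sort();
    keys.dedup();
    keys
}

fn main() {
    // Both transactions touch the same keys in different orders...
    let txn_a = vec!["accounts:2", "accounts:1"];
    let txn_b = vec!["accounts:1", "accounts:2"];
    // ...but both lock in sorted order, so neither can hold accounts:1 while
    // waiting on accounts:2 after the other already holds accounts:2.
    println!("{:?}", acquisition_order(txn_a)); // ["accounts:1", "accounts:2"]
    println!("{:?}", acquisition_order(txn_b)); // ["accounts:1", "accounts:2"]
}
```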

Memory Pressure

  • Increase pool sizes
  • Enable compression in cold tier
  • Monitor allocation patterns

Poor Throughput

  • Increase concurrency
  • Enable work stealing
  • Use batching

Migration Guide

Migrating to New Error Handling

Before:

fn operation() -> Result<(), String> {
    Err("Error occurred".to_string())
}

After:

use heliosdb_common::error_utils::{Result, errors};

fn operation() -> Result<()> {
    Err(errors::internal_error("Error occurred")
        .with_context("operation", "example"))
}

Migrating to New Logging

Before:

println!("Query executed in {}ms", duration);

After:

tracing::info!(
    duration_ms = duration,
    "Query executed"
);

Additional Resources

  • Full Completion Report: /home/claude/HeliosDB/docs/reports/completion/PHASE_3B_PART1_COMPLETION_REPORT.md
  • Executive Summary: /home/claude/HeliosDB/PHASE_3B_PART1_SUMMARY.md
  • Source Code: See individual module files for implementation details
  • Tests: Each module includes comprehensive test coverage

Last Updated: December 9, 2025
Version: Phase 3B Part 1
Status: Production Ready