Migration Guide: Unified Cache

Overview

This guide covers migrating from heliosdb-cache and heliosdb-caching to the new unified heliosdb-unified-cache package.

Date: November 2, 2025
Versions Affected: v4.0+
Deprecation Timeline: heliosdb-cache and heliosdb-caching will be deprecated in v4.1 (Q1 2026)

Why Consolidate?

The original packages had overlapping functionality:

  • heliosdb-cache: ML-based eviction, cache warming, partial caching
  • heliosdb-caching: Policy-based eviction, compression, tiered caching

The unified package combines the best of both with:

  • All eviction strategies in one place (ML + policies)
  • Unified API for easier development
  • Better performance through integrated optimization
  • Reduced dependency complexity
  • Hybrid eviction combining ML predictions with policy fallback

Quick Start

Update Cargo.toml

[dependencies]
# Old
# heliosdb-cache = { path = "../heliosdb-cache" }
# heliosdb-caching = { path = "../heliosdb-caching" }
# New
heliosdb-unified-cache = { path = "../heliosdb-unified-cache" }

Migration from heliosdb-cache

Basic Usage

Before:

use heliosdb_cache::{CacheManager, CacheConfig};
let config = CacheConfig {
    max_size: 1024 * 1024 * 1024,
    enable_ml: true,
    ..Default::default()
};
let cache = CacheManager::new(config);

After:

use heliosdb_unified_cache::{UnifiedCacheManager, CacheConfig, EvictionPolicyType};
let config = CacheConfig {
    max_size: 1024 * 1024 * 1024,
    eviction_policy: EvictionPolicyType::Hybrid, // ML + policy fallback
    enable_ml: true,
    ..Default::default()
};
let cache = UnifiedCacheManager::new(config);

ML-Based Eviction

Before:

use heliosdb_cache::MlEvictionPredictor;
let predictor = MlEvictionPredictor::new(ml_config);
let score = predictor.predict_access_probability(&metadata, max_age);

After:

use heliosdb_unified_cache::{EvictionPolicyType, HybridConfig, HybridEvictionStrategy};
// ML prediction is now integrated into the hybrid strategy
let strategy = HybridEvictionStrategy::new(
    ml_config,
    EvictionPolicyType::Lru, // fallback policy
    max_size,
    HybridConfig::default(),
);

Cache Warming

Before:

use heliosdb_cache::CacheWarmer;
let warmer = CacheWarmer::new(config);
warmer.warm_cache(fetch_fn).await?;

After:

use heliosdb_unified_cache::{UnifiedCacheManager, FrequencyBasedWarming};
let cache = UnifiedCacheManager::new(config);
// Add warming strategies
let strategy = FrequencyBasedWarming::new(100);
cache.add_warming_strategy(Box::new(strategy)).await;
// Warm cache
cache.warm(|key| async move {
    // Fetch from the backend
    fetch_from_backend(&key).await
}).await?;

Migration from heliosdb-caching

Policy-Based Eviction

Before:

use heliosdb_caching::{Cache, EvictionPolicy};
let cache = Cache::with_policy(EvictionPolicy::Lru, max_size);

After:

use heliosdb_unified_cache::{UnifiedCacheManager, CacheConfig, EvictionPolicyType};
let config = CacheConfig {
    max_size,
    eviction_policy: EvictionPolicyType::Lru,
    ..Default::default()
};
let cache = UnifiedCacheManager::new(config);

Compression

Before:

use heliosdb_caching::{Cache, CompressionType};
let cache = Cache::with_compression(
    CompressionType::Lz4,
    1024, // threshold in bytes
);

After:

use heliosdb_unified_cache::{CacheConfig, CompressionType, UnifiedCacheManager};
let config = CacheConfig {
    enable_compression: true,
    compression_type: CompressionType::Lz4,
    compression_threshold: 1024, // compress values larger than 1 KiB
    ..Default::default()
};
let cache = UnifiedCacheManager::new(config);

Tiered Caching

Before:

use heliosdb_caching::TieredCache;
let cache = TieredCache::new(l1_size, l2_size, l3_size);

After:

use heliosdb_unified_cache::{CacheConfig, UnifiedCacheManager};
let config = CacheConfig {
    enable_tiered: true,
    l1_size: 256 * 1024 * 1024,        // 256 MiB in-memory tier
    l2_size: Some(1024 * 1024 * 1024), // 1 GiB second tier
    ..Default::default()
};
let cache = UnifiedCacheManager::new(config);
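Conceptually, a tiered lookup checks the small fast tier first and promotes an entry into it on a second-tier hit. A minimal self-contained sketch of that flow (plain HashMaps stand in for the real tiers; none of these type or method names come from the crate):

```rust
use std::collections::HashMap;

/// Two-tier cache sketch: a small fast L1 backed by a larger L2.
struct TieredSketch {
    l1: HashMap<String, Vec<u8>>,
    l2: HashMap<String, Vec<u8>>,
}

impl TieredSketch {
    fn new() -> Self {
        Self { l1: HashMap::new(), l2: HashMap::new() }
    }

    /// Look in L1 first; on an L2 hit, promote the entry into L1.
    fn get(&mut self, key: &str) -> Option<Vec<u8>> {
        if let Some(v) = self.l1.get(key) {
            return Some(v.clone());
        }
        if let Some(v) = self.l2.remove(key) {
            self.l1.insert(key.to_string(), v.clone()); // promote to L1
            return Some(v);
        }
        None
    }
}

fn main() {
    let mut cache = TieredSketch::new();
    cache.l2.insert("a".into(), vec![1, 2, 3]);
    assert_eq!(cache.get("a"), Some(vec![1, 2, 3])); // L2 hit
    assert!(cache.l1.contains_key("a"));             // promoted into L1
    println!("promotion ok");
}
```

A real implementation would also demote cold entries from L1 back to L2 when L1 fills; this sketch only shows the read path.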

Key API Changes

Type Renames

| Old (heliosdb-cache/caching) | New (unified)        |
|------------------------------|----------------------|
| CacheManager                 | UnifiedCacheManager  |
| EvictionPolicy (trait)       | EvictionPolicyTrait  |
| EvictionPolicy (enum)        | EvictionPolicyType   |
| Cache                        | UnifiedCacheManager  |

Method Changes

| Old Method               | New Method                          | Notes                   |
|--------------------------|-------------------------------------|-------------------------|
| cache.get(key)           | cache.get(&key).await               | Now async everywhere    |
| cache.insert(key, value) | cache.insert(key, value, ttl).await | Added TTL parameter     |
| warmer.warm()            | cache.warm(fetch_fn).await          | Integrated into manager |

Feature Mapping

heliosdb-cache Features

| Feature             | Status            | Migration Path                            |
|---------------------|-------------------|-------------------------------------------|
| ML-based eviction   | Included          | Use EvictionPolicyType::MlBased or Hybrid |
| Cache warming       | Included          | Use cache.add_warming_strategy()          |
| Partial caching     | ⚠ Not implemented | Planned for v4.1                          |
| Stampede protection | Included          | Use cache.get_or_fetch()                  |
| Coherence           | ⚠ Basic           | Full coherence in v4.1                    |

heliosdb-caching Features

| Feature             | Status   | Migration Path           |
|---------------------|----------|--------------------------|
| LRU eviction        | Included | EvictionPolicyType::Lru  |
| LFU eviction        | Included | EvictionPolicyType::Lfu  |
| ARC eviction        | Included | EvictionPolicyType::Arc  |
| FIFO eviction       | Included | EvictionPolicyType::Fifo |
| Compression (LZ4)   | Included | CompressionType::Lz4     |
| Compression (Zstd)  | Included | CompressionType::Zstd    |
| Tiered caching      | Included | enable_tiered: true      |
| Distributed (Redis) | Included | redis-backend feature    |

Advanced Features

Hybrid Eviction Strategy

New feature combining ML predictions with policy fallback:

use heliosdb_unified_cache::{CacheConfig, EvictionPolicyType, UnifiedCacheManager};
let config = CacheConfig {
    eviction_policy: EvictionPolicyType::Hybrid,
    enable_ml: true,
    ..Default::default()
};
let cache = UnifiedCacheManager::new(config);
// Hybrid strategy will:
// 1. Use ML predictions when accuracy > threshold
// 2. Fall back to LRU when ML accuracy is low
// 3. Continuously learn from access patterns
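The fallback rule in those comments can be sketched as a small decision function: trust the ML candidate while measured prediction accuracy stays above a threshold, otherwise take the policy (LRU) candidate. The function name and the 0.7 threshold below are illustrative, not the crate's API:

```rust
/// Pick an eviction victim: the ML prediction when the model has been
/// accurate enough recently, otherwise the policy fallback (e.g. LRU).
fn pick_victim(
    ml_accuracy: f64,
    accuracy_threshold: f64,
    ml_candidate: &str,
    lru_candidate: &str,
) -> String {
    if ml_accuracy >= accuracy_threshold {
        ml_candidate.to_string()
    } else {
        lru_candidate.to_string()
    }
}

fn main() {
    // Model trusted: evict the ML-predicted coldest key.
    assert_eq!(pick_victim(0.9, 0.7, "cold_key", "oldest_key"), "cold_key");
    // Model mistrusted: fall back to LRU's choice.
    assert_eq!(pick_victim(0.5, 0.7, "cold_key", "oldest_key"), "oldest_key");
    println!("hybrid fallback ok");
}
```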

Intelligent Prefetching

New feature for learning access patterns:

use heliosdb_unified_cache::UnifiedCacheManager;
let cache = UnifiedCacheManager::new(config);
// Prefetcher learns patterns automatically
// When you access "page1", it predicts "page2" will be needed
cache.get(&CacheKey::new("page1")).await?;
// Stats available
let stats = cache.get_prefetch_stats().await;
println!("Prefetch predictions: {}", stats.total_predictions);
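One common way such a prefetcher works is a transition table: count which key tends to follow which, then predict the most frequent successor. A minimal self-contained sketch of that idea (not the crate's implementation; all names here are illustrative):

```rust
use std::collections::HashMap;

/// Counts key -> next-key transitions and predicts the most frequent successor.
#[derive(Default)]
struct PatternPredictor {
    transitions: HashMap<String, HashMap<String, u32>>,
    last_key: Option<String>,
}

impl PatternPredictor {
    /// Record an access, updating the transition count from the previous key.
    fn record(&mut self, key: &str) {
        if let Some(prev) = self.last_key.take() {
            *self
                .transitions
                .entry(prev)
                .or_default()
                .entry(key.to_string())
                .or_insert(0) += 1;
        }
        self.last_key = Some(key.to_string());
    }

    /// Predict the key most often seen right after `key`.
    fn predict(&self, key: &str) -> Option<&String> {
        self.transitions
            .get(key)?
            .iter()
            .max_by_key(|(_, count)| **count)
            .map(|(next, _)| next)
    }
}

fn main() {
    let mut p = PatternPredictor::default();
    for _ in 0..3 {
        p.record("page1");
        p.record("page2");
    }
    assert_eq!(p.predict("page1"), Some(&"page2".to_string()));
    println!("predicts page2 after page1");
}
```

A prefetcher built on this would issue a background fetch for the predicted key whenever its transition count crosses a confidence threshold.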

Get-or-Fetch with Stampede Protection

Prevent thundering herd on cache misses:

let data = cache.get_or_fetch(
    CacheKey::new("user:123"),
    || async {
        // Only one request will call this function;
        // others wait for the result.
        fetch_user_from_database(123).await
    },
).await?;
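The mechanism behind get_or_fetch is single-flight memoization: the first caller for a key pays the backend cost, and subsequent callers reuse the stored result. A simplified synchronous sketch of that behavior (the real API is async and handles concurrent waiters; the type and field names here are illustrative):

```rust
use std::collections::HashMap;

/// Single-flight sketch: fetch each key at most once, reuse the cached value.
struct SingleFlightCache {
    values: HashMap<String, String>,
    fetch_calls: u32, // instrumentation: shows the fetch runs only once
}

impl SingleFlightCache {
    fn new() -> Self {
        Self { values: HashMap::new(), fetch_calls: 0 }
    }

    fn get_or_fetch<F: FnOnce() -> String>(&mut self, key: &str, fetch: F) -> String {
        if let Some(v) = self.values.get(key) {
            return v.clone(); // later callers reuse the stored result
        }
        self.fetch_calls += 1; // only the first caller hits the backend
        let v = fetch();
        self.values.insert(key.to_string(), v.clone());
        v
    }
}

fn main() {
    let mut cache = SingleFlightCache::new();
    for _ in 0..10 {
        let v = cache.get_or_fetch("user:123", || "alice".to_string());
        assert_eq!(v, "alice");
    }
    assert_eq!(cache.fetch_calls, 1); // ten lookups, one backend fetch
    println!("single fetch for 10 lookups");
}
```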

Performance Comparison

| Metric            | heliosdb-cache | heliosdb-caching | unified-cache   |
|-------------------|----------------|------------------|-----------------|
| Hit Rate          | 85-90% (ML)    | 75-85% (policy)  | 90-95% (hybrid) |
| Insert Latency    | 2-5μs          | 1-3μs            | 1-3μs           |
| Get Latency (hit) | <1μs           | <1μs             | <1μs            |
| Compression Ratio | N/A            | 0.3-0.7          | 0.3-0.7         |
| Memory Overhead   | ~5%            | ~2%              | ~3%             |

Testing Your Migration

1. Unit Tests

#[tokio::test]
async fn test_basic_cache() {
    let cache = UnifiedCacheManager::new(CacheConfig::default());
    let key = CacheKey::new("test");
    let data = vec![1, 2, 3];
    cache.insert(key.clone(), data.clone(), None).await.unwrap();
    let result = cache.get(&key).await.unwrap();
    assert_eq!(result, Some(data));
}

2. Integration Tests

#[tokio::test]
async fn test_eviction() {
    let config = CacheConfig {
        max_size: 1000,
        eviction_policy: EvictionPolicyType::Lru,
        ..Default::default()
    };
    let cache = UnifiedCacheManager::new(config);
    // Fill the cache beyond capacity (20 x 100 bytes = 2000 > max_size of 1000)
    for i in 0..20 {
        cache.insert(
            CacheKey::new(format!("key{}", i)),
            vec![0; 100],
            None,
        ).await.unwrap();
    }
    let stats = cache.get_stats();
    assert!(stats.evictions > 0);
    assert!(stats.current_size <= 1000);
}

3. Load Tests

use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn cache_benchmark(c: &mut Criterion) {
    let rt = tokio::runtime::Runtime::new().unwrap();
    let cache = UnifiedCacheManager::new(CacheConfig::default());
    c.bench_function("cache_insert", |b| {
        b.iter(|| {
            rt.block_on(async {
                cache.insert(
                    black_box(CacheKey::new("test")),
                    black_box(vec![1, 2, 3]),
                    None,
                ).await
            })
        })
    });
}

criterion_group!(benches, cache_benchmark);
criterion_main!(benches);

Troubleshooting

Common Issues

Issue 1: Compilation errors with EvictionPolicy

error: expected type, found trait `EvictionPolicy`

Solution: Use EvictionPolicyType for the enum, EvictionPolicyTrait for the trait.

Issue 2: Async/await errors

error: `get` is not a future

Solution: All cache operations are now async. Add .await:

// Before
let value = cache.get(key);
// After
let value = cache.get(&key).await?;

Issue 3: Missing TTL parameter

error: this function takes 3 arguments but 2 were supplied

Solution: Add TTL parameter (use None for no expiration):

// Before
cache.insert(key, value);
// After
cache.insert(key, value, None).await?;

Deprecation Timeline

  • v4.0 (Nov 2025): heliosdb-unified-cache introduced
  • v4.1 (Q1 2026): heliosdb-cache and heliosdb-caching marked deprecated
  • v4.2 (Q2 2026): Old packages removed from workspace
  • v5.0 (Q3 2026): Old packages fully removed

Support

For issues or questions:

  1. Check GitHub Issues
  2. Review USER_GUIDE_INDEX.md
  3. See unified cache README

Changelog

November 2, 2025 - Initial Release

Added:

  • UnifiedCacheManager combining both packages
  • Hybrid eviction strategy (ML + policy)
  • Intelligent prefetching
  • Stampede protection with get_or_fetch
  • Unified API for all caching features

Changed:

  • Renamed EvictionPolicy enum to EvictionPolicyType
  • All operations are now async
  • Added TTL parameter to insert()

Deprecated:

  • heliosdb-cache (use heliosdb-unified-cache)
  • heliosdb-caching (use heliosdb-unified-cache)