Migration Guide: Unified Cache
Overview
This guide covers migrating from heliosdb-cache and heliosdb-caching to the new unified heliosdb-unified-cache package.
Date: November 2, 2025
Versions Affected: v4.0+
Deprecation Timeline: heliosdb-cache and heliosdb-caching will be deprecated in v4.1 (Q1 2026)
Why Consolidate?
The original packages had overlapping functionality:
- heliosdb-cache: ML-based eviction, cache warming, partial caching
- heliosdb-caching: Policy-based eviction, compression, tiered caching
The unified package combines the best of both with:
- All eviction strategies in one place (ML + policies)
- Unified API for easier development
- Better performance through integrated optimization
- Reduced dependency complexity
- Hybrid eviction combining ML predictions with policy fallback
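The hybrid idea — trust the ML predictor only while it is performing well, otherwise fall back to a plain policy — can be illustrated with a minimal, self-contained sketch. All names here (`Entry`, `HybridEvictor`, `ml_keep_score`) are invented for illustration and are not the heliosdb-unified-cache API:

```rust
// Illustrative sketch of hybrid eviction: use the ML score while the model's
// rolling accuracy is above a threshold; otherwise fall back to LRU.
#[derive(Debug)]
struct Entry {
    key: String,
    last_access: u64,   // logical clock tick of last access
    ml_keep_score: f64, // predicted probability of re-access
}

struct HybridEvictor {
    ml_accuracy: f64, // rolling accuracy of the predictor
    accuracy_threshold: f64,
}

impl HybridEvictor {
    /// Pick the victim: lowest ML keep-score when the model is trustworthy,
    /// otherwise the least-recently-used entry.
    fn pick_victim<'a>(&self, entries: &'a [Entry]) -> Option<&'a Entry> {
        if self.ml_accuracy >= self.accuracy_threshold {
            entries
                .iter()
                .min_by(|a, b| a.ml_keep_score.partial_cmp(&b.ml_keep_score).unwrap())
        } else {
            entries.iter().min_by_key(|e| e.last_access)
        }
    }
}

fn main() {
    let entries = vec![
        Entry { key: "a".into(), last_access: 10, ml_keep_score: 0.9 },
        Entry { key: "b".into(), last_access: 30, ml_keep_score: 0.2 },
        Entry { key: "c".into(), last_access: 20, ml_keep_score: 0.6 },
    ];

    // Model trusted: evict the entry least likely to be re-accessed.
    let ml = HybridEvictor { ml_accuracy: 0.8, accuracy_threshold: 0.7 };
    assert_eq!(ml.pick_victim(&entries).unwrap().key, "b");

    // Model untrusted: fall back to plain LRU (oldest last_access).
    let lru = HybridEvictor { ml_accuracy: 0.4, accuracy_threshold: 0.7 };
    assert_eq!(lru.pick_victim(&entries).unwrap().key, "a");
}
```

The real strategy also retrains continuously; the sketch only shows the decision point where the fallback kicks in.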
Quick Start
Update Cargo.toml
```toml
[dependencies]
# Old
# heliosdb-cache = { path = "../heliosdb-cache" }
# heliosdb-caching = { path = "../heliosdb-caching" }

# New
heliosdb-unified-cache = { path = "../heliosdb-unified-cache" }
```
Migration from heliosdb-cache
Basic Usage
Before:
```rust
use heliosdb_cache::{CacheManager, CacheConfig};

let config = CacheConfig {
    max_size: 1024 * 1024 * 1024,
    enable_ml: true,
    ..Default::default()
};
let cache = CacheManager::new(config);
```
After:
```rust
use heliosdb_unified_cache::{UnifiedCacheManager, CacheConfig, EvictionPolicyType};

let config = CacheConfig {
    max_size: 1024 * 1024 * 1024,
    eviction_policy: EvictionPolicyType::Hybrid, // ML + policy fallback
    enable_ml: true,
    ..Default::default()
};
let cache = UnifiedCacheManager::new(config);
```
ML-Based Eviction
Before:
```rust
use heliosdb_cache::MlEvictionPredictor;

let predictor = MlEvictionPredictor::new(ml_config);
let score = predictor.predict_access_probability(&metadata, max_age);
```
After:
```rust
use heliosdb_unified_cache::{EvictionPolicyType, HybridConfig, HybridEvictionStrategy};

// Now integrated into the hybrid strategy
let strategy = HybridEvictionStrategy::new(
    ml_config,
    EvictionPolicyType::Lru, // Fallback policy
    max_size,
    HybridConfig::default(),
);
```
Cache Warming
Before:
```rust
use heliosdb_cache::CacheWarmer;

let warmer = CacheWarmer::new(config);
warmer.warm_cache(fetch_fn).await?;
```
After:
```rust
use heliosdb_unified_cache::{UnifiedCacheManager, FrequencyBasedWarming};

let cache = UnifiedCacheManager::new(config);

// Add warming strategies
let strategy = FrequencyBasedWarming::new(100);
cache.add_warming_strategy(Box::new(strategy)).await;

// Warm cache
cache.warm(|key| async move {
    // Fetch from backend
    fetch_from_backend(&key).await
}).await?;
```
Migration from heliosdb-caching
Policy-Based Eviction
Before:
```rust
use heliosdb_caching::{Cache, EvictionPolicy};

let cache = Cache::with_policy(EvictionPolicy::Lru, max_size);
```
After:
```rust
use heliosdb_unified_cache::{UnifiedCacheManager, CacheConfig, EvictionPolicyType};

let config = CacheConfig {
    max_size,
    eviction_policy: EvictionPolicyType::Lru,
    ..Default::default()
};
let cache = UnifiedCacheManager::new(config);
```
Compression
Before:
```rust
use heliosdb_caching::{Cache, CompressionType};

let cache = Cache::with_compression(
    CompressionType::Lz4,
    1024, // threshold
);
```
After:
```rust
use heliosdb_unified_cache::{CacheConfig, CompressionType, UnifiedCacheManager};

let config = CacheConfig {
    enable_compression: true,
    compression_type: CompressionType::Lz4,
    compression_threshold: 1024,
    ..Default::default()
};
let cache = UnifiedCacheManager::new(config);
```
Tiered Caching
Before:
```rust
use heliosdb_caching::TieredCache;

let cache = TieredCache::new(l1_size, l2_size, l3_size);
```
After:
```rust
use heliosdb_unified_cache::{CacheConfig, UnifiedCacheManager};

let config = CacheConfig {
    enable_tiered: true,
    l1_size: 256 * 1024 * 1024,
    l2_size: Some(1024 * 1024 * 1024),
    ..Default::default()
};
let cache = UnifiedCacheManager::new(config);
```
Key API Changes
Type Renames
| Old (heliosdb-cache/caching) | New (unified) |
|---|---|
| `CacheManager` | `UnifiedCacheManager` |
| `EvictionPolicy` (trait) | `EvictionPolicyTrait` |
| `EvictionPolicy` (enum) | `EvictionPolicyType` |
| `Cache` | `UnifiedCacheManager` |
Method Changes
| Old Method | New Method | Notes |
|---|---|---|
| `cache.get(key)` | `cache.get(&key).await` | Now async everywhere |
| `cache.insert(key, value)` | `cache.insert(key, value, ttl).await` | Added TTL parameter |
| `warmer.warm()` | `cache.warm(fetch_fn).await` | Integrated into manager |
Feature Mapping
heliosdb-cache Features
| Feature | Status | Migration Path |
|---|---|---|
| ML-based eviction | Included | Use `EvictionPolicyType::MlBased` or `Hybrid` |
| Cache warming | Included | Use `cache.add_warming_strategy()` |
| Partial caching | ⚠ Not implemented | Planned for v4.1 |
| Stampede protection | Included | Use `cache.get_or_fetch()` |
| Coherence | ⚠ Basic | Full coherence in v4.1 |
heliosdb-caching Features
| Feature | Status | Migration Path |
|---|---|---|
| LRU eviction | Included | `EvictionPolicyType::Lru` |
| LFU eviction | Included | `EvictionPolicyType::Lfu` |
| ARC eviction | Included | `EvictionPolicyType::Arc` |
| FIFO eviction | Included | `EvictionPolicyType::Fifo` |
| Compression (LZ4) | Included | `CompressionType::Lz4` |
| Compression (Zstd) | Included | `CompressionType::Zstd` |
| Tiered caching | Included | `enable_tiered: true` |
| Distributed (Redis) | Included | `redis-backend` feature |
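If you use the distributed Redis tier, the feature flag from the table above presumably needs to be enabled in Cargo.toml (the path mirrors the Quick Start snippet; verify the exact feature name against the crate's manifest):

```toml
[dependencies]
heliosdb-unified-cache = { path = "../heliosdb-unified-cache", features = ["redis-backend"] }
```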
Advanced Features
Hybrid Eviction Strategy
New feature combining ML predictions with policy fallback:
```rust
use heliosdb_unified_cache::{CacheConfig, EvictionPolicyType, UnifiedCacheManager};

let config = CacheConfig {
    eviction_policy: EvictionPolicyType::Hybrid,
    enable_ml: true,
    ..Default::default()
};
let cache = UnifiedCacheManager::new(config);

// Hybrid strategy will:
// 1. Use ML predictions when accuracy > threshold
// 2. Fall back to LRU when ML accuracy is low
// 3. Continuously learn from access patterns
```
Intelligent Prefetching
New feature for learning access patterns:
```rust
use heliosdb_unified_cache::UnifiedCacheManager;

let cache = UnifiedCacheManager::new(config);

// Prefetcher learns patterns automatically.
// When you access "page1", it predicts "page2" will be needed.
cache.get(&CacheKey::new("page1")).await?;

// Stats available
let stats = cache.get_prefetch_stats().await;
println!("Prefetch predictions: {}", stats.total_predictions);
```
Get-or-Fetch with Stampede Protection
Prevent thundering herd on cache misses:
```rust
let data = cache.get_or_fetch(
    CacheKey::new("user:123"),
    || async {
        // Only one request will call this function;
        // others will wait for the result.
        fetch_user_from_database(123).await
    },
).await?;
```
Performance Comparison
| Metric | heliosdb-cache | heliosdb-caching | unified-cache |
|---|---|---|---|
| Hit Rate | 85-90% (ML) | 75-85% (policy) | 90-95% (hybrid) |
| Insert Latency | 2-5μs | 1-3μs | 1-3μs |
| Get Latency (hit) | <1μs | <1μs | <1μs |
| Compression Ratio | N/A | 0.3-0.7 | 0.3-0.7 |
| Memory Overhead | ~5% | ~2% | ~3% |
Testing Your Migration
1. Unit Tests
```rust
#[tokio::test]
async fn test_basic_cache() {
    let cache = UnifiedCacheManager::new(CacheConfig::default());

    let key = CacheKey::new("test");
    let data = vec![1, 2, 3];

    cache.insert(key.clone(), data.clone(), None).await.unwrap();
    let result = cache.get(&key).await.unwrap();

    assert_eq!(result, Some(data));
}
```
2. Integration Tests
```rust
#[tokio::test]
async fn test_eviction() {
    let config = CacheConfig {
        max_size: 1000,
        eviction_policy: EvictionPolicyType::Lru,
        ..Default::default()
    };
    let cache = UnifiedCacheManager::new(config);

    // Fill cache beyond capacity
    for i in 0..20 {
        cache.insert(
            CacheKey::new(format!("key{}", i)),
            vec![0; 100],
            None,
        ).await.unwrap();
    }

    let stats = cache.get_stats();
    assert!(stats.evictions > 0);
    assert!(stats.current_size <= 1000);
}
```
3. Load Tests
```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn cache_benchmark(c: &mut Criterion) {
    let rt = tokio::runtime::Runtime::new().unwrap();
    let cache = UnifiedCacheManager::new(CacheConfig::default());

    c.bench_function("cache_insert", |b| {
        b.iter(|| {
            rt.block_on(async {
                cache.insert(
                    black_box(CacheKey::new("test")),
                    black_box(vec![1, 2, 3]),
                    None,
                ).await
            })
        });
    });
}

criterion_group!(benches, cache_benchmark);
criterion_main!(benches);
```
Troubleshooting
Common Issues
Issue 1: Compilation errors with EvictionPolicy
```text
error: expected type, found trait `EvictionPolicy`
```
Solution: Use `EvictionPolicyType` for the enum, `EvictionPolicyTrait` for the trait.
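The split can be seen in a self-contained sketch; only the naming convention mirrors the unified crate, the bodies below are local stand-ins:

```rust
// Stand-in types illustrating the naming split: the *trait* is
// EvictionPolicyTrait, the *enum* of built-in policies is EvictionPolicyType.
#[derive(Debug, PartialEq)]
enum EvictionPolicyType {
    Lru,
    Lfu,
}

trait EvictionPolicyTrait {
    fn name(&self) -> &'static str;
}

struct LruPolicy;
impl EvictionPolicyTrait for LruPolicy {
    fn name(&self) -> &'static str {
        "lru"
    }
}

fn main() {
    // Use the enum where a config value is expected...
    let selected = EvictionPolicyType::Lru;
    assert_eq!(selected, EvictionPolicyType::Lru);

    // ...and the trait where pluggable behavior is expected.
    let policy: Box<dyn EvictionPolicyTrait> = Box::new(LruPolicy);
    assert_eq!(policy.name(), "lru");
}
```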
Issue 2: Async/await errors
```text
error: `get` is not a future
```
Solution: All cache operations are now async. Add `.await`:
```rust
// Before
let value = cache.get(key);

// After
let value = cache.get(&key).await?;
```
Issue 3: Missing TTL parameter
```text
error: this function takes 3 arguments but 2 were supplied
```
Solution: Add the TTL parameter (use `None` for no expiration):
```rust
// Before
cache.insert(key, value);

// After
cache.insert(key, value, None).await?;
```
Deprecation Timeline
- v4.0 (Nov 2025): heliosdb-unified-cache introduced
- v4.1 (Q1 2026): heliosdb-cache and heliosdb-caching marked deprecated
- v4.2 (Q2 2026): Old packages removed from workspace
- v5.0 (Q3 2026): Old packages fully removed
Support
For issues or questions:
- Check GitHub Issues
- Review USER_GUIDE_INDEX.md
- See unified cache README
Changelog
November 2, 2025 - Initial Release
Added:
- UnifiedCacheManager combining both packages
- Hybrid eviction strategy (ML + policy)
- Intelligent prefetching
- Stampede protection with get_or_fetch
- Unified API for all caching features
Changed:
- Renamed EvictionPolicy enum to EvictionPolicyType
- All operations are now async
- Added TTL parameter to insert()
Deprecated:
- heliosdb-cache (use heliosdb-unified-cache)
- heliosdb-caching (use heliosdb-unified-cache)