BRANCH STORAGE DIAGRAMS - Part 1 of 2
Core Diagrams
HeliosDB Nano - Branch Storage Architecture Diagrams
Visual Architecture Reference
Version: 2.2 Created: November 18, 2025 Companion to: BRANCH_STORAGE_ARCHITECTURE.md
Diagram 1: Copy-on-Write Branching
Time: T0 (Branch Creation)
══════════════════════════════════════════════════════════════════
main:  ┌────────────────────────────────────────────────┐
       │ key1:v1  key2:v2  key3:v3  key4:v4  key5:v5    │
       └────────────────────────────────────────────────┘
                             ▲
                             │ References (no copy)
                             │
dev:   ┌─────────────────────┴──────────────────────────┐
       │ (empty - reads delegated to main)              │
       └────────────────────────────────────────────────┘
Storage: 5 values + 2 branches × 500 bytes = 5 values + 1KB
Time: T1 (First Write to dev)
══════════════════════════════════════════════════════════════════
main:  ┌────────────────────────────────────────────────┐
       │ key1:v1  key2:v2  key3:v3  key4:v4  key5:v5    │
       └────────────────────────────────────────────────┘
                 ▲              ▲           ▲         ▲
                 │ shared       │ shared    │ shared  │
                 │              │           │         │
dev:   ┌─────────┴───┬──────────┴───────────┴─────────┴──┐
       │ key2:v2'    │ (key1, key3, key4, key5 from main)│
       └─────────────┴───────────────────────────────────┘
              ▲
              └─ COPY-ON-WRITE: only modified data stored
Storage: 5 values + 1 modified value + 1KB = 6 values + 1KB
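The delegate-then-shadow behavior at T0/T1 can be sketched with a minimal in-memory model (illustrative only; HeliosDB's real storage layer is RocksDB-backed, and these class/method names are assumptions):

```python
class Branch:
    """Toy copy-on-write branch: stores only locally modified keys."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent      # branch reads delegate to
        self.data = {}            # only keys modified in this branch

    def put(self, key, value):
        # Copy-on-write: a write never touches the parent, it just
        # shadows the key in this branch's own map.
        self.data[key] = value

    def get(self, key):
        if key in self.data:              # modified locally?
            return self.data[key]
        if self.parent is not None:       # otherwise delegate upward
            return self.parent.get(key)
        return None

main = Branch("main")
for i in range(1, 6):
    main.put(f"key{i}", f"v{i}")

dev = Branch("dev", parent=main)          # T0: empty, reads fall through
assert dev.get("key2") == "v2"            # delegated to main

dev.put("key2", "v2'")                    # T1: first write to dev
assert dev.get("key2") == "v2'"
assert main.get("key2") == "v2"           # main unchanged
assert len(dev.data) == 1                 # only modified data stored
```

Only the shadow map grows with writes, which is exactly why storage at T1 is "5 values + 1 modified value" rather than a full copy.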
Time: T2 (Multiple Writes to dev)
══════════════════════════════════════════════════════════════════
main:  ┌────────────────────────────────────────────────┐
       │ key1:v1  key2:v2  key3:v3  key4:v4  key5:v5    │
       └────────────────────────────────────────────────┘
                              ▲           ▲
                              │ shared    │ shared
                              │           │
dev:   ┌────────┬────────┬────┴───┬───────┴────┬─────────┐
       │key1:v1'│key2:v2'│  key3  │    key4    │key5:v5' │
       └────────┴────────┴────────┴────────────┴─────────┘
           ▲        ▲                              ▲
           │        │                              │
           └────────┴──────────────────────────────┘
                      Modified data only
Storage: 5 values + 3 modified values + 1KB = 8 values + 1KB
Efficiency: 3/5 keys duplicated (60%); only modified keys consume extra storage

Diagram 2: Branch Hierarchy and Parent Chain
┌──────────────────────────────────────────────────────────────┐
│                    Branch Hierarchy DAG                      │
└──────────────────────────────────────────────────────────────┘
                     main (id:1)
                     snapshot:0
                          │
        ┌─────────────────┼─────────────────┐
        │                 │                 │
        ▼                 ▼                 ▼
   dev (id:2)       staging (id:3)    hotfix (id:4)
   snapshot:100     snapshot:100      snapshot:200
        │
        ├──────────────────┐
        ▼                  ▼
 feature-x (id:5)   feature-y (id:6)
 snapshot:150       snapshot:160
Parent Chain Resolution Example:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Read key1 from feature-x (id:5):
Step 1: Check feature-x (id:5, snapshot:150)
        └─> Not found

Step 2: Check parent dev (id:2, snapshot:100)
        └─> Found! Return value at snapshot 100
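The chain the resolver walks can be built from per-branch metadata by following parent pointers up to the root. A minimal sketch, assuming a registry keyed by branch id (field names are illustrative, not HeliosDB's real schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BranchMeta:
    id: int
    snapshot: int              # snapshot the branch was created at
    parent: Optional[int]      # parent branch id, None for the root

def parent_chain(registry, branch_id):
    """Return [(id, snapshot), ...] from the branch up to the root."""
    chain = []
    current = registry[branch_id]
    while current is not None:
        chain.append((current.id, current.snapshot))
        current = registry[current.parent] if current.parent is not None else None
    return chain

registry = {
    1: BranchMeta(1, 0, None),     # main
    2: BranchMeta(2, 100, 1),      # dev
    5: BranchMeta(5, 150, 2),      # feature-x
}
assert parent_chain(registry, 5) == [(5, 150), (2, 100), (1, 0)]
```

Caching this chain per branch avoids re-walking the registry on every read.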
Parent Chain: [(5, 150), (2, 100), (1, 0)]
                  ▲          ▲        ▲
                  │          │        │
               Current     Parent    Root

Diagram 3: Physical Key Layout in RocksDB
┌──────────────────────────────────────────────────────────────────┐
│                   RocksDB Key Space (Sorted)                     │
└──────────────────────────────────────────────────────────────────┘
Prefix    BranchID       UserKey       Timestamp     Data
═══════════════════════════════════════════════════════════════════
branch:meta:dev        {...metadata...}
branch:meta:main       {...metadata...}
branch:meta:staging    {...metadata...}
branch:registry        {...registry...}
branch:children:1      [2, 3, 4]
branch:children:2      [5, 6]
───────────────────────────────────────────────────────────────────
data: 0000000001 :users:001: 0000000100: Alice
data: 0000000001 :users:002: 0000000105: Bob
data: 0000000001 :users:003: 0000000110: Charlie
      └─────┬────┘ └────┬────┘ └────┬────┘
            │           │           │
      main branch    table:row   timestamp
data: 0000000002 :users:001: 0000000150: Alice_dev
data: 0000000002 :users:004: 0000000155: David
      └─────┬────┘ └────┬────┘ └────┬────┘
            │           │           │
       dev branch    table:row   timestamp
data: 0000000003 :users:002: 0000000120: Bob_staging
      └─────┬────┘ └────┬────┘ └────┬────┘
            │           │           │
    staging branch   table:row   timestamp
───────────────────────────────────────────────────────────────────
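The layout above can be sketched as a pair of key-encoding helpers: zero-padding the branch id and timestamp to a fixed width makes lexicographic (RocksDB) order match numeric order, so each branch's data is contiguous. A minimal sketch (helper names are illustrative):

```python
def data_key(branch_id: int, user_key: str, timestamp: int) -> bytes:
    """Encode a physical data key; fixed-width fields keep sort order sane."""
    return f"data:{branch_id:010d}:{user_key}:{timestamp:010d}".encode()

def branch_prefix(branch_id: int) -> bytes:
    # Every key for one branch shares this prefix, so range scans
    # and prefix-delete GC are straightforward.
    return f"data:{branch_id:010d}:".encode()

k = data_key(1, "users:001", 100)
assert k == b"data:0000000001:users:001:0000000100"
assert k.startswith(branch_prefix(1))

# Sorted order clusters by branch, then user key, then timestamp:
keys = sorted([
    data_key(2, "users:001", 150),
    data_key(1, "users:003", 110),
    data_key(1, "users:001", 100),
])
assert keys[0].startswith(branch_prefix(1))
assert keys[-1].startswith(branch_prefix(2))
```

Without the zero-padding, branch 10 would sort between branch 1 and branch 2 and break the clustering the benefits below depend on.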
Benefits:
✓ Sequential scans within a branch (same BranchID prefix)
✓ Efficient range queries
✓ Natural clustering for compaction
✓ Simple GC (delete by BranchID prefix)

Diagram 4: Read Path with Parent Resolution
┌─────────────────────────────────────────────────────────────────┐
│                 Branch Transaction Read Path                    │
└─────────────────────────────────────────────────────────────────┘
Start: tx.get(key="users:123") in branch "feature-x"
┌─────────────────────────────────────┐
│ 1. Build Physical Key               │
│    branch_id = 5 (feature-x)        │
│    user_key  = "users:123"          │
│    snapshot  = 150                  │
└──────────────┬──────────────────────┘
               │
               ▼
┌─────────────────────────────────────┐
│ 2. Check Current Branch             │
│    key = "data:5:users:123:150"     │
│    result = db.get(key)             │
└──────────────┬──────────────────────┘
               │
               ├─── Found? ────> Return Value ✓
               │
               └─── Not Found
               │
               ▼
┌─────────────────────────────────────┐
│ 3. Check Parent Chain               │
│    parent = dev (id:2, snap:100)    │
└──────────────┬──────────────────────┘
               │
               ▼
┌─────────────────────────────────────┐
│ 4. Check Parent Branch              │
│    key = "data:2:users:123:100"     │
│    result = db.get(key)             │
└──────────────┬──────────────────────┘
               │
               ├─── Found? ────> Return Value ✓
               │
               └─── Not Found
               │
               ▼
┌─────────────────────────────────────┐
│ 5. Check Root (main)                │
│    parent = main (id:1, snap:0)     │
│    key = "data:1:users:123:0"       │
│    result = db.get(key)             │
└──────────────┬──────────────────────┘
               │
               ├─── Found? ────> Return Value ✓
               │
               └─── Not Found ───> Return None
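The read path above, expressed against a flat get() interface (a sketch with a plain dict standing in for RocksDB; key strings follow the layout from Diagram 3, unpadded for readability):

```python
def branch_read(db, chain, user_key):
    """Walk the parent chain, first hit wins.

    chain: [(branch_id, snapshot), ...] from the current branch to the
    root, e.g. [(5, 150), (2, 100), (1, 0)] for feature-x.
    """
    for branch_id, snapshot in chain:
        key = f"data:{branch_id}:{user_key}:{snapshot}"
        value = db.get(key)
        if value is not None:
            return value              # found at this level: done
    return None                       # fell off the root: key absent

chain = [(5, 150), (2, 100), (1, 0)]
db = {"data:2:users:123:100": "from_dev"}   # only dev has the key
assert branch_read(db, chain, "users:123") == "from_dev"
assert branch_read(db, chain, "users:999") is None
```

Each level costs one point lookup, which is why latency below scales with chain depth.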
Performance:
- Current branch hit: ~0.1ms (1 RocksDB get)
- Parent hit: ~0.3ms (2-3 RocksDB gets)
- Not found: ~0.5ms (N RocksDB gets, N = chain depth)
Optimization: Bloom filters + metadata caching

Diagram 5: Write Path with Copy-on-Write
┌─────────────────────────────────────────────────────────────────┐
│                 Branch Transaction Write Path                   │
└─────────────────────────────────────────────────────────────────┘
Start: tx.put(key="users:123", value="new_value") in branch "dev"
┌─────────────────────────────────────┐
│ 1. Get Current Timestamp            │
│    commit_ts = 180                  │
└──────────────┬──────────────────────┘
               │
               ▼
┌─────────────────────────────────────┐
│ 2. Build Physical Key               │
│    branch_id = 2 (dev)              │
│    user_key  = "users:123"          │
│    timestamp = 180                  │
│    key = "data:2:users:123:180"     │
└──────────────┬──────────────────────┘
               │
               ▼
┌─────────────────────────────────────┐
│ 3. Check if Key Exists in Branch    │
│    (optional optimization)          │
└──────────────┬──────────────────────┘
               │
               ├─── Exists ───> Overwrite ────┐
               │                              │
               └─── New ───> Copy-on-Write ───┤
                                              │
                                              ▼
               ┌─────────────────────────────────┐
               │ 4. Write to RocksDB             │
               │    db.put(key, value)           │
               │    (buffered in write set)      │
               └──────────────┬──────────────────┘
                              │
                              ▼
               ┌─────────────────────────────────┐
               │ 5. Update Branch Stats          │
               │    modified_keys++              │
               │    storage_bytes += len         │
               └──────────────┬──────────────────┘
                              │
                              ▼
               ┌─────────────────────────────────┐
               │ 6. Commit Transaction           │
               │    Atomic write batch           │
               └─────────────────────────────────┘
Before Write:
main: data:1:users:123:100 -> "old_value"
dev:  (empty)

After Write:
main: data:1:users:123:100 -> "old_value"   (unchanged)
dev:  data:2:users:123:180 -> "new_value"   (new version)
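The before/after transition can be sketched as a single put that only appends a new timestamped version under the writing branch's prefix (a toy model with a dict; function and stat names are illustrative):

```python
def branch_put(db, stats, branch_id, user_key, value, commit_ts):
    """Copy-on-write put: write a new version, never touch the parent."""
    key = f"data:{branch_id}:{user_key}:{commit_ts}"
    db[key] = value
    # Bookkeeping that mirrors step 5 of the flowchart above.
    stats["modified_keys"] = stats.get("modified_keys", 0) + 1
    stats["storage_bytes"] = stats.get("storage_bytes", 0) + len(value)
    return key

db = {"data:1:users:123:100": "old_value"}     # main's version
stats = {}
branch_put(db, stats, 2, "users:123", "new_value", 180)

assert db["data:1:users:123:100"] == "old_value"   # main unchanged
assert db["data:2:users:123:180"] == "new_value"   # new version in dev
assert stats["modified_keys"] == 1
```

In the real system the put would go into a write batch and become visible atomically at commit; the dict stands in for that batch.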
Storage: +1 version, ~100 bytes overhead

Diagram 6: Three-Way Merge Process
┌─────────────────────────────────────────────────────────────────┐
│                       Three-Way Merge                           │
└─────────────────────────────────────────────────────────────────┘
Scenario: MERGE BRANCH dev INTO main
Timeline:    T0         T100         T150         T200
             │           │            │            │
main:    A ──────────┬── B ───────────┼──── C ─────┤
                     │                │            │
dev:                 └─> A ───> D ────┘            │
                  (branch)                  (merge point)
Merge Base: T100 (common ancestor)
Step 1: Find Changes Since Base
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Base (T100): key1=A, key2=B, key3=C
Source (dev@T150):
  - key1=A (unchanged)
  - key2=D (modified)
  - key4=X (new)

Target (main@T200):
  - key1=A (unchanged)
  - key2=C (modified)
  - key3=C (unchanged)
Step 2: Determine Merge Actions
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Key    Base   Source   Target   Action
────────────────────────────────────────────────────
key1   A      A        A        No change
key2   B      D        C        CONFLICT! ⚠
key3   C      -        C        Keep target
key4   -      X        -        Add from source
Step 3: Resolve Conflicts
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Strategy: branch_wins
  key2: Use source value (D)

Strategy: target_wins
  key2: Use target value (C)

Strategy: fail
  key2: Return error, require manual resolution
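The classification and resolution steps above can be sketched as one function over three key→value maps. Note the copy-on-write twist: `source` holds only the keys the branch modified, so a key absent from it (like key3) is inherited, not deleted. A sketch, with strategy names taken from the diagram:

```python
def three_way_merge(base, source, target, strategy="branch_wins"):
    """Merge source's changes (relative to base) into target."""
    result = dict(target)
    conflicts = []
    # Only keys present in `source` were modified in the branch;
    # everything else keeps the target's value.
    for key, s in source.items():
        b, t = base.get(key), target.get(key)
        if s == b:
            continue                  # source didn't actually change it
        if t == b:
            result[key] = s           # only source changed it: take source
        elif s == t:
            continue                  # both sides made the same change
        elif strategy == "branch_wins":
            result[key] = s
        elif strategy == "target_wins":
            pass                      # keep target's value
        else:                         # strategy == "fail"
            conflicts.append(key)
    if conflicts:
        raise ValueError(f"unresolved conflicts: {sorted(conflicts)}")
    return result

base   = {"key1": "A", "key2": "B", "key3": "C"}
source = {"key1": "A", "key2": "D", "key4": "X"}
target = {"key1": "A", "key2": "C", "key3": "C"}

merged = three_way_merge(base, source, target, "branch_wins")
assert merged == {"key1": "A", "key2": "D", "key3": "C", "key4": "X"}
```

Running the same inputs with `target_wins` keeps key2=C, and `fail` raises on key2, matching the strategy table above.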
Step 4: Apply Merge
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Result (main after merge with branch_wins):
  key1=A (unchanged)
  key2=D (from dev)
  key3=C (unchanged)
  key4=X (from dev)

Diagram 7: Garbage Collection
┌─────────────────────────────────────────────────────────────────┐
│                  Garbage Collection Process                     │
└─────────────────────────────────────────────────────────────────┘
Initial State:
══════════════════════════════════════════════════════════════════
Branches:
  main (id:1) [Active]
  dev  (id:2) [Active]
  old  (id:3) [Dropped]
Data:
  data:1:key1:100 -> v1  (main)
  data:1:key2:105 -> v2  (main)
  data:2:key1:120 -> v3  (dev)
  data:3:key1:90  -> v4  (old - dropped)
  data:3:key3:95  -> v5  (old - dropped)
GC Step 1: Find Dropped Branches
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Query:  SELECT * FROM branches WHERE state = 'Dropped'
Result: [old (id:3)]
GC Step 2: Check Version Reachability
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
For each version in branch 'old':
data:3:key1:90
 ├─ Check if any active branch references this version
 │   • main: No (has newer version at T100)
 │   • dev:  No (has own version at T120)
 └─> UNREACHABLE ✓ Delete

data:3:key3:95
 ├─ Check if any active branch references this version
 │   • main: No (doesn't have key3)
 │   • dev:  No (doesn't have key3)
 └─> UNREACHABLE ✓ Delete
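The reachability logic above can be sketched as a small collector: a dropped branch's versions are deletable once no active branch could delegate a read to them, and the key layout makes the deletion itself a prefix delete. A simplified sketch (it only checks direct children for active descendants, which suffices here since 'old' has none; names are illustrative):

```python
def collect_garbage(db, branches, children):
    """branches: id -> state; children: id -> child ids;
    db: physical key string -> value. Returns deleted keys."""
    dropped = {bid for bid, state in branches.items() if state == "Dropped"}
    # A dropped branch is collectable if no active child still
    # delegates reads to it (simplified: direct children only).
    deletable = {bid for bid in dropped
                 if not any(branches.get(c) == "Active"
                            for c in children.get(bid, []))}
    doomed = [k for k in db
              if any(k.startswith(f"data:{bid}:") for bid in deletable)]
    for k in doomed:
        del db[k]                     # prefix delete, as in Diagram 3
    return sorted(doomed)

db = {
    "data:1:key1:100": "v1", "data:1:key2:105": "v2",
    "data:2:key1:120": "v3",
    "data:3:key1:90": "v4", "data:3:key3:95": "v5",
}
branches = {1: "Active", 2: "Active", 3: "Dropped"}
children = {1: [2, 3], 3: []}

gone = collect_garbage(db, branches, children)
assert gone == ["data:3:key1:90", "data:3:key3:95"]
assert "data:1:key1:100" in db and "data:2:key1:120" in db
```

A full implementation would also walk deeper descendants and defer deletion while snapshots still pin the dropped branch.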
GC Step 3: Delete Unreachable Data
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Delete operations:
  • data:3:key1:90
  • data:3:key3:95
  • branch:meta:old
  • branch:children:3
GC Step 4: Compact Storage
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

RocksDB compaction:
  • Merge SST files
  • Remove tombstones
  • Reclaim disk space
Final State: