The programmable Postgres data-plane. WASM-extensible, signed-plugin runtime, time-travel replay, zero-downtime version upgrades, and a built-in admin console.
24 feature modules + the v0.4 platform layer. Open source under AGPL-3.0.
v0.4.0 is the largest release in HeliosProxy's history. The proxy is no longer a router with feature flags — it is a programmable data-plane with a real WASM runtime, signed plugins distributed via OCI, time-travel replay over a transaction journal, a zero-downtime PostgreSQL major-version upgrade orchestrator, and a built-in admin web console. All 24 feature modules from v0.3.x remain.
Wasmtime executes signed plugins inside the proxy. Five hook points — Authenticate, Route, PreQuery, PostQuery, Validate — each can rewrite, augment, or block. RouteResult::Block denies queries before they hit a backend.
Plugin signature verification at load time. SignatureVerifier wired into the runtime config. Distribution via .tar.gz OCI artefacts pulled from any OCI registry — sign, push, deploy.
Replay any window of historical traffic against the current cluster, a standby, or a candidate. Per-call credential overrides; admin /api/replay; built-in UI form. TransactionJournal is auto-attached in ServerState.
Real SQL stage bodies execute on a tick schedule. Shadow-execution comparator runs traffic against both the old and new backend, diffing results before the cut. The hardest operational task on Postgres, orchestrated by the proxy.
Real PG connection pool, pg_is_in_recovery() polling, pg_promote + WAL probe sync-wait, real statement replay, real cursor recreation, real session migration (SET / PREPARE / CREATE TEMP TABLE). Oracle-grade TAF/TAC with a real backend behind it.
Static admin web UI at /. Topology, plugins, chaos mode, shadow execution, time-travel replay — all first-class panels. No separate admin tool to deploy or secure.
As of v0.4.0, HeliosProxy is licensed under AGPL-3.0-only (previously SSPL-1.0). The change aligns HeliosProxy with HeliosDB Nano and removes ambiguity for managed-service deployment scenarios.
Plugins are not configuration. They are real WebAssembly modules executing inside the proxy with sandboxed wasmtime, fuel-metered execution, and bounded memory. Five hook points cover authentication, routing, query lifecycle, and validation. Authoring is one Rust crate; deployment is one signed OCI artefact.
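To make the hook model concrete, here is a minimal sketch of a Route-hook body, assuming a hypothetical `helios_plugin_sdk` wrapper crate — the crate name, `plugin_hook` macro, `QueryContext`, and `RouteResult::Default` are illustrative, not the shipped ABI (only `RouteResult::Block { reason }` is documented below):

```rust
// Hypothetical SDK wrapper around the plugin ABI; names are
// illustrative, not the shipped crate.
use helios_plugin_sdk::{plugin_hook, QueryContext, RouteResult};

#[plugin_hook(Route)]
fn route(ctx: &QueryContext) -> RouteResult {
    // Hard-reject multi-statement queries at the routing stage;
    // the client receives a structured reason.
    if ctx.sql().contains(';') {
        return RouteResult::Block {
            reason: "multi-statement queries are not allowed".into(),
        };
    }
    // Otherwise defer to the proxy's default routing.
    RouteResult::Default
}
```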
Authenticate, Route, PreQuery, PostQuery, Validate. Each can short-circuit, rewrite, deny, or augment.
SignatureVerifier enforces signatures at load time; cargo helios-plugin sign is the publisher path.
.tar.gz artefacts pulled from any OCI registry; the loader handles unpacking and signature verification end-to-end.
Host imports such as env.sha256_hex let plugins do real work without escaping the sandbox.
RouteResult::Block denies a query at the routing stage and returns a structured reason to the client.
# 1. Author the plugin (Rust crate, compile to wasm32-wasi)
cargo build --release --target wasm32-wasi
# 2. Sign with your Ed25519 key
cargo helios-plugin sign \
--key ./signing.key \
--input target/wasm32-wasi/release/auth_plugin.wasm
# 3. Push to any OCI registry
oras push registry.example.com/my-org/auth-plugin:v1 \
./auth_plugin.wasm.tar.gz
# 4. Reference in heliosproxy.toml
[[plugins]]
name = "auth"
source = "oci://registry.example.com/my-org/auth-plugin:v1"
hooks = ["Authenticate", "PreQuery"]
verify_with = "./trusted-keys.toml"
Every transaction the proxy sees is journalled. Replay any window of historical traffic against the current cluster, a standby, or a brand-new candidate — with optional per-call credential overrides for security validation and migration rehearsal. The shadow-execution comparator runs the same traffic against two backends side-by-side and diffs the result, catching divergence before a cut.
TransactionJournal is auto-attached to ServerState; no separate storage to provision.
POST /api/replay for automation; the admin UI ships a built-in form for operator-driven replays.
Per-call credential overrides let traffic captured as user A be replayed as user B. Useful for security validation and migration rehearsal.
# Replay last hour against a candidate node
curl -X POST http://proxy:9090/api/replay \
-H "Content-Type: application/json" \
-d '{
"from": "2026-04-26T10:00:00Z",
"to": "2026-04-26T11:00:00Z",
"target": "candidate-pg17",
"as_user": "audit_replay",
"shadow": true,
"diff_mode": "rows"
}'
# Returns: replay_id, statements_replayed,
# diff_count, divergent_query_ids[]
The hardest operational task on Postgres — major-version upgrades — is now coordinated by the proxy. Real SQL stage bodies execute on a tick schedule against both the old and new instances. The chaos-mode admin endpoint injects controlled failures during rehearsal. The shadow-execution comparator validates that the new backend produces equivalent results before traffic moves.
/api/chaos + the UI panel inject failure scenarios so you find issues in pre-prod, not in the cut window.
[upgrade]
enabled = true
from_backend = "pg12-prod"
to_backend = "pg17-candidate"
shadow_window = "24h"
cut_threshold = 0 # max diff rows
[[upgrade.stages]]
name = "backfill"
tick = "30s"
sql = """
INSERT INTO pg17.events
SELECT * FROM pg12.events
WHERE id > (SELECT COALESCE(MAX(id), 0) FROM pg17.events)
LIMIT 10000
"""
[[upgrade.stages]]
name = "verify"
tick = "5m"
sql = "SELECT COUNT(*) FROM pg17.events"
A static admin web UI ships with the proxy at the root path. No separate admin service to deploy, no extra service to secure, no second auth model to keep in sync. Topology, plugins, chaos mode, shadow execution, and time-travel replay are all first-class panels.
Auth: shares the proxy's existing admin auth model (JWT or API key). The UI is static HTML/JS served from the same binary — no Node.js, no build step, no second container.
# Topology
GET /api/topology
GET /api/health
# Plugins
GET /plugins
POST /plugins/{id}/reload
POST /plugins/{id}/disable
# Chaos mode
POST /api/chaos
GET /api/chaos/active
# Time-travel replay
POST /api/replay
GET /api/replay/{id}
# Shadow execution
POST /api/shadow/start
GET /api/shadow/{id}/diff
Earlier this month: pool-checkout 28–37% faster, 3.1× faster metrics reads, zero-allocation protocol parsing, concurrent L1 cache hits. All of it carries forward into v0.4.0. Full v0.3.1 release notes →
Idle-connection pop and semaphore acquire happen outside the map lock. Per-node state is held behind a cheap cloneable Arc<Semaphore>, so concurrent checkouts to different backends no longer contend.
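A minimal sketch of the pattern with tokio's Semaphore (illustrative types and names, not the proxy's actual internals):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use tokio::sync::{OwnedSemaphorePermit, Semaphore};

struct Pool {
    // The map lock guards only the lookup; per-node permits sit
    // behind a cheaply cloneable Arc.
    per_node: Mutex<HashMap<String, Arc<Semaphore>>>,
}

impl Pool {
    async fn checkout(&self, node: &str) -> Option<OwnedSemaphorePermit> {
        // Hold the map lock just long enough to clone the Arc.
        let sem = self.per_node.lock().unwrap().get(node)?.clone();
        // Acquire with the map lock already released, so checkouts
        // against other backends never contend here.
        sem.acquire_owned().await.ok()
    }
}
```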
Acquires, timeouts, connections created/closed, recycles, and validation failures are atomic. Reading the full pool.metrics() snapshot is sub-15 ns and never contends with checkouts.
Prepared-statement parameter values are handled as reference-counted slices into the original protocol buffer — no per-parameter heap allocation during parse. C-string reads use a single scan rather than incremental buffer growth.
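The same idea, sketched with the bytes crate (illustrative; the proxy's actual parser types may differ):

```rust
use bytes::Bytes;

// Each parameter value becomes a reference-counted view into the
// original Bind-message buffer: Bytes::slice bumps a refcount
// instead of copying the payload.
fn parse_params(msg: &Bytes, offsets: &[(usize, usize)]) -> Vec<Bytes> {
    offsets
        .iter()
        .map(|&(start, len)| msg.slice(start..start + len))
        .collect()
}
```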
Cache hits take a read lock and bump an atomic per-entry access counter, so many threads can hit the same cached query in parallel without serialising. Covered by a 16-thread × 500-iteration regression test.
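In sketch form, assuming a simplified cache keyed by query fingerprint (illustrative types, not the proxy's internals):

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::RwLock;

struct Entry {
    result: Vec<u8>,
    hits: AtomicU64, // interior mutability: bumped under a read lock
}

struct L1Cache {
    map: RwLock<HashMap<u64, Entry>>,
}

impl L1Cache {
    fn get(&self, fingerprint: u64) -> Option<Vec<u8>> {
        // Shared read lock: many threads can hit the same entry in
        // parallel; only the per-entry counter is written, atomically.
        let guard = self.map.read().unwrap();
        guard.get(&fingerprint).map(|e| {
            e.hits.fetch_add(1, Ordering::Relaxed);
            e.result.clone()
        })
    }
}
```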
Method: cargo bench --bench pooling -- --quick, same machine, single-threaded, criterion quick mode. Directional indicator of request-path cost — the pool is one component; full-proxy QPS depends on backend, workload, and network.
HeliosProxy transforms HeliosDB into a production-ready platform with connection pooling, intelligent caching, rate limiting, and enterprise resilience — all in one component.
| Challenge | Without Proxy | With HeliosProxy |
|---|---|---|
| Connection limits | App manages pools | Automatic pooling |
| Query caching | Manual Redis / Memcached | Built-in 3-tier cache |
| Rate limiting | External service | Native support |
| Multi-tenancy routing | App logic | Automatic routing |
| Circuit breaking | Manual implementation | Built-in resilience |
| GraphQL API | Separate gateway | Auto-generated |
| Authentication | Custom middleware | JWT, OAuth, LDAP, API Keys |
Three modes to match your workload. Achieve up to 100:1 connection multiplexing, turning 5,000 client connections into just 50 server connections. Eliminate connection storms, reduce database load, and scale your application layer independently.
# Session mode: 1:1 mapping
[pool]
mode = "session"
max_connections = 500
# Transaction mode: 10:1 multiplexing
[pool]
mode = "transaction"
max_connections = 1000
server_connections = 100
# Statement mode: 100:1 multiplexing
[pool]
mode = "statement"
max_connections = 5000
server_connections = 50
Intelligent 3-tier distributed caching system that automatically promotes hot data and evicts cold entries. Reduce database load by up to 95% with sub-millisecond cache hits.
Automatic cache invalidation on writes. Configurable exclusion patterns for INSERT, UPDATE, and DELETE statements.
-- This query hits L1 cache (sub-ms)
SELECT * FROM users
WHERE id = 42;
-- Similar query hits L3 semantic cache
SELECT * FROM users
WHERE user_id = 42;
# Cache configuration
[cache]
enabled = true
[cache.l1]
size = "1GB"
ttl = "60s"
eviction = "lru"
[cache.l2]
type = "redis"
hosts = ["redis-1:6379", "redis-2:6379"]
ttl = "5m"
[cache.semantic]
enabled = true
similarity_threshold = 0.95
Automatic GraphQL schema generation from your database tables. No code generation step, no manual schema definitions — HeliosProxy introspects your database and generates a full GraphQL API in real time.
# Auto-generated from SQL schema
query {
users(where: { active: true }, limit: 10) {
id
email
orders(orderBy: { createdAt: DESC }) {
id
total
items {
product_name
quantity
}
}
}
}
# Enable in config:
# [graphql]
# enabled = true
# endpoint = "/graphql"
# introspection = true
# max_depth = 10
Extend HeliosProxy with custom WebAssembly plugins. Write your logic in Rust, Go, or any language that compiles to WASM, and inject it into the query pipeline at any hook point.
use wasm_bindgen::prelude::*;
// Custom query rewriting plugin.
// `Context` is assumed to be a host-provided binding that exposes
// request metadata such as the resolved tenant id.
#[wasm_bindgen]
pub fn rewrite_query(
query: &str,
context: &Context
) -> String {
// Add tenant filter to all queries
if !query.contains("tenant_id") {
query.replace(
"WHERE",
&format!(
"WHERE tenant_id = {} AND",
context.tenant_id
)
)
} else {
query.to_string()
}
}
# Plugin configuration
# [plugins.wasm]
# name = "tenant-filter"
# path = "/plugins/tenant_filter.wasm"
# hook = "query_rewrite"
HeliosProxy automatically classifies and routes queries to the optimal backend. Lag-aware read replicas, circuit breaker protection, and workload-based routing ensure every query hits the best target.
# Workload-based routing
[routing]
schema_aware = true
[routing.workload]
enabled = true
# OLTP queries -> primary
oltp_patterns = [
"SELECT * FROM users WHERE id",
"INSERT INTO orders"
]
oltp_target = "primary"
# OLAP queries -> analytics replica
olap_patterns = [
"SELECT.*GROUP BY",
"SELECT.*ORDER BY.*LIMIT"
]
olap_target = "analytics"
# Vector queries -> vector node
vector_patterns = [
"ORDER BY.*<=>",
"embedding.*<->"
]
vector_target = "vector-node"
# Circuit breaker
[circuit_breaker]
enabled = true
failure_threshold = 5
reset_timeout = "30s"
half_open_requests = 3
HeliosProxy inspects and optimizes queries before they reach the database. Automatic rewriting improves performance without any application code changes.
# AI workload optimization
[ai_cache]
enabled = true
# Conversation context cache
conversation_context_ttl = "30m"
conversation_context_size = "512MB"
# RAG chunk cache
rag_chunk_cache_size = "2GB"
rag_chunk_ttl = "1h"
# Semantic query cache
semantic_cache_enabled = true
similarity_threshold = 0.95
# Query cache control
[cache.query]
enabled = true
max_result_size = "10MB"
exclude_patterns = [
"INSERT",
"UPDATE",
"DELETE"
]
Centralized authentication before database access. Support for JWT, OAuth 2.0, LDAP, and API keys — all validated at the proxy layer before any query reaches the database.
[auth]
enabled = true
# JWT validation
[auth.jwt]
enabled = true
issuer = "https://auth.example.com"
audience = "heliosdb"
jwks_url = "https://auth.example.com/.well-known/jwks.json"
# Extract tenant from JWT claims
tenant_claim = "org_id"
role_claim = "db_role"
# API key validation
[auth.api_key]
enabled = true
header = "X-API-Key"
# LDAP integration
[auth.ldap]
enabled = true
server = "ldap://ldap.example.com:389"
base_dn = "dc=example,dc=com"
Flexible tenant isolation strategies to match your architecture. From shared-database with Row-Level Security to dedicated databases per tenant — HeliosProxy handles routing, resource limits, and isolation transparently.
[tenancy]
enabled = true
tenant_header = "X-Tenant-ID"
default_tenant = "public"
# Enterprise tenant: dedicated DB
[[tenancy.tenant]]
id = "enterprise-1"
database = "enterprise_1"
pool_size = 100
cache_size = "2GB"
# Startup tenant: shared DB
[[tenancy.tenant]]
id = "startup-tier"
database = "shared"
pool_size = 10
cache_size = "256MB"
rate_limit_qps = 100
Track and analyze every query pattern flowing through HeliosProxy. Identify slow queries, monitor cache efficiency, and understand your workload — all with built-in analytics.
pool.metrics() snapshots read in sub-15 ns and never contend with checkouts (v0.3.1).
-- Top 20 slowest queries
SELECT query_pattern,
avg_duration,
call_count
FROM heliosproxy_query_stats
ORDER BY avg_duration DESC
LIMIT 20;
-- Most frequent queries + cache hits
SELECT query_pattern,
call_count,
cache_hit_rate
FROM heliosproxy_query_stats
ORDER BY call_count DESC
LIMIT 20;
-- Overall cache efficiency
SELECT
SUM(cache_hits) AS hits,
SUM(cache_misses) AS misses,
ROUND(
SUM(cache_hits)::float /
(SUM(cache_hits) + SUM(cache_misses)),
3
) AS hit_rate
FROM heliosproxy_cache_stats;
Protect your database from overload with granular rate limiting at every level. Global limits, per-tenant quotas, per-user throttling, and per-query cost limits — all enforced at the proxy layer.
[rate_limit]
enabled = true
# Global limits
global_qps = 10000
global_connections = 5000
# Enterprise customer
[[rate_limit.tenant]]
id = "enterprise-customer"
qps = 2000
connections = 500
# Free tier
[[rate_limit.tenant]]
id = "free-tier"
qps = 100
connections = 10
burst = 20
# Client-side handling
import time

try:
    cursor.execute("SELECT * FROM users")
except HeliosError as e:  # error type surfaced by the driver
    if e.code == "RATE_LIMIT_EXCEEDED":
        time.sleep(e.retry_after)
Oracle-grade TAF+TAC functionality, merged into a single capability for PostgreSQL. HeliosProxy journals all statements within a transaction and automatically replays them on a new node after failover, with full result verification to guarantee consistency.
# Transaction Replay configuration
[transaction_replay]
enabled = true
verify_results = true
statement_timeout_ms = 30000
retry_on_error = true
max_retries = 3
wait_for_wal_sync = true
max_wal_lag_bytes = 0
[transaction_replay.journal]
max_entries = 10000
max_size = "64MB"
-- Transaction replayed after failover
BEGIN;
INSERT INTO orders (user_id, total)
VALUES (42, 99.99);
SAVEPOINT sp1;
UPDATE inventory SET qty = qty - 1
WHERE product_id = 7;
COMMIT;
-- checksum verified: OK
-- rows matched: OK
Saves and restores cursor state after failover, allowing seamless resumption of result set iteration without losing position. Applications iterating through large result sets survive node failures transparently.
-- Declare a cursor over a large table
DECLARE user_cursor CURSOR FOR
SELECT * FROM users
ORDER BY created_at;
-- Fetch first 500 rows
FETCH 500 FROM user_cursor;
-- position: 500
-- Node fails! HeliosProxy restores:
-- 1. Re-declare cursor on new node
DECLARE user_cursor CURSOR FOR
SELECT * FROM users
ORDER BY created_at;
-- 2. Skip to saved position
MOVE 500 IN user_cursor;
-- 3. App continues seamlessly
FETCH 100 FROM user_cursor;
-- returns rows 501-600
Preserves and restores complete session state during failover, including SET parameters, prepared statements, and optionally temporary tables. Applications maintain their full session context when transparently moved to a new node.
# Session State Migration
[session_migration]
enabled = true
max_sessions = 10000
migrate_temp_tables = false
# Tracked session parameters
# (automatically captured on SET)
# - timezone
# - search_path
# - client_encoding
# - datestyle
# - intervalstyle
# - application_name
# - any custom SET variables
# Example: after failover, proxy runs:
# SET timezone TO 'America/New_York'
# SET search_path TO myapp, public
# SET client_encoding TO 'UTF8'
# SET datestyle TO 'ISO, MDY'
# SET app.tenant_id TO '42'
# PREPARE my_query (integer)
# AS SELECT * FROM users
# WHERE id = $1
Connect to any node — primary or standby — and HeliosProxy automatically routes write operations to the current primary. Applications never need to track topology changes, enabling true connect-anywhere simplicity with zero client-side routing logic.
# Transparent Write Routing
[transparent_write_routing]
enabled = true
sync_mode = "sync"
forward_timeout_ms = 5000
[transparent_write_routing.pool]
max_connections = 50
idle_timeout_ms = 60000
health_check_interval_ms = 5000
-- App connects to ANY node (standby)
-- Reads stay local, writes auto-route
SELECT * FROM orders
WHERE user_id = 42;
-- → executes on standby (local)
INSERT INTO orders (user_id, total)
VALUES (42, 149.99);
-- → auto-forwarded to primary
-- → sync: confirmed on standby
24 v0.3 connection-routing modules + 22 v0.4 platform modules (plugin-host extensions, admin surface, first-party plugins, companion projects). Each module is independently activated via a compile-time feature flag. Enable only what your deployment requires — no unused code ships in your binary.
Session, Transaction, Statement modes with up to 100:1 multiplexing and prepared statement forwarding.
Round-robin, least-connections, or latency-based distribution. Automatic read/write splitting.
Configurable health-check queries, failure thresholds, and automatic node removal/recovery.
PostgreSQL extended query protocol pipelining. Batches Parse, Bind, Execute to reduce round trips. Parameter values parsed as zero-copy slices into the protocol buffer (v0.3.1).
Auto-coalesces individual INSERTs into multi-row batches for higher write throughput.
Automatic failover with promotion policies. Prefers sync standbys, ranks by replication lag.
Oracle-grade TAF+TAC. Journals and replays in-flight transactions on the new primary after failover.
Preserves SET parameters, prepared statements, search paths, and advisory locks across failover.
Saves cursor state and position, recreates on new nodes. FETCH NEXT continues seamlessly.
Buffers incoming queries during failover and drains to the new primary. Increased latency, not errors.
Pluggable topology discovery: PostgreSQL polling, HeliosDB native events, or manual via Admin API.
Write-ahead journal for in-flight transactions with statement-level granularity and parameter capture.
Three-tier result cache: L1 hot (sub-μs), L2 warm (TTL-based), L3 semantic (normalized matching).
Embed routing directives in SQL comments: /*+ route=primary */, /*+ route=nearest */.
Routes reads to replicas within a configurable lag threshold. Includes read-your-writes consistency.
Rule-based SQL transformations: add hints, inject filters, redirect tables, enforce naming conventions.
Fingerprinting, slow query log, intent classification (OLTP/analytics/vector), N+1 detection.
Data temperature classification (hot/warm/cold), workload-type detection, automatic schema discovery.
JWT, OAuth 2.0, LDAP, API key validation, and external identity claim-to-role mapping.
Token bucket, sliding window, and concurrency limiting. Per-tenant, per-user, per-IP, per-query cost.
Adaptive failure detection with closed/open/half-open states and automatic recovery probes.
Tenant identification, pool isolation, schema routing, and per-tenant resource quotas.
Extend with sandboxed WebAssembly plugins. Hot-reloadable, memory-limited, controlled host API.
Auto-generated GraphQL API from schema with DataLoader batching and query validation.
In-process detector for rate spikes (z-score against rolling EWMA), credential-stuffing bursts, six classes of SQL-injection patterns, and novel query shapes. No external SIEM — events stream over /anomalies.
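For intuition, a minimal Rust sketch of a z-score check against a rolling EWMA of the query rate (illustrative, not the detector's exact code):

```rust
// Rolling spike detector in the spirit of the module's
// z-score-against-EWMA check.
struct RollingRate {
    mean: f64,
    var: f64,
    alpha: f64, // smoothing factor, e.g. 0.1
}

impl RollingRate {
    /// Score the latest per-interval query rate against the
    /// rolling estimate, then fold it into the estimate.
    fn observe(&mut self, rate: f64) -> f64 {
        let z = (rate - self.mean) / self.var.sqrt().max(1e-9);
        // Exponentially weighted updates of mean and variance.
        let diff = rate - self.mean;
        self.mean += self.alpha * diff;
        self.var = (1.0 - self.alpha) * (self.var + self.alpha * diff * diff);
        z
    }
}
```

A spike would fire when the returned z-score stays above some threshold (say 4) for consecutive intervals; the actual thresholds are the detector's business.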
Cache-first proxy for geo-distributed deployments. Local LRU+TTL+version cache on every edge; home broadcasts table-scoped invalidations on writes. Last-write-wins, no consensus.
env.kv_get/kv_set/kv_delete wasmtime imports. Per-plugin namespaced state that survives across hook invocations. Counters, budgets, signatures persist without per-call data round-trips.
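A sketch of how a plugin might declare these imports — the signatures here are hypothetical; the shipped calling convention may differ:

```rust
// Hypothetical import signatures for the env.kv_* host functions.
#[link(wasm_import_module = "env")]
extern "C" {
    // Copies the value for `key` into `out`; returns bytes written,
    // or a negative status when absent or the buffer is too small.
    fn kv_get(key_ptr: *const u8, key_len: u32, out_ptr: *mut u8, out_cap: u32) -> i64;
    fn kv_set(key_ptr: *const u8, key_len: u32, val_ptr: *const u8, val_len: u32) -> i64;
    fn kv_delete(key_ptr: *const u8, key_len: u32) -> i64;
}
```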
env.sha256_hex host import backed by the audited sha2 crate. Plugins compute real SHA-256 without embedding the algorithm (~25 KiB saved per .wasm).
Optional Ed25519 trust root: drop *.pub files in a directory and every loaded .wasm requires a verifying .sig sidecar. Interoperates with openssl/signify.
.tar.gz distribution format with manifest.json + plugin.wasm + optional plugin.sig. The proxy loader detects .gz extension and ingests directly.
RouteResult::Block { reason } ABI variant for hard-rejecting queries from a Route-hook plugin. Wire-compatible with PreQueryResult::Block — clients see one consistent error format.
[plugins].trust_root = "/path/to/keys" in proxy.toml. Auto-attaches the SignatureVerifier when set; defaults to permissive when unset (dev-loop ergonomics preserved).
Single embedded HTML dashboard at / and /ui on the admin port. Ten panels (Nodes, Topology, Plugins, Anomalies, Edge Mode, Chaos Mode, Shadow Execution, Time-Travel Replay, SQL, Traffic). Auto-refresh every 5 s. No build step.
Eight new endpoints surface every v0.4 capability: /topology, /plugins, /anomalies, /api/edge*, /api/chaos, /api/shadow, /api/replay. Operator + UI consume the same JSON.
Per-tenant query cost budgets (minute / hour / day windows). Sliding window stored in the plugin's KV namespace; thresholds per tenant via the operator's TenantQuota CRD.
Detects LLM-generated SQL via application_name keywords, generated-by markers, opt-in attributes. Best-effort agent_id + model_id extraction. Persists per-request tags into KV for downstream plugins.
Per-(agent, model) cost gating for AI traffic. Sliding-window (minute + day) tracked in KV; blocks when budget exhausted. Cost model assumes ~4 bytes per token.
Refuses dangerous SQL from AI traffic: DROP/TRUNCATE, DELETE/UPDATE without WHERE, SELECT without LIMIT against large tables, missing tenant_id filter. Self-contained — useful even without ai-classifier deployed.
Routes pgvector top-K queries (<->, <#>, <=> in ORDER BY) to a topology-tagged vector replica. Falls back to default routing when no tagged node exists.
Per-role column masking via SQL rewriting. SELECT ssn becomes SELECT mask_ssn(ssn) AS ssn when the user lacks the pii_reader role. Idempotent on re-application.
Hash-chained tamper-evident audit log. Every query record embeds SHA-256 of the previous; modifying any entry breaks the chain. Real cryptography via env.sha256_hex (was a placeholder in v0.3.x).
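A minimal verification sketch, assuming each record stores the hex SHA-256 of its predecessor (the field layout is an assumption; this runs natively with the same sha2 crate that backs env.sha256_hex):

```rust
use sha2::{Digest, Sha256};

// records: (query_record, prev_hash_hex), oldest first. One edit
// anywhere breaks every later link in the chain.
fn verify_chain(records: &[(String, String)]) -> bool {
    let mut expected = "0".repeat(64); // assumed genesis sentinel
    for (record, prev_hash) in records {
        if *prev_hash != expected {
            return false; // chain broken at this entry
        }
        expected = hex::encode(Sha256::digest(format!("{prev_hash}{record}")));
    }
    true
}
```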
Per-user data-residency routing. Reads user region from helios.region attribute; routes to a tagged in-region replica or returns RouteResult::Block with a proper PG ErrorResponse when no in-region node exists.
helios-plugin CLI. Pack, inspect, and verify WASM plugin artefacts as portable .tar.gz archives. Same Ed25519 trust-root format as the proxy loader. Interoperates with openssl/signify.
CRDs for HeliosProxy, PoolProfile, RoutingRule, AuditPolicy, TenantQuota. Reconciler renders ConfigMap + Deployment + Service per CR; polls /topology to populate status. Owned objects auto-clean on kubectl delete.
Five resources mirror the operator CRDs: heliosproxy_instance, _pool_profile, _routing_rule, _audit_policy, _tenant_quota. Schema generated from the operator's Go types via local replace.
Wraps the Terraform provider via pulumi-terraform-bridge. Same five resources surfaced as first-class Pulumi types in TypeScript / Python / Go / .NET.
| Repository / image | Purpose |
|---|---|
| dimensigon/HDB-HeliosDB-Proxy | Core proxy |
| dimensigon/HDB-HeliosDB-Proxy-Operator | Kubernetes operator |
| dimensigon/HDB-HeliosDB-Proxy-Plugins | First-party WASM plugins + CLI |
| dimensigon/terraform-provider-HDB-HeliosDB-Proxy | Terraform provider |
| dimensigon/pulumi-HDB-HeliosDB-Proxy | Pulumi provider |
| ghcr.io/dimensigon/hdb-heliosdb-proxy:0.4.0 | Container image (lowercase per GHCR convention) |
HeliosProxy adds minimal latency while dramatically improving throughput, caching, and availability.
5× throughput improvement from connection pooling alone, vs. 8.2K QPS over direct connections.
L1 hot cache serves frequently accessed results in sub-microsecond time.
Connection establishment 40× faster than the 12 ms of a direct connection, via connection pooling.
Automatic failover with Transaction Replay. Clients experience a brief pause, not an error.
HeliosProxy operates at the wire protocol level. All features work transparently with any PostgreSQL-wire-compatible database.
| Backend | Compatibility | Notes |
|---|---|---|
| PostgreSQL 12+ | Full | Including Amazon RDS, Aurora, Cloud SQL, Azure Database |
| HeliosDB Lite/Full | Full | Native topology integration for instant failover |
| CockroachDB | Full | PostgreSQL wire protocol compatible |
| YugabyteDB | Full | PostgreSQL wire protocol compatible |
| TimescaleDB | Full | Runs as PostgreSQL extension |
| Citus | Full | Runs as PostgreSQL extension |
| AlloyDB | Full | Google Cloud PostgreSQL-compatible |
HeliosProxy isn’t just a connection pooler — it’s an AI-native database platform layer. Here’s how it compares to every alternative in the market.
| Capability | PgBouncer | pgpool-II | PgCat | Odyssey | AWS RDS Proxy | HeliosProxy |
|---|---|---|---|---|---|---|
| Connection Pooling | 3 modes | Yes | Yes | Yes | Yes | 3 modes + AI-aware |
| Multi-threaded | ✗ | ✓ | ✓ (Rust) | ✓ (C99) | N/A | ✓ (Rust) |
| Query Caching | ✗ | Basic | ✗ | ✗ | ✗ | 3-tier distributed |
| Load Balancing | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ + schema-aware |
| Auto Failover | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ + circuit breaker |
| Rate Limiting | ✗ | ✗ | ✗ | ✗ | ✗ | Per-tenant, per-query |
| Multi-Tenancy | ✗ | ✗ | ✗ | ✗ | ✗ | Full isolation (4 modes) |
| Auth Proxy | Basic | Basic | Basic | Basic | IAM only | JWT/OAuth/LDAP/SAML |
| Query Analytics | ✗ | ✗ | ✗ | ✗ | CloudWatch | P50/P90/P99 + cost attr. |
| GraphQL Gateway | ✗ | ✗ | ✗ | ✗ | ✗ | Auto-generated |
| WASM Plugins | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ (sandboxed) |
| Branch-Aware | ✗ | ✗ | ✗ | ✗ | ✗ | Git-like branching |
| AI/RAG Caching | ✗ | ✗ | ✗ | ✗ | ✗ | Semantic + conversation |
| Query Rewriting | ✗ | ✗ | ✗ | ✗ | ✗ | Tenant injection + optimization |
| Transaction Replay | ✗ | ✗ | ✗ | ✗ | ✗ | Oracle-grade TAF+TAC |
| Cursor Restore | ✗ | ✗ | ✗ | ✗ | ✗ | State + position preserved |
| Session Migration | ✗ | ✗ | ✗ | ✗ | ✗ | SET, prepared stmts, temp tables |
| Write Routing | ✗ | ✗ | ✗ | ✗ | ✗ | TWR (any-node connect) |
| Sharding | ✗ | ✗ | ✓ | ✗ | ✗ | Schema-aware routing |
| Vendor Lock-in | None | None | None | None | AWS only | None |
3-tier distributed cache (L1 local <100μs, L2 Redis <5ms, L3 DB) with 85% hit rate in production. No competitor offers multi-tier caching.
10,000 concurrent tenant connections served by 100 database connections. 73% infrastructure cost reduction vs. PgBouncer’s 10:1 ratio.
Deploy anywhere — Docker, Kubernetes, bare metal. Unlike AWS RDS Proxy, HeliosProxy works with any PostgreSQL-compatible database.
Semantic query cache, RAG chunk caching, conversation context, and tool result cache. Purpose-built for AI/Agent database workloads.
WASM plugins, GraphQL gateway, branch-aware caching, AI workload optimization, transaction replay, cursor restore, session migration, write routing, semantic caching, per-tenant analytics, query routing hints, batch coalescing, switchover buffer, transaction journal.
Replaces PgBouncer + Redis + rate limiter + auth proxy + GraphQL server + analytics. One binary, one config file, one deployment.
HeliosProxy is open source under AGPL-3.0 and works with all three HeliosDB editions — Nano, Lite, and Full — plus any PostgreSQL-compatible database.