HeliosDB Multi-Protocol Research Summary
Comprehensive Protocol Compatibility Analysis
Researcher Agent: Final Report
Date: 2025-10-10
Status: Research Complete
Executive Summary
This research provides a complete analysis of protocol compatibility requirements for HeliosDB’s multi-protocol database support. The goal is to enable zero-friction adoption by accepting connections from mainstream Python client ecosystems without custom drivers.
Research Scope Completed
8 Wire Protocols Analyzed:
- PostgreSQL (libpq) - GOLD target
- MySQL Protocol v10 - GOLD target
- TDS 7.4 (SQL Server) - BRONZE target
- DRDA (DB2) - BRONZE target
- Oracle Net/TTC - BRONZE target
- Snowflake REST API - SILVER target
- Databricks SQL HTTP/Thrift - SILVER target
- Pinecone HTTP/gRPC - SILVER target
15 Python Clients Evaluated:
- PostgreSQL: psycopg2, asyncpg, SQLAlchemy
- MySQL: PyMySQL, mysql-connector-python, SQLAlchemy
- SQL Server: pyodbc, pymssql
- DB2: ibm_db, ibm_db_dbi
- Oracle: oracledb (Thin/Thick modes)
- HTTP APIs: snowflake-connector-python, databricks-sql-connector, pinecone-client
Protocol Autodetection Strategy: Magic bytes, TLS ALPN, handshake patterns
Authentication Flows: SCRAM-SHA-256, caching_sha2_password, TDS LOGIN7, DRDA USRIDPWD, Oracle O3LOGON, JWT/OAuth/API keys
Packet Formats: Complete wire-level specifications for all protocols
Key Findings
1. Implementation Feasibility
| Protocol | Feasibility | Confidence | Rationale |
|---|---|---|---|
| PostgreSQL | HIGH | HIGH | RFC-compliant SCRAM-SHA-256, TLS ALPN, comprehensive docs |
| MySQL | HIGH | HIGH | Server-first handshake (easy detection), clear HandshakeV10 spec |
| Snowflake | HIGH | HIGH | OpenAPI spec, JWT auth well-documented |
| Databricks | HIGH | HIGH | Thrift protocol established, clear REST API |
| Pinecone | HIGH | HIGH | Simple REST/gRPC API, 2025-01 spec available |
| TDS 7.4 | MEDIUM | MEDIUM | Microsoft open spec, TLS wrapping complexity |
| DRDA | MEDIUM | MEDIUM | Open Group standard, EBCDIC handling needed |
| Oracle TNS/TTC | LOW | LOW | Reverse-engineered, complex layered protocol |
2. Protocol Autodetection Patterns
Reliable Detection Methods Identified:
Detection Priority:
1. TLS ALPN (if TLS ClientHello detected)
   - PostgreSQL: "postgresql" (IANA registered)
   - HTTP: "h2", "http/1.1"
2. Magic Bytes / First-Packet Signatures
   - PostgreSQL: Length(4) + 0x00030000 (protocol version)
   - MySQL: server sends HandshakeV10 (0x0a after sequence byte)
   - TDS: Type=0x12 (PRELOGIN)
   - DRDA: Magic=0xD0 + Code Point=0x1041 (EXCSAT)
   - Oracle TNS: Type=0x01 (CONNECT) + version=0x0138
   - HTTP: "GET ", "POST ", etc.
3. Fallback: MySQL (server-first handshake: a client that sends no initial bytes is assumed to be waiting for a MySQL greeting)

Implementation Recommendation: TLS peek → ALPN check → magic-bytes matching → MySQL fallback
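The detection order above can be sketched as a first-bytes classifier. This is an illustrative sketch, not the HeliosDB implementation: the function name and return labels are ours, and real traffic additionally needs buffering, timeouts, and a full TLS handshake before ALPN is known.

```python
def detect_protocol(first_bytes: bytes) -> str:
    # Hypothetical classifier following the priority order above;
    # signature constants come from the magic-bytes notes in this document.
    if not first_bytes:
        return "mysql"  # server-first handshake: client sends nothing
    if first_bytes[0] == 0x16:
        return "tls"    # TLS ClientHello record; defer to ALPN after handshake
    if first_bytes.startswith((b"GET ", b"POST ", b"PUT ", b"HEAD ")):
        return "http"
    if len(first_bytes) >= 8 and first_bytes[4:8] == b"\x00\x03\x00\x00":
        return "postgresql"  # Length(4) + protocol version 3.0
    if first_bytes[0] == 0x12:
        return "tds"    # PRELOGIN packet type
    if first_bytes[0] == 0xD0:
        return "drda"   # DDM envelope magic
    if len(first_bytes) >= 5 and first_bytes[4] == 0x01:
        return "oracle_tns"  # TNS header: type byte 0x01 = CONNECT
    return "unknown"
```

The MySQL fallback works precisely because MySQL is the only protocol in the set where the server speaks first: an empty client read is itself a signal.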
3. Parameter Binding Complexity
5 Different Parameter Styles Identified:
| Style | Drivers | Example |
|---|---|---|
| %s (format) | psycopg2, PyMySQL, pymssql | WHERE id = %s |
| %(name)s (pyformat) | psycopg2, pymssql, snowflake | WHERE id = %(id)s |
| $1, $2 (postgres) | asyncpg | WHERE id = $1 |
| ? (qmark) | pyodbc, ibm_db, mysql-connector (prepared) | WHERE id = ? |
| :name (named) | oracledb, databricks-sql | WHERE id = :id |
Critical Insight: Unified parameter adapter required to translate between styles
4. Prepared Statement Paradigms
Two Distinct Approaches:
1. Client-Side Escaping (PyMySQL)
   - No binary protocol
   - String escaping only
   - HeliosDB can optimize with internal caching

2. Server-Side Binary Protocol (mysql-connector-python, asyncpg, oracledb)
   - True prepared statements
   - Binary protocol required
   - Statement caching essential
Recommendation: Support both paradigms with protocol-specific handlers
5. Authentication Complexity Ranking
From simplest to most complex:
1. Pinecone: API key header (trivial)
2. Snowflake: JWT with RS256 (well-documented)
3. Databricks: PAT or OAuth (standard)
4. MySQL caching_sha2_password: SHA-256 + RSA encryption (manageable)
5. PostgreSQL SCRAM-SHA-256: RFC 5802/7677 compliant (moderate)
6. DRDA USRIDPWD: EXCSAT → ACCSEC → SECCHK flow (moderate)
7. TDS 7.4 LOGIN7: TLS wrapping + password obfuscation (complex)
8. Oracle O3LOGON: 3-phase logon, proprietary hashing (complex)
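For the PostgreSQL entry, the RFC 5802 key derivation is mechanical once PBKDF2 is available: Hi() is exactly PBKDF2-HMAC-SHA-256, and the client/server keys are fixed-string HMACs of the salted password. A minimal sketch using only the standard library (the function name is ours):

```python
import hashlib
import hmac

def scram_keys(password: str, salt: bytes, iterations: int):
    # SaltedPassword = Hi(password, salt, i)  -- RFC 5802, with SHA-256 per RFC 7677
    salted = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # ClientKey = HMAC(SaltedPassword, "Client Key"); StoredKey = H(ClientKey)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    stored_key = hashlib.sha256(client_key).digest()
    # ServerKey = HMAC(SaltedPassword, "Server Key")
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()
    return stored_key, server_key
```

The server stores only StoredKey and ServerKey, which is why SCRAM verifiers never expose the plaintext password.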
Research Deliverables
1. Protocol Specifications Summary
Location: /home/claude/DMD/.distributed execution/research/protocol_specifications_summary.md
Contents:
- Wire protocol overviews for all 8 protocols
- Packet structures and byte layouts
- Authentication specifications
- TLS integration details
- Implementation requirements and checklists
- Autodetection strategy with pseudocode
- Magic bytes reference table
2. Authentication Flow Diagrams
Location: /home/claude/DMD/.distributed execution/research/authentication_flows.md
Contents:
- Step-by-step authentication flows for each protocol
- PostgreSQL SCRAM-SHA-256 (detailed RFC compliance)
- MySQL caching_sha2_password (fast auth + full auth paths)
- TDS 7.4 PRELOGIN → LOGIN7 (with TLS wrapping)
- DRDA EXCSAT → ACCSEC → SECCHK (security mechanism negotiation)
- Oracle O3LOGON (3-phase logon with password hashing)
- HTTP API authentication (JWT, OAuth, API keys)
- Implementation checklists per protocol
3. Packet Format Specifications
Location: /home/claude/DMD/.distributed execution/research/packet_formats.md
Contents:
- Complete wire-level packet formats for all protocols
- Byte-by-byte field descriptions
- Message type tables
- Data type encoding specifications
- Protocol detection magic bytes reference
- Example hex dumps for key packets
4. Python Client Compatibility Matrix
Location: /home/claude/DMD/.distributed execution/research/python_client_compatibility.md
Contents:
- Detailed analysis of 15 Python client libraries
- Connection parameters, parameter binding styles
- Prepared statement behaviors
- Connection pooling strategies
- Key behavioral differences
- Implementation recommendations
- Testing requirements per driver
Implementation Roadmap
Phase 1: PostgreSQL + MySQL (GOLD) - 4-6 weeks
PostgreSQL (Priority 1):
- Well-documented RFC-based SCRAM-SHA-256
- TLS ALPN support (“postgresql”)
- Extended query protocol (Parse, Bind, Execute)
- ⚠ Handle both psycopg2 (%s) and asyncpg ($1) parameter styles
- ⚠ Auto-BEGIN transaction semantics
MySQL (Priority 2):
- Server-first handshake (easy detection)
- caching_sha2_password documented
- Binary protocol for prepared statements
- ⚠ Client-side (PyMySQL) vs server-side (mysql-connector-python) prepared statements
- ⚠ Capability flag negotiation critical
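Capability negotiation reduces to a bitwise intersection of the client's and server's flag sets. The constants below follow the MySQL client/server protocol documentation, but verify them against the exact client versions targeted; the `negotiate` helper is an illustrative sketch:

```python
# Capability flags from the MySQL protocol docs (verify before relying on them)
CLIENT_PROTOCOL_41   = 0x00000200
CLIENT_SSL           = 0x00000800
CLIENT_PLUGIN_AUTH   = 0x00080000
CLIENT_DEPRECATE_EOF = 0x01000000

SERVER_CAPS = (CLIENT_PROTOCOL_41 | CLIENT_SSL
               | CLIENT_PLUGIN_AUTH | CLIENT_DEPRECATE_EOF)

def negotiate(client_caps: int) -> int:
    # The effective capability set is the intersection of both sides;
    # CLIENT_PROTOCOL_41 is mandatory for any modern client.
    agreed = client_caps & SERVER_CAPS
    if not agreed & CLIENT_PROTOCOL_41:
        raise ValueError("client does not speak protocol 4.1")
    return agreed
```

Getting this intersection wrong is a classic source of mysterious driver failures (e.g., a client assuming EOF packets the server no longer sends), which is why the ⚠ above flags it as critical.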
Deliverables:
- Protocol router with TLS ALPN + magic bytes detection
- PostgreSQL handler (SCRAM-SHA-256, extended query protocol)
- MySQL handler (HandshakeV10, caching_sha2_password, COM_STMT_*)
- Unified parameter adapter (%s / $1 / ? → internal format)
- Dialect adapters (LIMIT/OFFSET, identifier quoting, date literals)
- Server-side connection pooling
- Integration tests with psycopg2, asyncpg, PyMySQL, mysql-connector-python
Phase 1.5: HTTP APIs (SILVER) - 3-4 weeks
Snowflake REST:
- OpenAPI spec available
- JWT authentication (RS256)
- Async execution model
- ⚠ Result set pagination
Databricks SQL:
- Thrift protocol
- Statement execution API
- OAuth/PAT authentication
- ⚠ Unity Catalog namespace (catalog.schema.table)
Pinecone:
- Simple REST/gRPC API
- Metadata filtering
- ⚠ SQL translation for vector operations (SELECT → query API)
Deliverables:
- HTTP gateway with path routing (/snowflake/*, /dbsql/*, /pinecone/*)
- JWT/OAuth token validation
- Async statement execution handling
- Vector operation translator (SQL VECTOR queries → Pinecone API)
- Integration tests with snowflake-connector-python, databricks-sql-connector, pinecone-client
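The vector-operation translator can be illustrated with a deliberately small sketch. Everything here is an assumption for illustration: the SQL shape it accepts, the function name, and the payload keys (`vector`, `top_k`, `include_metadata`) are modeled loosely on Pinecone-style query bodies, not taken from the actual API contract.

```python
import re

def translate_vector_select(sql: str, query_vector: list) -> dict:
    # Hypothetical: extract top-k from a LIMIT clause and emit a
    # Pinecone-style query payload (key names are illustrative).
    match = re.search(r"LIMIT\s+(\d+)", sql, re.IGNORECASE)
    top_k = int(match.group(1)) if match else 10  # assumed default
    return {
        "vector": list(query_vector),
        "top_k": top_k,
        "include_metadata": True,
    }
```

A real translator would also map WHERE predicates to metadata filters and validate that the ORDER BY expression actually names a vector distance.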
Phase 2: SQL Server + DB2 (BRONZE) - 4-5 weeks
TDS 7.4:
- Microsoft open spec [MS-TDS]
- ⚠ TLS wrapping in PRELOGIN (complex)
- ⚠ Password obfuscation (XOR + nibble swap)
- Focus: Basic connectivity + simple queries
DRDA:
- Open Group standard
- ⚠ EBCDIC encoding (negotiate ASCII early)
- ⚠ Limited client testing (ibm_db dependency)
- Focus: EXCSAT/ACCSEC/ACCRDB flow
Deliverables:
- TDS handler (PRELOGIN, LOGIN7, basic SQL_BATCH)
- DRDA handler (EXCSAT/ACCSEC/SECCHK/ACCRDB)
- EBCDIC ↔ UTF-8 conversion (or ASCII negotiation)
- Integration tests with pyodbc, ibm_db
Phase 3: Oracle/Tibero (BRONZE → SILVER) - 5-6 weeks
Oracle Net/TTC:
- ⚠ Reverse-engineered specs (community docs)
- ⚠ Complex layered protocol (TNS → TTC → TTI)
- ⚠ Limited official documentation
- oracledb Thin mode provides Python reference
- Focus: TNS CONNECT/ACCEPT, basic TTC, simple TTI (OSQL)
Deliverables:
- TNS packet handling (CONNECT, ACCEPT, DATA)
- Basic TTC protocol negotiation
- Simple TTI function calls (OSQL for “SELECT 1 FROM DUAL”)
- Integration test with oracledb Thin mode
- Expand based on demand (Silver: more TTI opcodes, LOBs)
Technical Challenges & Mitigations
Challenge 1: Parameter Style Diversity
Impact: Medium
Mitigation:

```python
class UnifiedParameterAdapter:
    def translate(self, sql, params, source_style, target_style):
        # Parse SQL for parameter placeholders
        # Convert: %s → $1 → ? → :name as needed
        # Return (translated_sql, translated_params)
        ...
```

Challenge 2: Prepared Statement Complexity
Impact: Medium
Mitigation:
- Implement statement cache per protocol
- For client-side drivers: Accept text protocol, cache internally
- For server-side drivers: Support binary protocol (COM_STMT_*, Parse/Bind/Execute)
- LRU cache with configurable size (default 100 statements)
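The LRU cache suggested above is small enough to sketch directly; the class and method names, and the idea that values are server-side statement handles, are assumptions for illustration:

```python
from collections import OrderedDict

class StatementCache:
    # Minimal LRU cache keyed by SQL text (default 100 entries, per the
    # mitigation above); values would be prepared-statement handles.
    def __init__(self, max_size: int = 100):
        self.max_size = max_size
        self._entries = OrderedDict()

    def get(self, sql: str):
        if sql in self._entries:
            self._entries.move_to_end(sql)     # mark as most recently used
            return self._entries[sql]
        return None

    def put(self, sql: str, handle) -> None:
        self._entries[sql] = handle
        self._entries.move_to_end(sql)
        if len(self._entries) > self.max_size:
            self._entries.popitem(last=False)  # evict least recently used
```

In practice the cache should also be invalidated on DDL, which is exactly the PgBouncer-style hazard noted in the risk section below.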
Challenge 3: TLS Protocol Wrapping (TDS PRELOGIN)
Impact: Medium
Mitigation:
- Peek initial bytes to detect TLS ClientHello
- If TLS in PRELOGIN: unwrap, process handshake, re-wrap for LOGIN7
- Use TLS library’s handshake state machine
- Test with real pyodbc client
Challenge 4: EBCDIC Encoding (DRDA)
Impact: Low
Mitigation:
- Negotiate ASCII/UTF-8 early via TYPDEFNAM (QTDSQLASC or QTDSQLJVM)
- If EBCDIC required: use Python's codecs library (cp037, cp500)
- Document EBCDIC as limited support
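Python's standard codecs already cover the code pages named above (cp037 = US/Canada EBCDIC, cp500 = international EBCDIC), so the conversion layer can be thin; the wrapper names here are ours:

```python
def ebcdic_to_utf8(data: bytes, codepage: str = "cp037") -> str:
    # Decode EBCDIC bytes using a stdlib codec (no extra dependency needed)
    return data.decode(codepage)

def utf8_to_ebcdic(text: str, codepage: str = "cp037") -> bytes:
    return text.encode(codepage)
```

Note that EBCDIC byte values differ wildly from ASCII (e.g., 'A' is 0xC1, '1' is 0xF1), so raw byte comparisons against ASCII literals will silently fail if conversion is skipped.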
Challenge 5: Oracle Protocol Complexity
Impact: High
Mitigation:
- Limit to Bronze level initially (basic connectivity)
- Use oracledb Thin mode as reference implementation
- Focus on CONNECT/ACCEPT + simple DATA packets
- “SELECT 1 FROM DUAL” as minimum viable
- Expand incrementally based on user demand
Challenge 6: asyncpg vs psycopg2 Behavioral Differences
Impact: Medium
Mitigation:
- Detect client type via connection parameters or initial messages
- Route to appropriate handler (asyncpg: $1 params, psycopg2: %s params)
- Unified backend with protocol-specific frontends
- Integration testing with both clients
Testing Strategy
Unit Testing
Coverage: 90%+ for protocol handlers
Test Categories:
- Packet parsing/encoding (each protocol)
- Authentication flows (all methods)
- Parameter translation (all styles)
- Type conversion (data types)
- Error mapping (protocol errors → generic)
Example:
```python
def test_postgresql_scram_sha256():
    auth = PostgreSQLAuthHandler()
    result = auth.process_sasl_initial_response(
        mechanism="SCRAM-SHA-256",
        client_first="n,,n=user,r=client_nonce",
    )
    assert result.contains_server_first_message()
    assert result.nonce.startswith("client_nonce")
```

Integration Testing
Per-Driver Tests:
```python
# PostgreSQL - psycopg2
import psycopg2

def test_psycopg2_integration():
    conn = psycopg2.connect(
        host="heliosdb.local", port=5432,
        user="demo", password="demo", database="demo",
    )
    cursor = conn.cursor()
    cursor.execute("SELECT 1")
    assert cursor.fetchone() == (1,)

    # Prepared statement
    cursor.execute("SELECT * FROM users WHERE id = %s", (123,))

    # Transaction
    conn.commit()
    conn.rollback()
```

```python
# MySQL - mysql-connector-python
import mysql.connector

def test_mysql_connector_prepared():
    conn = mysql.connector.connect(
        host="heliosdb.local", port=3306,
        user="demo", password="demo", database="demo",
    )
    cursor = conn.cursor(prepared=True)
    cursor.execute("SELECT * FROM users WHERE id = %s", (123,))
    # Verify binary protocol (COM_STMT_EXECUTE)
```

Protocol Compliance Testing
Test Matrix (from /home/claude/DMD/docs/01_PROTOCOL_TEST_MATRIX.md):
| Ecosystem | Driver | Must-Pass Tests |
|---|---|---|
| PostgreSQL | psycopg2, asyncpg | Connect, simple query, prepared stmt, cursor, tx, errors |
| MySQL | PyMySQL, mysql-connector | Same as PG + autocommit semantics |
| Snowflake | snowflake-connector | Session create, query submit, fetch, cancel |
| Databricks | databricks-sql-connector | Connect, query, fetchmany |
| Pinecone | pinecone-client | Upsert vectors, query top-k, filter |
| SQL Server | pyodbc | Connect, simple query, param bind |
| DB2 | ibm_db | Connect, simple query |
| Oracle | oracledb (Thin) | Connect, “SELECT 1 FROM DUAL”, bind |
CI/CD Integration
```yaml
name: Protocol Compatibility Tests
on: [push, pull_request]

jobs:
  test-postgresql:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        driver: [psycopg2, asyncpg]
        version: [latest]
    steps:
      - uses: actions/checkout@v3
      - name: Start HeliosDB
        run: docker-compose up -d heliosdb
      - name: Install driver
        run: pip install ${{ matrix.driver }}==${{ matrix.version }}
      - name: Run integration tests
        run: pytest tests/integration/postgresql/${{ matrix.driver }}/

  test-mysql:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        driver: [pymysql, mysql-connector-python]
        version: [latest]
    steps:
      - uses: actions/checkout@v3
      - name: Start HeliosDB
        run: docker-compose up -d heliosdb
      - name: Install driver
        run: pip install ${{ matrix.driver }}==${{ matrix.version }}
      - name: Run integration tests
        run: pytest tests/integration/mysql/${{ matrix.driver }}/

  # ... (similar for other protocols)
```

Coordination with Architect & Coder Agents
For Architect Agent
Architectural Decisions Needed:
1. Protocol Router Design:
   - TLS ALPN integration strategy
   - Magic-bytes detection algorithm
   - Handler registration and dispatch

2. Unified Abstractions:
   - Internal AST/plan API (SQL → logical plan)
   - Parameter binding model (protocol-agnostic)
   - Type system (PostgreSQL ↔ MySQL ↔ internal)
   - Error model (protocol errors → generic → protocol errors)

3. Authentication Framework:
   - Pluggable auth adapters (SCRAM, caching_sha2, JWT, OAuth)
   - Session management (user, database, role, transaction state)
   - TLS certificate handling

4. Connection Management:
   - Server-side pooling strategy
   - Concurrent connection handling (asyncio)
   - Resource limits (max connections per protocol)

5. Dialect Adapter Layer:
   - SQL rewriters (LIMIT/OFFSET ↔ TOP, DUAL, CURRENT_TIMESTAMP variants)
   - Identifier quoting (", `, [])
   - Parameter style translation
Key Questions:
- How to handle protocol-specific features not in internal model? (e.g., PostgreSQL LISTEN/NOTIFY)
- Should we expose a unified SQL dialect or protocol-specific dialects?
- How to handle distributed transactions across protocols?
For Coder Agent
Implementation Modules Needed:
1. Protocol Handlers (heliosdb/protocols/):

```
protocols/
├── __init__.py
├── router.py            # Protocol detection & routing
├── postgresql/
│   ├── __init__.py
│   ├── handler.py       # PostgreSQL protocol handler
│   ├── auth.py          # SCRAM-SHA-256 implementation
│   ├── messages.py      # Message encoding/decoding
│   └── types.py         # PostgreSQL type system
├── mysql/
│   ├── __init__.py
│   ├── handler.py       # MySQL protocol handler
│   ├── auth.py          # caching_sha2_password implementation
│   ├── packets.py       # Packet encoding/decoding
│   └── types.py         # MySQL type system
├── tds/
├── drda/
├── oracle/
└── http/
    ├── snowflake.py
    ├── databricks.py
    └── pinecone.py
```

2. Dialect Adapters (heliosdb/dialects/):

```
dialects/
├── __init__.py
├── base.py        # Base dialect interface
├── postgresql.py  # PostgreSQL SQL rewriter
├── mysql.py       # MySQL SQL rewriter
├── sqlserver.py   # SQL Server SQL rewriter
├── db2.py         # DB2 SQL rewriter
└── oracle.py      # Oracle SQL rewriter
```

3. Authentication (heliosdb/auth/):

```
auth/
├── __init__.py
├── base.py          # Base auth adapter
├── scram.py         # SCRAM-SHA-256 (PostgreSQL)
├── caching_sha2.py  # caching_sha2_password (MySQL)
├── jwt.py           # JWT (Snowflake, Databricks)
├── oauth.py         # OAuth (Databricks)
└── api_key.py       # API key (Pinecone)
```

4. Parameter Handling (heliosdb/parameters/):

```
parameters/
├── __init__.py
├── adapter.py  # Unified parameter adapter
└── styles.py   # Parameter style definitions
```

5. Type System (heliosdb/types/):

```
types/
├── __init__.py
├── base.py     # Internal type system
├── mappers.py  # Protocol type mappers
└── vector.py   # Vector type support
```
Dependencies:
```toml
[dependencies]
cryptography = "^41.0.0"  # For SCRAM, JWT, TLS
asyncio = "*"             # For async protocol handlers
struct = "*"              # For binary packet encoding
```

(Note: asyncio and struct ship with the Python standard library, so only cryptography actually needs a packaging entry.)

Key Implementation Notes:
- All protocol handlers should be async (asyncio-based)
- Use the struct module for binary packet encoding/decoding
- Implement a prepared statement cache (LRU, max 100 per connection)
- Error handling with protocol-specific error mappers
- Logging at DEBUG level for packet-level diagnostics
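As an example of struct-based framing, MySQL's packet header (a 3-byte little-endian payload length followed by a 1-byte sequence id) packs and parses in a few lines; the helper names are ours:

```python
import struct

def frame_mysql_packet(payload: bytes, seq: int) -> bytes:
    # MySQL header: 3-byte little-endian length + 1-byte sequence id
    header = struct.pack("<I", len(payload))[:3] + bytes([seq])
    return header + payload

def parse_mysql_header(header: bytes):
    # Pad the 3-byte length back to 4 bytes so struct can unpack it
    length = struct.unpack("<I", header[:3] + b"\x00")[0]
    return length, header[3]
```

The same pack/unpack pattern generalizes to the PostgreSQL (4-byte big-endian length) and TDS (2-byte big-endian length) headers, just with different format strings.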
Documentation Requirements
User-Facing Documentation
1. DSN Examples (copy-paste ready):

```python
# PostgreSQL
import psycopg2
conn = psycopg2.connect(
    "host=heliosdb.local port=5432 dbname=demo user=demo password=demo sslmode=require"
)

# MySQL
import pymysql
conn = pymysql.connect(
    host="heliosdb.local", port=3306,
    user="demo", password="demo", database="demo",
)

# ... (all protocols, from specs doc)
```

2. Expected Differences:
   - Protocol-specific limitations (e.g., Bronze level for TDS/DRDA)
   - Unsupported features per protocol
   - Performance characteristics

3. Migration Guide:
   - How to switch from PostgreSQL to HeliosDB (zero code changes)
   - How to switch from MySQL to HeliosDB (zero code changes)
   - What works, what doesn't (per protocol)
Developer Documentation
1. Protocol Implementation Guide:
   - How to add a new protocol handler
   - How to implement an authentication method
   - How to add a dialect-specific SQL rewriter

2. Testing Guide:
   - How to run protocol compliance tests
   - How to add new driver tests
   - CI/CD integration

3. Debugging Guide:
   - Packet-level logging
   - Wireshark dissector usage
   - Common protocol errors and fixes
Risk Assessment & Mitigation
High-Risk Areas
1. Oracle Protocol Complexity
   - Risk: Incomplete reverse-engineering, missing features
   - Mitigation: Limit to Bronze, use oracledb Thin as reference, expand incrementally
   - Fallback: Document as "limited support" for Phase 1

2. asyncpg Prepared Statement Cache Compatibility
   - Risk: PgBouncer-like issues with statement invalidation
   - Mitigation: Test with asyncpg extensively, document pooling mode limitations
   - Fallback: Disable prepared statement cache in transaction pooling mode

3. TDS TLS Wrapping
   - Risk: Complex TLS handshake in PRELOGIN packets
   - Mitigation: Use TLS library state machine, test with real pyodbc
   - Fallback: Require TLS 1.2+ (simplify negotiation)
Medium-Risk Areas
1. EBCDIC Encoding (DRDA)
   - Risk: Character set conversion issues
   - Mitigation: Negotiate ASCII/UTF-8 early, test with ibm_db
   - Fallback: Document EBCDIC as unsupported, require ASCII clients

2. Multi-Protocol Type Mapping
   - Risk: Type incompatibilities (e.g., PostgreSQL BYTEA vs MySQL VARBINARY)
   - Mitigation: Unified internal type system with lossy conversion warnings
   - Fallback: Document type conversion rules, provide migration scripts

3. Async/Sync Driver Coexistence
   - Risk: Deadlocks or performance issues with mixed async/sync handlers
   - Mitigation: Thread pool for blocking operations, asyncio event loop per connection
   - Fallback: Recommend async clients for production
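The type-mapping risk above can be made concrete with a small mapper sketch. The PostgreSQL OIDs are the well-known built-in type OIDs; the internal type names and MySQL type strings are assumptions for illustration:

```python
# PostgreSQL built-in type OIDs → assumed internal type names
PG_OID_TO_INTERNAL = {
    16: "bool",      # BOOL
    17: "bytes",     # BYTEA
    23: "int32",     # INT4
    25: "string",    # TEXT
    701: "float64",  # FLOAT8
}

# MySQL column type strings → the same assumed internal names
MYSQL_TO_INTERNAL = {
    "TINYINT(1)": "bool",
    "VARBINARY": "bytes",
    "INT": "int32",
    "TEXT": "string",
    "DOUBLE": "float64",
}

def pg_type(oid: int) -> str:
    # Unknown OIDs degrade to text, which is lossy but never wrong to emit;
    # a production mapper should log a lossy-conversion warning here.
    return PG_OID_TO_INTERNAL.get(oid, "string")
```

Routing both protocols through one internal vocabulary means BYTEA and VARBINARY meet in the middle as "bytes", which is exactly the mitigation proposed above.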
Low-Risk Areas
1. HTTP API Integration
   - Risk: API version changes (Snowflake, Databricks, Pinecone)
   - Mitigation: Version HTTP APIs explicitly, test with latest client versions
   - Fallback: Support multiple API versions concurrently

2. Connection Pooling
   - Risk: Resource exhaustion under load
   - Mitigation: Configurable limits, connection timeout, health checks
   - Fallback: Circuit breaker pattern for overload protection
Success Metrics
Phase 1 Success Criteria (PostgreSQL + MySQL)
- psycopg2 can connect, run “SELECT 1”, execute prepared statement, commit transaction
- asyncpg can connect, run “SELECT $1”, use LRU prepared statement cache
- PyMySQL can connect, run “SELECT %s” (client-side escaping)
- mysql-connector-python can connect, use server-side prepared statements (binary protocol)
- All 4 drivers pass integration test suite (100+ tests)
- TLS ALPN detection works for PostgreSQL (“postgresql” identifier)
- Protocol autodetection works (PostgreSQL vs MySQL)
- Parameter style translation works (%s ↔ $1 ↔ ? ↔ :name)
- Error codes mapped correctly (PostgreSQL SQLSTATE, MySQL errno)
- Performance: <10ms latency overhead vs native protocol
Phase 1.5 Success Criteria (HTTP APIs)
- snowflake-connector-python can execute SQL via REST API
- databricks-sql-connector can execute SQL via Thrift API
- pinecone-client can upsert/query vectors
- JWT authentication works for Snowflake
- OAuth authentication works for Databricks
- API key authentication works for Pinecone
- Vector SQL queries translate to Pinecone API calls
- Async execution handles long-running queries
Phase 2 Success Criteria (SQL Server + DB2)
- pyodbc can connect to TDS handler
- ibm_db can connect to DRDA handler
- Basic SQL queries work (SELECT, INSERT, UPDATE, DELETE)
- PRELOGIN/LOGIN7 flow with TLS wrapping
- EXCSAT/ACCSEC/ACCRDB flow with USRIDPWD auth
Phase 3 Success Criteria (Oracle)
- oracledb Thin mode can connect to TNS handler
- “SELECT 1 FROM DUAL” works
- Basic bind variables work (:name or :1 style)
- TNS CONNECT/ACCEPT handshake
- Basic TTC protocol negotiation
Conclusion
This research provides a comprehensive foundation for implementing HeliosDB’s multi-protocol database support. All major protocols have been analyzed at the wire level, with clear implementation paths identified.
Key Takeaways
- PostgreSQL and MySQL are highly feasible with GOLD-level compatibility achievable in Phase 1
- HTTP APIs (Snowflake, Databricks, Pinecone) are well-specified and suitable for SILVER-level compatibility
- SQL Server and DB2 have open specifications making BRONZE-level compatibility feasible
- Oracle remains the most challenging due to reverse-engineered protocols, warranting BRONZE-level focus initially
Immediate Next Steps
- Architect Agent: Design protocol router, unified abstractions, and dialect adapter architecture
- Coder Agent: Implement PostgreSQL handler (SCRAM-SHA-256 + extended query protocol) as proof-of-concept
- Research Agent: Monitor for protocol spec updates, client library releases, and emerging patterns
Research Artifacts
All research artifacts are stored in /home/claude/DMD/.distributed execution/research/:
- protocol_specifications_summary.md - Protocol specs and autodetection
- authentication_flows.md - Step-by-step auth flows
- packet_formats.md - Wire-level packet formats
- python_client_compatibility.md - Client driver analysis
- RESEARCH_SUMMARY.md - This summary (meta-document)
Research Status: Complete
Confidence Level: HIGH (PostgreSQL, MySQL, HTTP APIs), MEDIUM (TDS, DRDA), LOW (Oracle TNS)
Recommended Action: Proceed with Phase 1 implementation (PostgreSQL + MySQL)
Estimated Phase 1 Timeline: 4-6 weeks with 1 full-time engineer
End of Research Summary