SQL Migration Tips

Last Updated: January 4, 2026


This guide provides tips and best practices for migrating SQL workloads from other database systems to HeliosDB.

Table of Contents

  1. General Migration Strategy
  2. Migration from PostgreSQL
  3. Migration from MySQL
  4. Migration from Oracle
  5. Migration from SQL Server
  6. Migration from MongoDB
  7. Schema Migration
  8. Data Migration
  9. Query Migration
  10. Testing and Validation

General Migration Strategy

Phase 1: Assessment

  1. Inventory existing schema

    • List all tables, views, indexes
    • Document data types and constraints
    • Identify stored procedures and functions
  2. Analyze query patterns

    • Extract frequent queries
    • Identify performance-critical queries
    • Document transaction patterns
  3. Estimate data volume

    • Current data size
    • Growth rate
    • Retention requirements
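
The inventory step above can be scripted. A minimal sketch, assuming a PostgreSQL-compatible source (so information_schema and pg_indexes are available); the query set and the cursor-based runner are illustrative, not part of any HeliosDB tooling:

```python
# Hypothetical inventory queries for a PostgreSQL-compatible source.
# Table and view names come from information_schema; indexes from pg_indexes.
INVENTORY_QUERIES = {
    "tables": (
        "SELECT table_schema, table_name FROM information_schema.tables "
        "WHERE table_type = 'BASE TABLE' "
        "AND table_schema NOT IN ('pg_catalog', 'information_schema')"
    ),
    "views": (
        "SELECT table_schema, table_name FROM information_schema.views "
        "WHERE table_schema NOT IN ('pg_catalog', 'information_schema')"
    ),
    "indexes": "SELECT schemaname, tablename, indexname FROM pg_indexes",
}

def run_inventory(cursor):
    """Execute each inventory query and return {category: rows}."""
    results = {}
    for category, sql in INVENTORY_QUERIES.items():
        cursor.execute(sql)
        results[category] = cursor.fetchall()
    return results
```

Feed the results into a spreadsheet or ticket per object; the point is to have a complete checklist before conversion starts.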

Phase 2: Schema Conversion

  1. Map data types (see source-specific sections)
  2. Convert constraints and indexes
  3. Handle source-specific features

Phase 3: Data Migration

  1. Initial bulk load
  2. Verify data integrity
  3. Set up CDC for ongoing sync (if dual-running)

Phase 4: Application Migration

  1. Update connection strings
  2. Convert incompatible queries
  3. Test and validate

Migration from PostgreSQL

Compatibility

HeliosDB uses the PostgreSQL wire protocol, making migration straightforward.

High Compatibility:

  • Data types
  • SQL syntax
  • Client libraries
  • Connection strings

Data Type Mapping

PostgreSQL       HeliosDB            Notes
serial           BIGINT + sequence   Use identity columns
bigserial        BIGINT + sequence   Use identity columns
money            NUMERIC             Convert to numeric
cidr/inet        VARCHAR             Store as text
macaddr          VARCHAR             Store as text
point/line/box   JSONB               Store as JSON
All others       Same                Direct compatibility

Connection String

# PostgreSQL
postgresql://user:pass@host:5432/dbname
# HeliosDB (same format)
postgresql://user:pass@host:5432/dbname

Known Differences

1. Extensions: Some PostgreSQL extensions may not be available

-- PostgreSQL (may not work)
CREATE EXTENSION postgis;
-- HeliosDB: Use built-in geospatial types
CREATE TABLE locations (
    id BIGINT PRIMARY KEY,
    point GEOSPATIAL
);

2. System catalogs: Some pg_catalog views differ

-- May need adjustment for introspection queries
SELECT * FROM pg_stat_activity; -- Limited support

Migration from MySQL

Data Type Mapping

MySQL          HeliosDB           Notes
TINYINT        SMALLINT           Wider range
INT UNSIGNED   BIGINT             Use signed
FLOAT          REAL               IEEE 754
DOUBLE         DOUBLE PRECISION   Standard name
DATETIME       TIMESTAMP          With timezone
ENUM           VARCHAR + CHECK    Use constraint
SET            TEXT[]             Array type
BLOB           BYTEA              Binary data
TEXT           TEXT               Same
JSON           JSONB              Binary JSON
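
The table above can be applied mechanically during schema conversion. A minimal sketch (the mapping mirrors the table; anything unlisted passes through unchanged, and length or precision modifiers are assumed to have been stripped beforehand):

```python
# Maps a MySQL column type (upper-cased, modifiers stripped) to HeliosDB.
MYSQL_TYPE_MAP = {
    "TINYINT": "SMALLINT",
    "INT UNSIGNED": "BIGINT",
    "FLOAT": "REAL",
    "DOUBLE": "DOUBLE PRECISION",
    "DATETIME": "TIMESTAMP",
    "BLOB": "BYTEA",
    "JSON": "JSONB",
}

def map_mysql_type(mysql_type: str) -> str:
    """Return the HeliosDB type for a MySQL type, defaulting to pass-through."""
    normalized = mysql_type.strip().upper()
    return MYSQL_TYPE_MAP.get(normalized, normalized)
```

ENUM and SET need structural changes (a CHECK constraint, an array column), so they are left for manual handling rather than a type lookup.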

SQL Syntax Differences

1. String concatenation

-- MySQL
SELECT CONCAT(first_name, ' ', last_name) FROM users;
-- HeliosDB (same, or use ||)
SELECT first_name || ' ' || last_name FROM users;

2. LIMIT with OFFSET

-- MySQL
SELECT * FROM users LIMIT 10, 20; -- offset 10, limit 20
-- HeliosDB
SELECT * FROM users LIMIT 20 OFFSET 10;
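
Rewrites like this can be automated for simple cases. A regex-based sketch that converts MySQL's LIMIT offset, count form into LIMIT count OFFSET offset (illustrative only; it does not handle comments, string literals, or subqueries):

```python
import re

def rewrite_mysql_limit(sql: str) -> str:
    """Convert 'LIMIT <offset>, <count>' to 'LIMIT <count> OFFSET <offset>'."""
    return re.sub(
        r"LIMIT\s+(\d+)\s*,\s*(\d+)",
        r"LIMIT \2 OFFSET \1",
        sql,
        flags=re.IGNORECASE,
    )
```

Queries already using the standard LIMIT ... OFFSET form contain no comma, so the pattern leaves them untouched.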

3. Date functions

-- MySQL
SELECT DATE_ADD(NOW(), INTERVAL 1 DAY);
-- HeliosDB
SELECT NOW() + INTERVAL '1 day';

4. AUTO_INCREMENT

-- MySQL
CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY);
-- HeliosDB
CREATE TABLE t (id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY);

5. Backticks vs double quotes

-- MySQL
SELECT `column` FROM `table`;
-- HeliosDB
SELECT "column" FROM "table";

Connection String

# MySQL
mysql://user:pass@host:3306/dbname
# HeliosDB (use PostgreSQL format)
postgresql://user:pass@host:5432/dbname

Migration from Oracle

Data Type Mapping

Oracle        HeliosDB           Notes
NUMBER(p)     NUMERIC(p)         Exact
NUMBER        DOUBLE PRECISION   Float
VARCHAR2(n)   VARCHAR(n)         Same
CLOB          TEXT               Unlimited
BLOB          BYTEA              Binary
DATE          TIMESTAMP          Includes time
TIMESTAMP     TIMESTAMP          Same
RAW           BYTEA              Binary
ROWID         N/A                Not supported

SQL Syntax Differences

1. Dual table

-- Oracle
SELECT SYSDATE FROM DUAL;
-- HeliosDB
SELECT NOW(); -- No FROM clause needed

2. Outer join syntax

-- Oracle (old syntax)
SELECT * FROM a, b WHERE a.id = b.id(+);
-- HeliosDB
SELECT * FROM a LEFT JOIN b ON a.id = b.id;

3. NVL vs COALESCE

-- Oracle
SELECT NVL(col, 'default') FROM t;
-- HeliosDB
SELECT COALESCE(col, 'default') FROM t;

4. DECODE vs CASE

-- Oracle
SELECT DECODE(status, 1, 'Active', 2, 'Inactive', 'Unknown') FROM t;
-- HeliosDB
SELECT CASE status
    WHEN 1 THEN 'Active'
    WHEN 2 THEN 'Inactive'
    ELSE 'Unknown'
END FROM t;

5. Sequences

-- Oracle
CREATE SEQUENCE seq START WITH 1;
SELECT seq.NEXTVAL FROM DUAL;
-- HeliosDB
CREATE SEQUENCE seq START WITH 1;
SELECT nextval('seq');

6. Hierarchical queries

-- Oracle
SELECT * FROM emp START WITH mgr IS NULL CONNECT BY PRIOR emp_id = mgr;
-- HeliosDB (recursive CTE)
WITH RECURSIVE emp_tree AS (
    SELECT * FROM emp WHERE mgr IS NULL
    UNION ALL
    SELECT e.* FROM emp e JOIN emp_tree t ON e.mgr = t.emp_id
)
SELECT * FROM emp_tree;
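
The recursive CTE above expands the hierarchy level by level, starting from the roots. The same traversal in plain Python, over (emp_id, mgr) rows, can clarify what the CTE computes:

```python
from collections import defaultdict, deque

def employee_tree_order(rows):
    """rows: iterable of (emp_id, mgr) tuples, with mgr None for roots.
    Returns emp_ids level by level: roots first, then each successive
    reporting level, mirroring the CTE's iteration."""
    children = defaultdict(list)
    roots = []
    for emp_id, mgr in rows:
        if mgr is None:
            roots.append(emp_id)
        else:
            children[mgr].append(emp_id)
    order, queue = [], deque(roots)
    while queue:
        emp_id = queue.popleft()
        order.append(emp_id)
        queue.extend(children[emp_id])
    return order
```

Note that the SQL version guarantees no output order unless you add an ORDER BY (for example on a depth column carried through the CTE); the Python sketch just makes the level-by-level expansion visible.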

Migration from SQL Server

Data Type Mapping

SQL Server         HeliosDB        Notes
BIT                BOOLEAN         TRUE/FALSE
TINYINT            SMALLINT        Wider
MONEY              NUMERIC(19,4)   Exact
DATETIME           TIMESTAMP       Standard
DATETIME2          TIMESTAMP       Standard
NVARCHAR           VARCHAR         UTF-8
NTEXT              TEXT            UTF-8
VARBINARY          BYTEA           Binary
UNIQUEIDENTIFIER   UUID            UUID type
XML                TEXT or JSONB   Store as text

SQL Syntax Differences

1. TOP vs LIMIT

-- SQL Server
SELECT TOP 10 * FROM users;
-- HeliosDB
SELECT * FROM users LIMIT 10;

2. String functions

-- SQL Server
SELECT LEN(column), CHARINDEX('a', column) FROM t;
-- HeliosDB
SELECT LENGTH(column), POSITION('a' IN column) FROM t;

3. Date functions

-- SQL Server
SELECT GETDATE(), DATEADD(day, 1, GETDATE());
-- HeliosDB
SELECT NOW(), NOW() + INTERVAL '1 day';

4. Identity columns

-- SQL Server
CREATE TABLE t (id INT IDENTITY(1,1) PRIMARY KEY);
-- HeliosDB
CREATE TABLE t (id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY);

5. Square brackets

-- SQL Server
SELECT [column] FROM [table];
-- HeliosDB
SELECT "column" FROM "table";
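
Identifier quoting is another mechanical rewrite. A regex sketch converting SQL Server bracket quoting to standard double quotes (illustrative only; it does not account for brackets inside string literals):

```python
import re

def rewrite_brackets(sql: str) -> str:
    """Convert [identifier] quoting to "identifier" quoting."""
    return re.sub(r"\[([^\]]+)\]", r'"\1"', sql)
```
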

Migration from MongoDB

Document to Relational Mapping

1. Embedded documents -> Separate tables

// MongoDB
{
    _id: ObjectId("..."),
    name: "John",
    address: {
        street: "123 Main St",
        city: "Boston"
    }
}
-- HeliosDB Option 1: JSONB column
CREATE TABLE users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255),
    address JSONB
);
-- HeliosDB Option 2: Normalized
CREATE TABLE users (
    id UUID PRIMARY KEY,
    name VARCHAR(255)
);
CREATE TABLE addresses (
    id UUID PRIMARY KEY,
    user_id UUID REFERENCES users(id),
    street VARCHAR(255),
    city VARCHAR(255)
);
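
For the normalized option, each embedded document becomes a row in the child table. A minimal sketch that splits a MongoDB-style user document into a users row and an addresses row (uuid4 stands in for gen_random_uuid; the document shape and field names mirror the example above):

```python
import uuid

def split_user_document(doc):
    """Split {'name': ..., 'address': {...}} into (user_row, address_row)."""
    user_id = uuid.uuid4()
    user_row = {"id": user_id, "name": doc["name"]}
    address = doc.get("address")
    address_row = None
    if address is not None:
        address_row = {
            "id": uuid.uuid4(),
            "user_id": user_id,  # foreign key back to users
            "street": address.get("street"),
            "city": address.get("city"),
        }
    return user_row, address_row
```

Generating the UUID on the application side, as here, lets both rows be inserted in one transaction without a round trip to fetch the parent key.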

2. Arrays -> Array types or junction tables

// MongoDB
{ _id: ..., tags: ["tech", "news"] }
-- HeliosDB Option 1: Array type
CREATE TABLE articles (
    id UUID PRIMARY KEY,
    tags TEXT[]
);
-- Query
SELECT * FROM articles WHERE 'tech' = ANY(tags);

3. ObjectId -> UUID

-- Use native UUID type
CREATE TABLE documents (
id UUID PRIMARY KEY DEFAULT gen_random_uuid()
);

Query Translation

MongoDB                          HeliosDB
db.c.find({})                    SELECT * FROM c
db.c.find({a: 1})                SELECT * FROM c WHERE a = 1
db.c.find({a: {$gt: 1}})         SELECT * FROM c WHERE a > 1
db.c.find({a: {$in: [1,2]}})     SELECT * FROM c WHERE a IN (1, 2)
db.c.find({a: {$regex: 'pat'}})  SELECT * FROM c WHERE a ~ 'pat'
db.c.aggregate([{$group: ...}])  SELECT ... GROUP BY ...
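
The simple find() filters in the table translate mechanically. A sketch covering equality, $gt, and $in (values are inlined here for readability; real code should emit placeholders and bind parameters instead):

```python
def mongo_filter_to_where(filter_doc):
    """Translate a flat MongoDB filter document into a SQL WHERE clause."""
    clauses = []
    for field, cond in filter_doc.items():
        if isinstance(cond, dict):
            for op, value in cond.items():
                if op == "$gt":
                    clauses.append(f"{field} > {value!r}")
                elif op == "$in":
                    vals = ", ".join(repr(v) for v in value)
                    clauses.append(f"{field} IN ({vals})")
                else:
                    raise ValueError(f"unsupported operator: {op}")
        else:
            clauses.append(f"{field} = {cond!r}")
    return " AND ".join(clauses)
```

Aggregation pipelines need case-by-case translation into GROUP BY, window functions, or CTEs; they rarely map one-to-one.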

Schema Migration

Using ETL Pipeline

use heliosdb_etl::AutomatedETLEngine;

async fn migrate_schema() -> Result<()> {
    let engine = AutomatedETLEngine::new(Default::default()).await?;
    // Extract sample data from source
    let source_data = extract_from_source("source_connection").await?;
    // Infer schema
    let schema = engine.infer_schema("target_table", &source_data).await?;
    // Review and adjust schema
    let adjusted_schema = review_schema(schema);
    // Create table in HeliosDB
    create_table(adjusted_schema).await?;
    Ok(())
}

Manual Schema Conversion

-- Example: Convert MySQL schema to HeliosDB
-- Original (MySQL)
CREATE TABLE users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    email VARCHAR(255) NOT NULL UNIQUE,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    metadata JSON
);
-- Converted (HeliosDB)
CREATE TABLE users (
    id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    email VARCHAR(255) NOT NULL UNIQUE,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    metadata JSONB
);

Data Migration

Bulk Load with COPY

-- Export from source (PostgreSQL example)
COPY users TO '/tmp/users.csv' WITH CSV HEADER;
-- Import to HeliosDB
COPY users FROM '/tmp/users.csv' WITH CSV HEADER;
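
Before importing, it is worth sanity-checking the exported file. A small sketch that counts data rows in a CSV export (header excluded), for comparison against the source table's SELECT COUNT(*):

```python
import csv

def count_csv_rows(lines, has_header=True):
    """Count data rows in CSV content (a file object or iterable of lines),
    skipping the header row if present."""
    reader = csv.reader(lines)
    if has_header:
        next(reader, None)
    return sum(1 for _ in reader)
```

Using the csv module rather than counting newlines matters: quoted fields may contain embedded line breaks, which a naive line count would miscount as extra rows.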

Using ETL Pipeline

use heliosdb_etl::{AutomatedETLEngine, CDCProcessor};

async fn migrate_data() -> Result<()> {
    let engine = AutomatedETLEngine::new(Default::default()).await?;
    // Build pipeline
    let pipeline = engine.build_pipeline(source_schema, target_schema).await?;
    // Stream data
    while let Some(batch) = read_batch().await? {
        let result = pipeline.execute(batch).await?;
        write_to_heliosdb(result).await?;
    }
    Ok(())
}

Query Migration

Query Validation Tool

# Compare query results between source and target
import psycopg2

def validate_query(source_conn, target_conn, query):
    # The query should include ORDER BY so row order is deterministic
    source_cursor = source_conn.cursor()
    target_cursor = target_conn.cursor()
    source_cursor.execute(query)
    target_cursor.execute(query)
    source_results = source_cursor.fetchall()
    target_results = target_cursor.fetchall()
    assert source_results == target_results, "Results differ!"
    print(f"Query validated: {len(source_results)} rows match")

Testing and Validation

Data Validation Checklist

  • Row counts match between source and target
  • Sample data spot checks pass
  • Foreign key relationships preserved
  • Null handling correct
  • Date/time conversions accurate
  • Unicode data preserved
  • Numeric precision maintained
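
The first checklist item can be automated. A sketch that compares per-table row counts between source and target, assuming the counts have already been collected (e.g. via SELECT COUNT(*) per table on each side):

```python
def compare_row_counts(source_counts, target_counts):
    """Given {table: count} dicts for source and target, return a list of
    (table, source_count, target_count) for every mismatch.
    A table missing on one side counts as 0 there."""
    mismatches = []
    for table in sorted(set(source_counts) | set(target_counts)):
        src = source_counts.get(table, 0)
        tgt = target_counts.get(table, 0)
        if src != tgt:
            mismatches.append((table, src, tgt))
    return mismatches
```

Row counts alone do not prove correctness, so pair this with the spot checks and precision checks from the rest of the list.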

Performance Validation

-- Run benchmark queries
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'test@example.com';
-- Compare with source database performance

Application Testing

  1. Unit tests: Run existing test suite against HeliosDB
  2. Integration tests: Test all application flows
  3. Load tests: Verify performance under load
  4. Failover tests: Test high availability