Over the last few years, I’ve managed several projects that outgrew MySQL’s capabilities, particularly when it came to complex joins and advanced indexing. If you’re searching for a guide on how to migrate MySQL to PostgreSQL, you’ve likely realized that a simple dump and restore won’t work. The two systems handle data types, constraints, and quoting very differently.
In my experience, the biggest mistake developers make is trying to manually rewrite SQL dumps. It’s a recipe for disaster. Instead, I recommend using specialized tooling that handles the type casting for you. In this tutorial, I’ll walk you through the process using pgLoader, which I’ve found to be the most robust open-source tool for this specific migration.
Prerequisites
Before we start the migration, ensure you have the following ready:
- A running MySQL instance with the data you want to move.
- A fresh PostgreSQL instance (if you’re undecided on hosting, check out my thoughts on Neon serverless Postgres for a modern, scalable approach).
- pgLoader installed on a machine that has network access to both databases.
- A user in MySQL with SELECT and DESCRIBE permissions on all tables.
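If you want a dedicated migration user, a minimal grant looks something like this. The user name `migrator` and database name `dbname` are placeholders; adjust them (and the host pattern) to your environment. In MySQL, SELECT on the tables is also enough to run DESCRIBE:

```sql
-- Hypothetical read-only migration user; restrict the host pattern in production.
CREATE USER 'migrator'@'%' IDENTIFIED BY 'a-strong-password';

-- SELECT covers both reading rows and describing table structure.
GRANT SELECT ON dbname.* TO 'migrator'@'%';
FLUSH PRIVILEGES;
```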
Step 1: Analyzing the Schema
PostgreSQL is stricter than MySQL. While MySQL might let you get away with implicit type conversions, Postgres will throw an error. Before migrating, I always review current schema design best practices to ensure the target schema is optimized for Postgres’s strengths.
Pay close attention to TINYINT, DATETIME, and AUTO_INCREMENT columns, as these map differently to Postgres SMALLINT, TIMESTAMP, and SERIAL types.
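To make the mapping concrete, here is a sketch of a typical MySQL table and its rough Postgres equivalent after migration. The table and column names are illustrative; note that `TINYINT(1)` is often used as a boolean in MySQL, so you may prefer to cast it to BOOLEAN rather than SMALLINT:

```sql
-- MySQL source definition:
CREATE TABLE users (
  id INT AUTO_INCREMENT PRIMARY KEY,
  is_active TINYINT(1) NOT NULL DEFAULT 1,
  created_at DATETIME
);

-- Rough PostgreSQL equivalent after the migration:
CREATE TABLE users (
  id SERIAL PRIMARY KEY,          -- AUTO_INCREMENT becomes a sequence-backed column
  is_active SMALLINT NOT NULL DEFAULT 1,  -- TINYINT has no direct Postgres counterpart
  created_at TIMESTAMP            -- DATETIME maps to TIMESTAMP (no time zone)
);
```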
Step 2: Installing pgLoader
I prefer running pgLoader via Docker to avoid dependency hell with various library versions. You can pull the image and run it directly:
docker pull dimitri/pgloader
Step 3: Creating the Migration Command
The most powerful way to use pgLoader is via a command file. Create a file named migrate.load. This allows you to define specific casting rules and exclude tables you don’t need.
LOAD DATABASE
FROM mysql://user:password@mysql-host/dbname
INTO postgresql://user:password@postgres-host/dbname
WITH include drop, create tables, create indexes, reset sequences
CAST type mysql=tinyint to postgresql=smallint,
type mysql=datetime to postgresql=timestamp,
type mysql=varchar to postgresql=text
SET maintenance_work_mem to '128MB', work_mem to '64MB';
pgLoader doesn’t just move data; it transforms the schema in real time. The include drop option is useful if you are iterating on your migration script and need to clear the target database between attempts.
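If you want to leave some tables behind, pgLoader command files also support an EXCLUDING clause. A minimal sketch, using hypothetical audit and session tables as the things to skip:

```
LOAD DATABASE
     FROM mysql://user:password@mysql-host/dbname
     INTO postgresql://user:password@postgres-host/dbname
WITH include drop, create tables, create indexes, reset sequences
EXCLUDING TABLE NAMES MATCHING ~/audit/, 'sessions';
```

The `~/audit/` form is a regular expression, so it skips every table whose name contains “audit”, while `'sessions'` matches that one table exactly.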
Step 4: Executing the Migration
Now, run the migration using the command file you just created:
docker run --rm -v $(pwd):/tmp dimitri/pgloader /tmp/migrate.load
I’ve found that for databases larger than 10GB, you should increase the maintenance_work_mem in the SET clause to speed up index creation.
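For a large database, the SET clause from the command file above might look more like this (values are a starting point, not a recommendation; size them to the memory available on your Postgres host):

```
SET maintenance_work_mem to '1GB', work_mem to '256MB';
```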
Pro Tips for a Smooth Migration
- Disable Triggers: If you have complex triggers on the Postgres side, disable them during the load to avoid data duplication or performance bottlenecks.
- Batching: For massive tables, consider migrating them individually rather than as a whole database to avoid locking the target instance.
- Vacuuming: Immediately after the migration, run VACUUM ANALYZE on your Postgres database to update statistics for the query planner.
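The trigger and vacuum tips above translate to standard Postgres statements. The table name `orders` here is just an example:

```sql
-- Disable all triggers on a table for the duration of the load:
ALTER TABLE orders DISABLE TRIGGER ALL;

-- ... run the migration ...

-- Re-enable them once the data is in place:
ALTER TABLE orders ENABLE TRIGGER ALL;

-- Then refresh planner statistics across the whole database:
VACUUM ANALYZE;
```

Note that DISABLE TRIGGER ALL also disables the triggers that enforce foreign keys, so only re-enable them after you’ve verified the loaded data is consistent.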
Troubleshooting Common Issues
“Invalid byte sequence for encoding ‘UTF8’”
MySQL often allows mixed encodings (like latin1 and utf8) in the same table. Postgres does not. To fix this, I recommend converting your MySQL tables to utf8mb4 before starting the migration.
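On the MySQL side, you can first check which tables are affected and then convert them. The schema name `dbname` and table name `users` are placeholders:

```sql
-- Inspect current collations (and by extension, character sets) per table:
SELECT table_name, table_collation
FROM information_schema.tables
WHERE table_schema = 'dbname';

-- Convert a table's character set and collation to utf8mb4 before migrating:
ALTER TABLE users CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
```

CONVERT TO rewrites the table, so schedule it during a low-traffic window for large tables.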
Sequence Out of Sync
Sometimes SERIAL columns don’t align with the actual max ID of the imported data. I usually run a script to reset the sequences:
SELECT setval(pg_get_serial_sequence('table_name', 'id'), coalesce(max(id), 1)) FROM table_name;
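Running that per table gets tedious, so I sometimes use a DO block to realign every sequence-backed column at once. This sketch assumes your tables live in the public schema; adjust the schema filter if not:

```sql
DO $$
DECLARE
  rec record;
BEGIN
  -- Find every column whose default draws from a sequence
  FOR rec IN
    SELECT c.table_name, c.column_name
    FROM information_schema.columns c
    WHERE c.table_schema = 'public'
      AND c.column_default LIKE 'nextval(%'
  LOOP
    -- %L quotes literals, %I quotes identifiers safely
    EXECUTE format(
      'SELECT setval(pg_get_serial_sequence(%L, %L), coalesce(max(%I), 1)) FROM %I',
      rec.table_name, rec.column_name, rec.column_name, rec.table_name
    );
  END LOOP;
END $$;
```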
What’s Next?
Once your data is safely in PostgreSQL, you can start leveraging features like JSONB for semi-structured data or GIN indexes for full-text search. If you’re still refining your architecture, I highly suggest revisiting modern schema design patterns to ensure you aren’t just porting MySQL habits into a Postgres world.
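As a taste of what those features look like, here is a hypothetical table combining JSONB with GIN-backed indexing (the table and index names are made up for illustration):

```sql
-- Semi-structured metadata plus searchable text in one table:
CREATE TABLE articles (
  id BIGSERIAL PRIMARY KEY,
  body TEXT,
  metadata JSONB DEFAULT '{}'
);

-- GIN index for fast containment queries on the JSONB column:
CREATE INDEX idx_articles_metadata ON articles USING GIN (metadata);

-- GIN index over a tsvector for full-text search on the body:
CREATE INDEX idx_articles_fts ON articles USING GIN (to_tsvector('english', body));

-- Containment query that can use the metadata index:
SELECT id FROM articles WHERE metadata @> '{"tag": "postgres"}';
```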
Need more help with your data stack? Explore our other Data & Analytics guides for more automation tips.