Database Migrations

As your application grows, your data model evolves. You add new fields to your entities, create new tables, modify relationships, or change column types. Since you’re working with TypeScript entities, making these changes is straightforward - you simply modify your entity classes.

In the early stages of development, this is often enough. You modify your entities, drop and recreate your database from scratch, and keep iterating. MikroORM provides options to reset the entire database schema based on your current entities. There’s no formal tracking of changes because the schema is still fluid and you don’t have important data to preserve.
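
For example, a minimal sketch of that reset flow using MikroORM’s SchemaGenerator (the config import path is illustrative; refreshDatabase() drops everything and recreates the schema from your current entities, which is what the fresh-style CLI commands mentioned later do):

import { MikroORM } from '@mikro-orm/postgresql';
import config from './mikro-orm.config'; // illustrative path

const orm = await MikroORM.init(config);

// Drop all tables and recreate them from the current entity definitions
await orm.schema.refreshDatabase();

await orm.close(true);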

But at some point, this informal approach becomes problematic:

The key trigger is data preservation. Once you care about keeping existing data, you can’t drop and recreate the database anymore. This happens:

  • When you start having test data you want to keep locally
  • When deploying to staging or production with real user data

Other factors that push toward a more structured approach:

  • You want a versioned history of schema changes (who changed what, when, and why)
  • You need to review schema changes before they’re applied (especially in production)

Some teams use the “drop and recreate” approach for the first few weeks while the schema is highly experimental, then adopt a more formal process once data becomes important. Others use a structured approach from day one for better discipline.

Here’s where database migrations come in (also called “schema migrations”). Instead of recreating everything from scratch, you create versioned files called migrations that describe incremental changes to your schema. Each migration represents a single, specific evolution - adding a column, creating a table, modifying a constraint, etc. These migrations can be applied safely to databases that already contain data, preserving the existing records while updating the structure.

MikroORM (like many modern ORMs) provides a built-in system to automatically generate and apply migrations based on changes to your TypeScript entities. This system relies on three elements that work together:

The snapshot file (.snapshot.json): A local JSON file in your codebase that captures your schema state after each migration generation. It’s used to detect what changed in your entities.

Migration files: Versioned files containing the SQL statements to apply those detected changes to your database. These are generated by comparing your current entities against the snapshot.

The migrations table (mikro_orm_migrations): A table in your database that tracks which migration files have already been executed.

Let’s explore each of these elements in detail.
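
All three elements are wired together through the migrations section of your MikroORM config. A minimal sketch, with illustrative paths (the option names come from MikroORM’s migrations config; the values shown for snapshot and tableName are the defaults):

// mikro-orm.config.ts
import { defineConfig } from '@mikro-orm/postgresql';
import { Migrator } from '@mikro-orm/migrations';

export default defineConfig({
  dbName: 'app',
  entities: ['./dist/modules/**/*.entity.js'],
  entitiesTs: ['./src/modules/**/*.entity.ts'],
  extensions: [Migrator],
  migrations: {
    path: './src/modules/db/migrations',  // where migration files and .snapshot.json live
    snapshot: true,                       // maintain the snapshot file used for diffing
    tableName: 'mikro_orm_migrations',    // the tracking table in the database
    allOrNothing: true,                   // wrap a migration batch in a single transaction
  },
});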

The Snapshot File

The snapshot is a JSON file located by default in apps/api/src/modules/db/migrations/.snapshot.json. It’s a crucial concept to understand: the snapshot is MikroORM’s memory of what your entities looked like when you last generated a migration.

Unlike some ORMs that inspect your actual database to detect changes, MikroORM works entirely from your codebase:

  • It never connects to your database to see what’s there
  • It only compares your current TypeScript entities against this snapshot file
  • The snapshot represents the “before” state; your current entities represent the “after” state

This is why committing the snapshot to Git is mandatory - it’s the only way MikroORM knows what changed between migrations.

How it works:

MikroORM provides CLI tools to generate migrations. When you run the migration generation command:

  1. MikroORM scans all your TypeScript entities and builds a representation of the desired schema
  2. It checks if a .snapshot.json file exists:
    • If no snapshot exists yet: MikroORM generates what’s called an initial migration - a special first migration that creates all your tables from scratch. It also creates the snapshot file at the same time, capturing the complete initial schema state.
    • If a snapshot exists: MikroORM loads it (the previous entity state) and compares it with your current entities to generate an incremental migration
  3. It calculates the difference between the two (the “diff”)
  4. It generates a migration file with SQL statements for those changes
  5. It immediately updates the snapshot to reflect the new entity state (so it becomes the baseline for the next migration)

Since MikroORM compares entities against the snapshot file rather than your actual database, you can generate migrations offline without any database connection. The entire process happens in your codebase.
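
In practice you’ll usually run this through the CLI (npx mikro-orm migration:create), but the same diffing is available programmatically via the Migrator. A minimal sketch, assuming the config shown earlier:

import { MikroORM } from '@mikro-orm/postgresql';
import config from './mikro-orm.config';

// connect: false is enough here - diffing only needs the entities and the snapshot
const orm = await MikroORM.init({ ...config, connect: false });

const result = await orm.migrator.createMigration();
console.log(result.fileName); // e.g. Migration20250106143052.ts

await orm.close(true);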

One thing to keep in mind: if someone modifies the database schema manually (outside of migrations), MikroORM won’t detect it. The snapshot only tracks changes you make to your TypeScript entities.

Migration Files

As mentioned above, when you run the MikroORM CLI migration generation command, it automatically creates migration files based on the detected changes. These migration files are TypeScript files with timestamped names (e.g., Migration20250106143052.ts) stored in your migrations folder. Each file represents a specific change to your database schema and contains two methods:

up() method: Contains SQL statements that will be executed directly on your database when you apply the migration. This is the “forward” path that updates your schema to the new state.

async up(): Promise<void> {
  this.addSql('ALTER TABLE "user" ADD COLUMN "phone_number" VARCHAR(255);');
}

down() method: Contains SQL statements that will be executed directly on your database when you roll back the migration. This is the “reverse” path that undoes the changes.

async down(): Promise<void> {
  this.addSql('ALTER TABLE "user" DROP COLUMN "phone_number";');
}
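
Put together, a generated migration file looks like this - a class named after the timestamped filename, extending Migration from @mikro-orm/migrations:

import { Migration } from '@mikro-orm/migrations';

export class Migration20250106143052 extends Migration {

  async up(): Promise<void> {
    this.addSql('ALTER TABLE "user" ADD COLUMN "phone_number" VARCHAR(255);');
  }

  async down(): Promise<void> {
    this.addSql('ALTER TABLE "user" DROP COLUMN "phone_number";');
  }

}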

When you run the apply command (npx mikro-orm migration:up), MikroORM executes the up() SQL against your database. When you run the rollback command (npx mikro-orm migration:down), it executes the down() SQL.

MikroORM generates these migration files automatically, but you should always review the SQL before applying it. Migrations modify your database schema, and if you’re modifying columns that already contain data, this can be critical. The generated SQL might not be optimal or safe in all cases.

For example, when renaming a field, MikroORM might generate DROP + ADD instead of RENAME, which would destroy your existing data. Adding NOT NULL columns to tables with existing data will fail unless you handle it properly. Complex type changes might need manual data transformation steps.

You can (and should) manually edit the SQL in migration files when needed. Just be careful - this SQL will execute directly on your database, so test your changes thoroughly, especially when working with existing data.

The Migrations Table

The mikro_orm_migrations table is created automatically in your database when you run your first migration. It tracks which migration files have been executed in that specific database.

When you apply migrations, MikroORM checks this table to identify pending migrations (files that exist in your codebase but aren’t recorded in the table), then executes all of them in chronological order by running their up() methods. Each environment (local, staging, production) has its own migrations table tracking its own state independently.
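
The CLI exposes this through npx mikro-orm migration:pending and npx mikro-orm migration:up; the same operations are also available programmatically. A minimal sketch:

import { MikroORM } from '@mikro-orm/postgresql';
import config from './mikro-orm.config';

const orm = await MikroORM.init(config);

// Migration files present in the codebase but not recorded in mikro_orm_migrations
const pending = await orm.migrator.getPendingMigrations();
console.log(`${pending.length} pending migration(s)`);

// Runs each pending up() in chronological order and records it in the table
await orm.migrator.up();

await orm.close(true);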

Migrating in Production

In development or staging environments, making mistakes with migrations is relatively low-risk. You can drop your database, regenerate everything, or restore from a recent backup. The consequences are limited.

But production is fundamentally different. Production contains real user data, and you obviously don’t want to risk losing or corrupting it. A schema change mistake has much more serious consequences, which is what makes applying changes to production stressful. That’s why we detail best practices here to protect against these risks.

Never use fresh or migrate:fresh in Production

This might seem obvious, but let’s be explicit: dropping and recreating the schema destroys all data. In production environments, you must always use the migration-based workflow.

Before running any migration in production:

  1. Take a full database backup
  2. Verify the backup is valid and can be restored
  3. Store the backup in a secure location with appropriate access controls and encryption
  4. Document what time the backup was taken and set a retention policy

If the migration fails or causes issues, you can restore from this backup. Some teams automate this by requiring their deployment scripts to create a backup before running migrations.
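
As a sketch of that automation - assuming PostgreSQL, pg_dump available on the host, and a DATABASE_URL environment variable (all assumptions, not MikroORM features):

import { execSync } from 'node:child_process';
import { MikroORM } from '@mikro-orm/postgresql';
import config from './mikro-orm.config';

// Take a timestamped backup first; execSync throws if pg_dump fails,
// which aborts the deploy before any migration runs
const stamp = new Date().toISOString().replace(/[:.]/g, '-');
execSync(`pg_dump "${process.env.DATABASE_URL}" -Fc -f "backup-${stamp}.dump"`, {
  stdio: 'inherit',
});

const orm = await MikroORM.init(config);
await orm.migrator.up();
await orm.close(true);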

Validate Migrations Locally Before Deploying

This goes without saying, but you should always run your migrations locally before deploying to production.

The typical flow is to check out a previous state of the project, seed the database, then check out your branch and run your migrations.

You can also automate this process in CI/CD; the bare minimum is to run all migrations against an empty database to verify that they apply successfully.

Schema Changes That Need Extra Care

When your database is empty or only has test data, almost any schema change works fine. But once you have real data in production, certain types of changes become problematic because you need to preserve that data while modifying the structure.

Here are common scenarios that require extra attention:

Adding a required (NOT NULL) field

If you add a NOT NULL column to a table that already has rows, the database will reject the migration. You need to either add the column as nullable first and fill it with default values, then add the NOT NULL constraint in a second migration, or add the column with a default value directly.
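
For example, a hand-edited migration taking the multi-step route might look like this (PostgreSQL syntax; the locale column and its 'en' default are hypothetical, and the steps are shown in one migration for brevity):

async up(): Promise<void> {
  // 1. Add the column as nullable so existing rows don't violate the constraint
  this.addSql('ALTER TABLE "user" ADD COLUMN "locale" VARCHAR(255);');

  // 2. Backfill existing rows with a sensible default
  this.addSql(`UPDATE "user" SET "locale" = 'en' WHERE "locale" IS NULL;`);

  // 3. Now it's safe to enforce NOT NULL
  this.addSql('ALTER TABLE "user" ALTER COLUMN "locale" SET NOT NULL;');
}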

Renaming a field

When you rename a field in your entity, MikroORM generates SQL to drop the old column and create a new one, which destroys the data. You need to manually edit the generated migration to use ALTER TABLE ... RENAME COLUMN instead.
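
A minimal edit, reusing the phone_number example from earlier (PostgreSQL syntax; the new name is hypothetical):

async up(): Promise<void> {
  // Replaces the generated DROP COLUMN + ADD COLUMN pair, preserving the data
  this.addSql('ALTER TABLE "user" RENAME COLUMN "phone_number" TO "mobile_number";');
}

async down(): Promise<void> {
  this.addSql('ALTER TABLE "user" RENAME COLUMN "mobile_number" TO "phone_number";');
}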

Changing column types

If you change a field from string to number in your entity, the generated SQL might just change the column type. But if your existing data can’t be converted, the migration will fail. You might need a multi-step migration that transforms the data first.
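
A sketch of such a multi-step migration, assuming PostgreSQL and a hypothetical quantity column currently stored as text:

async up(): Promise<void> {
  // 1. Normalize rows that can't be cast (here: treat non-numeric values as zero)
  this.addSql(`UPDATE "order_line" SET "quantity" = '0' WHERE "quantity" !~ '^[0-9]+$';`);

  // 2. Change the type, telling PostgreSQL how to convert each existing value
  this.addSql('ALTER TABLE "order_line" ALTER COLUMN "quantity" TYPE INT USING ("quantity"::int);');
}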

Working with enums

Enums are often tricky with existing data. If you change enum values or add new ones, you need to ensure existing data is compatible. Sometimes you’ll need to migrate the data to new enum values before changing the column definition, or handle the transition in multiple steps.
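
For example, assuming the enum is stored as a text column guarded by a CHECK constraint (one common mapping; the table, constraint name, and values are hypothetical), renaming an enum value takes a data step between the constraint steps:

async up(): Promise<void> {
  // 1. Drop the old constraint so the data migration isn't blocked
  this.addSql('ALTER TABLE "user" DROP CONSTRAINT "user_status_check";');

  // 2. Move existing rows to the new value
  this.addSql(`UPDATE "user" SET "status" = 'inactive' WHERE "status" = 'disabled';`);

  // 3. Recreate the constraint with the new set of allowed values
  this.addSql(`ALTER TABLE "user" ADD CONSTRAINT "user_status_check" CHECK ("status" IN ('active', 'inactive'));`);
}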

What to Do When Migrations Fail in Production

Unfortunately, even with careful planning, migrations can sometimes fail in production. When this happens, take a deep breath and follow this checklist:

  1. Don’t panic and don’t run more commands blindly
  2. Identify which failure case you’re in: did the migration run without errors, but the API is broken (case A)? Or did the migration itself fail to execute (case B)?

Case A. Your migration ran without errors, but the API is broken. In this case, the SQL itself worked fine; the problem lies in the data and in how your migration handles production data.

  1. Check the migration table (mikro_orm_migrations) to confirm that the migration was marked as executed
  2. Check what state your database is actually in (which tables/columns exist)
  3. Roll back your server version
  4. Restore your database from your backup
  5. Fix the migration file and try again. To test your migration properly, you’ll probably need realistic data and a list of the cases that failed.

Case B. The migration itself failed to execute. In this case, the SQL is not working and you need to fix the migration file.

  1. If you did everything correctly, the allOrNothing option will have rolled back the transaction, and depending on your infrastructure, the previous container should still be running.
  2. Grab the logs from the failed migration.
  3. Fix the migration file and try again.

Remember that each migration file has both up() and down() methods? It might be tempting to think “if a migration causes problems in production, I’ll just rollback using the down() method.” However, this is rarely a good solution in production.

In development, rollback can be useful - you can apply a migration, realize it’s wrong, roll it back, and try again. But down migrations have serious limitations that make them problematic in production:

  • Data loss: If you drop a column, rolling back recreates the column but the data is gone forever
  • Irreversible transformations: If you transform data (like converting emails to lowercase), rollback can’t restore the original values
  • Complex changes: Some operations (dropping constraints, certain column type changes) can’t be perfectly reversed

In production, rollback is rarely the right solution. If a migration causes problems, it’s usually safer to:

  1. Restore from your backup if data was lost or corrupted
  2. Write a new forward migration that fixes the issue
  3. Only use rollback for immediate issues caught right after deployment, and only if no data was lost