
Database changes are typically one-way. If your change creates or modifies a table such that there are new columns, and you populate those with data, then downgrading would destroy those columns and the data in them. Hence you can't safely downgrade once you upgrade, or you'd potentially be breaking things. To downgrade safely you'd need to back up or snapshot the old database and then restore from that backup/snapshot, but that's not blue/green.


DB schema migration script frameworks (at least in Python, Ruby & Java lands) do typically support both upgrade and downgrade directions. People skip implementing and testing the downgrade side if the development model doesn't need it, but what happens to the data is controlled by what you put in the "down" migration script.

I'd guess that if you can't throw the data away, you won't do a down migration; you'll do an up migration that changes the DB to save that data in your preferred way before undoing or reworking the previous schema change.
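To make the data-loss point concrete, here is a minimal sketch of the up/down migration pattern these frameworks use, with sqlite3 standing in for a real database (the table and column names are illustrative, not from any particular framework):

```python
# Minimal sketch of paired up/down migrations, using sqlite3 for
# illustration. The down migration shows why these are a trap: it
# destroys whatever data the new column held.
import sqlite3

def upgrade(conn):
    # Add a new column and backfill it with data.
    conn.execute("ALTER TABLE users ADD COLUMN email TEXT")
    conn.execute("UPDATE users SET email = name || '@example.com'")

def downgrade(conn):
    # Recreate the table without the new column (the portable way to
    # drop a column in SQLite) -- the email data is gone afterwards.
    conn.execute("CREATE TABLE users_old (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users_old SELECT id, name FROM users")
    conn.execute("DROP TABLE users")
    conn.execute("ALTER TABLE users_old RENAME TO users")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

upgrade(conn)
print(conn.execute("SELECT email FROM users").fetchone())  # ('alice@example.com',)

downgrade(conn)
cols = [r[1] for r in conn.execute("PRAGMA table_info(users)")]
print(cols)  # ['id', 'name'] -- the backfilled email data is destroyed
```

The roll-forward alternative would instead be another `upgrade`-style migration that first copies the email data somewhere safe before reshaping the table.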


> DB schema migration script frameworks (at least in Python, Ruby & Java lands) do typically support both upgrade and downgrade directions.

They do, and in every shop I've ever been in these are considered a trap precisely because they don't consider data loss.

Always roll forward. If you have to change migration history, restore a backup and lament past you's hubris.


This is solved more cleanly in declarative schema management systems, where you have a schema repo of CREATE statements, and the tool can auto-generate the correct DDL. You never need to write any migrations at all, up or down. If you need to roll back, you use `git revert` and then auto-generate from there. The history is in Git, and you can fully leverage Git like a proper codebase.

A key component is that the schema management tool must be able to detect and warn/error on destructive changes -- regardless of whether it's a conceptual revert or just a bad change (e.g. altering a column's data type in a lossy way). My declarative tool Skeema [1] has handled this since the first release, among many other safety features.

That all said, schema changes are mostly orthogonal to database version upgrades, so this whole subthread is a bit different than the issue discussed several levels above :) The root of the blue/green no-rollback-after-upgrade issue discussed above is that MySQL logical replication officially supports older-version-primary -> newer-version-replica, but not vice versa. Across different release series, the replication format can change in ways that the older version replicas do not understand or support.

[1] https://github.com/skeema/skeema
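A toy sketch of the declarative idea (this is not Skeema's implementation; the schema representation and function names here are invented for illustration): diff the desired schema against the current one, generate DDL, and refuse destructive changes unless explicitly allowed.

```python
# Toy declarative schema diff: desired state in, DDL out, with a
# guard against destructive changes. Schemas are plain dicts here
# (table name -> {column name: type}) purely for illustration.
def diff_schema(current, desired, allow_destructive=False):
    ddl = []
    for table, cols in desired.items():
        if table not in current:
            body = ", ".join(f"{c} {t}" for c, t in cols.items())
            ddl.append(f"CREATE TABLE {table} ({body})")
            continue
        for col, typ in cols.items():
            if col not in current[table]:
                ddl.append(f"ALTER TABLE {table} ADD COLUMN {col} {typ}")
        for col in current[table]:
            if col not in cols:
                if not allow_destructive:
                    raise ValueError(f"destructive: would drop {table}.{col}")
                ddl.append(f"ALTER TABLE {table} DROP COLUMN {col}")
    return ddl

current = {"users": {"id": "INTEGER", "name": "TEXT", "email": "TEXT"}}
desired = {"users": {"id": "INTEGER", "name": "TEXT"}}  # e.g. after `git revert`
print(diff_schema(current, desired, allow_destructive=True))
# ['ALTER TABLE users DROP COLUMN email']
```

The point is that a rollback is just another diff: reverting the schema repo and regenerating produces the DROP, and the tool can see that it's lossy and make you opt in.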


I have two theories where people end up wanting them:

(1) Circumstances that, for some reason, require that sysadmin-type people always be able to downgrade or roll back deployments without "the developers" producing new builds or software artifacts. That is, a separation of ops and dev teams where you decide you need to survive an inability to make or procure new software builds on demand, and instead dig up old deployment artifacts to use after a down migration. There are a lot of wrong reasons to do this in in-house software settings, but there's also the classic "we bought a 3rd-party server app and plugged it into our on-prem database" case, like Jira or something.

(2) Systems that are technically unable to recover from errors happening mid-migration (the DB lacks transactional schema changes, and/or the application does DB-related work at deployment time that can't be rolled back). The down migration is then effectively a hand-coded rollback that is run automatically when a deployment fails.

In both cases I can see how the "what happens to data in new columns" situation might still work out. In case (2) it's obvious: there's no new data yet. In case (1) you live with it or choose the backup-restore path. I can see scenarios where you'd decide it's much worse to restore from backup and lose a couple of days of users' entered data (or however long it took to find the showstopper in the upgrade) than to run the down migration and lose only the new-feature-related data. (Which you could also rehearse and test beforehand with backups.)
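Case (2) disappears entirely on databases with transactional DDL: a failed migration rolls back on its own, with no hand-coded down script. A minimal sketch, using sqlite3 (which supports transactional DDL) with an explicitly managed transaction:

```python
# Sketch of case (2) handled by transactional DDL: a migration that
# fails partway is rolled back wholesale, so no down migration is
# needed for the failure case. sqlite3 is used for illustration.
import sqlite3

def migrate(conn, statements):
    conn.execute("BEGIN")
    try:
        for stmt in statements:
            conn.execute(stmt)
        conn.execute("COMMIT")
    except sqlite3.Error:
        conn.execute("ROLLBACK")
        raise

conn = sqlite3.connect(":memory:", isolation_level=None)  # manage txns manually
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

try:
    migrate(conn, [
        "ALTER TABLE users ADD COLUMN email TEXT",
        "ALTER TABLE nonexistent ADD COLUMN oops TEXT",  # fails mid-migration
    ])
except sqlite3.Error:
    pass

cols = [r[1] for r in conn.execute("PRAGMA table_info(users)")]
print(cols)  # ['id', 'name'] -- the partial ADD COLUMN was rolled back
```

PostgreSQL behaves the same way; MySQL historically did not (most DDL caused implicit commits), which is one reason these hand-coded rollback scripts exist at all.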


Our in-house schema migration tool supports downgrading, but it won't remove non-empty tables or columns etc.

For us this isn't a big deal though, because we write our software so that it can function as expected on a DB with a newer schema. This makes upgrades much easier to handle, as users can run new and old software side-by-side.
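One common way to get that forward compatibility (a sketch of the general technique, not this commenter's actual codebase) is for application code to name columns explicitly rather than relying on `SELECT *` or positional inserts, so additive schema changes don't break it:

```python
# Sketch: "old" application code keeps working against a newer schema
# because every query names its columns explicitly. sqlite3 is used
# for illustration; the schema and functions are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# Old code, written against the original schema:
def add_user(name):
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

def get_names():
    return [r[0] for r in conn.execute("SELECT name FROM users")]

# Newer software migrates the schema while the old code is still running:
conn.execute("ALTER TABLE users ADD COLUMN email TEXT")

add_user("alice")   # still works: the column list is explicit
print(get_names())  # ['alice']
```

With `SELECT *` or `INSERT INTO users VALUES (...)`, the same schema change would have broken the old code.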


Database migrations are always "fail forward": if there's an error, you figure out what it was and fix it.


But now you have two up paths, and migrations are critical. I would avoid that where possible!





