There are many ways that teams handle database migrations in continuous deployment environments.
Some teams run migrations as part of the deployment process, others run them manually, and some have more complete systems featuring downward (rollback) migrations. Downward migrations provide one more recovery tool when a recent database schema change causes problems. For simplicity, this course runs migrations via our GitHub Actions deployment script.
We'll simply migrate the DB to the latest version on every deploy. That way, if the code we're deploying requires a new schema, it will always be in place. This only becomes a problem when a migration makes a backward-incompatible change to the schema (like dropping a table). If the currently running application needs a table that the migration drops, it will break until the new code is deployed.
That's bad: downtime is bad.
To avoid that scenario, we can first roll out the code that stops relying on the hypothetical table, then remove the table in a migration during the next deployment.
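As a sketch, that two-deployment sequence might look like this (the `legacy_events` table and the migration file name are hypothetical, just for illustration):

```sql
-- Deployment 1: ship application code that no longer reads or writes
-- legacy_events. No schema change yet, so the old and new code both
-- keep working against the existing schema during the rollout.

-- Deployment 2: once no running code references the table, a migration
-- (e.g. migrations/002_drop_legacy_events.sql) can drop it safely:
DROP TABLE legacy_events;
```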
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    # This part
    env:
      DATABASE_URL: ${{ secrets.DATABASE_URL }}
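For context, a hypothetical migration step in that same job might look like the sketch below. The `goose` tool, the `migrations/` directory, and the step names are assumptions; substitute whatever migration tool and paths your project actually uses.

```yaml
    steps:
      - uses: actions/checkout@v4

      # Build the Docker image here (omitted), then run migrations before
      # deploying, so a failed migration halts the deploy.
      - name: Run database migrations
        run: goose -dir ./migrations postgres "$DATABASE_URL" up

      # Deploy the already-built image here (omitted).
```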
I'd recommend running the migration after the Docker image is built, but before the image is deployed. That accomplishes two things: the schema is up to date before any new code runs against it, and if the migration fails, the deployment stops before the broken image ships.
Finally, you can run git diff (or git diff HEAD) to check for any sensitive credentials, like database connection strings, that may have slipped into your source code. We've taken the liberty of .gitignore'ing the .env file already, but git diff can catch credential mistakes even before you commit code changes.
Now commit the code, push it to GitHub, and make sure the workflow runs as expected.
Paste the URL of your GitHub repo into the box and run the GitHub checks.