Postgres migration safety CLI. Know your lock modes, risk levels, and safe rewrite recipes before you merge. ORM-aware migration safety for TypeORM, Prisma, Knex, Drizzle, and Sequelize.
Your ORM migration just took down production for 47 seconds.
A seemingly innocent `ALTER TABLE users ADD COLUMN email_verified BOOLEAN NOT NULL DEFAULT false` grabbed an ACCESS EXCLUSIVE lock on your 12M-row users table. Every query queued behind it. Your healthchecks failed. Pods restarted. Customers noticed.
This happens because ORMs hide the Postgres lock semantics from you. You can't fix what you can't see.
pgfence analyzes your SQL migration files before they hit production and tells you:
- What lock mode each DDL statement acquires and what it blocks (reads, writes, or both)
- Risk level for each operation, optionally adjusted by actual table size from your database
- Safe rewrite recipes, the exact expand/contract sequence to run instead
Works with raw SQL, TypeORM, Prisma, Knex, Drizzle, and Sequelize migrations. No Ruby, no Rust, no Go. Just TypeScript.
- Prisma support is real, not aspirational, with a dedicated extractor and tests in the repo.
- TypeORM, Knex, Drizzle, and Sequelize support are also shipped with dedicated extractors and extractor coverage in the repo.
- GitLab Code Quality output is shipped in the repo, with reporter tests covering repeated findings, extraction warnings, and coverage visibility.
- The public changelog records major shipped surfaces such as GitHub PR comments, SARIF, LSP, and trace mode.
- The evidence trail lives in proof-points.md, which separates repo-backed proof from external references.
- The tracked rule-family reference lives in checks-overview.md, and the concrete risky-migration walkthrough lives in examples/pr-review-demo.
```
$ pgfence analyze migrations/add-email-verified.sql

pgfence - Migration Safety Report

┌───────────────────────────────────────────────┬──────────────────┬────────┬────────┐
│ Statement                                     │ Lock Mode        │ Blocks │ Risk   │
├───────────────────────────────────────────────┼──────────────────┼────────┼────────┤
│ ALTER TABLE users ADD COLUMN email_verified   │ ACCESS EXCLUSIVE │ R + W  │ HIGH   │
│ BOOLEAN NOT NULL DEFAULT false                │                  │        │        │
├───────────────────────────────────────────────┼──────────────────┼────────┼────────┤
│ CREATE INDEX idx_users_email ON users(email)  │ SHARE            │ W      │ MEDIUM │
└───────────────────────────────────────────────┴──────────────────┴────────┴────────┘

Policy Violations:
  ✗ Missing SET lock_timeout: add SET lock_timeout = '2s' at the start

Safe Rewrites:
  1. ADD COLUMN with NOT NULL + DEFAULT → split into expand/backfill/contract:
     • ALTER TABLE users ADD COLUMN IF NOT EXISTS email_verified BOOLEAN;
     • Backfill in batches: WITH batch AS (SELECT ctid FROM users WHERE email_verified IS NULL LIMIT 1000 FOR UPDATE SKIP LOCKED) UPDATE users t SET email_verified = <fill_value> FROM batch WHERE t.ctid = batch.ctid;
     • ALTER TABLE users ADD CONSTRAINT ... CHECK (email_verified IS NOT NULL) NOT VALID;
     • ALTER TABLE users VALIDATE CONSTRAINT ...;
  2. CREATE INDEX → use CONCURRENTLY:
     • CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_email ON users(email);

=== Coverage ===
Analyzed: 2 statements | Unanalyzable: 0 | Coverage: 100%
```
pgfence defaults to PostgreSQL 14+ assumptions, and several rules are version-aware for older and newer releases where PostgreSQL behavior differs. Use --min-pg-version to override if needed:
```sh
pgfence analyze --min-pg-version 12 migrations/*.sql
```

Version-sensitive behavior:

- `ADD COLUMN ... DEFAULT <constant>` is instant (metadata-only) on PG 11+
- `ALTER TYPE ... ADD VALUE` is instant on PG 12+
- `REINDEX CONCURRENTLY` is available on PG 12+
- `RENAME COLUMN` is instant on PG 14+
- `DETACH PARTITION CONCURRENTLY` is available on PG 14+
pgfence performs static analysis. The following are not supported:
- Dynamic SQL: template literals, string concatenation, runtime-computed table or column names
- PL/pgSQL and stored procedures: DDL inside `DO $$ ... $$` blocks is not analyzed
- DDL inside functions: `CREATE FUNCTION` bodies are not parsed for migration safety
- Non-migration SQL: arbitrary application queries are out of scope; pgfence targets migration DDL
When dynamic SQL is detected (TypeORM/Knex extractors), pgfence emits a warning rather than silently skipping it. Every report includes a coverage line showing how many statements were analyzed vs. skipped.
To explicitly acknowledge a statement pgfence cannot analyze, add `-- pgfence-ignore` before it (see Suppressing warnings).
Other tools in this space worth knowing about:
| Tool | Language | Best fit | Focus |
|---|---|---|---|
| Squawk | Rust | Raw SQL teams | SQL linting and SQL authoring tooling |
| Eugene | Rust | Raw SQL teams that want trace-style verification | DDL linting and trace-based verification |
| strong_migrations | Ruby | Rails / ActiveRecord teams | Runtime migration safety checks |
| pgroll | Go | Teams that need a migration executor | Migration execution with rollback support |
| pgfence | TypeScript | Node.js and TypeScript teams using SQL or ORMs | Multi-ORM migration safety, risk scoring, and safe rewrite guidance |
pgfence analyzes ORM migration files (TypeORM, Prisma, Knex, Drizzle, Sequelize) directly, which is the wedge over SQL-only linters. It also provides DB-size-aware risk scoring and safe rewrite guidance for common migration patterns.
pgroll is not a competitor: it is a runtime executor (runs migrations with automatic rollback). pgfence analyzes before you run; pgroll handles how you run. They are complementary.
Get real-time migration safety analysis directly in your editor:
- Inline diagnostics: lock modes, risk levels, and policy violations as you type
- Quick fixes: one-click fixes for supported safe rewrites
- Hover info: lock mode, blocked operations, and safe alternatives
Install from the VS Code Marketplace or search "pgfence" in the Extensions panel. Requires `@flvmnt/pgfence` installed in your project or globally. See the extension docs for configuration and commands.
If you want to launch the standalone language server directly, use the `pgfence-lsp` binary. The package's `./lsp` subpath resolves safely without auto-starting the server when imported.
```sh
npm install -g @flvmnt/pgfence
```

Or with pnpm:

```sh
pnpm add -g @flvmnt/pgfence
```

Prevent footguns locally before committing or pushing by installing a git hook. To install a pre-commit hook:

```sh
pgfence init
```

(Automatically detects `.husky/` or `.git/hooks/` and creates a pre-commit hook.)

If you prefer to run checks only when pushing to remote, simply rename the generated file:

```sh
# Standard Git
mv .git/hooks/pre-commit .git/hooks/pre-push

# Husky
mv .husky/pre-commit .husky/pre-push
```

```sh
pgfence analyze migrations/*.sql
pgfence analyze --format typeorm src/migrations/*.ts
pgfence analyze --format prisma prisma/migrations/**/migration.sql
pgfence analyze --format knex migrations/*.ts
pgfence analyze migrations/*   # detects format from file content
```

You can provide table stats in two ways:
- Live connection: pgfence connects to your database and queries `pg_stat_user_tables`:

```sh
pgfence analyze --db-url postgres://readonly@replica:5432/mydb migrations/*.sql
```

- Stats snapshot file: use a pre-generated JSON file (e.g. from your CI) so pgfence never needs DB credentials:

```sh
pgfence analyze --stats-file pgfence-stats.json migrations/*.sql
```

If both `--db-url` and `--stats-file` are provided, `--db-url` is used and the stats file is ignored.
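Live-connection sizing reads row estimates from Postgres's statistics views. To see roughly what pgfence sees, a read-only query of this shape works against a replica (an illustrative sketch, not pgfence's exact query):

```sql
-- Approximate live row counts per user table, largest first.
-- n_live_tup is an estimate maintained by the statistics collector,
-- which is exactly what size-aware risk scoring needs.
SELECT relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;
```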
When stats are available (from either source), pgfence adjusts risk levels as follows:
| Table Size | Risk Adjustment |
|---|---|
| < 10K rows | No change |
| 10K - 1M rows | +1 level |
| 1M - 10M rows | +2 levels |
| > 10M rows | CRITICAL |
```sh
# Terminal table (default)
pgfence analyze migrations/*.sql

# Machine-readable JSON
pgfence analyze --output json migrations/*.sql

# GitHub PR comment markdown
pgfence analyze --output github migrations/*.sql

# Exit 1 if any check exceeds MEDIUM risk
pgfence analyze --ci --max-risk medium migrations/*.sql
```

Add an inline comment immediately before a statement to suppress checks for it:
```sql
-- pgfence-ignore
DROP TABLE old_sessions; -- all checks suppressed for this statement

-- pgfence-ignore: drop-table
DROP TABLE old_logs; -- only the drop-table check suppressed; others still fire

-- pgfence-ignore: drop-table, prefer-robust-drop-table
DROP TABLE old_queue; -- multiple rules suppressed, comma-separated
```

The directive applies to the single statement immediately following the comment.
| Syntax | Effect |
|---|---|
| `-- pgfence-ignore` | Suppress all checks for the next statement |
| `-- pgfence-ignore: <ruleId>` | Suppress one specific rule |
| `-- pgfence-ignore: <ruleId>, <ruleId>` | Suppress multiple specific rules |
| `-- pgfence: ignore <ruleId>` | Legacy syntax, still supported |

Use `--output json` to see `ruleId` values for any check you want to suppress.
pgfence checks a broad set of DDL patterns against Postgres's lock mode semantics:
| # | Pattern | Lock Mode | Risk | Safe Alternative |
|---|---|---|---|---|
| 1 | `ADD COLUMN ... NOT NULL` (no DEFAULT) | ACCESS EXCLUSIVE | HIGH | Add nullable, backfill, SET NOT NULL |
| 2 | `ADD COLUMN ... DEFAULT <volatile>` | ACCESS EXCLUSIVE | HIGH | Add without default, backfill in batches |
| 3 | `ADD COLUMN ... DEFAULT <constant>` | ACCESS EXCLUSIVE (instant) | LOW | Safe on PG11+ (metadata-only) |
| 4 | `ADD COLUMN ... GENERATED STORED` | ACCESS EXCLUSIVE | HIGH | Add regular column + trigger + backfill |
| 5 | `CREATE INDEX` (non-concurrent) | SHARE | MEDIUM | `CREATE INDEX CONCURRENTLY` |
| 6 | `DROP INDEX` (non-concurrent) | ACCESS EXCLUSIVE | MEDIUM | `DROP INDEX CONCURRENTLY` |
| 7 | `ALTER COLUMN TYPE` (text/varchar widening) | ACCESS EXCLUSIVE | LOW | Metadata-only, no table rewrite |
|   | `ALTER COLUMN TYPE varchar(N)` | ACCESS EXCLUSIVE | MEDIUM | Safe if widening; verify with schema |
|   | `ALTER COLUMN TYPE` (cross-family) | ACCESS EXCLUSIVE | HIGH | Expand/contract pattern |
| 8 | `ALTER COLUMN SET NOT NULL` | ACCESS EXCLUSIVE | MEDIUM | CHECK constraint NOT VALID + validate |
| 9 | `ADD CONSTRAINT ... FOREIGN KEY` | SHARE ROW EXCLUSIVE | HIGH | NOT VALID + VALIDATE CONSTRAINT |
| 10 | `ADD CONSTRAINT ... CHECK` | SHARE ROW EXCLUSIVE | MEDIUM | NOT VALID + VALIDATE CONSTRAINT |
| 11 | `ADD CONSTRAINT ... UNIQUE` | SHARE ROW EXCLUSIVE | HIGH | CONCURRENTLY unique index + USING INDEX |
|    | `ADD CONSTRAINT ... UNIQUE USING INDEX` | SHARE UPDATE EXCLUSIVE | LOW | Instant, attaches pre-built index |
| 12 | `ADD CONSTRAINT ... EXCLUDE` | SHARE ROW EXCLUSIVE | HIGH | No concurrent alternative; use lock_timeout |
| 13 | `DROP TABLE` | ACCESS EXCLUSIVE | CRITICAL | Separate release |
| 14 | `DROP COLUMN` | ACCESS EXCLUSIVE | HIGH | Remove app references first, then drop |
| 15 | `TRUNCATE` | ACCESS EXCLUSIVE | CRITICAL | Batched DELETE |
| 16 | `TRUNCATE ... CASCADE` | ACCESS EXCLUSIVE | CRITICAL | Explicit per-table truncate or batched DELETE |
| 17 | `RENAME COLUMN` | ACCESS EXCLUSIVE | LOW | Instant on PG14+ |
| 18 | `RENAME TABLE` | ACCESS EXCLUSIVE | HIGH | Rename + create view for backwards compat |
| 19 | `VACUUM FULL` | ACCESS EXCLUSIVE | HIGH | Use pg_repack |
| 20 | `ALTER TYPE ... ADD VALUE` (PG < 12) | ACCESS EXCLUSIVE | MEDIUM | Upgrade to PG12+ for instant enum adds |
|    | `ALTER TYPE ... ADD VALUE` (PG12+) | EXCLUSIVE (instant) | LOW | Safe; cannot run inside transaction |
| 21 | `ATTACH PARTITION` (PG < 12) | ACCESS EXCLUSIVE | HIGH | Create matching CHECK constraint first |
|    | `ATTACH PARTITION` (PG12+) | SHARE UPDATE EXCLUSIVE | MEDIUM | Briefly locks parent; CHECK constraint helps |
| 22 | `DETACH PARTITION` (non-concurrent) | ACCESS EXCLUSIVE | HIGH | `DETACH PARTITION CONCURRENTLY` (PG14+) |
| 23 | `REFRESH MATERIALIZED VIEW` | ACCESS EXCLUSIVE | HIGH | `REFRESH MATERIALIZED VIEW CONCURRENTLY` |
|    | `REFRESH MATERIALIZED VIEW CONCURRENTLY` | EXCLUSIVE | MEDIUM | Blocks writes; requires unique index |
| 24a | `REINDEX TABLE` (non-concurrent) | SHARE | HIGH | `REINDEX TABLE CONCURRENTLY` (PG12+) |
| 24b | `REINDEX INDEX` (non-concurrent) | ACCESS EXCLUSIVE | HIGH | `REINDEX INDEX CONCURRENTLY` (PG12+) |
| 24c | `REINDEX SCHEMA/DATABASE` (non-concurrent) | ACCESS EXCLUSIVE | CRITICAL | `REINDEX CONCURRENTLY` (PG12+) |
| 25 | `CREATE TRIGGER` | SHARE ROW EXCLUSIVE | MEDIUM | Use lock_timeout to bound lock wait |
| 26 | `DROP TRIGGER` | ACCESS EXCLUSIVE | MEDIUM | Use lock_timeout to bound lock wait |
| 27 | `ENABLE/DISABLE TRIGGER` | SHARE ROW EXCLUSIVE | LOW | Blocks concurrent DDL only |
| 28 | `SET LOGGED/UNLOGGED` | ACCESS EXCLUSIVE | HIGH | Full table rewrite; no non-blocking alternative |
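As a concrete instance of the NOT VALID + VALIDATE CONSTRAINT recipe recommended for constraint checks above (table and constraint names here are hypothetical):

```sql
-- Step 1: add the foreign key without validating existing rows.
-- Brief lock acquisition, no full-table scan.
ALTER TABLE orders
  ADD CONSTRAINT fk_orders_user
  FOREIGN KEY (user_id) REFERENCES users (id) NOT VALID;

-- Step 2: validate in a separate transaction.
-- Scans the table but holds a weaker lock, so reads and writes continue.
ALTER TABLE orders VALIDATE CONSTRAINT fk_orders_user;
```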
| # | Pattern | Risk | Suggestion |
|---|---|---|---|
| 29 | `ADD COLUMN ... json` | LOW | Use `jsonb`; `json` has no equality operator |
| 30 | `ADD COLUMN ... serial` | MEDIUM | Use `GENERATED ALWAYS AS IDENTITY` |
| 31 | `integer` / `int` columns | LOW | Use `bigint` to avoid future overflow + rewrite |
| 32 | `varchar(N)` columns | LOW | Use `text`; changing varchar length requires ACCESS EXCLUSIVE |
| 33 | `timestamp without time zone` | LOW | Use `timestamptz` to avoid timezone bugs |
| 34 | `char(N)` / `character(N)` columns | LOW | Use `text`; char pads with spaces and length changes require rewrite |
| 35 | `serial` / `bigserial` / `smallserial` | LOW | Use IDENTITY columns, cleaner semantics |
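For illustration, a new table that follows the type suggestions above (identity instead of serial, `bigint`, `jsonb`, `timestamptz`; the table itself is hypothetical):

```sql
-- Instead of: CREATE TABLE events (id serial PRIMARY KEY, body json, created timestamp);
CREATE TABLE events (
  id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,  -- no overflow, no sequence quirks
  body       jsonb NOT NULL,                                   -- jsonb supports equality and indexing
  created_at timestamptz NOT NULL DEFAULT now()                -- unambiguous across timezones
);
```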
| # | Pattern | Lock Mode | Risk | Safe Alternative |
|---|---|---|---|---|
| 36 | `DROP DATABASE` | ACCESS EXCLUSIVE | CRITICAL | Irreversible; requires separate process |
| 37 | `ALTER DOMAIN ADD CONSTRAINT` | SHARE | HIGH | Validates against all tables using domain |
| 38 | `CREATE DOMAIN` with constraint | ACCESS SHARE | LOW | Use table-level CHECK constraints instead |
| # | Pattern | Severity |
|---|---|---|
| 39 | NOT VALID + VALIDATE CONSTRAINT in same transaction | error |
| 40 | Multiple ACCESS EXCLUSIVE statements compounding | warning |
| 41 | `CREATE INDEX CONCURRENTLY` inside transaction | error |
| 42 | Bulk `UPDATE` without `WHERE` in migration | warning |
Beyond DDL analysis, pgfence enforces operational best practices:
- Missing `SET lock_timeout`: prevents lock queue death spirals
- Missing `SET statement_timeout`: safety net for long operations
- Missing `SET application_name`: enables `pg_stat_activity` visibility
- Missing `SET idle_in_transaction_session_timeout`: prevents orphaned locks
- `CREATE INDEX CONCURRENTLY` inside transaction: will fail at runtime
- NOT VALID + VALIDATE in same transaction: defeats the purpose of NOT VALID
- Multiple ACCESS EXCLUSIVE statements: compounding lock duration
- Bulk `UPDATE` without `WHERE`: should run out-of-band in batches
- Inline ignore: `-- pgfence: ignore <ruleId>` to suppress specific checks
- Visibility logic: skips warnings for tables created in the same migration
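A migration preamble that satisfies the timeout and visibility policies above might look like this (the values and the application name are illustrative; tune them for your workload):

```sql
SET lock_timeout = '2s';                           -- fail fast instead of queueing behind long locks
SET statement_timeout = '15min';                   -- safety net for runaway operations
SET idle_in_transaction_session_timeout = '1min';  -- drop orphaned sessions holding locks
SET application_name = 'migration:add_email_verified';  -- visible in pg_stat_activity
```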
When pgfence detects a dangerous pattern, it outputs the exact safe alternative:
Dangerous:

```sql
ALTER TABLE users ADD COLUMN email_verified BOOLEAN NOT NULL DEFAULT false;
-- ACCESS EXCLUSIVE lock on entire table for duration of rewrite
```

Safe (expand/contract):

```sql
-- Migration 1: Add nullable column (instant, no lock)
ALTER TABLE users ADD COLUMN IF NOT EXISTS email_verified BOOLEAN;

-- Migration 2: Create index (non-blocking)
CREATE INDEX CONCURRENTLY idx_users_email_verified ON users(email_verified);

-- Out-of-band backfill job (not in migration, repeat until 0 rows updated):
-- WITH batch AS (
--   SELECT ctid FROM users WHERE email_verified IS NULL LIMIT 1000 FOR UPDATE SKIP LOCKED
-- )
-- UPDATE users t SET email_verified = false FROM batch WHERE t.ctid = batch.ctid;

-- Migration 3: Add NOT NULL constraint
ALTER TABLE users ADD CONSTRAINT chk_email_verified CHECK (email_verified IS NOT NULL) NOT VALID;
ALTER TABLE users VALIDATE CONSTRAINT chk_email_verified;
ALTER TABLE users ALTER COLUMN email_verified SET NOT NULL;
ALTER TABLE users DROP CONSTRAINT chk_email_verified;
```

Use a concrete migration path or a glob here. The Action expands `path` itself, so both `migrations/add-users.sql` and `migrations/*.sql` work.
```yaml
- name: Check migration safety
  uses: flvmnt/pgfence@v1
  with:
    path: migrations/add-users.sql
    max-risk: medium
```

```yaml
- name: Analyze migrations
  run: |
    npx @flvmnt/pgfence analyze --output github migrations/*.sql > pgfence-report.md
- name: Comment on PR
  uses: marocchino/sticky-pull-request-comment@v2
  with:
    path: pgfence-report.md
```

Upload pgfence findings to GitHub Code Scanning for inline PR annotations:
```yaml
- name: Analyze migrations
  run: npx @flvmnt/pgfence analyze --output sarif migrations/*.sql > pgfence.sarif
- name: Upload to GitHub Code Scanning
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: pgfence.sarif
```

Upload pgfence findings to GitLab's Code Quality widget:
The GitLab reporter emits finding entries, extraction warnings, and a coverage summary entry so unanalyzable SQL still stays visible in CI.
```yaml
# .gitlab-ci.yml
pgfence:
  script:
    - npx @flvmnt/pgfence analyze --output gitlab migrations/*.sql > gl-code-quality-report.json
  artifacts:
    reports:
      codequality: gl-code-quality-report.json
```

Run migrations against a real Postgres instance to verify pgfence's static analysis:

```sh
pgfence trace migrations/*.sql
```

Trace mode spins up a disposable Docker Postgres container, executes each statement, and compares actual lock behavior against pgfence's predictions. No credentials needed, no risk to real data.
```sh
# Specific PG version
pgfence trace --pg-version 14 migrations/*.sql

# Custom Docker image (for PostGIS, pgvector, etc.)
pgfence trace --docker-image postgis/postgis:17 migrations/*.sql

# CI mode (also fails on mismatches between static and traced locks)
pgfence trace --ci --max-risk medium migrations/*.sql
```

Each statement gets a verification status:
- Confirmed: static prediction matches actual Postgres behavior
- Mismatch: static prediction was wrong (trace result takes precedence)
- Trace-only: trace found something static analysis missed (e.g., table rewrite)
- Static-only: policy/best-practice check that trace cannot verify
Requires Docker. Use pgfence analyze for static-only analysis without Docker.
pgfence Cloud is currently being shaped with design partners around team-grade migration governance:
- Approval workflows for higher-risk migrations before merge
- Exemptions with justification and expiry
- Centralized policies for shared safety rules
- Audit history around analyses, approvals, and bypasses
- Schema drift and migration history views
The planned governance layer is intended to avoid requiring production database credentials. Today, DB-size-aware scoring already works through a stats snapshot: your CI can query a read replica, output a JSON file, and pgfence consumes it locally.
Learn more at pgfence.com.
The open-source CLI works on its own today, with no account, login, or API key required. Any future cloud features are additive rather than required for local analysis.
pgfence supports custom rules via a plugin system. Create a module that exports rule or policy functions, then reference it in your config:
```sh
pgfence analyze --plugin ./my-rules.js migrations/*.sql
```

Plugin rule IDs are namespaced with `plugin:` to avoid collisions with built-in checks.
For rules that need to know your actual column types (e.g., distinguishing safe varchar widenings from cross-type rewrites), pgfence can load a schema snapshot:
```sh
pgfence analyze --snapshot pgfence-snapshot.json migrations/*.sql
```

This replaces heuristic guesses with definitive type classification from your database. Generate the snapshot with:

```sh
pgfence snapshot --db-url postgres://readonly@replica:5432/mydb --output pgfence-snapshot.json
```

- Create `src/rules/your-rule.ts` implementing the check function
- Add it to the rule pipeline in `src/analyzer.ts`
- Add test fixtures in `tests/fixtures/`
- Add tests in `tests/analyzer.test.ts`
```sh
pnpm install
pnpm test       # Run tests
pnpm typecheck  # Type checking
pnpm lint       # Lint
pnpm build      # Compile
```

MIT © Munteanu Flavius-Ioan