Lukas Mateffy·Apr 8, 2026·7 min read
POSTGRESQL INSIGHTS

EXPLAIN ANALYZE reveals the truth

Query plans show exactly how PostgreSQL executes your SQL. Find missing indexes, sequential scans, and inefficient joins before they hurt production.

51× faster with a proper index

PostgreSQL's query planner is sophisticated—it considers dozens of strategies for executing your SQL and picks what it thinks is fastest. But the planner only knows what it knows. Outdated statistics, missing indexes, or unusual data distributions can lead it to choose a terrible strategy.

EXPLAIN ANALYZE executes the query and reports what actually happened. Not estimates—actual execution times, actual row counts, actual buffer usage. This is the difference between guessing about performance and knowing.
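As a minimal illustration, using the contacts table from the example later in this article, the BUFFERS option adds cache-hit and disk-read counts to the actual timings:

```sql
-- Execute the query for real and report actual timings, actual
-- row counts, and buffer hits/reads alongside the planner's estimates.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM contacts
WHERE email ILIKE '%gmail%';
```

Be aware that ANALYZE really runs the statement, so wrap data-modifying queries in a transaction you roll back.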

What the plan reveals

Seq Scan

Full table scan. Reads every row. Slow on large tables.

Fix: Add an index on the WHERE or ORDER BY columns.

Index Scan

Uses an index to find rows. Fast lookups with minimal I/O.

Fix: Verify with ANALYZE that statistics are current.

Rows ≠ Estimates

The planner guessed wrong on row counts, so its chosen strategy may be suboptimal.

Fix: Run ANALYZE to update table statistics.

Buffers: shared read

Reading from disk instead of the memory cache. Slow I/O.

Fix: Increase shared_buffers or optimize the query to reduce the data it needs.

Each node in a PostgreSQL query plan represents an operation: scanning a table, joining two datasets, sorting results, aggregating groups. The planner nests these operations, and EXPLAIN ANALYZE shows you the nesting depth along with the actual cost at each level.

The "Actual Rows" vs "Plan Rows" mismatch is particularly telling. When the planner thinks a table has 100 rows but it actually has 100,000, its cost calculations are wrong. This often happens after bulk data imports or major table changes before ANALYZE has run.
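Refreshing statistics is a one-liner; you can also check what the planner currently believes via pg_class (the contacts table name follows the article's example):

```sql
-- Refresh planner statistics after a bulk import or major table change
-- so estimated row counts match reality again.
ANALYZE contacts;

-- Inspect the planner's current row estimate for the table.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'contacts';
```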

Run EXPLAIN from the terminal

Reference any captured query by its 12-character hash from the perf:query output.

```bash
$ php artisan perf:explain --hash=a1b2c3d4
[
  {
    "Plan": {
      "Node Type": "Seq Scan",
      "Relation Name": "contacts",
      "Actual Rows": 4721,
      "Actual Total Time": 8432.11,
      "Filter": "(email ~~* '%gmail%'::text)",
      "Rows Removed by Filter": 15279
    }
  }
]
```

The --hash flag references a query from a previous perf:query output. This is powerful because you don't need to manually copy-paste SQL—you just reference the problematic query by its identifier. The hash is deterministic based on the normalized SQL pattern, so the same query always has the same hash across sessions.

You can also pass raw SQL with --sql="SELECT ...", but referencing by hash is usually more convenient since you've already identified the slow query through perf:query.

Adding an index

Same query, before and after optimization

BEFORE
Plan: Seq Scan
Time: 8,432 ms
Rows: 4,721

Scanned all 20,000 rows to find matches. The filter removed 15,279 rows after reading them.

AFTER
Plan: Index Scan
Time: 165 ms
Rows: 4,721

Used a trigram index on the email column. Read only the matching rows.

Understanding the improvement

The 51x speedup comes from eliminating full table reads. Without an index, PostgreSQL reads every row, checks if it matches the filter, and keeps the matches. With an index—specifically a trigram (GIN) index for pattern matching—it can jump directly to the matching rows.

The Rows Removed by Filter: 15279 line is a dead giveaway. PostgreSQL read 20,000 rows just to find 4,721 matches; that's over 75% wasted work. An index turns this into direct lookups.

Note that indexes aren't free—they slow down writes and consume disk space. But for read-heavy tables, especially those with large row counts and selective WHERE clauses, the tradeoff is almost always worth it.
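For the pattern-matching query above, a plain b-tree index won't help with a leading-wildcard ILIKE; the pg_trgm extension provides trigram operator classes for exactly this case. A sketch (the index name is illustrative):

```sql
-- pg_trgm supplies trigram operator classes that let GIN indexes
-- accelerate ILIKE / ~~* pattern matches, including leading wildcards.
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX idx_contacts_email_trgm
ON contacts USING gin (email gin_trgm_ops);
```

Trigram GIN indexes are larger and slower to update than b-trees, so reserve them for columns that genuinely need substring or fuzzy matching.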

The optimization workflow

1. Find slow queries

Run perf:query --slow=100 to identify queries over 100ms.

2. Get the query plan

Run perf:explain --hash=... to see execution details.

3. Identify the scan type

A Seq Scan means investigate; an Index Scan usually means you're fine.

4. Add an appropriate index

Use CREATE INDEX CONCURRENTLY to avoid locking the table.

5. Verify the improvement

Re-run EXPLAIN and compare times. Document the change.

The CREATE INDEX CONCURRENTLY syntax is crucial in production. Regular CREATE INDEX locks the table for writes while building, which can cause downtime on large tables. The CONCURRENTLY option builds the index without locking, though it takes longer and uses more resources.

After creating an index, the planner may not immediately choose it. Running ANALYZE table_name updates statistics so the planner can judge how selective the index is and cost it correctly.
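A production-safe sketch of steps 4 and 5 combined (the column and index names are hypothetical):

```sql
-- CONCURRENTLY builds the index without blocking writes.
-- Caveats: it cannot run inside a transaction block, takes longer,
-- and a failed build leaves an INVALID index that must be dropped.
CREATE INDEX CONCURRENTLY idx_contacts_created_at
ON contacts (created_at);

-- Refresh statistics so the planner can cost the new index accurately.
ANALYZE contacts;
```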

Pro tip: Multi-tenant databases

When analyzing performance issues for specific tenants, you need to look at their actual data distribution. The same query can have wildly different plans depending on tenant data volume and selectivity.

```bash
$ php artisan perf:explain --hash=abc123 --db=tenant_acme_prod
```

Multi-tenant setups often use separate databases or schemas per tenant. The --db flag lets you override the database connection at runtime without touching config files or environment variables. This is perfect for investigating performance issues reported by specific tenants.

Because laraperf uses Laravel's connection system, it works with any tenancy approach—separate databases, schema-based, or row-level tenancy with database prefixes. Just specify the connection name and database, and you're analyzing production performance safely.

Get started

Two ways to install — manual or let your agent handle it.

Manual install

1. Install via Composer

composer require mateffy/laraperf --dev

2. Using Laravel Boost?

If you have Laravel Boost installed, run the following after installing laraperf — the skill is automatically added.

php artisan boost:update

Let your agent do it

Install the skill permanently with the CLI, or paste a prompt for a one-shot setup.

npx skills add mateffy/laraperf

Or paste this prompt for a quick one-shot:

Using Laravel Boost? Run php artisan boost:update after installing — the skill is added automatically.