Capture real-world query patterns safely
Production data shows true performance. Real user behavior. Real data volumes. Real concurrency patterns that development environments can't replicate.
Development performance testing has a fundamental problem: your local database has 100 records and 1 concurrent user. Production has 100,000 records and 500 concurrent users. The performance characteristics are completely different.
Query plans change based on data volume. What uses an index locally might fall back to a sequential scan in production because the planner thinks it's cheaper. What runs in 10ms locally might take 10 seconds with real data. Production captures reveal these issues before they become user-facing outages.
Designed for production safety
Non-blocking writes
Queries append to JSON files using atomic operations. No database locks. No transaction overhead. The overhead of logging a query is less than 1% of the query execution time itself.
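The lock-free append technique can be sketched in a few lines. This is an illustrative Python sketch of the general approach, not laraperf's actual implementation (its on-disk format and writer code aren't shown here); it assumes a JSON Lines file and relies on the POSIX guarantee that O_APPEND positions each write at the current end of file.

```python
import json
import os
import tempfile

def log_query(path, entry):
    """Append one query record as a single JSON line.

    O_APPEND makes the kernel place each write at the current end of
    the file, so concurrent writers never clobber each other's
    offsets: no database locks, no transaction overhead.
    """
    line = (json.dumps(entry) + "\n").encode()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, line)  # one write() call per record
    finally:
        os.close(fd)

# Illustrative usage with a throwaway path
path = os.path.join(tempfile.mkdtemp(), "queries.jsonl")
log_query(path, {"sql": "select 1", "elapsed_ms": 0.4})
log_query(path, {"sql": "select 2", "elapsed_ms": 1.1})
```

Because each record is a single write() call to an append-only file, the cost per logged query stays near the cost of one small syscall.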
Detached workers
Capture runs in a separate process that doesn't block your web workers. If the capture process dies, your app keeps running. If your app gets busy, the capture keeps logging.
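Detaching a worker so it survives independently of the parent is a standard OS technique. A minimal Python sketch, assuming the worker is an ordinary subprocess and the PID is recorded for a later stop command (the helper name and pid-file convention are hypothetical, not laraperf's API):

```python
import subprocess
import sys
import tempfile

def start_detached(cmd, pid_file):
    """Launch a capture worker in its own session, so it neither
    blocks the parent process nor dies when the parent exits."""
    proc = subprocess.Popen(
        cmd,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        start_new_session=True,  # new session: no shared process group, no SIGHUP
    )
    with open(pid_file, "w") as f:
        f.write(str(proc.pid))  # recorded so a stop command can signal it later
    return proc.pid

# Illustrative usage: detach a trivial worker
pid_file = tempfile.mktemp(suffix=".pid")
pid = start_detached([sys.executable, "-c", "pass"], pid_file)
```

The key property is the separate session: the worker's lifetime is decoupled from the web worker's in both directions, which is exactly the failure isolation described above.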
Minimal memory footprint
Queries stream to disk immediately. There's no in-memory buffer that could cause memory issues. Even a 10,000-query capture session uses less than 50MB of RAM.
Time-boxed by default
Captures auto-stop after the configured duration (default 5 minutes). Even if you forget to stop it, it won't run forever consuming disk space.
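A time-boxed capture loop is simple to reason about: compute a deadline once, then hard-stop when the clock passes it. A hedged sketch of the pattern (the drain() callback stands in for whatever flushes pending query records; it's a placeholder, not a real laraperf hook):

```python
import time

def run_capture(duration_s, drain=lambda: 0):
    """Capture loop that hard-stops at a wall-clock deadline,
    even if the operator forgets to stop it manually."""
    deadline = time.monotonic() + duration_s
    captured = 0
    while time.monotonic() < deadline:
        captured += drain()  # flush whatever query records arrived
        time.sleep(0.005)    # idle briefly between polls
    return captured

# Illustrative run: a 50ms capture window
total = run_capture(0.05, drain=lambda: 1)
```

Using a monotonic clock matters here: wall-clock adjustments (NTP, DST) can't extend or shorten the capture window.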
The safety design reflects lessons learned from production monitoring tools that caused more problems than they solved. We've all seen monitoring systems that consume 30% of CPU just to tell you the CPU is busy. laraperf takes a different approach: do the minimum work necessary, do it asynchronously, and always prioritize application availability over data completeness.
If disk space runs low, the capture degrades gracefully—it might miss some queries, but it won't crash your app. If the JSON file grows large, rotation happens automatically. These failure modes are designed to be safe by default.
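Size-based rotation of an append-only log is a one-rename operation. A minimal sketch under the assumption that one previous generation is kept and writers simply reopen the path after rotation (the 50MB threshold is illustrative, not a documented laraperf setting):

```python
import os
import tempfile

def rotate_if_needed(path, max_bytes=50 * 1024 * 1024):
    """Rename the active log aside once it crosses the size limit."""
    try:
        if os.path.getsize(path) < max_bytes:
            return False
    except FileNotFoundError:
        return False  # nothing captured yet; nothing to rotate
    os.replace(path, path + ".1")  # atomic rename keeps one old generation
    return True

# Illustrative usage: force a rotation with a tiny threshold
log = os.path.join(tempfile.mkdtemp(), "queries.jsonl")
open(log, "w").write("x" * 100)
rotated = rotate_if_needed(log, max_bytes=10)
```

Because the rename is atomic, a writer holding the old file descriptor finishes its in-flight append safely and picks up the fresh file on its next open; at worst a few records land in the rotated generation.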
Start a production capture
Detach and capture for 5 minutes with a descriptive tag for later reference
$ php artisan perf:watch --seconds=300 --tag=peak-hours
✓ session=session-20260416-143201-xK9mQp
pid=47821
duration=300s
tag=peak-hours
Watcher detached. PID written to storage/perf/.session-20260416-143201-xK9mQp.pid
Use perf:stop to end early, or wait for timeout.

The --tag flag is invaluable for organizing captures. You might run --tag=checkout-flow when testing the checkout process, or --tag=monday-morning to capture peak traffic patterns. When you run perf:query --session=last, the tag shows up in the metadata so you remember what you were investigating.
The detached mode is the default because you typically want to exercise the app while capturing. You might run your test suite, click through the UI manually, or run a load test. The capture continues in the background regardless of what you're doing in the foreground.
Best practices for production capture
- ✓ Start with 30-60 second windows until you understand the overhead
- ✓ Use descriptive tags for every capture (you'll forget why you ran it)
- ✓ Run captures during actual peak traffic hours, not off-peak
- ✓ Clear old sessions weekly—storage/perf/ doesn't clean itself up
- ✓ Monitor disk space—each query record is small, but a million of them add up
- × Don't capture for hours continuously (minutes are usually sufficient)
- × Don't forget to clear old sessions—disk space is finite
- × Don't enable capture on high-throughput endpoints without testing first
- × Don't run EXPLAIN on write-heavy transactions (the writes will be rolled back)
- × Don't ignore capture overhead—monitor it rather than assuming it's zero
When to capture vs when to avoid
Capture when:
You have a specific performance issue to investigate (slow page load, timeout reports). You're deploying a change and want to verify no regression. Users report intermittent slowness you can't reproduce locally. You want to establish a performance baseline before major changes.
Avoid capturing when:
The system is already under stress—capture adds load, even if minimal. You're running bulk operations that aren't representative of normal traffic, like one-time imports or migrations. The database is in recovery or backup mode with unusual performance characteristics. You haven't tested the overhead in a non-production environment first.
What production captures reveal
Cache effectiveness
Buffer hit ratios under real load often differ dramatically from development. A query that hits the cache locally might miss in production due to memory pressure.
Concurrent query patterns
Queries that conflict when run simultaneously. Lock contention, deadlocks, and serialization issues only show up with real concurrency.
Data volume impact
Queries fast with 1K rows can be glacial with 1M rows. The PostgreSQL query planner changes strategies based on table statistics you can't replicate locally.
Missing indexes
Sequential scans that weren't obvious in development because small tables fit in memory. Production I/O constraints make these painfully obvious.
The most valuable production captures are often the ones you run when investigating specific issues. A user reports "the report page is slow between 9-10 AM." You SSH in at 9:05, run perf:watch --seconds=600, and capture exactly what's happening during the problem window.
This targeted approach is more valuable than continuous monitoring because it captures the specific conditions causing problems. You see the query plans that execute slowly, the N+1 patterns that emerge under load, and the exact line numbers in your code where optimizations will have the most impact.
Multi-tenant by design
Override the database at runtime without config changes or environment variable juggling
php artisan perf:explain --hash=abc123 --db=tenant_acme_prod

Works with any tenancy setup—separate databases, schemas, or row-level isolation. No package-specific integrations required.
Multi-tenant applications have unique debugging challenges. A query that's fast for tenant A might be slow for tenant B due to different data volumes or distributions. The --db flag lets you investigate specific tenant issues without switching connection configurations.
This is particularly useful for support scenarios. A tenant reports slowness on a specific feature. You can capture their specific database, analyze their specific query patterns, and give them specific optimization advice rather than generic best practices.
Session management
Session files accumulate in storage/perf/. Clean up old captures to reclaim disk space.
$ php artisan perf:clear --force

There's no automatic cleanup because we believe in explicit data management. Your query history might be valuable for trend analysis, and we don't want to delete data you might need. But sessions do accumulate—especially if you're doing frequent captures during debugging—so periodic cleanup is recommended.
The --force flag skips the confirmation prompt, making it suitable for cron jobs. A weekly cron entry such as 0 0 * * 0 cd /var/www && php artisan perf:clear --force keeps storage clean without manual intervention.
Get started
Two ways to install — manual or let your agent handle it.
Manual install
composer require mateffy/laraperf --dev

If you have Laravel Boost installed, run the following after installing laraperf — the skill is automatically added.
php artisan boost:update

Let your agent do it
Install the skill permanently with the CLI, or paste a prompt for a one-shot setup.
npx skills add mateffy/laraperf

Or paste this prompt for a quick one-shot:
Using Laravel Boost? Run php artisan boost:update after installing — the skill is added automatically.