**TL;DR — Why WooCommerce dies at midnight:** WooCommerce schedules huge batches of background jobs (subscription renewals, stock sync, abandoned cart emails) through Action Scheduler. On big stores, the queue silently grows for hours. At midnight WP-Cron tries to drain it all in one PHP request, saturates PHP-FPM, and the site falls over. Fix: disable WP-Cron, run wp-cron.php from system cron every minute, lower the Action Scheduler batch size, and split heavy actions onto their own hooks.
A client called me about a recurring outage. Their WooCommerce store would go down at almost exactly the same time every night, sometimes for ten minutes, sometimes for half an hour. No deploy. No traffic spike. No marketing campaign that could explain it. Just a server falling on its face every night around midnight UTC, then quietly recovering on its own.
The store was on a beefy VPS, not shared hosting. 8 cores, 16 GB of RAM, plenty of headroom. Whatever was killing it wasn't capacity in the usual sense. It was something the site was doing to itself.
## What the symptoms actually looked like
I got SSH access and started watching the box during the next outage window. Around 23:58 UTC the load average climbed from a comfortable 0.6 to 14, then 22, then 31. Every PHP-FPM worker went busy, Nginx started returning 502s, and CPU was pinned. After about twelve minutes everything calmed down on its own and the load drifted back to normal.
Two things ruled out a lot of theories in one go. First, the site's actual visitor traffic at midnight was the lowest of the day, so this wasn't a flash crowd. Second, WordPress's debug.log was empty during the incident. Whatever was happening, it wasn't a PHP fatal in front-end code.
## How does WooCommerce actually run background jobs?
WooCommerce, like many WordPress plugins, uses Action Scheduler to handle background work. Subscription renewals, stock sync, queued emails, webhook deliveries, abandoned cart reminders — all of it gets pushed onto a queue stored in the database (the `wp_actionscheduler_*` tables, chiefly `wp_actionscheduler_actions`) and processed later.
The way "later" gets triggered is the interesting part. WordPress doesn't have a real scheduler. It has WP-Cron, which is just a check that runs on every page load asking "is anything due to run?" If the answer is yes, WordPress kicks off a PHP request to itself, which then runs as many actions as it can within the time and batch limits.
On a busy site this works fine. Pageviews are constant, so WP-Cron fires constantly, and the queue gets drained in small bites. On a quieter site it falls apart. Between, say, 22:00 and 23:50, almost nobody visits the site, so WP-Cron never fires. The queue piles up. Subscription renewals pile up. Email sends pile up. Webhook deliveries pile up. By midnight there might be 4,000 pending actions sitting in the database.
Then someone visits the homepage at 00:01 and WP-Cron finally fires. WordPress kicks off a single PHP request to drain the queue. Action Scheduler tries to process its default batch of 25 actions per run, immediately schedules another run because there's still work pending, and so on, in a tight loop. Every PHP-FPM worker ends up running an Action Scheduler batch. Every batch is doing real work — sending emails through SMTP, hitting payment APIs, updating the database. The site has no workers left for actual visitors. Hello, 502s.
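To get a feel for how long that pile-up takes to clear, here's a back-of-envelope drain-time sketch. The 3,847 backlog and the 25-action default batch size are this incident's numbers; the 2-second per-action cost is my own rough assumption for one gateway call or SMTP send.

```shell
# Back-of-envelope drain time for a midnight backlog.
# pending and batch come from the incident; per_action_secs is an assumption.
pending=3847
batch=25
per_action_secs=2

batches=$(( (pending + batch - 1) / batch ))     # ceiling division
drain_mins=$(( pending * per_action_secs / 60 ))

echo "batches needed: $batches"
echo "serial drain time: ~$drain_mins minutes"
```

Over two hours of serial work if nothing runs concurrently — which is exactly why Action Scheduler starts spawning parallel batches, and why the worker pool drains instead.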
## Confirming the queue size
WP-CLI has a built-in helper for this. Run it just before midnight on the affected site:
```shell
wp action-scheduler list --status=pending --format=count
```

On this client's site the answer was 3,847. Nearly four thousand pending actions waiting to fire as soon as WP-Cron got around to noticing them. For comparison, a healthy site usually has fewer than 50 pending actions at any given moment.
I also wanted to see what kinds of actions were waiting. Action Scheduler lets you group them by hook:
```shell
wp db query "SELECT hook, COUNT(*) FROM wp_actionscheduler_actions WHERE status='pending' GROUP BY hook ORDER BY 2 DESC LIMIT 10;"
```

The top three were `woocommerce_scheduled_subscription_payment` (1,420), `woocommerce_cleanup_logs` (612), and `wc_admin_unsnooze_admin_notes` (380). The subscription renewals were the heaviest by far: each one triggers a payment gateway API call, which is slow.
## Fix 1: Stop relying on WP-Cron entirely
The first change was to stop pretending WP-Cron is a scheduler. Add this to wp-config.php:
```php
define('DISABLE_WP_CRON', true);
```

Then add a real cron entry on the server that hits `wp-cron.php` every minute:
```shell
* * * * * wget -q -O - https://example.com/wp-cron.php?doing_wp_cron > /dev/null 2>&1
```

Now the queue is being drained every 60 seconds whether anyone visits the site or not. Instead of one massive batch at midnight, you get 1,440 small batches per day, evenly spaced. The queue stays close to zero almost all the time.
If you have shell access, `wp cron event run --due-now` is a slightly better alternative to hitting wp-cron.php over HTTP — it doesn't go through PHP-FPM at all, so it can't compete with web traffic for workers. Same crontab, just `cd /var/www/html && wp cron event run --due-now`.
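For reference, the WP-CLI variant as a full crontab entry might look like this — the `/var/www/html` path and log location are assumptions, adjust for your layout:

```shell
# Run due WP-Cron events via the CLI instead of a loopback HTTP request.
# Assumes WP-CLI is installed globally and the site lives at /var/www/html.
* * * * * cd /var/www/html && wp cron event run --due-now >> /var/log/wp-cron.log 2>&1
```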
## Fix 2: Cap the Action Scheduler batch size
Even with the system cron change, you can still get a thundering herd if a single cron firing happens to find 1,000 actions waiting. Action Scheduler has filters for this. Drop this into a small mu-plugin:
```php
add_filter('action_scheduler_queue_runner_batch_size', function () {
    return 5;
});
add_filter('action_scheduler_queue_runner_concurrent_batches', function () {
    return 1;
});
```

This caps each cron run to 5 actions at a time, with no concurrent batches. On a site doing payment API calls, 5 per run with a 60-second cron interval is 300 actions per hour, which comfortably exceeds normal subscription renewal volume. The queue stays managed and the site never sees a worker spike from background jobs again.
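As a sanity check on those numbers, the throughput arithmetic works out like this (the 5-action batch and 60-second interval are the values configured above):

```shell
# Throughput with a capped batch size: 5 actions per run, one run per minute.
batch=5
interval_secs=60
per_hour=$(( batch * 3600 / interval_secs ))
per_day=$(( per_hour * 24 ))
echo "throughput: $per_hour actions/hour ($per_day/day)"
```

If your store genuinely generates more background work than that, raise the batch size gradually and watch the worker count rather than jumping back to the default.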
## What I actually wish I'd done first
If I'm being honest, I should have started by graphing the Action Scheduler queue size over time before changing anything. A simple line chart with `pending actions` on Y and time on X tells you everything: where the spikes are, when they recover, whether your fix actually worked. I added that exact graph to the client's monitoring after the incident, and it's now the first thing I check on any WooCommerce store with a vaguely similar shape.
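If you want that graph without a full monitoring stack, a minimal sketch is to log the count to a CSV from cron and plot it later. The paths and the five-minute interval here are my own choices, and note that `%` must be escaped in crontab entries:

```shell
# Log a timestamped pending-action count every 5 minutes for graphing.
# Assumes WP-CLI and a site at /var/www/html; % is escaped for crontab.
*/5 * * * * cd /var/www/html && echo "$(date -u +\%FT\%TZ),$(wp action-scheduler list --status=pending --format=count)" >> /var/log/as-queue.csv
```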
The midnight crash was the symptom. The actual problem was running a busy WooCommerce site without monitoring the one queue that holds all the work. If you run WooCommerce in production, that's the metric to graph. Everything else is downstream of it.

