**TL;DR — High swap with low RAM use isn't a problem.** Linux lazily moves long-idle pages to swap to free RAM for actively used pages and disk cache. High swap *usage* is normal. The number that matters is `si`/`so` (swap in/out per second from `vmstat`). If those are near zero, your server isn't swapping right now and you can ignore the dashboard. If they're consistently above ~5-10 MB/s, you have a real memory problem.
I get pinged about this one regularly. Someone looks at their monitoring dashboard, sees swap usage at 90%, and assumes the server is dying. Then they look at the same dashboard's RAM panel and see 8 GB free, get confused, and either restart something they shouldn't or open a panic ticket.
Both numbers can be true at the same time, and in most cases nothing is wrong. To explain why, you have to know how Linux actually treats memory, which is different from how it gets explained in most monitoring tutorials.
## What does Linux actually do with your RAM?
Free memory, in the way most people imagine it ("unused RAM ready for something to claim"), is wasted memory from Linux's point of view. The kernel's job is to make sure RAM is doing useful work, which usually means caching disk reads, holding application memory, or staying available for sudden allocation requests.
When the kernel sees an application that has memory pages it hasn't touched in hours — say, a long-idle Apache worker holding state from a request that finished at lunch — it can do one of two things. Keep that page in RAM forever, or move it to swap and reclaim the RAM for something more active, like the page cache that's making your database queries fast.
The kernel will choose the second option when it's smart enough to. That's not a sign of memory pressure. It's the kernel doing exactly what it should be doing. The pages got moved to swap *not because RAM ran out*, but because the kernel correctly identified them as cold and decided RAM had better uses.
## How to tell if swap is actually a problem
The first command to run is `free -m`:
```
free -m
```

Output on the server in question looked like this:

```
              total        used        free      shared  buff/cache   available
Mem:          15998        4823         812         312       10363       10547
Swap:          4095        3759         336
```

Swap is 92% used (3,759 of 4,095 MB). RAM looks like it has only 812 MB free, but the column you actually want is `available`: 10,547 MB. The kernel is using over 10 GB of RAM for `buff/cache`, which is mostly disk cache. If something needs that memory, the kernel will drop the cache and hand it over almost instantly. "Free" RAM is a useless number on Linux; "available" is the real one.
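If you want that number in a script or alert, the kernel exposes the same figure as `MemAvailable` in `/proc/meminfo`. A minimal sketch; the 10% threshold is an arbitrary example, not a recommendation:

```shell
#!/bin/sh
# MemAvailable (in kB) is the same number `free` shows in its
# "available" column -- RAM that can actually be handed out.
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)

pct=$((avail_kb * 100 / total_kb))
echo "available: ${avail_kb} kB (${pct}% of ${total_kb} kB)"

# Alert on low *available* memory, never on low "free" memory.
if [ "$pct" -lt 10 ]; then
    echo "WARNING: available memory below 10%" >&2
fi
```

Alerting on this instead of "free" avoids the exact false alarm this post is about.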
Now the actually important question: is the system swapping *right now*? That's what `vmstat` answers:
```
vmstat 2 5
```

This prints memory stats every 2 seconds, 5 times. The two columns to watch are `si` and `so`: swap in (KB/s being read from swap into RAM) and swap out (KB/s being written from RAM to swap):

```
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b    swpd   free   buff    cache  si  so  bi  bo   in   cs us sy id wa st
 0  0 3848192 831420  41280 10538988   0   0   2   6  204  327  3  1 96  0  0
 0  0 3848192 831420  41280 10538988   0   0   0  14  198  295  2  0 97  0  0
 1  0 3848192 829440  41280 10538988   0   0   0   8  221  340  3  1 96  0  0
```

`si` and `so` are both 0. The server isn't actively moving pages in or out of swap. Those 3.7 GB sitting in swap got there at some point in the past, possibly weeks ago after a one-time memory-pressure event, and the kernel has no reason to read them back into RAM because nothing is asking for them. That's the whole story.
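If you want this check in a cron job rather than eyeballing the output, the same columns can be pulled out with `awk`. A sketch that averages `si`/`so` (columns 7 and 8) over the sampled intervals; the 1,000 KB/s threshold is an illustrative cutoff, not a hard rule:

```shell
#!/bin/sh
# NR > 3 skips the two header lines plus the first data row, which is
# a since-boot average rather than a live sample.
vmstat 2 5 | awk 'NR > 3 { si += $7; so += $8; n++ }
    END {
        printf "avg si: %.0f KB/s, avg so: %.0f KB/s\n", si/n, so/n
        if (so/n > 1000) print "WARNING: sustained swap-out"
    }'
```

Note the skipped first data row: `vmstat`'s first sample reports averages since boot, which would dilute a live measurement.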
## When swap is actually bad
There's a real version of this problem too, and it looks completely different. If `vmstat 2` is showing `so` consistently above ~1,000-5,000 KB/s and `si` is also non-zero, the kernel is actively moving pages between RAM and disk in both directions. That's called "thrashing" and it murders performance. Disk is roughly 1,000x slower than RAM, so an application that has its working set spread across both is having a very bad day.
If you're thrashing, the answer is almost never "add more swap." It's "figure out which process is hungry and either give it more RAM or kill it." `top` sorted by `RES` (resident set size, i.e. actual RAM use) will tell you who's eating the memory.
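If you prefer a one-shot command over interactive `top`, `ps` can produce the same ranking. A sketch; `--sort` is a procps `ps` option, standard on most Linux distributions:

```shell
# Top 10 processes by resident memory (RSS, in kB), highest first.
# Header line plus 10 rows = 11 lines.
ps -eo pid,rss,comm --sort=-rss | head -n 11
```

This is also easier to paste into a ticket than a `top` screenshot.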
## What about vm.swappiness?
This is the knob most internet advice tells you to lower. The default is 60. The argument is that lower values "prefer to keep things in RAM," so you should set it to 10 or even 1 on database servers.
It's not wrong, but it's overrated. `vm.swappiness` doesn't mean "use less swap." It means "how aggressively to swap *anonymous* pages versus reclaiming page cache." Lowering it tells the kernel to reclaim cache before swapping anonymous pages out. On a database server where you want hot rows kept in cache, that's sometimes the right call. On most general-purpose web servers, the default is fine and changing it makes basically no measurable difference.
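For completeness, checking and changing the value looks like this. The file name under `/etc/sysctl.d/` is my own convention; any `.conf` name works:

```shell
# Current value (default is 60 on most distributions)
cat /proc/sys/vm/swappiness

# To change it for the running kernel (root required, lost on reboot):
#   sysctl -w vm.swappiness=10
#
# To persist it across reboots, drop a file in /etc/sysctl.d/ and reload:
#   echo 'vm.swappiness = 10' > /etc/sysctl.d/90-swappiness.conf
#   sysctl --system
```

Measure before and after if you do change it; on most workloads you won't see a difference.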
The bigger win is usually buying enough RAM that swap is purely a safety net, then ignoring the dashboard's swap usage panel completely. Watch `si`/`so` and `available` instead.
If you want to clear swap once (without restarting), `swapoff -a && swapon -a` will do it. Don't run that on a busy server during peak — it forces every swapped page back into RAM, which is slow and can briefly spike load. It's a thing to do at 3am when nothing is happening, not as a fix during an incident.
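Before running that pair of commands, it's worth checking that everything currently in swap will actually fit into available RAM. A sketch of that sanity check (it only prints the command rather than running it, since draining swap needs root and a quiet window):

```shell
#!/bin/sh
# Used swap = SwapTotal - SwapFree, both in kB from /proc/meminfo.
swap_used_kb=$(awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {print t-f}' /proc/meminfo)
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)

if [ "$avail_kb" -gt "$swap_used_kb" ]; then
    echo "ok: ${swap_used_kb} kB in swap, ${avail_kb} kB available"
    echo "safe to run: swapoff -a && swapon -a"
else
    echo "abort: not enough available RAM to absorb swap" >&2
    exit 1
fi
```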
## What I tell clients about swap dashboards
Honestly, I tell most clients to remove the swap usage panel from their main dashboard entirely. It generates more confused tickets than real signal. Replace it with `swap in/out per second` — that's the metric that actually correlates with bad outcomes. Anyone who's ever woken up at 3am to investigate a "high swap usage" alert that turned out to be normal kernel behavior knows exactly what I mean.
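If your monitoring agent can't chart swap I/O directly, the raw counters behind `si`/`so` are `pswpin` and `pswpout` in `/proc/vmstat`. They're cumulative and counted in pages (typically 4 KB each), so a rate means sampling twice and taking the difference. A sketch:

```shell
#!/bin/sh
# pswpin/pswpout are cumulative page counters; sample twice and diff
# to get pages swapped in/out per second.
read_ctr() { awk -v k="$1" '$1 == k {print $2}' /proc/vmstat; }

in1=$(read_ctr pswpin);  out1=$(read_ctr pswpout)
sleep 2
in2=$(read_ctr pswpin);  out2=$(read_ctr pswpout)

echo "swap-in:  $(( (in2 - in1) / 2 )) pages/s"
echo "swap-out: $(( (out2 - out1) / 2 )) pages/s"
```

These are the same counters most monitoring exporters read, so a dashboard panel built on their rate will agree with what `vmstat` shows.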

