Loadavg is one of the most important metrics for understanding how your server is coping with the load it's being put under. Essentially, it contains information about the system load.

Load averages are the three numbers shown with the uptime and top commands – they look like this:


load average: 0.09, 0.05, 0.01

Most people have an idea of what the load average means: the three numbers represent averages over progressively longer periods of time (one, five, and fifteen minute averages), and lower numbers are better.

This data is actually read from /proc/loadavg. To view the contents of this file, simply type

cat /proc/loadavg

Example output: 0.55 0.47 0.43 1/210 12437

The first three numbers are the load averages themselves – the number of active tasks on the system, averaged over the last 1, 5, and 15 minutes. The fourth entry (1/210 in the example) shows the current number of runnable tasks – processes that are scheduled to run right now rather than waiting or being blocked in a system call – followed by the total number of processes on the system. The final entry is the process ID of the process that was most recently created.
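To make the field layout concrete, here is a minimal shell sketch (assuming a Linux system with /proc mounted) that reads the five fields into named variables and prints them:

read load1 load5 load15 tasks lastpid < /proc/loadavg   # split the five fields
echo "1-minute load average:  $load1"
echo "5-minute load average:  $load5"
echo "15-minute load average: $load15"
echo "runnable/total tasks:   $tasks"        # e.g. 1/210
echo "most recently created PID: $lastpid"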

But what’s the threshold? What constitutes “good” and “bad” load average values? When should you be concerned about a load average value, and when should you scramble to fix it ASAP? Essentially, higher numbers indicate a problem or an overloaded machine; however, the scale of these numbers depends on the processing power of your system and the number of processors or cores.

The traffic analogy

A single-core CPU is like a single lane of traffic. Imagine you are a bridge operator; sometimes your bridge is so busy there are cars lined up to cross. You want to let folks know how traffic is moving on your bridge. A decent metric would be how many cars are waiting at a particular time. If no cars are waiting, incoming drivers know they can drive across right away. If cars are backed up, drivers know they’re in for delays.

So, essentially:

0.00 means there’s no traffic on the bridge at all. In fact, between 0.00 and 1.00 means there’s no backup, and an arriving car will just go right on.

1.00 means the bridge is exactly at capacity. All is still good, but if traffic gets a little heavier, things are going to slow down.

over 1.00 means there’s a backup. How much? Well, 2.00 means that there are two lanes’ worth of cars total – one lane’s worth on the bridge, and one lane’s worth waiting. 3.00 means there are three lanes’ worth total – one lane’s worth on the bridge, and two lanes’ worth waiting. And so on.

This is basically what CPU load is. “Cars” are processes using a slice of CPU time (“crossing the bridge”) or queued up to use the CPU. Unix refers to this as the run-queue length: the sum of the number of processes that are currently running plus the number that are waiting (queued) to run.

Like the bridge operator, you’d like your cars/processes to never be waiting. So, your CPU load should ideally stay below 1.00. Also like the bridge operator, you are still ok if you get some temporary spikes above 1.00 … but when you’re consistently above 1.00, you need to worry.

Multi-core systems

On multi-core systems, the maximum acceptable load value is relative to the number of cores: your load should not exceed the number of cores available.

How the cores are spread out over CPUs doesn’t matter. Two quad-cores is the same as four dual-cores, which is the same as eight single-cores. It’s all eight cores for these purposes, and so the maximum load would be 8.
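If you’re not sure how many cores a machine has, either of these standard Linux commands will tell you (nproc reports the number of processing units available):

nproc

grep -c '^processor' /proc/cpuinfo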

The “100% utilization” mark is 1.00 on a single-core system, 2.00 on a dual-core, 4.00 on a quad-core, and so on.

Rule of thumb: “number of cores = max load”
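To apply that rule of thumb automatically, a small sketch like the following (it assumes nproc and bc are installed) compares the 1-minute load average against the core count:

read load1 rest < /proc/loadavg        # 1-minute load average
cores=$(nproc)                         # number of cores

# Per-core load: e.g. a load of 4.00 on an 8-core machine gives 0.50
percore=$(echo "$load1 / $cores" | bc -l)

if [ "$(echo "$percore > 1" | bc -l)" -eq 1 ]; then
    echo "Load $load1 exceeds $cores cores – the machine is overloaded"
else
    echo "Load $load1 is within capacity for $cores cores (per-core load: $percore)"
fi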