[rescue] techie BSD-related q (*BSD)

James Lockwood james at foonly.com
Tue Jul 30 11:06:41 CDT 2002


On Tue, 30 Jul 2002, it was written:

> I am doing some testing on disk IO and other performance measurements
> with various systems.
>
> Turns out that using the load average is bad, since it is calculated
> differently for EACH OS.  That is, a load average of 1.0 under Linux
> is not equivalent to a load average of 1.0 under OpenBSD on the exact
> same machine.

Load average should be calculated similarly, modulo weird implementations
like Dynix, which divided by the number of CPUs in the system.  It should
be the number of processes "ready to run", exponentially averaged over
1-, 5- and 15-minute windows.  If you're seeing different behaviour under
Linux and OpenBSD it would be interesting to see why.

Note that load average is _lousy_ for determining how "active" your CPUs
are at any given point; it doesn't indicate bursty load at all.  It does
let you know when you are thoroughly computationally bound, and how badly.
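For what it's worth, the classic scheme is an exponentially damped moving
average of the run-queue length, sampled every few seconds.  Here's a
minimal Python sketch of that idea (the constants and names are
illustrative, not actual kernel code, and real kernels use fixed-point
arithmetic rather than floats):

```python
import math

# Illustrative constants: sample the run queue every 5 seconds and
# average over 1-, 5- and 15-minute windows, as classic kernels do.
PERIOD = 5
WINDOWS = (60, 300, 900)

def decay(window):
    """Exponential decay factor applied per sampling period."""
    return math.exp(-PERIOD / window)

def update(loads, runnable):
    """One tick: fold the current runnable-process count into each average."""
    return [load * decay(w) + runnable * (1 - decay(w))
            for load, w in zip(loads, WINDOWS)]

# Simulate one process runnable continuously for 15 minutes:
loads = [0.0, 0.0, 0.0]
for _ in range(15 * 60 // PERIOD):
    loads = update(loads, 1)
```

After those 15 minutes the 1-minute figure has essentially converged to
1.0, while the 15-minute figure is still catching up (around 0.63) --
which is also why a short burst of activity barely moves the longer
averages.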

> I have been looking at vmstat, which measures disk in terms of "disk
> transactions".  But, I have not been able to find out what "disk
> transaction" means, exactly.

vmstat is probably not what you want.  See iostat, mpstat and sar.  Make
sure you define what you want to measure before you start analyzing data.

Are you trying to find out why things are "slow", or are you trying to do
capacity planning?  The statistics needed to analyze these two problems
are quite different.  If you just want to graph a whole bunch of data
points so you can look at pretty pictures and try to see trends, I
recommend Orca:

http://www.orcaware.com/orca/

It's most effective when tied into the SE Toolkit, but you can throw in
data from almost any other tool.

-James


