[rescue] Weird I/O behavior

Gregory Leblanc gleblanc at linuxweasel.com
Mon Jan 28 16:27:59 CST 2002


On Mon, 2002-01-28 at 06:38, Chris Byrne wrote:
> Guys,
> 
> I'm seeing something strange here. I'm doing some I/O throughput testing on
> large file transfers from local storage to a SAN.

Whee, I love this shit!

> I've got three boxes here all connected to a Fibre Channel storage array
> with a proven sustained throughput to a single raid head with a single large
> data stream of around 15MB/s and a peak of around 20-25MB/sec.

Man, that's some wimpy-assed throughput, I sure hope it maintains that
speed per thread for a bunch of threads...

> The boxes are as follows
> 
> E450, 2x400-4meg, 1g ram, 1g swap, 2 emulex 64 bit 66mhz (in the right
> slots) FC hbas, DMP, default setup on sd and lpfc etc...
> E450, 2x400-2meg, 1g ram, 1g swap, 2 emulex 64 bit 66mhz (in the right
> slots) FC hbas, DMP, default setup on sd and lpfc etc...
> SF-280R, 2x750-8mb 4g ram, 4g swap, 2 emulex 64 bit 66mhz (in the right
> slots) FC hbas, DMP, default setup on sd and lpfc etc...
> 
> The SunFire is seeing sustained writes of between 15 and 18MB/sec with peaks
> of around 20 and dips into the 10-12 range. This is what we would expect.
> The 450's on the other hand are seeing sustained throughputs in the 6-9
> MB/sec range with peaks at 12 or so and troughs into the 4 or 5 range. This
> is clearly unacceptable write throughput from machines that should easily be
> able to fill the 15MB/sec available.

Are the machines doing something with (processing, manipulating,
whatever) the data, or just dumping it to disk?  Sure seems like those
E450s should be able to sustain quite a bit more than that.

> Read performance for all of the systems is identical, with sustained reads
> around 20MB/sec, peaks of just under 30, and troughs of around 15.
> 
> All are going to the same filesystem (not at the same time of course), a
> 14x72 gig 0+1 store, broken into several testing LUNs running VxFS and
> volume manager with default install parameters and no optimisations. Similar
> though slightly faster results occur for a RAID 5 store. All RAID is done in

Faster writes for the RAID 5 array?  Did you use a wildly different
block size?

> the array, none in software.
> 
> There are no application loads on any of the systems during the testing
> procedures. Memory utilization doesn't change significantly and the cpus are
> fairly quiet. Actually almost no system load is imposed by the testing
> itself other than in the I/O subsystem.

Oh, I guess they're just dumping to disk then.  Hmm...

> 
> The test itself does use large files. In particular files over 1GB in size,
> significantly exceeding the physical memory of the 450's. The thing is, there
> isn't significant virtual memory activity on the 450's to indicate massive
> swapping etc...

Ahh, here's the meat...  I'd bet it's something about hitting swap (or,
more precisely, the working set blowing past the 1GB of RAM on the
450s).  Can you test with larger files on the SunFire (ideally in
excess of its 4GB of RAM), or see what happens if the SunFire only has
1GB of RAM?  I think its performance will degrade similarly, though I
can't back that up with any real-world data.
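For what it's worth, a quick way to run that comparison is a dumb
sequential-write timer.  This is just a sketch of such a harness (not
the tool Chris used, and the path and sizes are made up); the point is
that `size_bytes` should exceed physical RAM so the page cache can't
hide the real disk speed:

```python
import os
import time

def write_throughput(path, size_bytes, block=1 << 20):
    """Write size_bytes sequentially in `block`-sized chunks and
    return the sustained rate in MB/sec, including fsync time."""
    buf = b"\0" * block
    start = time.monotonic()
    with open(path, "wb") as f:
        written = 0
        while written < size_bytes:
            f.write(buf)
            written += block
        f.flush()
        os.fsync(f.fileno())  # count the time to reach stable storage
    elapsed = time.monotonic() - start
    return (size_bytes / (1024 * 1024)) / elapsed
```

E.g. `write_throughput("/mnt/vxfs/testfile", 5 * 1024**3)` on the
SunFire (5GB > its 4GB of RAM) versus the same call on a 450 should
show whether the gap tracks the file-size-to-RAM ratio rather than the
hardware.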

> I've verified the OS configs on the systems are substantially identical
> minus hardware specific differences. No real performance related parameters
> are different.
	Greg

-- 
Portland, Oregon, USA.


