RAID (was: RE: [geeks] New Itanium machines from SGI)

Gregory Leblanc gleblanc at linuxweasel.com
Thu Jan 9 00:44:16 CST 2003


Blah, I so wish this client did something useful with mime-digests.

On Wed, 2003-01-08 at 21:28, geeks-request at sunhelp.org wrote:
> 
> ______________________________________________________________________
> From: Dan Sikorski <me at dansikorski.com>
> To: The Geeks List <geeks at sunhelp.org>
> Subject: RE: [geeks] New Itanium machines from SGI
> Date: 09 Jan 2003 00:14:25 -0500
> 
> On Wed, 2003-01-08 at 23:53, Chris Byrne wrote:
> <snip some good food>
> > But the thing is those things are relatively easy to do, vs. changing
> > the way we program and the way our processors work, which are both
> > rather difficult. Basically what I've described is similar in concept
> > to the way a mainframe works, only taken to its hopefully logical extreme. 
> > 
> > It also creates a decentralized system where the CPU itself is no longer
> > the most important factor in performance. Which was kind of the point
> > ;-)
> 
> The one thing about this whole system that you describe that sticks out
> in my mind is that it seems the EXACT OPPOSITE of where PCs are headed
> now.  Consider the USB mouse.  At least from what I've seen, a USB
> pointing device will use more CPU than a PS/2 mouse, and some IDE
> controllers (especially IDE RAID) do more in software than in hardware. 

Heh, yeah, most of the IDE RAID controllers are just a custom "driver"
that writes data to both drives.  Pretty crappy, IMO.  The only time
they'd be even remotely useful is for operating systems that lack
native support for software RAID, or at least bootable software RAID.
As it turns out, the operating systems in that category don't have
drivers for the IDE RAID controllers either, so you're pretty much SOL.
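The "custom driver that writes to both drives" point boils down to this: driver-level RAID 1 is just duplicated writes, with no hardware involvement at all. A toy sketch (files standing in for disks; all names here are illustrative, not real driver code):

```python
import os
import tempfile

class Mirror:
    """Toy RAID 1: every write is duplicated to both backing files."""
    def __init__(self, path_a, path_b):
        self.paths = (path_a, path_b)

    def write(self, data):
        # The "driver" does nothing clever - it writes twice.
        for path in self.paths:
            with open(path, "wb") as f:
                f.write(data)

    def read(self):
        # Read from the first mirror member that still exists.
        for path in self.paths:
            if os.path.exists(path):
                with open(path, "rb") as f:
                    return f.read()
        raise IOError("both mirror members failed")

tmp = tempfile.mkdtemp()
m = Mirror(os.path.join(tmp, "disk0"), os.path.join(tmp, "disk1"))
m.write(b"important data")
os.remove(os.path.join(tmp, "disk0"))   # simulate losing one "disk"
assert m.read() == b"important data"    # the mirror copy survives
```

Since all the work happens in the driver, there's no performance reason to prefer this over a real OS-level software RAID layer.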

> Software RAID is a perfect example: I know a guy (whom I disagree with
> on this point) who is convinced that hardware RAID is a waste of money,
> because the money you spent on the controller would be better spent on
> a faster CPU to do it in software.  Sound cards are another example. 

Hmm, I actually think that software RAID on Linux kicks some serious
ass.  The only other implementation that I've used heavily is on NT, and
that one blows goats.  Anyway, the Linux software RAID implementation is
faster than Dave chasing a pretty girl.  It easily outstrips any and
all of the PCI-based RAID solutions that I've been able to get people to
run benchmarks on.  RAID 5 on modern CPUs uses code tuned to fit
into the L1 cache, making the parity checksumming FAR faster than the
disks can deliver data.  RAID 1, 0, and 10 don't show up as being any
faster or slower than the PCI-based RAID cards (the differences are
within the margin of error for my testing).  Booting from RAID 1 works
and is pretty well tested.  Software RAID also offers you the
flexibility of working with slices (err, partitions, whatever) instead
of whole disks (I've never seen a hardware-based solution that worked
this way).  Anyway, I think software RAID is a clear winner for -small-
RAID arrays, meaning no more than a couple dozen disks, and maybe a TB
of moderately used storage.
	Greg

-- 
Gregory Leblanc <gleblanc at linuxweasel.com>

