[rescue] Any Minneapolis area listers?

Jonathan C. Patschke jp at celestrion.net
Thu Jan 11 13:12:44 CST 2007


On Wed, 10 Jan 2007, Patrick Finnegan wrote:

> What have you used for hardware and software raid?

Hardware RAID:
   3Ware cards in PCs.
   Sun StorEdge A1000 and 3310 arrays.
   AC&NC JetStor Arrays.
   DEC HS**0 Arrays.
   Some 5U LSI/StorageTek unit whose model I can't recall.
   Some large IBM thing with which I didn't directly interface.

Software RAID:
   Windows in beefy PCs.
   Linux in moderately beefy PCs.
   Solaris in near-current Suns.
   Vinum on FreeBSD.
   RAIDframe on OpenBSD.
   SATA "hardware RAID" on Linux, Windows, and FreeBSD.

> I've used an AlphaPC164LX with a 600MHz 21164A, with 1Gb FC-AL and
> FC-AL drives, and was getting 70MB+/sec read and write speeds on it,
> with Linux software RAID set up to do an 11+1 RAID-5.
>
> The $40,000+ IBM DS4500 we've got at work performed, at best, at
> about 140MB/sec writes to a RAID5 setup.
>
> Twice the speed from hardware six years newer, at a much higher
> price tag, is just not worth it, in my opinion.

Speed is not the only measure of a storage system.  Many failures on the
IBM can be corrected while the system is online.  A controller failure,
for example, can be corrected without interruption.

That said, the speed of the LSI system far outstripped anything I'd seen
in a standalone system.  We were seeing 100MB/s sustained writes from
multiple Oracle systems concurrently, nearly to the point where the FC
link and PCI slots in the servers were greater bottlenecks than the RAID
head itself.

The JetStor boxes take an unreal pounding from $ork.  It's absolutely
amazing what those cheap little units can do.  That said, they're not
all -that- reliable, so I wouldn't recommend them for a high-abuse
environment with mission-critical data.

> It's really easy, with Linux software RAID, to blow away and
> re-initialize the metadata on the RAID without losing any data.

Yes, but then I'd have to run Linux, and I get more than enough of
that at work, thank you.
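
(For anyone who does run it: with mdadm, that "blow away and
re-initialize" recovery amounts to re-creating the array over the same
disks with the same geometry while telling the kernel to skip the
initial resync.  A rough sketch, with hypothetical device names; get
the disk order, level, or chunk size wrong and you *will* lose data:

    # Read the old superblocks first to recover the original geometry.
    mdadm --examine /dev/sda1

    # Re-create the array in place.  --assume-clean skips the initial
    # resync, so the data already on the disks is left untouched.
    mdadm --create /dev/md0 --level=5 --raid-devices=12 \
          --assume-clean /dev/sd[a-l]1
)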

> Running a broken and frequently-crashing E3000 in my office, I
> occasionally had to do this when the machine overheated and marked
> multiple drives "dirty".  Try fixing that sort of error with hardware
> RAID, where you don't have access to the moving pieces.

Been there and done that.  Almost every RAID appliance has an option for
forcing failed hardware online.  I believe this particular instance was
on the LSI unit when some field circus idiot knocked the power cables
out of a disk shelf.
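
For what it's worth, the md equivalent of forcing hardware back online
is a forced assemble, which accepts members whose event counters
disagree instead of kicking them out as stale.  A sketch, again with
hypothetical device names:

    # Force-assemble the array despite the "dirty" members, and start
    # it even if it comes up degraded.
    mdadm --assemble --force --run /dev/md0 /dev/sd[a-l]1

    # Check what state the array actually came up in.
    mdadm --detail /dev/md0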

That said, if you were running a real hardware RAID appliance, it would
likely have had sufficient cooling to prevent that from happening in
the first place, or adequate thermal monitoring to recognize the actual
failure condition and power itself down as a protective measure.

-- 
Jonathan Patschke  ) "Some people grow out of the petty theft of
Elgin, TX         (   childhood.  Others grow up to be CEOs and
USA                )  politicians."              --Forrest Black


