[rescue] The aesthetics of rescue

Skeezics Boondoggle skeezics at q7.com
Wed Oct 2 01:57:47 CDT 2002


On Mon, 30 Sep 2002 23:48:23 -0400, Joshua D Boyd wrote:

> On Mon, Sep 30, 2002 at 01:50:43PM -0700, Skeezics Boondoggle wrote:
> 
> > so the 900-series machines will have more/better/faster everything -
> > cpus, memory, nvram, pci bandwidth - as you'd expect in a new product
> > introduction. :-)  damn nda's... it should all be out there tomorrow.  
/.../
> So for those of us not professionally connected to NetApps, would you
> mind posting a quick run down tomorrow?

phew, long day.  replies to multiple threads...

snipped from spec.org's freshly updated sfs97_r1 results page, the new
netapp filers report:

vendor                  system                 cpus  proto  ops/sec  ort(ms)
Network Appliance, Inc  FAS940                    1   TCP    17419     0.99
Network Appliance, Inc  FAS960                    2   TCP    25135     0.99

that's nfsv3/tcp ops on the 1-cpu 940 and 2-cpu 960... 25K nfs ops/sec
with .99ms latency.  kick ass.  nobody else has published results with
sub-msec response times.  bet they're 'spennnnsive, though.  but they're
SHINY. :-)

anyway, the 960 is a dual 2.2ghz p4, 6gb of memory, 256mb of nvram, 6 pci
buses (10 slots?  it's a lot like an E450 :-) plus a private bus for the
nvram card, and handles up to 18-24TB of fc-al disks on one "head".  
cluster two of 'em (over an infiniband link, apparently!) and you get 48K
nfs ops/sec across the clustered pair.

the main thing about netapps, though, is that it takes no time at all to
learn to admin one.  my production filer (F760, about a TB of disk in
three volumes, 2 for oracle and 1 for our log data) just about rolled its
nfs ops snmp counter twice in 318 days of uptime before an a/c failure
forced me to shut it down while they repaired a leak in the refrigerant
lines.  damn!  the previous run was over 300 days as well, only rebooting
it for a needed os + disk firmware upgrade.  our F720, which serves our
developers via nfs & cifs, has been down for one reboot (upgrade) in the
28 months we've had it - 100.0% uptime during CY2001.

in six years of running filers, from the early faserver 1400 to an F330 we
beat the hell out of at an isp (25,000 users' homedirs + /var/mail!) to an
F630 in an academic environment to the F7x0's (and soon a new F820) here
at this little dot com i manage, i have never once lost data stored on a
netapp except in cases of "user error" - and even then, filesystem
snapshots generally make recovery from silly things like "rm * .o" a snap.
pardon the pun. :-)
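(for anyone who hasn't seen it: recovery is literally just copying files
back out of the read-only .snapshot directory the filer exposes over nfs.
the path and snapshot names below are made up for illustration - yours
depend on your volume layout and snapshot schedule - but the recipe is
basically this:)

    % cd ~/src/project              # directory where "rm * .o" just ate everything
    % ls .snapshot                  # every directory on the filer shows this read-only dir
    hourly.0   hourly.1   nightly.0   nightly.1   weekly.0
    % cp -p .snapshot/hourly.0/* .  # pull the files back from the most recent hourly snapshot

no restore from tape, no waiting on an admin - users can dig themselves out.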

so, never having had the budget for a big emc or ibm or hitachi high-end 
array, and not having an mbus old enough to boot my ancient auspex :-) i 
can't say from personal experience how those systems stack up.  netapps 
are fast, solid, and easy - EVERYTHING ought to be as easy as a netapp!
plus, nowadays you can get into a "workgroup" sized filer with their 
little F87 with 500gigs or so for about $20K.  still a bit steep for the 
"home market". :-)

quick replies to some other recent threads:

first machine that made me go "wow!" was the perq workstation, ca 1979-80.  
brownie points to anyone on rescue who has ever even heard of the perq...
:-)  (rescuing perqs got me into rescuing all sorts of other stuff, too.)  
the xerox alto (perq's predecessor and inspiration), the decsystem 10's
and 20's and all the classic big dec iron - those were great machines.  
hanging out at carnegie-mellon during my impressionable years had quite an
impact. :-)  but possibly the coolest, oddest thing was an *analog* gauge
on the front of an old honeywell run by the portland public schools that
we did (i am loath to admit this in public) cobol programming on.  
apparently that was the meter of "how busy the system is" - an analog
perfmeter.  kewl.

i found a 4-port aurora card w/cable on ebay for cheap - got lucky,
something like $40 or so?  can't remember now.  i'm afraid that my legacy
will be that every network i've run for the last n years has a console
server named "headbone"...

there was something else, but i forgot.  so tired...  cheers, all.

-- skeez


