[rescue] Mainframe on eBay

Skeezics Boondoggle skeezics at q7.com
Sat Sep 17 02:51:22 CDT 2005


On Fri, 16 Sep 2005, Patrick Finnegan wrote:

> Skeezics Boondoggle declared on Friday 16 September 2005 09:58 pm:
> >
> > Well, I was mostly poking at how astonishing it is that machines that
> > just a couple of years ago were still sold new for > $1M are now
> > available on eBay for less than $10K; seriously, who here 2-3 years
> > ago would have ever even considered owning a box that big?
> 
> The people that bought a SC2000E a few years ago for much less than 
> that. :)

Hee hee.  Hey, you're talkin' to the King Of the Sun4d's. :-)  SuperSPARCs 
rule!  Heh. :-)

> I bet the Oracle DBA didn't have to fix the hardware.  It seems somewhat 
> hit-or-miss; one of our E10ks (which we use as parts to the other two) 
> had lots of issues, and the other two seemed to be more reliable, until 
> we put parts from the other one in them.

Wow.  Finicky.  That's kind of a bummer, given how much positive 
experience I've had with Sun's reliability over the years.  I guess, 
though, that the high-end machines just don't have the user base to help 
identify problems quickly; at least in higher volume products you tend to 
find bugs more readily...

[...]
> 
> It may not be every machine, but some machines that Sun has shipped sure 
> seem to have intermittent problems that may not surface right away.

True enough.  I was rather displeased with Sun about not being more
forthright about owning up to the problems with the 400MHz/8MB modules,
but they were very responsive when it came to swapping them out.  And in
each case, once the bad module was replaced, the affected machine was
problem-free.  And I ran two dozen E420Rs, an E4500, a couple of U1s and
U2s, and three NetApps for over 376 days straight - from the day we
remodelled and upgraded our server room and launched our new product until
the day the company shut down - with no hiccups.  Three years of that kind
of reliability (despite horrific cooling problems early on, until I forced
them to upgrade our AC unit) is what allowed me to convince management to
trust my purchasing recommendations.  Was I just lucky? :-)

> > Well, I'd still take on an E10K if I could.  Besides, with prices so
> > damn low, and with so much kit out there for sale, you can easily
> > stock up on a few spares. :-)  And since Sun seems destined...
> 
> An E10K uses IIRC 8x30A, 200-240V feeds (probably only 4 are necessary).  
> You have enough cooling and money to run the machine (power and cooling) 
> laying around?

SunSolve says max input power is 13,456 watts, with 5 redundant 30A line
cords.  So it looks like you could get away with 3 circuits (max 24A @
240V per line cord) or maybe add a 4th for N+1.  Out of my new 200A
(single phase, sigh) house panel, my server room is currently wired for
2x20A @ 120V (lights and outlets), 3x30A @ 120V (one 3KVA UPS for each
rack), and 1x30A @ 240V for a 2000E or an 11/750, whichever I can get down
the steps... (I was so tempted to go 3-phase and try to shoehorn a CS6400
in here... and that puppy draws 4KW more than a puny E10K.)  I'll get
around to replacing the 70A meter socket, um, "real soon now."
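For the curious, the circuit math above pencils out; here's a quick back-of-the-envelope check using the figures from SunSolve (a sketch, assuming unity power factor - consult an electrician, not a mailing list, before wiring anything):

```python
import math

# Figures quoted above: E10K max input power and the per-cord
# limit of 24 A at 240 V.
MAX_INPUT_W = 13_456   # watts, max input power per SunSolve
VOLTS = 240            # volts per line cord
AMPS_PER_CORD = 24     # max amps per line cord

total_amps = MAX_INPUT_W / VOLTS               # total draw in amps (~56 A)
watts_per_cord = VOLTS * AMPS_PER_CORD         # 5,760 W deliverable per cord
cords_needed = math.ceil(MAX_INPUT_W / watts_per_cord)  # minimum circuits
cords_n_plus_1 = cords_needed + 1              # add one spare for N+1

print(total_amps, cords_needed, cords_n_plus_1)  # ~56.1 A, 3 circuits, 4 with N+1
```

So three 30A/240V circuits cover the worst case, and a fourth buys you the N+1 margin - which is presumably why Sun ships it with five cords.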

Right now in that room I've got one Cisco 678 + 3548 switch, a 3com
switch, a small hub, a 32-port serial concentrator, a 4-way SS20, two U1s,
a U2, two 220Rs, two 420Rs, two 8-slot Andataco LVD SCSI trays, a NetApp
F330, a NetApp F720, and a Sun L1000 tape jukebox - plus the most badass
German-engineered 14" 1800CFM fan (two air changes/minute!) cooling it
all.  That's not including the bench of desktop machines that I plan to
move in there, or the NetApp F760 and the E4500 I plan to pull out of
storage. :-)

Seriously, once you pass $200/mo in electric bills, another few kWh to
run the E10K isn't a problem.  I mean, we're at the point of rubber rooms
and family interventions anyway.  I tell them I'm running weather
simulations - computing how much my computer room is contributing to 
global warming. :-)

> 
> IBM's promotion of Linux on all of their hardware now helps to improve 
> their cred, and makes their POWER stuff much more usable.

Eh.  I'm not a big Linux fan.  I'm sort of Linux agnostic.  But
OpenSolaris on POWER would be nifty. :-)

I guess my beef with AIX (and often with HP-UX) is that coming from the
BSD/SunOS4 world, they seemed to just make ridiculous, unnecessary, and
even gratuitous changes to the "usual" way of doing things in Unix.  I
think the approach one has to take with AIX is the same as making the leap
from C to C++ -- that is, treat it like something new to be learned from
scratch, not as the natural evolution of one thing into a newer
incarnation.  If I had treated AIX as a whole new OS to learn - forgetting
all the experience and habits learned with other Unix variants out there -
then AIX (like C++) might not have left such a sour taste in my mouth.

Many years ago I spent one weekend upgrading and rearranging a diverse set
of porting platforms for a client - they supported a fairly wide variety
of Unix environments, including SunOS4, Solaris, HPUX, AIX, etc.  We were
upgrading and consolidating disks and adding memory to each of the main
compile/build platforms.  The Suns took virtually no time at all; easy to
install the memory, easy to move the disks and filesystems around, easy to
get Disksuite set up, etc.

The HP was slightly more problematic; they just LOVE making you jump
through crazy hoops to get the machine open and install the RAM, and they
insist on doing their SCSI IDs backwards.  Only by bypassing "sam" and
doing things purely from the command line did what should have been a
fairly simple task become possible; the circular maze of mouse clicks in
the "helpful" admin tool was just pointless.  /etc/checklist?  Are you
serious?  Consistency has never been a strong point in the wide world of
Unix variants, but "ls /etc/*tab" ought to at least give one a reasonable
starting point.  Bleah.  HP didn't even have the "tune a fish" joke in
their man pages.  Lame.  Corporate.

Then there was the RS/6000 and AIX.  By the end of the day I was wanting
to jab a fork in my eye.  I wanted to "smit" everyone at IBM personally,
for the abuse and horror that they inflicted on my young and
impressionable mind that day.  What was infuriating was that nowhere was
there a shred of help or documentation that suggested the highly magical
order in which you had to do things - just a circle jerk of error messages
about how not to do them.  Kinda like the small sign way, way, way down a
long corridor at the airport that says "Not the way to baggage claim."  
Well, gee, thanks.  I figured it out, eventually, and after all the
useless pain it made some sense - but fsckin' 'ell, AIX made you *pay* for
the experience.  And it felt like I was talking to a Honeywell that had
dropped half a tab of VMS and smoked some Unix laced with DG AOS for good
measure; I mean, WTF?  AIX that day became "Unix for mainframe guys",
since I figured they'd devised it as a gateway drug to get MVS dudes to
finally come over to Unix.  For me, after years of using AIX just casually
(in a mostly non-sysadmin role, but with some previous exposure) I
realized I should just stick with beer.  Regular old Sun-flavored beer.

Feh.  Seems I always yak more on Fridays.  Anyway, at this point I think
I'm kinda done with computers, save for spewing odd reminiscences and
posting something occasionally helpful or funny; the job market here SUCKS
DONKEY and I've decided to be a rock star instead, since I'm almost 38 and
haven't yet trashed a motel room for no reason, and doing that after 40 is
kinda pathetic.  So, after two days this week in a recording studio and
far too many beers I'm realizing that "VH1 Behind the Music" probably
won't have a flippin' clue what I'm rambling on about with AIX upgrades; I
need to go find some hookers and punch out a photographer or something.

Cheers!

-- Chris
