[geeks] Re: [rescue] IBM Regatta server thoughts

Derrick Daugherty geeks at sunhelp.org
Wed Oct 3 20:57:56 CDT 2001


It's rumored that around Wed, Oct 03, 2001 at 03:20:15PM -0500
Bill Bradford <mrbill at mrbill.net> wrote:
> Went to the customer presentation about it this morning at the IBM
> Customer Briefing Center here in Austin (where they develop the POWER-series
> CPUs).

What's really cool is that we can take tours of that place and actually go
into the dev labs.  I never took Rick up on it (yet), but I'd love to.  Maybe
the day we do the Rudy's geeks outing we could preface it with a tour
of the IBM processor/kernel labs.  e-gad, that's arousing.

I play e-mail tag with one of their kernel developers, who's really nice.
He doesn't speak English well, but he's an incredibly brilliant man.  If
only their source were open, I might get so lucky as to end up in something
like a THANKS file for helping with their system security.

> Saw Derrick there too, but we lost contact after saying hi to each other
> in the lobby.

Apparently there was some mythical second room connected via a Polycom
conferencing phone...  I was sandwiched between two managerial types who
didn't have enough clue between them to turn on a TV.  I looked for you
afterwards and found Roland (was it?), but no Bill.

> Things I was impressed with:
> 
> 	- Ability to partition off individual PCI slots for a LPAR 
>           (Logical PARtition)

Agreed.  I really want to know how this is done.  Is it handled on the
system board, i.e. by slot/PCI ID, or is there a 'BIOS'-type entity living
in CMOS somewhere that tracks who has what?  If the partitions really are
separate, what is the common piece that _does_ the separating, and is it
redundant/mirrored?  Very cool indeed, but I don't understand it well
enough yet, which leaves me unsettled.
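
To make that question concrete, here's the kind of bookkeeping I imagine,
written out as a throwaway Python sketch.  To be clear, this is pure
speculation on my part: the slot IDs, the table, and the check function
are all made up, not anything IBM showed us.

    # Speculative sketch: one ownership table, kept by whatever common
    # piece does the separating, mapping PCI slots to LPARs.
    slot_owner = {
        "pci-slot-01": "lpar01",
        "pci-slot-02": "lpar01",
        "pci-slot-03": "lpar02",
        "pci-slot-04": None,        # unassigned
    }

    def may_access(lpar, slot):
        """Only the owning LPAR gets to see or configure the slot."""
        return slot_owner.get(slot) == lpar

    print(may_access("lpar02", "pci-slot-03"))   # True
    print(may_access("lpar01", "pci-slot-03"))   # False

If a table like that lives in a single CMOS somewhere, it's exactly the
non-redundant common piece I'm worried about; mirroring it would be the
obvious answer.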

> 	- Hot-swap PCI cards in *carriers* - so you don't have to pull out
> 	  a drawer or backplane, etc, to replace cards.  Supposedly the
> 	  carriers will hold *any* industry-standard-size PCI card.

As the presenter stated, that's a simple idea, but one that's often
overlooked.  Why not do it that way? heh.  I went and played with the box
after the presentation, and that I/O shelf is pretty beefy.  It looks very
similar to the old Sun disk shelves, except these handle all manner of I/O.
In front you had something like 20 PCI slots spread across three separate
PCI buses, and in the back of the shelf two SCSI controllers; I don't
recall the max number of disks.  The shelf is 24" wide, so approximate the
math from that.

I don't recall the max number of these I/O shelves either, although I
believe he stated it.  The other cool thing is that there's talk of
clustering these beasts together in future AIX releases. ++

> Things I wondered about:
> 
> 	- Why a *RS232 SERIAL* connection to the system management 
> 	  console?  They said you could even cut the cord once the LPARs
> 	  were up and it would keep running - but I would think that a
> 	  dedicated Ethernet connection would be better between the system
> 	  and the console that was managing it... I guess by using serial,
> 	  you're not platform-dependent.  One of the slides mentioned that
> 	  the management applications were *java*.  ick.

I didn't recall/read the Java bit; I was assuming ncurses, heh.  But once
initial configuration is done via RS232, you can then connect over the
network and manage it that way.  Also worth noting: you can have two
RS232 management ports, while Sun only has one clock board (which carries
the serial console).
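
Roughly, the two-phase setup I'm describing would look like this; a
throwaway sketch assuming Python with the pyserial package, and the device
name, address, and commands are invented, not the real management
interface:

    # Phase 1: initial configuration over the RS232 management port.
    import serial                  # pyserial, third-party package
    ser = serial.Serial("/dev/ttyS0", 9600, timeout=2)
    ser.write(b"set mgmt-ip 10.0.0.50\r\n")     # made-up command
    print(ser.readline().decode(errors="replace"))
    ser.close()

    # Phase 2: once it has an address, manage it over the network instead.
    import telnetlib               # standard library
    tn = telnetlib.Telnet("10.0.0.50", 23, timeout=5)
    tn.write(b"show lpars\r\n")                 # made-up command
    print(tn.read_until(b"#", timeout=5).decode(errors="replace"))
    tn.close()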


> All in all, pretty impressive, and the lunch (beef enchiladas) was good, and
> on real china too.  They gave away t-shirts; my boss won one.

Hah, mine resulted in indigestion.. dunno if it was the previous night's
Wild Turkey or the food, but my tummy hurt, heh.  I was fortunate enough
to win a shirt too, ticket number 515193 IIRC...  Melissa also had the
4.3.3 media for me, so I can tackle the 320H this weekend.

> Pretty nice.  Now, I just need to find myself an older RS/6000 43P and 
> 24-bit graphics card, so I can play with AIX 5L...

get me one too :D

^Derrick

-- 
"Every sip is a kiss"  -TCS


