Bus architecture was RE: [SunRescue] Re: NetApps?

Chris Byrne rescue at sunhelp.org
Tue Mar 20 10:43:55 CST 2001


In my previous incarnation (security architect at a storage company) we
worked a lot with NetApp, and the real reason they didn't do GigE was bus
density vs. throughput.

Their bus architecture can only move something like 1 Gb/s per bus under
absolutely ideal conditions (or 2 Gb/s if they finally switched to all
64-bit adapters; when we tested them they were still 32-bit adapters at
33 MHz), so when they put GigE in they can only support a limited number of
clients per bus without major arbitration issues. That means they would
need to put in a hell of a lot more cache, or suffer a major performance
hit. That's why they only offer (or offered, I don't know if they've
changed) single-port GigE cards, and a maximum of 6 (or 7 on some models)
cards per bus. With 100baseT they can load up with quad-port cards and
still not greatly exceed the max capacity of each bus, unless all the boxes
are using all their throughput at the same time (if you ever see more than
80 Mb/s per port in the real world it's a miracle; 40-60 Mb/s is the norm).
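If you want to sanity-check that arithmetic, here's a quick sketch (the
per-port utilization figures are my rules of thumb from above, not NetApp
specs):

```python
# Back-of-the-envelope bus math: compare worst-case aggregate NIC demand
# against raw PCI bus bandwidth. All figures are assumptions from the
# discussion above, not vendor specifications.

PCI_32_33_GBPS = 32 * 33e6 / 1e9   # ~1.06 Gb/s, 32-bit bus at 33 MHz

def oversubscription(n_ports, per_port_gbps, bus_gbps):
    """Ratio of aggregate port demand to raw bus bandwidth (>1 = oversubscribed)."""
    return (n_ports * per_port_gbps) / bus_gbps

# Six single-port GigE cards, all ports saturated, on one 32-bit/33 MHz bus:
print(oversubscription(6, 1.0, PCI_32_33_GBPS))    # heavily oversubscribed

# Six quad-port 100baseT cards at a realistic ~60 Mb/s per port:
print(oversubscription(24, 0.06, PCI_32_33_GBPS))  # much closer to bus capacity
```

Worst-case GigE demand is several times what the bus can carry, which is
where the arbitration pain comes from; the 100baseT case only barely
exceeds the bus under full load.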

Note, this is a major reason why a U160 device in your PC isn't generally
any faster than an FC-AL device (and probably not much faster than your
80 MB/s stuff), and one of the reasons why full-duplex GigE is pointless at
the desktop.

A standard PC uses a 32-bit, 33 MHz PCI bus. If you are lucky you may have
a 64-bit-capable machine (some Intel-based server boards, many Sun systems,
Mac G4s, and other UNIX workstations), and you may even be running at
66 MHz (some truly lucky Intel-based server owners, some Sun machines,
other UNIX machines). This means that most of us have a max PCI bus
bandwidth of around 1 Gb/s, and even the fastest PCI-based systems have a
max bus bandwidth of about 4 Gb/s.
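Those figures fall straight out of width times clock, so you can check
them yourself:

```python
# Theoretical PCI bus bandwidth = bus width (bits) x clock rate (Hz).
def pci_bandwidth_gbps(width_bits, clock_mhz):
    """Raw PCI bandwidth in Gb/s, before any protocol overhead."""
    return width_bits * clock_mhz * 1e6 / 1e9

print(pci_bandwidth_gbps(32, 33))  # ~1.06 Gb/s -- the standard desktop bus
print(pci_bandwidth_gbps(64, 66))  # ~4.2 Gb/s  -- the best PCI going today
```

These are raw signalling numbers; real transfers lose a chunk to
arbitration and protocol overhead, as noted below.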

So even if your GigE card is the only PCI card in your machine, an unlikely
thing, it's probably only seeing maybe 60% of the maximum bandwidth
available to it, maybe a bit more with a very good motherboard design. This
limits you to something around 600 Mb/s, which is not coincidentally about
as much as an 80 MB/s SCSI device.
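To put numbers on that comparison (using my ~60% efficiency rule of thumb,
which is an estimate, not a measured spec):

```python
# Effective GigE throughput on a 32-bit/33 MHz PCI bus vs. an 80 MB/s SCSI
# device. The 60% efficiency figure is a rule of thumb, not a measurement.

bus_gbps = 32 * 33 / 1000        # ~1.056 Gb/s raw PCI bandwidth
efficiency = 0.60                # real-world fraction of the raw bus rate
effective_gbps = bus_gbps * efficiency

scsi_80_gbps = 80 * 8 / 1000     # 80 MB/s SCSI expressed in Gb/s

print(round(effective_gbps, 2))  # ~0.63 Gb/s through the bus
print(round(scsi_80_gbps, 2))    # 0.64 Gb/s -- nearly identical
```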

A note: all the numbers I'm giving here are under ideal conditions.
Typically you will see 60% or less of your maximum throughput on PCI
devices due to real-world conditions (bus overhead, latency, caching
issues, memory access, processing overhead, etc.).

Now if the InfiniBand stuff ever gets off the ground, or if manufacturers
would finally switch to a 64-bit, 66 MHz bus architecture, then maybe we
would be able to effectively use high-bandwidth devices like GigE and the
higher-end SCSI. Prototypes are now in the works for 2 Gb/s Ethernet and
400 MB/s FC devices. Nothing available on the desktop and low-to-midrange
server market today could effectively use these things.

Chris Byrne




-----Original Message-----
From: rescue-admin at sunhelp.org [mailto:rescue-admin at sunhelp.org]On
Behalf Of Bjorn Ramqvist
Sent: Tuesday, March 20, 2001 07:38
To: rescue at sunhelp.org
Subject: Re: [SunRescue] Re: NetApps?


Actually, I talked to ProAct, a Swedish distributor for Network
Appliance here, and he told me they went for Coppermines because of the
pricetag. There is a certain amount of power in those CPUs nowadays, and
it wasn't really justified to put an expensive 866 MHz EV68 Alpha in
those boxes. =)

Some months ago I saw one of those beasts in action (F820 something)
with a couple of FC-disks for demo-purpose. Cool thing, and fast
booting. And of course not to mention the throughput...
Although I couldn't see why they didn't bother to put in a GbE card.
Certainly FastEthernet must be a bottleneck sometime.

My customer went for a Compaq MA12000 instead, because of the pricetag.
But that ain't bad either... =)


/Bjorn
_______________________________________________
rescue maillist  -  rescue at sunhelp.org
http://www.sunhelp.org/mailman/listinfo/rescue



