[SunRescue] PCI bus requirements

Maxwell Spangler maxwax at mindspring.com
Thu Aug 17 13:18:27 CDT 2000


On Thu, 17 Aug 2000, Gregory Leblanc wrote:

> > From: apotter at icsa.net [mailto:apotter at icsa.net]
> > Sent: Thursday, August 17, 2000 7:10 AM

> > My solution to this was a newer (also surplus) WinTel board,
> > three PCI quad ethernet cards and OpenBSD (Linux didn't like
> > the cards).

> I just did the math real quick here.
> 32 bits times 33MHz yields a throughput of 132MB/sec.
> 12 ports at 100Mbit yields a throughput of 150MB/sec.
> 
> So, uhm, those had best not be all 100MBit ports.  I doubt that you can get
> even 100MBit from a standard P-II/III motherboard, probably much less with
> older boards.  Not that you'll be using that much bandwidth, but that's
> certainly more than it can handle.  That's probably why none of the
> commercial switches/routers use PCI.  :-)

This is a common mistake; I made it a couple weeks ago.

PCI is a bus: a shared communication channel that multiple devices arbitrate
for, so all of them can share it but only one can use it at a time.

So in the case above where someone is using multiple quad ethernet cards, he
only has to ensure that the bandwidth requirements of a single card can be
satisfied by the PCI bus.  There will never be a case where two cards are
sending or receiving data on the same PCI bus at the same instant: if one PCI
device is using the bus, the others simply wait their turn.
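
To put rough numbers on that, here's a back-of-the-envelope sketch in Python
(just illustrative arithmetic, using the figures quoted above):

  # Back-of-the-envelope check (figures from the quoted message above),
  # assuming conventional 32-bit/33MHz PCI and 100Mbit ethernet ports.
  bus_peak = 32 / 8 * 33        # 132 MB/s: theoretical peak of the bus
  per_port = 100 / 8            # 12.5 MB/s per 100Mbit port

  one_quad_card = 4 * per_port  # 50 MB/s: fits under 132 MB/s
  all_12_ports = 12 * per_port  # 150 MB/s on paper, but the cards never
                                # drive the bus at the same instant
  print(bus_peak, one_quad_card, all_12_ports)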

In addition to this, a single quad ethernet card might actually be the
equivalent of four single ethernet cards connected to the primary PCI bus
through a technique known as "bridging".  Bridging allows two independent PCI
buses to be daisy-chained together to increase the number of devices that can
be connected to the computer.

Here's an example of a basic desktop PCI motherboard:

CPU---|PCI Slot #1|---|PCI Slot #2|---|PCI Slot #3|---|PCI Slot #4|

In this case, only one of the slots can talk on the bus at any given time and
the maximum number of PCI devices you can put in this computer is four because
you have four slots.  To add more slots, a motherboard designer might bridge
additional slots and the layout would look like this:

CPU---|PCI Slot #1|---|PCI Slot #2|---|PCI Slot #3|--(PCI Slot #4)
          PCI bus 0 ^^^^                               |
                                               PCI bridge chip
                                                       |
                         PCI bus 1, Slot 1, "Slot 4" label on motherboard
                                                       |
                         PCI bus 1, Slot 2, "Slot 5" label on motherboard
                                                       |
                         PCI bus 1, Slot 3, "Slot 6" label on motherboard


If the slot labeled "Slot 5" on the motherboard wants to send data to the host
CPU, it has to do this (see the sketch after these steps):

  * First, arbitrate for time on PCI bus #1.  This might mean waiting for
Slot 4 or 6 to finish.

  * Second, let the PCI bridge chip arbitrate for time on PCI bus #0.  This
might mean waiting for PCI Slot 1, 2, or 3 to finish.

  * Finally, Slot 5 has control of both PCI bus #1 and PCI bus #0 and can
talk to the host CPU.

The host CPU sees the entire PCI bus #1 as a single device hooked into PCI bus
#0's slot 4.  That's the magic of the bridge chip.

Quad Ethernet cards that include bridge chips should follow this exact same
layout.  So "PCI bus 1" as shown above might just be a single Quad Ethernet
card plugged into Slot 4 on the motherboard.  When one of the 10/100 ethernet
chips wants to send data, it arbitrates for time on the card's own PCI bus,
then arbitrates for time on the motherboard's bus.

Why I learned this:

A few weeks ago my company bought a Compaq ML350, a contemporary Pentium
600-class system.  It has a motherboard with CPU, RAM, keyboard, mouse, floppy
and PCI slots on it.  A *SINGLE 32-bit, 33MHz PCI SLOT* contained a
"multifunction" card that held 10/100 networking, VGA video and two 80MBps
Ultra2 SCSI I/O channels.

I added up the bandwidth requirements of just the two SCSI channels
(80x2=160MBps) and thought it would be too much for the 132MBps 32-bit, 33MHz
PCI bus.

Turns out that this card used bridging.  Of the four or so devices on the card
(net, video, disk, disk), only ONE could talk through its single PCI slot
connector to the host CPU at one time.  So because the maximum requirement is
80MBps for a single SCSI I/O channel, a single 32-bit/33MHz PCI slot is plenty.
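
Or, as the same kind of quick Python sketch as before (again, just
illustrative arithmetic):

  # Sum vs. worst single requester on the multifunction card, in MB/s.
  scsi_channels = [80, 80]      # the two Ultra2 SCSI channels
  print(sum(scsi_channels))     # 160: "too big" for a 132MB/s slot...
  print(max(scsi_channels))     # 80: ...but only one device talks at a
                                # time, so this is all the slot must carry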

Disclaimer: I'm no EE or device driver writer, so my terminology might be
incorrect, but I'm pretty confident in the concepts.

-------------------------------------------------------------------------------
Maxwell Spangler                         "Don't take the penguin too seriously.
Program Writer                       It's supposed to be kind of goofy and fun,
Greenbelt, Maryland, U.S.A.                      that's the whole point" - l.t.





