[rescue] SMP on intel wasteful?

David Passmore dpassmor at sneakers.org
Tue Jun 25 10:21:01 CDT 2002


On Mon, Jun 24, 2002 at 12:46:06PM -0400, Chris Hedemark wrote:
> On Mon, 2002-06-24 at 12:27, Dave McGuire wrote:
> 
> >   - There's more to computer performance than the clock speed of the
> > processor.
> 
> Yep.  But when you are doing CPU intensive work it sure counts for a
> hell of a lot.
> 
> I've always found those nebulous arguments against PeeCee hardware
> interesting.  Usually unfounded in recent fact, but interesting
> nonetheless.  When put to the test though I've found that dollar for
> dollar a PeeCee running UNIX will spank a RISC box running UNIX.

I would advise you to never, ever, ever make sweeping statements like this.
You're bashing other people for not providing specifics, while providing
none of your own. If you know anything about the computing and networking
industry, which I assume you do, you might know that IT is a very small
world. Word travels fast; you never want to make an ass out of yourself in
front of a collective group such as this one. Just some words of advice. In
the meantime, I will try to address what you're saying.

Before the introduction of the load-balancer by companies like F5 and Cisco,
PC hardware had a snowball's chance in hell of seeing duty in any server farm.
Your argument that PC hardware has higher raw integer performance may be
true, but it is almost completely moot. Performance isn't key. Reliability
is key, and since PC hardware has historically been made of substandard
components (especially power supplies), only redundancy at the machine level
through network load-balancing has let it in the door.

So, I will give you that for stateless applications like web-serving, PC
hardware is ideal; if the box fails, throw it away and put in a new one.
Literally. However, for enterprise applications like critical databases
which need as close to 100% uptime as possible, you would be foolish to put
these on PC hardware. Not only are the components substandard (which is what
makes it cheap), but there are absolutely none of the redundancy or
fault-recovery capabilities built into the system that you'll see in
higher-end UNIX boxes and mainframes.

Also, in terms of performance: since PCs use a shared-bus architecture and
most modern UNIX boxes use a circuit-switched crossbar, PCs are only so good
at getting data out of peripherals, over to the CPU, and back again. So if
you have a highly-connected server with terabytes of data and a fat network
pipe it has to serve data up to, it doesn't matter how fast they clock the
bus or processor; there's going to be contention on it, and in computing,
contention = abysmal performance. This concept applies across the board,
even to things like graphics-intensive workstations that have to manipulate
several gigabytes of data at a time. You might argue that you could segment
this workload onto several PC machines to get around the inherent limitation
of PC bus I/O, but I would argue that there is a limit to how much you can
segment a given workload into digestible units of work before it becomes too
painful and complex, or simply impossible. Segmenting a workload across
multiple machines also increases the likelihood of failure (see the rough
numbers below). In this
case, you simply 'throw hardware at the problem': high-end, redundant UNIX
hardware with the I/O capabilities to handle the workload. These machines
are expensive for a reason.
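
And to put rough numbers on why splitting the work hurts (again, made-up
availability figures, assuming every box in the split has to be up and
failures are independent):

    # Back-of-the-envelope: a workload spread across N boxes, all of them required.
    per_box = 0.99                     # assumed uptime of one cheap PC server
    for n in (1, 4, 16):
        all_up = per_box ** n          # probability the whole split is usable
        print(f"{n:2d} boxes: workload available {all_up:.2%} of the time")
    # 1 box: 99.00%,  4 boxes: 96.06%,  16 boxes: 85.15%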

Hope that was specific enough for you.

David


