[rescue] SMP on intel wasteful?

Chris Hedemark chris at yonderway.com
Tue Jun 25 10:49:45 CDT 2002


On Tue, 2002-06-25 at 11:21, David Passmore wrote:

> I would advise you to never, ever, ever make sweeping statements like this.
> You're bashing other people for not providing specifics, while providing
> none of your own.

I did earlier in the thread.

For the archive-handicapped, I'll summarize.

I took a <$2,000 PeeCee running Linux (a machine that would now be
chipped according to the standards of Mr. McGuire's recycling friend)
and a fully populated Sun E450.  I ran a number of EDA tools from
Synopsys, most notably VCS and Design Compiler.  The PeeCee averaged
2.5x the overall performance of the Sun running the same jobs.  Sun was
able to wring minor performance improvements out of the E450 but was
unable to offer any compelling explanation of why we should keep buying
their hardware in the face of such a defeat.

Since engineers were getting their results back much more quickly, they
were able to increase productivity and shorten the design cycle for
their next DSP core.  EEs aren't cheap, so multiply the number of
engineers by the number of days saved and you quickly come up with some
big numbers, beyond the obvious hardware savings, showing why the
switch away from Suns to a PeeCee EDA farm was a wise move.
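
To put rough numbers on it (purely hypothetical figures, for
illustration only): 20 engineers each saving 10 days per design cycle,
at a loaded cost of $800 per engineer-day, works out to

    20 engineers x 10 days x $800/day = $160,000

per cycle, which dwarfs whatever the workstations themselves cost.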

> If you know anything about the computing and networking
> industry, which I assume you do, you might know that IT is a very small
> world. Word travels fast; you never want to make an ass out of yourself in
> front of a collective group such as this one. Just some words of advice. In
> the meantime, I will try to address what you're saying.

Ah, I see.  I am the guy showing real-world examples of why x86 is a
viable business platform for UNIX workstations.  The other guy says
"bite my arse hairs" and I am the one making an ass out of myself.

That's about as clear as mud.

> Before the introduction of the load-balancer by companies like F5 and
> Cisco, PC hardware had a cold chance in hell of seeing duty in any
> server farm.

PC hardware was not a viable platform for UNIX workstations back then,
based on its own merits.  Today, I contend that it is, based on its own
merits.  The load balancing systems are irrelevant.

> Your argument that PC hardware has higher raw integer performance may be
> true, but it is almost completely moot. Performance isn't key.

Don't take things out of context.  Indeed, in the example that I gave,
performance *is* key.

Also, the Sun server farm had *worse* uptime than the PeeCee farm that
replaced it, and I'm talking about hardware failures here.  In one
case, a Sun E450 self-combusted and forced me to clear the second floor
to ventilate the noxious burning-plastic fumes.

> Reliability
> is key, and since PC hardware has historically been made of substandard
> components (especially power supplies), only the introduction of redundancy
> at the machine level through network load-balancing has allowed its
> introduction.

That is a rather sweeping generalization.  Yes, the majority of PCs are
made of haphazardly slapped-together components.  Then again, I've seen
some Sun clones made in much the same way.  One of them actually
electrocuted one of our engineers when he tried to hit the ON button.

But, as with Sun hardware, when you want a PC you go straight to a
vendor who uses good components.

BTW - I used to work for $big_vendor on their load-balancing product
team.  Most of the load-balancing customers were actually running AIX
and Solaris on their balanced servers.  Not PCs.

> Literally. However, for enterprise applications like critical databases
> which need as close to 100% uptime as possible, you would be foolish to put
> these on PC hardware.

I won't argue with you on that.  My previous postings stated that I
would prefer critical RDBMS functions on SPARC hardware.

> Not only are the components substandard (which makes
> it cheap), but there is absolutely no redundancy or fault-recovery
> capabilities built into the system like you'll see in higher-end UNIX boxes
> and mainframes.

I disagree.

I did some shopping around last night.  I won't bother quoting prices.
For the price of one Sun Ultra 60 workstation, which is described as a
low-cost machine, I can have three dual-processor PC servers with
hardware RAID, redundant power supplies, redundant fans, etc.

> Also in terms of performance, since PCs are a bus architecture and most
> modern UNIX boxes are a circuit-switched crossbar architecture, PCs are
> only so good at getting data out of peripherals and to the CPU and back.

The bus on some of the higher-end RISC boxen may indeed be faster.

But the DASD is no faster.

The network adapters are no faster.

About the only significant area where higher-end RISC boxen will make a
difference is memory speed.

But then the CPU is the bottleneck.

That's one of the reasons the Athlon really took off, IMHO: AMD took
all the great features of the Alpha platform and married them to the
best parts of x86, and Intel is just now catching up on performance
(but not on price).
