[rescue] Perverse Question

Francisco Javier Mesa-Martinez lefa at ucsc.edu
Sun Jun 15 19:49:50 CDT 2003


>    Async machines were quite common in the '60s and early '70s.  And they

So were purely "analog" computers; heck, mechanical computers have been
around since the '20s :). Does the fact that a technology was popular 30
years ago make it somehow viable right now? I am not getting this line of
reasoning...

> were quite easy to understand, too.  Take a look at PDP-6 and KA-10
> schematics (formerly known as "prints") for a really good example of how
> the technology worked.  It's not all that complex.

For a limited degree of complexity async stuff is manageable, but its
complexity explodes rather quickly. This is the fact that most of its
proponents seem to just plain ignore. That "it's not all that complex"
is a very dangerous statement :).

Here is the classic example: I have had students who see the "light"
right after they take their first assembly language course. Until then
most of them have programmed using HLLs like C. All of a sudden they go
"whoa, this is cool, I can make things go really fast, and look at all
the neat tricks I can do." And then they think: "Who the hell would have
come up with something as clunky as C? Look, my simple program to compute
Fibonacci is soooo much better than the stuff that the C compiler
generates. And it is not that complex, really." So I ask them: OK, now
write a complete operating system using assembly only... aaaaah, now
they understand why HLLs evolved :).
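
(The toy in question is usually something like this trivial iterative
Fibonacci in plain C -- my sketch, obviously, not any student's actual
code. Easy to beat by hand in assembly, and exactly the kind of win that
does not scale to an OS.)

    /* Iterative Fibonacci: the classic hand-optimization toy. */
    #include <stdio.h>

    unsigned long fib(unsigned int n)
    {
        unsigned long a = 0, b = 1;     /* fib(0), fib(1) */
        while (n-- > 0) {
            unsigned long t = a + b;
            a = b;
            b = t;
        }
        return a;
    }

    int main(void)
    {
        unsigned int i;
        for (i = 0; i <= 10; i++)
            printf("fib(%u) = %lu\n", i, fib(i));
        return 0;
    }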

Same with async: when it is taught, everyone sees the "light". Well, ask
yourself the same question: why did DEC go from async CPUs to clocked
systems, if async was so easy to produce? :)

The degree of complexity is such that even the design tools are
incredibly hard to produce. Testability and satisfiability of async
designs are incredibly hard problems once you surpass a certain size. It
can be manageable for small units, which is what people are doing.
Extend that to a whole modern CPU and the theory/tools are just not
there. So far the best approach is a compromise: async modules glued
together using sync approaches.
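
(To give a flavor of what "no clock" means at the bottom: the basic
building block is a request/acknowledge handshake instead of a clock
edge. Here is a toy single-threaded C simulation of a four-phase
handshake -- purely illustrative, since real async design is gates plus
delay assumptions, and the hard part is exactly what this toy hides.)

    /* Toy model of a four-phase (return-to-zero) req/ack handshake,
     * the clockless building block async designs use to move data. */
    #include <stdio.h>

    static int req, ack, bus;       /* handshake wires + data bus */

    static void sender(int value)
    {
        if (!req && !ack) {         /* channel idle */
            bus = value;            /* drive data first (bundled data) */
            req = 1;                /* then raise request */
        } else if (req && ack) {
            req = 0;                /* receiver saw it: drop request */
        }
    }

    static void receiver(void)
    {
        if (req && !ack) {
            printf("got %d\n", bus);    /* latch data */
            ack = 1;                    /* acknowledge */
        } else if (!req && ack) {
            ack = 0;                    /* return to zero: idle again */
        }
    }

    int main(void)
    {
        int i;
        for (i = 0; i < 3; i++) {
            do {                    /* step both sides until idle */
                sender(100 + i);
                receiver();
            } while (req || ack);
        }
        return 0;
    }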


>
>    It _is_ easier to understand a machine that runs internally in
> lock-step with all other phases of the machine, but that doesn't
> necessarily make for a "better" machine.

I never claimed that they were worse or anything like that. Heck,
quantum computers will be the be-all and end-all, right? Theoretically,
sure. Now build a system around those quantum computing theories. Theory
has little to do with reality when it comes to designing complex
systems. The whole engineering game is understanding the compromises and
tradeoffs inherent to the design process.

I have had many people coming to my office with great ideas and
epiphanies about how we are doing it all wrong, yada yada (usually
because they just read some article on Slashdot, OSnews, or whatever net
hellhole of geekdom). I am always: "OK, sure, you are right. Now design
a system around that cool theory of yours and get back to me." Once they
spend a significant amount of time thinking about it, it hits them :).
There is a reason for things to be the way they are...

>
> > So the question is there: Do you design a superduper async system that
> > takes 5 years to get out of the door, or you bet for a traditional design
> > which is cheaper and is out of the door in 1 year? Granted is not as
> > superduper by a factor of 10%. The problem is that by the time you release
> > the superduper system, the old fart system has gone through 5 generations
> > and you are only competive with the 1st generation, i.e. 4 year old
> > technology... This is what everyone has had to deal with: the steam roller
> > that is CMOS.
>
>    The above model assumes that the software running on any given
> machine will suffer the ongoing bloat that is so common now.

That model makes no assumptions about software at all. Pure system
speed, as in ops per second or however you want to measure it. This is
the main predicament of new technologies.
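
(Back-of-envelope, with a made-up but conservative 1.5x speedup per CMOS
generation -- my assumption, plug in whatever number you believe:)

    /* A one-off 1.1x "superduper" win vs. CMOS compounding over
     * 5 generations.  The 1.5x/generation figure is an assumption. */
    #include <stdio.h>

    int main(void)
    {
        double per_gen = 1.5;   /* assumed speedup per generation */
        double cmos = 1.0;      /* baseline at project start */
        int gen;

        for (gen = 1; gen <= 5; gen++) {
            cmos *= per_gen;
            printf("CMOS generation %d: %.2fx\n", gen, cmos);
        }
        printf("superduper async part, shipping now: %.2fx\n", 1.1);
        return 0;
    }

(By the time the 1.1x part ships, the "old fart" process has lapped it
several times over.)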

> personal opinion (YMMV) is that 99% of that bloat is avoidable and
> we can realistically (save for gaming machines -- a whole *other*
> family) support 5-year cycles.  Faster clock speeds give us fatter,
> less efficient software which necessitates faster clock speeds.
> Loop to your heart's content or until the laws of physics are
> reached.

You assume that everyone uses a computer to write letters or some such.
Frankly, I really like the fact that my placement and routing programs
now take less than half the time they took last year :). Or that the
computer I use now is 2x as fast and significantly cheaper, which makes
it easier for me to justify purchases in such a shitty economic climate.
And I shudder to think that the same things I do now would have taken
days to complete a decade ago. It allows me more time to be inefficient,
like writing emails such as this one :).


> >
> > They have been saying this for over 20 years. I am skeptical as usual,
> > Intel has the best production technology period and they know it...
>
>    IBM beat 'em to copper technology, and Intel have been sued at
> least once (and lost) for patent infringement (Intergraph).

I believe the Intergraph suit had nothing to do with copper or
production, but rather with some architecture details. They settled out
of court anyway.

> Intel
> are not the powerhouse they're sometimes made out to be.  They're
> good, yes, but they're not the masters of the universe.

Right now they have the best production technology, bar none. That may
not be the case in the future, and it certainly has not been the case in
the past. But right now nobody can push commodity parts as fast as Intel
does, with such turnaround cycles. They have the whole industry
scrambling to catch up.

>    I must say, I *do* like your spelling of Wintel kit.

Feel free to spread it along :).
