[rescue] Small servers (was Re: WTT: 1.5G of PC2700 for 1G of PC100)

Joshua Boyd jdboyd at jdboyd.net
Mon May 5 16:53:40 CDT 2008


On Mon, May 05, 2008 at 03:40:51PM -0500, Jonathan C. Patschke wrote:

> Sometimes you just don't have the luxury.  Sometimes you just have to
> get something done with some truly atrocious tools that meet some
> reasonable intersection of usable and fast (leaning more towards the
> latter).  And sometimes[1], programmer time is not more valuable than
> run-time.

And when that is the case, you revise the code, or even toss it and
start over, until it is fast enough.

I never tried to say that programmer time is always worth more.  How
much money do businesses lose every day just over small stuff like
program launch time?

Keep in mind I work in semi-realtime embedded systems, including both
large ones and very small ARM ones.

I started to say more, but mentioning work and old-school optimization
ideas is raising my blood pressure too much.  Let me just say that if
the specifications call for a 60 Hz refresh rate on the display LEDs
and 60 Hz polling of the keys, with some arbitrary amount of time to
actually process a key press (maybe 30ms), then sacrificing
maintainability for speed beyond that is a waste.

One recent activity here was rewriting assembly into C.  The new C code
fit just fine into the same 8-bit chip with the same 128 bytes of RAM.
I would argue that writing it in assembly for performance and memory
savings in the first place was a waste of time and money.

But we can offer counterexamples back and forth endlessly.  My key
point is only that it is better to have something that works before
worrying about making it faster.  After all, to refer to your chip
tester system, you started with a working chip tester before you
created the new compiled language to speed up parts of it.  Should they
have created that special compiler before there was ever a working
tester?

> Take, for example, one of my current projects, which is a pile of code
> that gets run against every part that $ork[2] ships out.  It's already
> written in lean C, but I've reimplemented part of it in a special-case
> scripting language (and written a compiler that generates PA-RISC
> assembly code) because the ancient[3] version of HP's compiler we're using
> generates such awful code.  Shaving a second off the runtime means being
> able to run thousands more parts through the test facility in a day.
> Sacrificing memory to generate full-gamut lookup tables for certain hash
> functions (to save the hash computation at each dispatch) gave us a few
> seconds of improvement.

So, how are you liking the HPPA platform now that you are using it in
depth?

And I hope they reward you nicely for savings like that.
 
> I look at it this way:  If I'm all about more money, I can get it.  I
> might have to break laws or hurt people, but I can get it.  If I'm all
> about getting more tail, the same thing applies.  If I'm all about any
> material thing or experience, the same thing applies.  There's one
> exception though:  I can't get more time.  Once my time on this planet
> is up, that's it, as far as science can prove.

I agree.

And thus, my plan is to use my time to get my projects working by
whatever method is fastest, then worry about system performance.  For
one thing, if it isn't a work project, then the alternative is probably
that the project just never gets done.  And if it is a work project,
they probably would benefit from having the basics working as soon as
possible, then worrying about adding higher-performance extras, or new
features in freed-up memory, later.  After all, do we get the money
now, or do we not get the money at all because we couldn't deliver
something for the customer to start using fast enough?

Obviously, knowing which pitfalls to avoid up front will help keep you
out of impossible-to-optimize code later.  I wish I could have
convinced employees and contractors of that, so that I wouldn't have
pre-"optimized" code to deal with.  Because, you know, apparently gotos
are faster.  And apparently using only byte or short arithmetic is
faster than keeping all of the CPU's pipelines busy.  And here I am
talking about code that doesn't even run synchronously with the
realtime parts.
 
> And every time my computer wastes my time because some retard programmer
> got into a "look who's the cleverest" contest with some other retard
> programmer, I take personal offense.  I did not buy the tools on my desk
> for the amusement of others.  I bought it to make large piles of work
> turn into large piles of deliverables.

Yeah, but there is also a fairly large amount of incompetence mixed in,
I'm sure.  Picking bad algorithms will almost never save developer
time.  And reasonable caching would probably be easier in a less
efficient language.  I'm remembering when Nautilus used to reload the
file type icon for every single file rather than use a hash lookup to
check whether it had already loaded that icon.  I don't know if they
ever fixed that.

And at the moment I'm rather peeved that GNOME 2.22 seems to have
changed something since 2.12 that makes 2.22 programs crawl over a
local network display.

> [0] Try using a Mac SE/30 with software from the period when it was
>     released.  Then switch to a modern Mac (or modern PC) and use
>     today's software.  Nearly everything takes longer to launch, and
>     isn't quite as snappy.  Why not?  The computer on my desk has over
>     ONE THOUSAND times as much memory, four times as many CPUs, each of
>     which is clocked just under TWO HUNDRED times as fast, nevermind the
>     improvements in memory access times.

Yeah, that is irritating.

OTOH, I would imagine that there is a lot of software we wouldn't have
if wasteful versions weren't created first.  Would someone ever have
taken the time to make a fast Bayes filter if the idea hadn't been
proven with a scripted version?

And I have seen plenty of HLL software that is reasonably fast
compared to equivalent C programs.  Yaws comes to mind.

> [4] Not to say that they haven't their place.  Another project I'm
>     working on actually uses ECMAscript as its embedded scripting
>     language, specifically because it has closures and prototype-style
>     OO.  However, in that project, the scripting engine does not run
>     synchronously with the test payload.

Realtime scripting is an interesting area, but most of the time I
suspect that it is a minefield waiting to happen.  The same goes for
realtime garbage-collected systems.  And especially for something as
dynamic as JavaScript.


