[geeks] value of PIII PC servers

der Mouse mouse at Rodents.Montreal.QC.CA
Thu Jun 22 19:28:10 CDT 2006


> Now that's a silly insinuation.  You seem to be implying that there
> is no reason except bloatware for the use of more than 128MB of RAM.

Yes, I was.  But see my followup correction.  There *are* proper uses
for lots of RAM.

> procmail, php and perl generated web pages, (...) will all benefit
> from 128, 256, 512, 1024 MB.

I use procmail.  On my main house machine.  It currently has *plenty*
of RAM - 192MB - but I also used procmail on it when it had only 32MB.
I used procmail on my former mailhost, which was a Sun-3 with 24MB.
Benefit from?  Perhaps.  Almost anything will benefit from more RAM,
especially under an OS that knows how to use it for filesystem cache.

PHP, Perl, well, yes, I'd tend towards counting those as bloatware,
though I have nothing to do with either except at a distance at work,
so I don't really know enough to be sure.

> You use, like, *tar*, right?  That certainly benefits from having
> more RAM.

Not so's I'd notice.  I just now ran a tar c | tar t > /dev/null
pipeline, and checked the process sizes.  The tar c was 3660K virtual,
116K resident; the tar t was 3728K virtual, 84K resident.  And that was
on a machine with over 100MB of free RAM, so they weren't starved.
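
(If anyone cares to repeat the check, something along these lines
will show it - the ps column names assume a BSD-ish ps, and
/usr/share is just a stand-in for whatever tree you feed it:

    # stream a tree through tar and look at the two processes' sizes
    tar cf - /usr/share | tar tf - > /dev/null &
    sleep 2
    ps -o pid,vsz,rss,command | grep '[t]ar'

vsz and rss are the virtual and resident sizes, in kilobytes on most
systems.)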

> The least amount of RAM I have on one of my systems is 80MB, in a
> Pentium 133 laptop.  Lemmetellya, it sucks if you want to copy files
> over the network or untar pkgsrc.  I couldn't imagine running a
> modern (desktop/workstation) operating system on 16MB of RAM.

Apparently your idea of "modern" includes a lot that I call "bloat".

Shrug.

> But then, tar must be mega-bloatware.

Perhaps the version of tar you use is.  (Actually, if it needs more
than a meg or two of RAM to untar pkgsrc, yes, I think it is.  But I
suspect you might be confusing the OS using your spare RAM to cache
filesystem data with tar using it.  Some of the effects will be
similar, such as reduced wall-clock time from command start to command
finish - if you don't wait for the data to be pushed to disk.)
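
(One quick way to see the caching half of that: time the untar as the
shell sees it finish, and again with an explicit sync so the clock
includes pushing the data to disk.  "pkgsrc.tar" here is just a
stand-in name, and you'd want to remove the first extraction before
the second run:

    # wall-clock time with the OS free to buffer the writes
    time tar xf pkgsrc.tar
    # wall-clock time including flushing the data out to disk
    time sh -c 'tar xf pkgsrc.tar && sync'

Much of the gap between the two is your spare RAM acting as a write
cache, not tar itself needing it.)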

> Along with any C compiler, considering how long it takes to compile a
> software package like GNU coreutils.

gcc, yes, is getting uncomfortably bloated these days.  "Any C
compiler"?  I haven't tried them all, but I'm inclined to doubt it.

coreutils, well, I don't know.  I conjecture that coreutils itself,
rather than the compiler, is what's bloated there.

> CD burners must be mega-bloat(hard)ware, since you'll need way more
> than 16MB to pull that off at anything close to reasonable speed.

I don't know.  The machine I burn CDs on is rather overloaded - I just
checked and it appears to have 112MB - and I'm not in the same
building as it so I can't try burning a CD to see how much RAM cdrecord
uses.  If it needs more than a few megs, yes, I think that's bloat.

> Systems that supported RDRAM were in the 1+GHz range (right?).

I don't know.  I don't have enough to do with peecees to know that sort
of arcana.

> Do you have any 1GHz machine that has 128MB or less of RAM?

Not at the moment; I loaded up both my "fast box" machines with RAM
because I occasionally run things on them that really do have a use for
tons of RAM.  (One has half a gig, the other a quarter.  I actually
think only one of them is nominally >1GHz.)

/~\ The ASCII				der Mouse
\ / Ribbon Campaign
 X  Against HTML	       mouse at rodents.montreal.qc.ca
/ \ Email!	     7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B


