[Sunhelp] SGML tools on Solaris7 or non-GNUified Solaris
Gregory Leblanc
GLeblanc at cu-portland.edu
Tue Nov 23 00:05:47 CST 1999
> -----Original Message-----
> From: James Lockwood [mailto:lockwood at ISI.EDU]
> Sent: Monday, November 22, 1999 7:13 PM
> To: 'sunhelp at sunhelp.org'
> Subject: RE: [Sunhelp] SGML tools on Solaris7 or non-GNUified Solaris
>
>
> On Mon, 22 Nov 1999, Gregory Leblanc wrote:
>
> I'll give it one more shot, but this type of thing tires me out quickly.
> Then again, hopefully this crowd will be a little more attentive than the
> typical Usenet groupies.
I'm paying attention. :)
[ip stuff snipped]
Ok, so my evaluation isn't very fair, but since I'm not running a server,
and only have 384K, it works out for me. :)
>
> > > Finer kernel granularity.
> > > Better kernel threading.
> >
> > I don't understand kernel hacking yet, so I'll assume that those are
> > "good things" that Solaris does better than Linux.
>
[locking stuff that is almost making sense to me after the 30th explanation
snipped]
>
> Linux and Solaris both do this, but Solaris does it much more heavily.
> It takes a hit on smaller boxes but pays off on larger ones.
So that would explain why Solaris 7 takes such a big hit running on an SS2
compared to Linux 2.0.x or 2.2.x. Again, cool.
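If the locking explanation is finally sinking in, the picture I have in my
head is roughly the difference between one big lock around a whole table
and a lock per hash bucket: the per-bucket version costs a little extra
bookkeeping on a one-CPU box, but lets a big SMP box work in several
buckets at once. Something like this toy pthreads sketch, which is just my
mental model and has nothing to do with either kernel's real code:

#include <pthread.h>
#include <stdio.h>

#define NBUCKETS 64

int table[NBUCKETS];                    /* stand-in for a shared structure */

/* Coarse-grained: every update serializes on a single lock. */
pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

/* Fine-grained: one lock per bucket, so updates to different buckets
   can proceed on different CPUs at the same time. */
pthread_mutex_t bucket_lock[NBUCKETS];

void coarse_update(int key)
{
    pthread_mutex_lock(&big_lock);
    table[key % NBUCKETS]++;
    pthread_mutex_unlock(&big_lock);
}

void fine_update(int key)
{
    pthread_mutex_t *l = &bucket_lock[key % NBUCKETS];
    pthread_mutex_lock(l);
    table[key % NBUCKETS]++;
    pthread_mutex_unlock(l);
}

int main(void)
{
    int i;
    for (i = 0; i < NBUCKETS; i++)
        pthread_mutex_init(&bucket_lock[i], NULL);
    coarse_update(7);
    fine_update(7);
    printf("bucket 7 = %d\n", table[7]);
    return 0;
}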
>
> > > Fewer arbitrary limits (filesize, max open fd's, max memory, etc).
> >
> > Well, the filesize limit is on its way out; ext3 is running on a couple
> > of servers that I know of, with good results. The RAM limit isn't an
> > issue for Intel hardware, since that's all the CPUs can address, except
> > maybe the Xeon. On SPARC this may be more of a problem, since they can
> > easily handle much more RAM on a modern CPU than x86 can.
>
> Wrong. Modern Intel h/w (Pentium Pro and above) has a 36-bit physical
> address space, yielding a total addressable space of 64GB. That was
> introduced 4 years ago.
Uh, sorry, I wasn't clear here. The stupid things can address a whole mess
of memory, but they do a crappy job of it. I don't know too much about the
PPro, since they've been replaced by the Celeron, err... Anyway, what I
meant was the "feature" of the P-II that only allows it to cache up to
512MB of RAM (I'm assuming that's per-processor, but I'm not 100% sure).
This makes a MUCH bigger difference on this hardware than the actual
address space, IMHO. Once you get beyond the cacheable area, things get
REALLY sloppy, from what I've seen. I don't know this for a fact, but I
assume it has to start doing some sort of address compression or
relocation. That seems to absolutely STOP the CPU in its tracks on the two
machines I've done it to (both P5-class machines, but if 256MB SIMMs were
cheaper, I'd try it on a P-II/III).
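For what it's worth, whether a given chip actually has the 36-bit physical
addressing James mentions shows up in the CPUID feature flags; bit 6 of EDX
from CPUID function 1 is the PAE bit, if I'm reading my docs right. A quick
check (GCC inline asm, x86 only, so take it as an untested sketch):

#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID function 1 returns the processor feature flags in EDX. */
    __asm__ __volatile__("cpuid"
                         : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
                         : "a" (1));

    printf("PAE (36-bit physical addressing): %s\n",
           (edx & (1U << 6)) ? "yes" : "no");
    return 0;
}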
>
> Linux limits the systemwide memory to that which can be directly
> addressed by a register. Other than 64-bit architectures (Alpha, Sparc
> V9) these are in the minority. I can have 16GB of RAM in a 32-bit CS6400
> and Solaris handles it fine. Only 4GB is addressable by each process, but
> I can run several 4GB processes simultaneously in RAM.
>
> This is even an issue for machines with less than 4GB of RAM. I might
> want to have more than 4GB of RAM + swap in order to fit a large working
> set of code + data. Examples include many processes doing mmap() on large
> files; it's not as rare as you might think.
Hmm, I'm going to have to learn to think BIG to figure some of this out.
We're getting (to us) an extremely high-end machine, with 1GB of RAM, in
January.
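To get my head around the mmap() example, the pattern I picture is a
process walking a file much bigger than RAM by mapping it a window at a
time and touching the pages: each window chews up address space, and the
pages actually touched are the working set that has to fit somewhere. A
rough sketch (the 64MB window size is just a number I picked):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

#define WINDOW (64UL * 1024 * 1024)     /* map 64MB of the file at a time */

int main(int argc, char **argv)
{
    struct stat st;
    unsigned long sum = 0;
    off_t off;
    int fd;

    if (argc < 2) {
        fprintf(stderr, "usage: %s bigfile\n", argv[0]);
        return 1;
    }
    if ((fd = open(argv[1], O_RDONLY)) < 0 || fstat(fd, &st) < 0) {
        perror(argv[1]);
        return 1;
    }

    for (off = 0; off < st.st_size; off += WINDOW) {
        size_t len = (st.st_size - off) > (off_t)WINDOW
                         ? WINDOW : (size_t)(st.st_size - off);
        char *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, off);
        size_t i;

        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        for (i = 0; i < len; i++)          /* touching the pages is what  */
            sum += (unsigned char)p[i];    /* pulls them into the working set */
        munmap(p, len);
    }
    printf("checksum %lu\n", sum);
    close(fd);
    return 0;
}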
>
> The open fd issue is IMHO more critical.
>
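For reference, the per-process fd ceiling is easy to see from a program
with getrlimit(); these few lines should build on both Linux and Solaris,
though I'll admit I haven't actually tried them anywhere yet:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* RLIMIT_NOFILE: max number of open file descriptors per process. */
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("open fds: soft limit %ld, hard limit %ld\n",
           (long)rl.rlim_cur, (long)rl.rlim_max);
    return 0;
}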
[kernel modules snipped]
> > I don't suppose that these sorts of things could be either kernel
> > load-time or kernel compile-time tweakable? If they could, then perhaps
> > that might be a direction to point the Linux kernel, although I have
> > only a fringe knowledge of this, again.
>
> It can be done, but it isn't easy. It also wouldn't buy you much unless
> you gave up some of the Solaris kernel multithreading at the same time.
> This might net you faster syscalls, but less throughput when handling
> many at one time.
I'm not sure I follow this one, so you can ignore me if I don't make any
sense at all. What I was really thinking of (probably incredibly
impractical) was a way to tell the kernel to optimize either for faster
individual syscalls or for better throughput on many syscalls at once,
trading the two off. I don't know enough about kernel hacking to say
whether that's doable, but it at least sounds good to me. :)

Thanks again,
Greg
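
P.S. Out of curiosity I may try to measure the raw per-call cost James is
talking about by timing a trivial syscall in a loop, something like the
untested sketch below; it only shows the single-stream latency side of the
trade-off, not throughput under load, and getpid() is just the cheapest
call I could think of:

#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

#define N 1000000

int main(void)
{
    struct timeval t0, t1;
    double usec;
    int i;

    gettimeofday(&t0, NULL);
    for (i = 0; i < N; i++)
        (void)getpid();                 /* about the cheapest syscall around */
    gettimeofday(&t1, NULL);

    usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
    printf("%.3f usec per getpid()\n", usec / N);
    return 0;
}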