[rescue] My new UltraSparc 5's, also my first Sun stations

Charles Shannon Hendrix shannon at widomaker.com
Thu Jun 9 17:42:09 CDT 2005


Tue, 07 Jun 2005 @ 11:15 -0400, Patrick Giagnocavo said:

> On Tue, Jun 07, 2005 at 10:03:02AM -0400, Charles Shannon Hendrix wrote:
> > Mon, 06 Jun 2005 @ 20:43 -0400, Patrick Giagnocavo said:
> > It solves a real need: performance.
> 
> I think it solves it the wrong way though... there is nothing inherent
> in any X app that should *require* DGA, merely use it if it is
> available.

I don't see that as a problem with DGA.  It's a fact that applications
have needed high-speed graphics access in X for years, and have not had
it without doing really bad things.

DGA is an abstraction over direct hardware access, so code that uses it is
safer and more portable than poking the framebuffer directly.

For applications that need it, I don't see much point in an option to
disable it.
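
Checking whether it is even there is cheap, so "use it if it is available"
is easy to get right.  A minimal sketch of the runtime check (plain Xlib;
"XFree86-DGA" is the name the XFree86 servers register for the extension):

    #include <stdio.h>
    #include <X11/Xlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        int opcode, event, error;

        if (dpy == NULL) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }

        /* Ask the server whether the DGA extension exists at all.  On a
         * remote display it normally won't, so a well-behaved app falls
         * back to ordinary X drawing instead of refusing to run. */
        if (XQueryExtension(dpy, "XFree86-DGA", &opcode, &event, &error))
            printf("DGA present: direct framebuffer path is an option\n");
        else
            printf("no DGA: use normal X rendering\n");

        XCloseDisplay(dpy);
        return 0;
    }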

Also, DGA is mainly for graphics editing, video, and games.  I just
offered it as an example, since human nature is to use a feature
if it exists.

There are still plenty of other things which prevent or hurt remote X,
even in common apps.

For example: a toolkit renders an image itself and sends it to X via local
shared memory, either because X can't do that rendering itself or because
the toolkit is simply written that way.

If the app will not use anything but this method to talk with X, then it
won't run remotely.

If it is more generic, it will run like crap on a remote server.

I mention this because some toolkits have a lot of redundant
functionality in them (Gnome and KDE) which does this sort of thing.
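
To put it concretely, that local-only path is usually the MIT-SHM
extension.  Roughly, the two paths look like this (a plain Xlib sketch;
error checking and cleanup left out, so take it as an illustration rather
than finished code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <X11/Xlib.h>
    #include <X11/Xutil.h>
    #include <X11/extensions/XShm.h>

    #define W 256
    #define H 256

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0,
                                         W, H, 0, BlackPixel(dpy, scr),
                                         WhitePixel(dpy, scr));
        GC gc = XCreateGC(dpy, win, 0, NULL);
        XSelectInput(dpy, win, ExposureMask);
        XMapWindow(dpy, win);

        XShmSegmentInfo shminfo;
        XImage *img;
        int use_shm = XShmQueryExtension(dpy);  /* only true on local displays */

        if (use_shm) {
            /* Fast path: the image lives in a SysV shared memory segment
             * that both the client and the X server map, so no pixels ever
             * cross the wire. */
            img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                  DefaultDepth(dpy, scr), ZPixmap, NULL,
                                  &shminfo, W, H);
            shminfo.shmid = shmget(IPC_PRIVATE,
                                   img->bytes_per_line * img->height,
                                   IPC_CREAT | 0600);
            shminfo.shmaddr = img->data = shmat(shminfo.shmid, NULL, 0);
            shminfo.readOnly = False;
            XShmAttach(dpy, &shminfo);
        } else {
            /* Generic path: the same pixels get pushed through the X
             * protocol, which works remotely but is far slower. */
            char *data = malloc(W * H * 4);
            img = XCreateImage(dpy, DefaultVisual(dpy, scr),
                               DefaultDepth(dpy, scr), ZPixmap, 0, data,
                               W, H, 32, 0);
        }

        memset(img->data, 0x80, img->bytes_per_line * img->height);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == Expose) {
                if (use_shm)
                    XShmPutImage(dpy, win, gc, img, 0, 0, 0, 0, W, H, False);
                else
                    XPutImage(dpy, win, gc, img, 0, 0, 0, 0, W, H);
            }
        }
    }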

It's not totally the fault of app developers either: for years X did not
support a lot of things which applications needed.  Of course, there are
bad reasons it happens too.

It's very wasteful that KDE and Gnome are so redundant.  Not only does
it increase resource consumption, but it makes X less flexible.

For example, a lot of font handling is done in the toolkits instead of in
X, which means you have multiple applications all doing the same work
without sharing it, and also sending a lot of data to X.

If X had the needed support and did all of that, then the resource would
be shared, and would (theoretically) work better remotely.
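
The old core font path at least worked that way: the font is a resource
inside the server, and every client just names it.  A tiny sketch (core
Xlib only; "fixed" is an alias nearly every server has):

    #include <stdio.h>
    #include <X11/Xlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

        /* The server resolves and rasterizes "fixed" itself; every client
         * that names it shares the same server-side font, and only the
         * name and the metrics cross the wire. */
        XFontStruct *font = XLoadQueryFont(dpy, "fixed");
        if (font) {
            printf("server font id %lu, ascent %d, descent %d\n",
                   font->fid, font->ascent, font->descent);
            /* Text drawn with XDrawString() against font->fid is rendered
             * by the server, not shipped over as client-side glyph images. */
            XFreeFont(dpy, font);
        }
        XCloseDisplay(dpy);
        return 0;
    }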

Ideally, the new X accelerations would *help* remote X sessions; I just
worry because a lot of developers don't seem to be focusing on that
issue.

> > Also, even with plain remote terminals, a lot of applications eat up
> > tons of X memory.  I tried to deploy some X terminals with 64MB of RAM,
> > and found that a lot of apps used that up rather quickly.
> 
> Netscape's browser used to be the biggest criminal in this case.  

Mozilla and Firefox both still burn memory pretty badly.  One thing I
noticed while running them under a tracer was that they created a lot of
shared memory regions that they never released, even after they stopped
using them.

> Try turning off backing store in the X terminal's config as a starting
> point.

At 24-32 bits, I find that even local displays struggle a bit there.

Some of that is poor drivers though.
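
(For what it's worth, on a self-run server I believe the same knob is a
server command-line flag; my memory of the spelling is below, but check
Xserver(1) before trusting it.)

    # Xserver(1) flags as I remember them: backing store off / on
    X :0 -bs
    X :0 +bs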

I think with the commercial nVidia driver for Linux and FreeBSD, it
doesn't matter much, but I didn't test extensively.

Somewhat related: 

I wonder if there will be any new dedicated X servers coming out
supporting new features, with more memory and CPU power?  I have not
seen an ad for an X server in a long time now.  I guess admins build
diskless PCs to do the job these days.

> Alternatively use Xvfb and then let that X server store the X atoms
> (xlsatoms is the command to run to see what is going on, btw) and just
> display on the X terminal.

Interesting idea.  

Xvfb is intended for testing, and for satisfying applications that insist
on an X server even though they don't really need a visible display.
That's the only thing I've ever used it for.
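
If I were to try your idea, standing one up is only a couple of commands
(the display number and geometry below are just what I'd happen to pick):

    # a virtual server on display :1; the screen spec is WidthxHeightxDepth
    Xvfb :1 -screen 0 1152x900x16 &

    # then point clients -- or xlsatoms, as you suggest -- at it
    xlsatoms -display :1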

-- 
shannon "AT" widomaker.com -- [governorrhea: a contagious disease that
spreads from the governor of a state downward through other offices and his
corporate sponsors]


