[geeks] Itanium 32 bit performace.... hahahaha

jdboyd at jdboyd.zill.net jdboyd at jdboyd.zill.net
Sat Dec 20 03:25:25 CST 2003


On Wed, Dec 10, 2003 at 01:11:55PM -0800, Francisco Javier Mesa-Martinez wrote:

> Transforms are usually the same regardless of window size, the
> rasterization is the process of mapping those transforms in the
> viewport/window. I.e. if you have 50,000 polygons in your scene, you still
> need to consider them regardless of your viewport size (that is one of the
> drawbacks of raster based gfx, as opposed to image based gfx).

Right.  So, the fact that it runs at the same speed regardless of window
size would indicate that it is running as fast as the transform stage
will allow, right?

I believe I originally said I was running a program that had a large
number of vertices and that ran the same speed regardless of window
size.  I didn't say (but should have) that it did single-pass rendering
only, using no textures or alpha blending.  The program drew a lot more
than 50,000 polygons.  I don't currently recall what the count was.
Perhaps 100,000ish.
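To illustrate the point about being transform-limited, here is a minimal sketch (my own, not the original test program) of a software transform pass: the per-frame cost scales with the vertex count, not with the window or viewport size, which is why a transform-bound program renders at the same speed in any window.

```c
#include <stddef.h>

/* A homogeneous vertex, as the fixed-function pipeline would see it. */
typedef struct { float x, y, z, w; } Vec4;

/* Multiply each vertex by a 4x4 column-major matrix (the usual
 * modelview-projection product).  Note that nothing here depends on
 * the viewport dimensions -- the work is O(vertex count). */
static void transform_vertices(const float m[16], const Vec4 *in,
                               Vec4 *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        out[i].x = m[0]*in[i].x + m[4]*in[i].y + m[8]*in[i].z  + m[12]*in[i].w;
        out[i].y = m[1]*in[i].x + m[5]*in[i].y + m[9]*in[i].z  + m[13]*in[i].w;
        out[i].z = m[2]*in[i].x + m[6]*in[i].y + m[10]*in[i].z + m[14]*in[i].w;
        out[i].w = m[3]*in[i].x + m[7]*in[i].y + m[11]*in[i].z + m[15]*in[i].w;
    }
}
```

With 100,000ish vertices per frame, this loop alone is a plausible bottleneck on a P2-class CPU, while the rasterizer on the card scales with pixels drawn.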

The main test machines were a dual P2-350 with a GeForce3, a single P4
(1.6 or 1.8 GHz) machine with a GeForce3, and a dual P4 Xeon machine,
2.4 GHz, with a Quadro4 (don't recall exactly which one).  The P2
machine ran Linux, the P4 machines ran Windows 2000.  All used the
latest NVIDIA drivers for the time (sometime in fall of '02).  The code
did not use threading.

Both the P4 and the P2 systems ran in the 60 fps range (not a steady
framerate), while the Xeon system ran more like 100 fps.

I didn't record the details in more depth because I didn't expect to
need them beyond demonstrating my point to a professor.  As I'm no
longer a student, I can't use those exact machines anymore (except for
the dual P2-350, since that is one of my computers).
 
> In any case, most modern FeeCee processors have enough oomph due to their
> 3DNow!, MMX extensions to handle geometry relatively nicely.

If they found a way to accelerate geometry transformations done with
floats on a P2 using MMX, I'd be impressed.  MMX only operates on
integers, after all.  I suppose NVIDIA's driver could convert the
floats to a 16-bit fixed-point representation behind my back, but if
so, I'd be surprised that I don't see artifacts of it.

Given a newer processor, I'm sure the CPU could handle geometry nicely,
but then that would take away from being able to use it for other
things. 



More information about the geeks mailing list