The rendrib program is a high-quality renderer that uses ray tracing
and radiosity to make (potentially) very realistic images. It supports
global illumination, solid modeling, area light sources, texture
mapping, environment mapping, displacements, volume and imager
shading, and programmable shading.
The format for invoking rendrib is as follows:
rendrib [options] myfile.rib
Usually, this results in one or more TIFF image files being
written to disk. If the file specifies framebuffer display (as
opposed to file output), or you override with the -d flag, the resulting
image will be displayed in a window on your screen. When the
rendering is complete, rendrib will pause. Hitting the ESC
key will terminate the program. Alternately, if you hit the `w' key, the image in
the window will be written to a file (using the filename specified in
the file's Display command).
If no filename is specified to rendrib, it will attempt to
read commands from standard input (stdin). This allows you to pipe output
of another program directly to rendrib. For example, suppose that
myprog dumps RIB to its standard output. Then you could display
frames from myprog as they are being generated with the following
command:
myprog | rendrib
The file which you specify may contain either a single frame or
multiple frames (if it is an animation sequence).
The following subsection details the command line options that alter
the way in which rendrib creates and displays images.
-d [interleave]
By default, any fully rendered frames are sent to a TIFF image
file (unless, of course, the file specifies framebuffer
output with the Display directive).
The -d command line option overrides file output and
forces output to be sent to a screen window. If the optional
integer interleave is specified,
scanlines will be computed in an interleaved fashion, giving you a kind of
progressive refinement display. For example,
rendrib -d 8 myfile.rib
will display every 8th scanline first (making a very quick, but
blocky image), then compute every 4th scanline, then every 2nd, and so
on, until you get the final image. This is extremely useful if you
want to quickly see a rough version of the scene.
-res xres yres
Sets the resolution of the output image. Note that if the file
contains a Format statement which explicitly specifies the image
resolution, then the -res option will be ignored and the window
will be opened with the resolution specified in the Format
statement.
-pos xpos ypos
Specifies the position of the window on your display (obviously,
this only works if used in combination with the -d option or
if your Display command in your file specifies framebuffer
output).
-crop xmin xmax ymin ymax
Specifies that only a portion of the whole image should be
rendered. The meaning of this command line switch is precisely
the same as if the CropWindow directive were in your
file (and like the other options of this section, a CropWindow
option takes precedence over any command line arguments).
-samples xsamp ysamp
Sets the number of samples per pixel to xsamp (horizontal) by
ysamp (vertical). Note that if the file contains a PixelSamples statement which explicitly specifies the sampling rate,
then the -samples option will be ignored and the sampling rate
will be as specified by the PixelSamples statement.
-stats
Upon completion of rendering, output various statistics about
memory and time usage, number of primitives, and all sorts of
other debugging information. Using this option on the command
line is equivalent to putting Option "statistics" "endofframe" [1]
in your file.
-v
Verbose output -- this prints more status messages as rendering
progresses, such as the names of shaders and textures as they are
loaded.
You can combine the -v and -stats options if you want.
-radio steps
By default, rendrib calculates images using the rendering
technique of ray tracing. Ray tracing alone does no energy balancing
of the scene. In other words, it does not account for interreflected
light. However, rendrib supports radiosity, which is a
method for performing these calculations. You can instruct
rendrib to perform a radiosity pass prior to the ray tracing
by using the -radio command line switch. This command is followed by
a single integer argument, which is the number of radiosity steps to
perform. For example, the following command causes rendrib
to perform 50 radiosity steps prior to forming its image:
rendrib -radio 50 myfile.rib
If the energy is balanced in fewer steps than you specify,
rendrib will skip the remaining steps (saving time).
Depending on your scene, the radiosity calculations can take a long
time, but they are independent of the final resolution of your image.
Specifying the number of radiosity steps on the command line is
exactly equivalent to including an Option "radiosity" "steps"
line in your file.
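As a sketch, the earlier rendrib -radio 50 myfile.rib invocation corresponds to a line like the following in the RIB file itself (options must appear before WorldBegin):

```rib
# Equivalent to "rendrib -radio 50 myfile.rib": perform 50 radiosity
# steps before ray tracing begins.
Option "radiosity" "integer steps" [50]
```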
-rsamples samps
By default, rendrib calculates the visibility between
geometric elements by casting a minimum of one ray between the two
elements. You can increase this number to get better accuracy (but at
a big decrease in speed) by using the -rsamples option. This option
takes a single integer argument. The minimum number of rays used to
determine visibility will be the square of this argument. For
example, the following command will perform a radiosity pass of 100
steps, using a minimum of 4 sample rays per visibility calculation:
rendrib -radio 100 -rsamples 2 myfile.rib
-frames first last
Sometimes you may only want to render a subset of frames from a
multi-frame file. You can do this by using the
-frames command line option. This option takes two
integer arguments: the first and last frame numbers to display. For
example,
rendrib -frames 10 10 myfile.rib
This example will render only frame number 10 from this file.
If you are going to use this option, it is recommended that your
frames be numbered sequentially starting with 0 or 1.
-safe
When you submit a scene file for rendering, the image files will have
filenames as specified in the file with the Display directive. If
a file already exists with the same name, the original file will be
overwritten with the new image. Sometimes you may want to avoid this.
Using the -safe command line option will abort rendering of any frame
which would overwrite an existing disk file. This is mostly useful if
you are rendering many frames in a sequence, and do not want to
overwrite any frames already rendered. Here is an example:
rendrib -safe -frames 100 200 myfile.rib
This example will render frames 100 through 200 of myfile.rib, but
will skip over any frames which happen to have been rendered already.
-ascii
Produces an ASCII (yes, exactly what you think)
representation of your scene in the terminal window!
-beep
Rings the terminal bell upon completion of rendering.
-arch
Just print out the architecture name (e.g., sgi_m3, linux,
etc.).
Various
implementation-specific behaviors of a renderer can be set using two
commands: Option and Attribute. Options apply to
the entire scene and should be specified prior to WorldBegin.
Attributes apply to specific geometry, are generally set after the
WorldBegin statement, and bind to subsequent geometry.
Several of the features of this renderer can be controlled as
nonstandard options. The mechanism for this is to use the Option
command. The syntax for this is:
Option name params
where name is the option name and params is a list
of token/value pairs corresponding to that option. Remember that
options apply to an entire rendered frame, while attributes apply to
specific pieces of geometry.
Similarly, other renderer features can be controlled as
nonstandard attributes, with the following syntax:
Attribute name params
Attributes apply to specific pieces of geometry, and are saved and
restored by the AttributeBegin and AttributeEnd commands.
Remember that both of BMRT's renderers (rendrib
and rgl) read a file called .rendribrc, both in the
directory where the renderer is run and in your home
directory. This file can be plain RIB, which means
that if you want to set any defaults of the options discussed below,
you can just put the Option or Attribute lines in this file in your
home directory.
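As a sketch, a .rendribrc in your home directory might contain plain RIB like the following (the search path and statistics settings are illustrative defaults; adjust the directories to your own installation):

```rib
# ~/.rendribrc -- read at startup by both rendrib and rgl.
Option "searchpath" "shader" ["/usr/local/shaders:&"]
Option "statistics" "integer endofframe" [1]
```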
The remainder of this chapter explains the various
nonstandard options and attributes supported by rendrib.
In most cases, the new ``inline declaration'' syntax is used
to clarify the expected data types, and the default values
are provided as examples.
Option "render" "integer max_raylevel" [4]
Sets the maximum number of recursive rays that will be cast
between reflectors and refractors. This has no effect if there are no
truly reflective or refractive objects in the scene (that is, no
objects whose shaders call the trace function).
Option "render" "float minshadowbias" [0.01]
Sets the minimum distance one object must be from another in order to
shadow it. This keeps objects from self-shadowing.
If you see serious self-shadowing problems, this number can be
increased. You may need to decrease it if the scale of your
objects is such that 0.01 is on the order of their size.
In general, however, you will probably never need this option
unless you notice self-shadowing artifacts in your images.
Option "statistics" "integer endofframe" [0]
When nonzero, this option will cause rendrib to print out various
statistics about the rendering process. Greater values print
more detailed data: 1 just prints time and memory information,
2 gives more detail, 3 is all the data that the renderer
ever wants to print. (Usually 2 is just fine for lots of data.)
Option "statistics" "string filename" [""]
When non-null, this option will cause rendrib's statistics to
be echoed to the given filename, rather than printed to stdout.
Various external files may be needed as the renderer is running,
and unless they are specified as fully-qualified file paths,
the renderer will need to search through directories to
find those files. There exists an option to set the lists
of directories in which to search for these files.
Option "searchpath" "archive" [pathlist]
Option "searchpath" "texture" [pathlist]
Option "searchpath" "shader" [pathlist]
Option "searchpath" "procedural" [pathlist]
Option "searchpath" "display" [pathlist]
Sets the search path that the renderer will use for files that
are needed at runtime.
The different search paths recognized by
rendrib are:
- archive files included by ReadArchive.
- texture texture image files.
- shader compiled shaders.
- procedural DSO's and executables for Procedural calls.
- display DSO's for custom display services.
Search path types in BMRT are specified as colon-separated lists
of directory names (much like an execution path for shell commands).
There are two special strings that have special meaning in
BMRT's search paths:
- & is replaced with the previous search path
(i.e., what was the search path before this statement).
- $ARCH is replaced with the name of the machine
architecture (such as linux, sgi_m3, etc.). This allows
you to keep compiled software (like DSO's) for different platforms in
different directories, without having to hard-code the platform name
into your file.
For example, you may set your procedural path as follows:
Option "searchpath" "procedural"
["/usr/local/bmrt:/usr/local/bmrt/$ARCH:&"]
The above statement will cause the renderer to find
procedural DSO's by first looking in /usr/local/bmrt, then
in a directory that is dependent on the architecture, then
wherever the default (or previously set) path indicated.
Attribute "render" "integer visibility" [7]
Controls which rays may see an object. The integer
parameter is the sum of:
- 1 The object is visible to primary (camera) rays.
- 2 The object is visible to reflection rays.
- 4 The object is visible to shadow rays.
This attribute is useful for certain special effects, such as having
an object which appears only in the reflections of other objects, but
is not visible when the camera looks at it. Or an object which only
casts shadows, but is not in reflections or is not seen from the camera.
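For example, to make an object that appears in reflections and casts shadows but is hidden from the camera, sum the reflection (2) and shadow (4) bits; the sphere below is just illustrative geometry:

```rib
AttributeBegin
  # 2 (reflection rays) + 4 (shadow rays) = 6: invisible to the
  # camera, but still seen in reflections and still casting shadows.
  Attribute "render" "integer visibility" [6]
  Sphere 1 -1 1 360
AttributeEnd
```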
Attribute "render" "string casts_shadows" ["Os"]
Controls how surfaces shadow other surfaces. Possible
values are shown below, in order of increasing
computational cost:
- "none" The surface will not cast shadows on any other surface,
therefore it may be ignored completely for shadow computations.
- "opaque" The surface will completely shadow any object which it
occludes. In other words, this tells the renderer to treat this
object as completely opaque.
- "Os" The surface may partially shadow, depending on the value
set by the Opacity directive. In other words, it has a constant
opacity across the surface. (This is the default.)
- "shade" The surface may have a complex opacity pattern,
therefore its surface shader should be called on a point-by-point
basis to determine its opacity for shadow computations.
The default value is "Os". You can optimize rendering time by
making surfaces known to be opaque "opaque", and surfaces known
not to shadow other surfaces "none". It is important, however,
to use "shade" for any surfaces whose shaders modify the opacity
of the surface in any patterned way.
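A fragment applying these hints might look like the following sketch (the geometry and shader choices are illustrative):

```rib
AttributeBegin
  # A solid wall: fully opaque, so shadow rays can stop immediately.
  Attribute "render" "string casts_shadows" ["opaque"]
  Patch "bilinear" "P" [-1 0 -1  1 0 -1  -1 0 1  1 0 1]
AttributeEnd

AttributeBegin
  # A surface whose shader cuts opacity holes: it must be shaded
  # point by point for shadow rays.
  Attribute "render" "string casts_shadows" ["shade"]
  Sphere 1 -1 1 360
AttributeEnd
```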
Attribute "render" "integer truedisplacement" [0]
If the argument is nonzero, subsequent primitives will truly
be diced and displaced using their displacement shader (if any).
If the value is 0, bump mapping will be used rather than
true displacement. Only a
displacement shader can move the diced geometry; altering P in a
surface shader will not move the surface, only the normals. Using a
displacement shader without this attribute likewise modifies only
the normals, not the surface. Be sure to set
displacement bounds if you displace! Please see the section on
``limitations of rendrib'' for details on the limitations placed
on true displacements.
Attribute "displacementbound" "string coordinatesystem" ["current"]
"float sphere" [0]
For truly displaced surfaces, specifies the amount that its bounding
box should grow to account for the displacement. The box is grown
in all directions by the radius argument, expressed in the
given coordinate system (a string).
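Putting the two attributes together, a truly displaced surface might be declared as in this sketch (the shader name and the 0.1 bound are assumptions; the bound must cover the maximum displacement your shader can produce):

```rib
AttributeBegin
  Attribute "render" "integer truedisplacement" [1]
  # Grow the bounding box by up to 0.1 units, measured in "shader"
  # space, to account for the displacement.
  Attribute "displacementbound" "string coordinatesystem" ["shader"]
            "float sphere" [0.1]
  Displacement "mydisplace" "float Km" [0.1]   # hypothetical shader
  Sphere 1 -1 1 360
AttributeEnd
```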
Attribute "render" "float patch_multiplier" [1.0]
Takes a float argument giving a multiplier for the dicing rate that
BMRT computes for displaced surfaces and for certain curved surfaces
which are subdivided. Smaller values will
make the scene render faster and use less memory, but may give
certain curved surfaces a more faceted appearance. Larger values
will produce more accurate surfaces, but will take longer and use more
memory to render. The default is probably right for 99% of scenes, but
occasionally you may need to tweak it.
Attribute "render" "float patch_maxlevel" [256]
Attribute "render" "float patch_minlevel" [1]
Sets the maximum (or minimum) subdivision
level for bicubic and NURBS patches. These patches are subdivided
based on the screen size of the patch and their curvature. This
attribute will split the patches into at least (minlevel x minlevel)
and at most (maxlevel x maxlevel) subpatches. The default is min=1,
max=256. In general, you shouldn't ever need to change this, but
occasionally you may need to set a specific subdivision rate for some
reason.
Attribute "trimcurve" "string sense" ["inside"]
By default, trim curves on NURBS will remove the portions of the surface
that are inside the closed curve. You can reverse this
property (keeping the inside of the curve
and throwing out the part of the surface outside the curve)
by setting the trimcurve sense to "outside".
Attribute "render" "integer use_shadingrate" [1]
When non-zero (the default), rendrib will attempt to share
shaded colors among nearby screen rays that strike the same
object (specifically, it shares among rays that are within the
screen space area defined by the ShadingRate). Occasionally,
you may see a blocky or noisy appearance resulting from this shared
computation. In such a case, setting this attribute to 0 will cause
subsequent primitives to compute their shading for every screen
ray, resulting in much more accurate color (though at a higher cost).
Attribute "light" "string shadows" ["off"]
Turns the automatic ray cast shadow calculations on or off on a
light-by-light basis. This attribute can be used for any LightSource
or AreaLightSource which is declared. For example, the following RIB
fragment declares a point light source which casts shadows:
Attribute "light" "shadows" ["on"]
LightSource "pointlight" 1 "from" [ 0 10 0 ]
Attribute "light" "integer nsamples" [1]
Sets the number of times to sample a particular light source
for each shading calculation. This is only useful for an area
light which is being undersampled -- i.e., its soft shadows are too
noisy. By increasing the number of samples, you can reduce the noise
by increasing sampling of this one light, independently of overall
PixelSamples.
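For example, an area light whose soft shadows are too noisy could be sampled more densely, as in this sketch (the shader name and values are illustrative):

```rib
AttributeBegin
  Attribute "light" "string shadows" ["on"]
  # Sample this light 16 times per shading calculation, independently
  # of the overall PixelSamples setting.
  Attribute "light" "integer nsamples" [16]
  AreaLightSource "arealight" 1 "intensity" [10]   # hypothetical shader
  Patch "bilinear" "P" [-1 5 -1  1 5 -1  -1 5 1  1 5 1]
AttributeEnd
```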
Finite Element Radiosity Controls
If you are using finite element radiosity (one of the two global
illumination methods supported by BMRT), there are some additional
options that you can set.
Option "radiosity" "integer steps" [0]
In addition to using the -radio command line option to
rendrib, you can specify the number of radiosity steps with
this option. Setting steps to 0 indicates that radiosity should not
be used. Nonzero indicates that radiosity should be used (with the
given number of steps) even if the -radio command line switch is not
given to rendrib.
Option "radiosity" "integer minpatchsamples" [1]
Just like the -rsamples command line option to rendrib, this
option lets you set the minimum number of samples per patch to
determine radiosity form factors. Actually, the minimum total number
of samples per patch is this number squared (since it is this number
in each direction). In some cases, the renderer will decide to use more
samples, but this is the minimum.
A number of attributes control specific features of the radiosity
computations on a per-primitive basis. These attributes have
absolutely no effect if you are not performing radiosity calculations.
Attribute "radiosity" "color averagecolor" [color]
By default, the radiosity renderer assumes that the diffuse
reflectivity of a surface is the default color value (set by Color)
times the Kd value sent to the shader for that surface. For the
lighting calculations to be accurate, the reflective color should be
the average color of the patch. For surfaces with a solid color, this
is fine. However, some surface shaders create surfaces whose average
colors have nothing to do with the color set by the Color directive.
In this case, you should explicitly set the average color using the
attribute above. You may have to guess what the average color is for
a particular surface.
Attribute "radiosity" "color emissioncolor" [color]
All surfaces which are not light sources (LightSource or
AreaLightSource) are assumed to be reflectors only (i.e., they do not
glow). If you want a piece of geometry to actually emit radiative
energy into the environment, you can either declare it as an
AreaLightSource, or you could declare it as regular geometry but give
it an emission color (see above). The tradeoffs are discussed further
in the radiosity section of this chapter.
Attribute "radiosity" "float patchsize" [4]
Attribute "radiosity" "float elemsize" [2]
Attribute "radiosity" "float minsize" [1]
These attributes tell rendrib how finely to mesh the
environment for radiosity calculations. The statements above instruct
rendrib to chop all geometry into patches no larger than 4 units on a side.
Each patch is then diced into elements no larger than 2 units on a
side. As a result of analyzing the radiosity gradients, elements may
be diced even finer, but a particular element will not be diced if its
longest edge is shorter than 1 unit. The smaller these numbers, the
longer the radiosity calculation will take (but it will be more
accurate). These attributes can be used to set these numbers on a
surface-by-surface basis (i.e., different surfaces in the scene may
have different dicing rates). The values are
measured in the current (i.e., local) coordinate system in
effect at the time of this Attribute statement. NOTE: The default
values are probably bad -- if you are
using radiosity, you should set these to appropriate sizes for your
particular scene.
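For instance, a scene modeled in small units might be meshed more finely than the defaults; the sizes below are illustrative and entirely scene-dependent:

```rib
# Chop geometry into patches no larger than 1 unit on a side, dice
# patches into elements no larger than 0.5 units, and never dice
# elements whose longest edge is already under 0.25 units.
Attribute "radiosity" "float patchsize" [1.0]
Attribute "radiosity" "float elemsize" [0.5]
Attribute "radiosity" "float minsize" [0.25]
```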
Attribute "radiosity" "string zonal" ["fully_zonal"]
This attribute controls which radiosity calculations are performed on
surfaces. This can be set on a surface-by-surface basis. Possible
values are shown below, in order of increasing computational cost:
- "none" The surface will neither shoot nor receive energy,
i.e. it will be ignored by the radiosity calculation.
- "zonal_receives" The surface receives radiant energy, but
does not shoot it back into the environment.
- "zonal_shoots" The surface reflects (or emits) energy,
but does not receive energy from other patches.
- "fully_zonal" The surface both receives and shoots
energy. This is the default zonal property of materials.
Monte Carlo Global Illumination Controls
In addition to finite element radiosity, whose options are described in the
previous subsection, BMRT also supports Monte Carlo-based global
illumination calculations. There are a few options related to this
technique. Many of the options exist because it is
ridiculously expensive to recompute the indirect illumination at every
pixel; instead, it is recomputed only periodically, and results from
the sparse sampling are interpolated or extrapolated.
Most of the settings are attributes, so they can be
varied on a per-object basis. They are shown here with their default
values as examples:
Attribute "indirect" "float maxerror" [0.25]
A maximum error metric. Smaller numbers cause recomputation
to happen more often; larger numbers render faster, but you
will see artifacts in the form of obvious "splotches" in the
neighborhood of each sample. Values between 0.1 and 0.25 work
reasonably well, but you should experiment. In any case, this is
a fairly straightforward time/quality knob.
Attribute "indirect" "float maxpixeldist" [20]
Forces recomputation based roughly on (raster space) distance.
The above line basically says to recompute the indirect illumination
when no previous sample is within roughly 20 pixels, even if the
estimated error is below the allowable maxerror threshold.
Attribute "indirect" "integer nsamples" [256]
How many rays to cast in order to estimate irradiance when generating
new samples. Larger values give less noise, but take more time.
Use as low a number as you can tolerate, since rendering time is
directly proportional to this value.
There are also two options that make it possible to store and re-use
indirect lighting computations from previous renderings.
Option "indirect" "string savefile" ["indirect.dat"]
If you specify this option, when rendering is done the contents
of the irradiance data cache will be written out to disk in a file
with the name you specify. This is useful mainly if the next time
you render the scene, you use the following option:
Option "indirect" "string seedfile" ["indirect.dat"]
This option causes the irradiance data cache to start out with
all the irradiance data in the file specified. Without this, it
starts with nothing and must sample for all values it needs.
If you read a data file to start with, it will still sample for
points that aren't sufficiently close or have too much error.
But it can greatly save computation by using the samples that
were computed and saved from the prior run.
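As a sketch, a two-pass workflow might first build and save the cache, then seed all later renders from it:

```rib
# First render: build the irradiance cache and write it to disk.
Option "indirect" "string savefile" ["indirect.dat"]

# Later renders: start from the saved cache (and keep updating it).
# Only points that are too distant or too uncertain are recomputed.
Option "indirect" "string seedfile" ["indirect.dat"]
Option "indirect" "string savefile" ["indirect.dat"]
```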
Options for Photon Mapping for Caustics
Attribute "caustic" "float maxpixeldist" [16]
Limits the distance (in raster space) over which it will consider
caustic information. The larger this number, the fewer total photons
will need to be traced, which results in your caustics being calculated
faster. The appearance of the caustics will also be smoother. If the
maxpixeldist is too large, the caustics will appear too blurry. As
the number gets smaller, your caustics will be more finely focused, but
may get noisy if you don't use enough total photons.
Attribute "caustic" "integer ngather" [75]
Sets the minimum number of photons to gather in order to estimate the
caustic at a point. Increasing this number will give a more accurate
caustic, but will be more expensive.
There's also an attribute that can be set per light, to indicate how
many photons to trace in order to calculate caustics:
Attribute "light" "integer nphotons" [0]
Sets the number of photons to shoot from this light source in order
to calculate caustics. The default is 0, which means that the light
does not try to calculate caustic paths. Any nonzero number will
turn caustics on for that light, and higher numbers result in more
accurate images (but more expensive render times). A good guess to
start might be 50,000 photons per light source.
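For example, using the suggested starting point of 50,000 photons (the light shader and its parameters are illustrative):

```rib
AttributeBegin
  # Shoot 50,000 photons from this light for the caustic calculation.
  Attribute "light" "integer nphotons" [50000]
  LightSource "pointlight" 2 "from" [0 10 0] "intensity" [100]
AttributeEnd
```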
The algorithm for caustics doesn't understand shaders particularly
well, so it's important to give it a few hints about which objects
actually specularly reflect or refract light. These are controlled
by the following attributes:
Attribute "caustic" "color specularcolor" [0 0 0]
Sets the reflective specularity of subsequent primitives. The
default is [0 0 0], which means that the object is not
specularly reflective (for the purpose of calculating caustics; it
can, of course, still look shiny depending on its surface shader).
Attribute "caustic" "color refractioncolor" [0 0 0]
Attribute "caustic" "float refractionindex" [1]
Sets the refractive specularity and index of refraction for subsequent
primitives. The default for refractioncolor is [0 0 0],
which means that the object is not specularly refractive at all (for
the purpose of calculating caustics; it can, of course, still look
like it refracts light depending on its surface shader).
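Putting these hints together, a glass object that should focus light into caustics might be tagged as in this sketch (the shader name and the specific values are assumptions):

```rib
AttributeBegin
  # Tell the photon tracer this object refracts light like glass.
  Attribute "caustic" "color refractioncolor" [0.9 0.9 0.9]
  Attribute "caustic" "float refractionindex" [1.5]
  Surface "glass"   # hypothetical shader; the hints work regardless
  Sphere 1 -1 1 360
AttributeEnd
```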
Option "limits" "integer texturememory" [1000]
Sets the texture cache size, measured in Kbytes.
The renderer will try to keep no more
than this amount of memory tied up with textures. Setting it low keeps
memory consumption down if you use many textures. But setting it too
low may cause thrashing if it just can't keep enough in cache.
The default is 1000 (i.e., 1 Mbyte). The texture cache is only used
for tiled textures, i.e. those made with the mkmip
program. For regular scanline TIFF files, texture memory can grow
very large.
Option "limits" "integer geommemory" [unlimited]
Analogous to the texturememory option, this sets a limit to the amount
of memory used to hold the diced pieces of NURBS, bicubics, and
displaced geometry. It is an integer, giving a measurement in Kbytes.
The default is unlimited, but setting this to something smaller
(like 100000, i.e. 100 Mbytes) can keep your memory
consumption down for large scenes. Setting it too low, however, may
cause the renderer to continually throw out and regenerate your NURBS
or displaced surfaces.
Option "limits" "integer derivmemory" [2]
A certain amount of memory is needed to allow rendrib's Shading
Language interpreter to correctly compute derivatives. Very
occasionally, you may need to increase this number (generally only if
you have absolutely humongous shaders with many texture or other
derivative calls). The default is 2 (i.e., 2 Kbytes), which is
almost always adequate. If your frames are
not crashing mysteriously in the shaders, don't screw with this number!
Option "runtime" "string verbosity" ["normal"]
This option controls the same output as the -v and
-stats command line options. The string parameter controls
the level of verbosity. Possible values, in
order of increasing output detail, are: "silent",
"normal", "stats", "debug".
The default rendering method used by BMRT is ray tracing. Of course,
if you only use standard surfaces and light sources, the results will
not be very dramatic. But you can also write shaders that cast
reflection and refraction rays. Light sources which cast ray-traced
shadows can be added automatically, even from area light sources.
This section describes the Shading Language functions that provide
extra support for ray tracing.
color trace (point from, vector dir)
Traces a ray from position from in the direction of
vector dir. The return value is the incoming light
from that direction.
color visibility (point p1, point p2)
Forces a visibility (shadow) check between two arbitrary points,
returning the spectral visibility between them. If there is no
geometry between the two points, the return value will be (1,1,1). If
fully opaque geometry is between the two points, the return value will
be (0,0,0). Partially opaque occluders will result in the return of a
partial transmission value.
An example use of this function would be to make an explicit shadow
check in a light source shader, rather than to mark lights as casting
shadows in the RIB stream (as described in the previous section on
nonstandard attributes). For example:
light
shadowpointlight (float intensity = 1;
                  color lightcolor = 1;
                  point from = point "shader" (0,0,0);
                  float raytraceshadow = 1;)
{
    illuminate (from) {
        Cl = intensity * lightcolor / (L . L);
        if (raytraceshadow != 0)
            Cl *= visibility (Ps, from);
    }
}
float rayhittest (point from, vector dir,
output point Ph, output vector Nh)
Probes geometry from point from looking in direction
dir. If no geometry is hit by the ray probe, the return
value will be very large (1e38). If geometry is encountered, the
position and normal of the geometry hit will be stored in
Ph and Nh, respectively, and the return
value will be the distance to the geometry.
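As a sketch of how this might be used (this fragment is an illustration, not from the original text), a surface shader could probe along the normal and darken the surface when other geometry is nearby:

```rsl
/* Hypothetical fragment: darken the surface when geometry lies
   within maxdist units along the surface normal. */
float maxdist = 2;
point Ph;
vector Nh;
float d = rayhittest (P, normalize(N), Ph, Nh);
if (d < maxdist)
    Ci *= d / maxdist;   /* nearer occluders darken more */
```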
float fulltrace (point pos, vector dir,
output color hitcolor, output float hitdist,
output point Phit, output vector Nhit,
output point Pmiss, output point Rmiss)
Traces a ray from pos in the direction dir.
If any object is hit by the ray, then hitdist will be
set to the distance of the nearest object hit by the ray,
Phit and Nhit will be set to the position
and surface normal of that nearest object at the intersection point,
and hitcolor will be set to the light color arriving from
the ray (just like the return value of trace).
If no object is hit by the ray, then hitdist will be set to 1.0e30
and hitcolor will be set to (0,0,0).
In either case, in the course of tracing, if any ray
(including subsequent rays traced through glass, for example) ever
misses all objects entirely, then Pmiss and Rmiss
will be set to the position and direction of the deepest ray that
failed to hit any objects, and the return value of this function will
be the depth of the ray which missed. If no ray misses (i.e. some ray
eventually hits a nonreflective, nonrefractive object), then the
return value of this function will be zero. An example use of this
functionality would be to combine ray tracing of near objects with
an environment map of far objects.
The code fragment below traces a ray (for example, through glass).
If the ray emerging from the far side of the glass misses all objects,
it adds in a contribution from an environment map, scaled such that
the more layers of glass it went through, the dimmer it will be.
missdepth = fulltrace (P, R, C, d, Ph, Nh, Pm, Rm);
if (missdepth > 0)
    C += environment ("foo.env", Rm) / missdepth;
float isshadowray ()
Returns 1 if this shader is being executed in order to evaluate
the transparency of a surface for the purpose of a shadow ray.
If the shader is instead being evaluated for visible appearance,
this function will return 0. This function can be used to alter
the behavior of a shader so that it does one thing in the case
of visibility rays, something else in the case of shadow rays.
float raylevel ()
Returns the level of the ray which caused this shader to be
executed. A return value of 0 indicates that this shader is being
executed on a camera (eye) ray, 1 that it is the result of a single
reflection or refraction, etc. This allows one to customize the
behavior of a shader based on how ``deep'' it is in the
reflection/refraction tree.
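For instance, a shader might use full procedural detail only on camera
rays, and fall back to a cheap approximation when seen through
reflections or refractions. This is an illustrative sketch, not a
shader shipped with BMRT:

surface leveldetail ()
{
    normal Nf = faceforward (normalize(N), I);
    color pattern;
    if (raylevel() == 0)
        /* Camera ray: full (hypothetical) procedural pattern */
        pattern = color noise (10 * P);
    else
        /* Deeper rays: a constant stand-in for the pattern's average */
        pattern = color (0.5, 0.5, 0.5);
    Oi = Os;
    Ci = Oi * pattern * diffuse(Nf);
}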
Ray tracing will determine illumination only via direct paths from
light sources to surfaces being shaded. No knowledge of indirect
illumination, or interreflection between objects, is available to the
ray tracer. This kind of illumination is responsible for effects such
as indirect lighting, soft shadows, and color bleeding. BMRT actually
has two different algorithms for computing these kinds of global
illumination effects: finite element radiosity, and Monte Carlo
irradiance calculations.
Finite element radiosity subdivides all of your geometric primitives
into patches, then subdivides the patches into ``elements.'' In a
series of progressive steps, the patch with the most energy ``shoots''
its energy at all of the vertices of all of the elements. This
distributes the energy around the scene. The more steps you run,
and the smaller your patches and elements are, the more accurate your
image will be (but the longer it will take to render). The radiosity
steps are all computed up front, before the first pixel is actually
rendered.
Finite element radiosity has some big drawbacks, almost all of which
are related to the fact that it has to pre-mesh the entire scene.
First, it gets inaccuracies whenever you use CSG, or have trim curves
on your NURBS patches. It can use lots of time and memory when you
have many geometric primitives, especially if your objects are made
out of lots of little polygons or subdivision meshes. If you use
procedural primitives, the renderer will have to expand them all right
at the beginning, in order to mesh them. Overall, FE radiosity
just doesn't scale particularly well with large scenes.
The newer Monte Carlo irradiance approach has a different set of
tradeoffs. Rather than enmeshing the scene and solving the light
transport up front, the MC approach is ``pay as you go.'' As it's
rendering, when it needs information about indirect illumination, it
will do a bunch of extra ray tracing to figure out the irradiance. It
will save those irradiance values, and try to reuse them for nearby
points. The MC approach works just fine with CSG and trim curves. It
doesn't unpack procedural primitives until they are really needed. It
takes longer than FE radiosity for small scenes, but it scales better
and should be cheaper for large scenes. As this technique continues
to be improved in BMRT, we will probably phase out the FE radiosity.
To render the scene using radiosity, just type:
rendrib -radio n myfile.rib
The parameter n is a number giving the maximum number of radiosity
steps to perform. A typical number might be 50. Higher values of n
will yield more accurate illumination solutions, but will also take
much longer to compute. If the solution to the illumination equations
converges in fewer steps, the program will simply terminate early, and
not perform the additional steps.
Alternately, you could just use Attribute "radiosity" "nsteps"
as described in the previous section.
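For example, the following RIB line requests the same 50 radiosity
steps as invoking rendrib with -radio 50:

Attribute "radiosity" "nsteps" [50]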
When using radiosity, there are a few more things you need to do:
- You should set the meshing rates for the patches and elements.
See ``patchsize,'' ``elemsize,'' and ``minsize'' in the ``radiosity
attributes'' section of this document.
- For any nonobvious surfaces, you need to give the average diffuse
reflectivity. (Obvious means that the average diffuse reflectivity is
the same as the color set by the Color directive.) See the
``nonstandard attributes'' section of this document for details on
setting the average and emissive colors for surfaces. The renderer is
smart enough to query shaders for their ``Kd'' values, so there is no
need to premultiply the average color by Kd. However, that's about as
smart as it gets, so don't expect any tricks done by the surface
shader to be somehow divined by the radiosity engine. Any texture
mapped objects must also have their average color declared in order to
specify the average color of the texture map.
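Putting these requirements together, a radiosity setup for a
texture-mapped surface might look like the fragment below. The
numeric values and the texture name are purely illustrative; see the
radiosity and nonstandard attributes sections for the exact attribute
meanings:

Attribute "radiosity" "patchsize" [4] "elemsize" [2] "minsize" [1]
# Average diffuse reflectivity of the texture, declared by hand:
Attribute "radiosity" "averagecolor" [0.45 0.3 0.2]
Surface "paintedplastic" "texturename" ["wood.tif"]
Patch "bilinear" "P" [0 0 0  1 0 0  0 1 0  1 1 0]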
When rendering with radiosity, there are two ways to make area
light sources. One way is to use the AreaLightSource directive,
explicitly making area light sources. The second way is to declare
regular geometry, but setting an emission color:
Attribute "radiosity" "emissioncolor" [color]
The difference is subtle. Both ways will make these patches shoot
light into their environment. However, only the geometry declared
with AreaLightSource will be resampled again on the second pass. This
results in more accurate shadows and nicer illumination, but at the
expense of much longer rendering time on the second pass.
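In RIB, the two approaches look like this (the geometry and intensity
values are illustrative):

# An explicit area light; resampled on the second pass:
AttributeBegin
AreaLightSource "arealight" 1 "intensity" [2]
Polygon "P" [-1 -1 5  1 -1 5  1 1 5  -1 1 5]
AttributeEnd

# Ordinary geometry that shoots light via an emission color:
AttributeBegin
Attribute "radiosity" "emissioncolor" [2 2 2]
Polygon "P" [-1 -1 5  1 -1 5  1 1 5  -1 1 5]
AttributeEnd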
To use the Monte Carlo irradiance calculations for global illumination,
you need to follow a different set of steps.
- Don't use any of the radiosity options or the -radio flag.
The old radiosity and the new irradiance stuff are not meant to
be used together.
- Add a light source to the scene using the "indirect" light
shader, which is in the shaders directory of BMRT. This light is
built into BMRT, so the shader will not actually be accessed. If there
are any objects that you specifically do not want indirect illumination
to fall on, you can just use Illuminate to turn the light off, and
subsequent surfaces won't get indirect illumination.
- There are several options that control the behavior of the
computations. See the ``nonstandard options and attributes''
section of this document for their description. You may need to
adjust several of them on a per-shot basis, depending on
time/memory/quality tradeoffs.
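Putting these steps together, a minimal indirect illumination setup
might look like this (the light handle number is arbitrary):

LightSource "indirect" 1
# ... objects that receive indirect illumination ...
Illuminate 1 0
# ... objects that should not receive indirect illumination ...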
There are a few limitations with the irradiance calculations
that you should be aware of:
- Right now, it only works for front lit objects, and assumes
that you're interested in the side with the outward-pointing normal.
Translucent surfaces will be okay eventually, but for now they're not
operational.
- No volumes yet.
- If you compare simple scenes with the old-style BMRT radiosity,
you'll find that the radiosity is much faster than the new method.
However, for large scenes the new method will most likely win out.
Furthermore, the new method can also be used with the ray server,
unlike the old.
You shouldn't use the "seedfile" option if the objects have
moved around. But if the objects are static, either because you have
only moved the camera, or because you are rerendering the same frame,
the combination of "seedfile" and "savefile" can
tremendously speed up computation.
Here's another way they can be used. Say you can't afford to set the
other quality options as nice as you would like, because it would take
too long to render each frame. So you could render the environment
from several typical viewpoints, with only the static objects, and
save all the results to a single shared seed file. Then, for the
final frames, always read this seed file (but don't save!); most of
the sampling is already done for you, though sampling will be redone
for any objects that move.
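Assuming these options are given through an "indirect" option block
(check the options section for the exact names and defaults), reusing
saved irradiance samples on a rerender might look like:

# Reuse irradiance samples saved by a previous run, and keep
# adding new ones to the same file (filename is illustrative):
Option "indirect" "seedfile" ["livingroom.irr"]
Option "indirect" "savefile" ["livingroom.irr"]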
You can also use the Monte Carlo irradiance global illumination in ray
server mode, to serve global illumination to PRMan. If you look at
indirect.sl (which is only used on the PRMan side -- the light
is built into BMRT), you'll see that the light shader simply makes a
call to rayserver_indirect, and then stashes the results into
Cl so that it looks like an ordinary light, and hence it will work
with any existing surface shaders without modifications. Don't forget
to compile indirect.sl for PRMan.
Note that PRMan doesn't know anything about the indirect options, so
you'll see warnings about them. This is perfectly harmless.
BMRT also has the option of computing caustics, which (to
mangle the true meaning just a bit) refers to light that reflects or
refracts from specular objects to focus on other objects. To compute
caustics, you must follow these steps:
- Declare a magic light source with the "caustic" shader
(like "indirect", it's built into BMRT rather than being an
actual shader). You should use RiIlluminate to turn
the caustic light on for objects that receive caustics, and turn it
off for objects that are known to not receive caustics. Illuminating
just the objects that are known to receive caustics can save lots of
rendering time.
- For any light sources that should reflect or refract from specular
objects, thereby causing caustics, you will need to set the number of
photons with:
Attribute "light" "integer nphotons"
This sets the number of photons to shoot from this light
source in order to calculate caustics. The default is 0, which means
that the light does not try to calculate caustic paths. Any nonzero
number will turn caustics on for that light, and higher numbers result
in more accurate images (but more expensive render times).
- The algorithm for caustics doesn't understand shaders particularly
well, so it's important to give it a few hints about which objects
actually specularly reflect or refract lights. This is done with
Attribute "radiosity" "specularcolor",
Attribute "radiosity" "refractioncolor", and
Attribute "radiosity" "refractionindex".
See the ``nonstandard attributes'' section of this document for details.
- Finally, you may want to adjust several global options that
control basic time/quality tradeoffs. These are also described in
the ``nonstandard options and attributes'' section of this document.
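Putting the steps together, a caustics setup might look like the
sketch below. The handle numbers, photon count, shader names, and
material values are illustrative only:

# The magic caustic light (built into BMRT):
LightSource "caustic" 1

# A light that shoots photons to build caustic paths:
AttributeBegin
Attribute "light" "integer nphotons" [20000]
LightSource "spotlight" 2 "intensity" [50] "from" [0 5 0] "to" [0 0 0]
AttributeEnd

# The glass object that focuses the light:
AttributeBegin
Attribute "radiosity" "specularcolor" [0.1 0.1 0.1]
Attribute "radiosity" "refractioncolor" [0.9 0.9 0.9]
Attribute "radiosity" "refractionindex" [1.5]
Surface "glass"
Sphere 1 -1 1 360
AttributeEnd

# Only the floor receives caustics:
Illuminate 1 1
Patch "bilinear" "P" [-5 -1 -5  5 -1 -5  -5 -1 5  5 -1 5]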
Please note that rendering full color frames can take a really
long time! High quality rendering, especially ray tracing, is
notoriously slow. Try a couple test frames first, to make sure you
have everything right before you compute many frames. Multiply the
time it takes for each frame by the total number of frames you need.
If your total rendering time is prohibitive (say, 5 months), you'd
better change something!
Don't bother praying or panicking: we have it on good authority that
neither does much to increase rendering throughput. Some optimization
hints are listed below. Obvious, effective, easy optimizations are
listed first. Trickier or subtler optimizations are listed last.
- Resolution
Use low resolution when you can. You may want to do test frames at
320 x 240 resolution or lower. Remember that video resolution is only
about 640 x 480 pixels. It's pointless to render at higher resolution
if you intend to record onto videotape, since any higher resolution
will be lost in the scan conversion. Even film can be done at very
high quality with about 2048 pixels wide, so don't go wasting time
with 4k renders.
- Pixel Sampling Rate and Antialiasing
Try to specify only 1 sample per pixel for test frames. You can
sometimes get away with one sample per pixel for final video frames,
too. However, to get really good looking frames you probably need to
do higher sampling for antialiasing. There are several sources of
aliasing: geometric edges, motion blur, area light shadows, depth of
field effects, reflections/refractions, and texture patterns.
Usually, 2x2 sampling is perfectly adequate to antialias geometric
edges for video images. Higher than 3x3 does not usually give
noticeable improvements for geometric edges, but you may
require even more samples to reduce noise from motion blur and depth of
field. There's not much you can do about that if you must
use these effects.
You should prefer using Attribute "light" "nsamples" to
increase sampling of area lights, rather than increasing PixelSamples.
Similarly, if the source of your aliasing is blurry reflections or
refractions from shaders which use the trace() function, you should
consult the documentation for those shaders -- many give the option
of firing many distributed ray samples, rather than being forced to
increase the screen space sampling rate.
Higher sampling rates should never be used to eliminate aliasing in
shaders. Well written shaders should be smart enough to analytically
antialias themselves by frequency clamping or other trickery. It's
considered bad style to write shaders which alias badly enough to
require high sampling at the image level.
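The classic trick is to estimate the filter width of the pattern
coordinate from its derivatives, and fade the pattern toward its
average as it approaches what the sampling rate can resolve. A
hypothetical stripe shader might do it like this:

surface fadestripes (float freq = 20;)
{
    /* Approximate filter width of s over this sample */
    float fw = abs(Du(s)*du) + abs(Dv(s)*dv);
    /* The raw, high-frequency stripe pattern */
    float stripe = step (0.5, mod (s*freq, 1));
    /* Fade toward the pattern's average (0.5) as the stripes
       approach the limit of what the sampling can resolve */
    float fade = smoothstep (0.2/freq, 0.6/freq, fw);
    float pattern = mix (stripe, 0.5, fade);
    normal Nf = faceforward (normalize(N), I);
    Oi = Os;
    Ci = Oi * Cs * pattern * diffuse(Nf);
}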
- Geometric Representation
Keep your geometry simple, and use curved surface primitives instead
of lots of polygons whenever possible. Try writing surface or
displacement shaders to add detail to surfaces. It's generally faster
to fake the appearance of complexity than it is to create objects with
real geometric complexity. Try to make your images interesting
through the use of complex textures used on relatively simple
geometry.
- Lights and Shadows
Shadows are important visual cues, but you must use them wisely.
Shadowed light sources can really increase rendering time. Only cast
shadows from light sources that really need them. If you have several
light sources in a scene, you may be able to get away with having only
the brightest one cast shadows. Nobody may know the difference!
Similarly, most objects can be treated as completely opaque (this
assumption speeds rendering time). Some objects do not need to cast
shadows at all (for example, floors or walls in a room). See the
``nonstandard options and attributes'' section of this chapter for
information on giving the renderer shadowing hints.
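Using the shadowing attributes described in that section (the
attribute name and values shown here are a sketch; check that section
for the exact spelling), the hints might be given like this:

AttributeBegin
# This object can be treated as fully opaque by shadow rays:
Attribute "render" "casts_shadows" ["opaque"]
Sphere 1 -1 1 360
AttributeEnd

AttributeBegin
# The floor never needs to cast a shadow at all:
Attribute "render" "casts_shadows" ["none"]
Patch "bilinear" "P" [-10 0 -10  10 0 -10  -10 0 10  10 0 10]
AttributeEnd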
- Shading Models
Keep your shading models simple. Complex procedural textures (such as
wood or marble) take much more time to compute than plastic. On the
other hand, it is much cheaper to use custom surface or displacement
shaders to make surfaces look complex than it is to actually use
complex geometry.
Distribution of rays results in noise. The fewer samples per pixel,
the higher the noise. So if you want to keep sampling rates low and
reduce noise in the image, you should: avoid using the ``blur''
parameter in the ``shiny'' and ``glass'' surfaces unless you really
need it; do not use depth of field if you can get away with a
post-processing blur; use nonphysical lights (``pointlight'',
``distantlight'', etc.) instead of physical and area lights.
- Tuning Ray Tracing Parameters
Several time/quality knobs exist in the ray tracing engine; see the
earlier section on nonstandard options and attributes for details. In
addition to ensuring that opaque and non-shadow-casting objects are
tagged as such, also be sure that your max ray recursion level
(Option "render" "max_raylevel") is set as low as possible (the
default is 4, but you may be able to get away with as little as 1 or 2
if you don't have much glass or mutual reflection).
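For a scene with little glass or mutual reflection, that might simply
be:

Option "render" "max_raylevel" [2]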
This section details how BMRT differs from the RenderMan Interface 3.2
Specification, as well as any issues related to other renderers.
The rendrib renderer uses APIs that are very similar to the
RenderMan Interface 3.2 standard. In fact, you may find that your
scenes written to comply with the RenderMan Interface 3.2 standard can
be rendered with BMRT without modification. The book Advanced
RenderMan: Creating CGI for Motion Pictures by Apodaca and Gritz,
(Morgan-Kaufmann, 1999) should apply almost totally to BMRT. However,
compared to the published RenderMan standard, BMRT has several
differences, unimplemented features, and limitations:
- True displacement of surface points is only partially supported.
If you displace in a surface shader, or even in a displacement shader
without using the ``truedisplacement'' attribute, only the surface
normals will be perturbed; the points will not move. This usually
looks fine as long as the bumps are small. However, if you use the
``truedisplacement'' attribute, a displacement shader will actually do
what you expect and move the surface points.
True displacement is somewhat limited: (1) it only works for
displacement shaders, not surface shaders; (2) it uses lots
of memory, and also takes more time to render; (3) you cannot use
``message passing'' between the displacement and surface shaders; (4)
you must remember to set displacement bounds; (5) you may get odd
self-shadowing of surfaces during radiosity calculations if you use
too small a shadow bias.
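In RIB, a true-displacement object might be declared as in the sketch
below. The shader name and bound value are illustrative, and the
displacementbound attribute follows the RI 3.2 convention; check the
nonstandard attributes section for BMRT's exact spelling:

AttributeBegin
Attribute "render" "truedisplacement" [1]
# Limitation (4) above: declare how far points may move:
Attribute "displacementbound" "sphere" [0.1] "coordinatesystem" ["shader"]
Displacement "emboss" "Km" [0.1]
Sphere 1 -1 1 360
AttributeEnd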
- The following optional capabilities are not supported: Special
Camera Projections, Spectral Colors.
- Motion Blur, Depth of Field, and Level of Detail are not
supported in BMRT 2.6. Also, the Blobby and Curves primitives are
not currently supported (but they do work fine with rgl).
We hope to have all of these features working
in future releases.
Many people use both BMRT and Pixar's PhotoRealistic RenderMan
(a registered trademark of Pixar, sometimes called PRMan).
While rendrib uses
ray tracing and radiosity, PRMan uses a scanline method
called REYES. Though both renderers should take nearly the same
input, the difference in their underlying methods necessarily results
in different subsets of the RenderMan standard supported by the two
programs. This section lists some of the incompatibilities of the two
programs. These differences should not be construed as bugs in either
program, but are mostly natural limitations of the two rendering
methods. This list is for the user who uses both programs, or wishes
to use one program to render output meant for the other.
- slc outputs compiled Shading Language as ``.slc'' files (either
interpreted ASCII or DSO's), which are not compatible with Pixar's
``.slo'' files. The Shading Language source files (``.sl'') are
almost completely compatible.
- The texture mapping and environment mapping routines in
rendrib take TIFF files directly (either scanline or tiled),
and do not read PRMan's proprietary texture format.
- PRMan doesn't support true area light sources (but instead
places a point light at the current origin), but rendrib
supports area light sources correctly.
- rendrib's support of true displacement is somewhat more
limited than PRMan's, as detailed in the previous subsection.
- PRMan's trace() function always returns 0, and does
not support the nonstandard visibility, fulltrace,
raylevel, and isshadowray functions which rendrib
implements.
- PRMan does not support Imager, Interior, or Exterior
shaders. rendrib fully supports these kinds of shaders.
There are a bunch of other things you should know about rendrib, but
we couldn't figure out where else they should go in the manual. In no
particular order:
- Before rendering any RIB specified on the command line
or piped to it, rendrib will first read the contents of the
file $BMRTHOME/.rendribrc. If there is no environment
variable named $BMRTHOME, then the file $HOME/.rendribrc
is read instead. In either case, by putting RIB in one of these
places, you can set various options for rendrib before any other
RIB is read.
- When using the new Shading Language rendererinfo() function
to query the "renderer", the value returned is "BMRT".
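Since shader source is often shared between renderers, this can be
used to guard renderer-specific code. A sketch, assuming the usual
two-argument rendererinfo() form that returns nonzero on success:

surface guarded ()
{
    string rend = "";
    if (rendererinfo ("renderer", rend) != 0 && rend == "BMRT") {
        /* BMRT-specific code, e.g. calls to trace() */
    }
    Oi = Os;
    Ci = Oi * Cs;
}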