

APPENDIX E

Equations




THIS appendix contains the Java 3D equations for fog, lighting, sound, and texture mapping. Many of the equations use the following symbols:

· = Multiplication
∘ = Function operator for sound equations; dot product for all other equations

E.1 Fog Equations

The ideal fog equation is as follows:

    C′ = f · C + (1 − f) · Cf

The fog coefficient, f, is computed differently for linear and exponential fog. The equation for linear fog is as follows:

    f = (B − z) / (B − F)

The equation for exponential fog is as follows:

    f = e^(−d · z)

The parameters used in the fog equations are as follows:

C = Color of the pixel being fogged
Cf = Fog color
d = Fog density
F = Front fog distance, measured in eye coordinates
B = Back fog distance, measured in eye coordinates
z = The z-coordinate distance from the eyepoint to the pixel being fogged, measured in eye coordinates
f = Fog coefficient
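As an illustration of the fog equations above, the following Java sketch computes the fog coefficient and applies the blend. It is illustrative only, not part of the Java 3D API; the class and method names are hypothetical, and only javax.vecmath.Color3f is a real class.

import javax.vecmath.Color3f;

/** Illustrative sketch of the fog equations; not Java 3D library code. */
public class FogMath {
    /** Linear fog coefficient: f = (B - z) / (B - F), clamped to [0, 1]. */
    static float linearFog(float z, float front, float back) {
        float f = (back - z) / (back - front);
        return Math.min(1.0f, Math.max(0.0f, f));
    }

    /** Exponential fog coefficient: f = e^(-d * z). */
    static float exponentialFog(float z, float density) {
        return (float) Math.exp(-density * z);
    }

    /** Blend the pixel color with the fog color: C' = f * C + (1 - f) * Cf. */
    static Color3f applyFog(Color3f c, Color3f fogColor, float f) {
        return new Color3f(
            f * c.x + (1 - f) * fogColor.x,
            f * c.y + (1 - f) * fogColor.y,
            f * c.z + (1 - f) * fogColor.z);
    }
}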

Fallbacks and Approximations

E.2 Lighting Equations

The ideal lighting equation is as follows:

    Color = Me + Ma · Σ (Lci, over ambient lights) + Σi [ atteni · spoti · Lci · (Md · diffi + Ms · speci) ]

where

    diffi = Li · N
    speci = (Si · N)^shin
    atteni = 1 / (Kci + Kli · di + Kqi · di²)
    spoti = (max(−Li · Di, 0))^expi


Note: If (Li · N) ≤ 0, then diffi and speci are set to 0.


Note: For directional lights, atteni is set to 1.


Note: If the vertex is outside the spot light cone, as defined by the cutoff angle, spoti is set to 0. For directional and point lights, spoti is set to 1.
This is a subset of OpenGL in that the Java 3D ambient and directional lights are not attenuated and only ambient lights contribute to ambient lighting.

The parameters used in the lighting equation are as follows:

E = Eye vector
Ma = Material ambient color
Md = Material diffuse color
Me = Material emissive color
Ms = Material specular color
N = Vertex normal
shin = Material shininess

The per-light values are as follows:

di = Distance from vertex to light
Di = Spot light direction
expi = Spot light exponent
Kci = Constant attenuation
Kli = Linear attenuation
Kqi = Quadratic attenuation
Li = Direction from vertex to light
Lci = Light color
Si = Specular half-vector = (Li + E) / ||Li + E||
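As a sketch of the per-light terms above, the following Java method computes one light's contribution to the summation. It is illustrative only; the class and method names are hypothetical, and only the javax.vecmath types are real.

import javax.vecmath.Color3f;
import javax.vecmath.Vector3f;

/** Illustrative sketch of one light's term in the lighting equation. */
public class LightTerm {
    static Color3f contribution(Vector3f n, Vector3f li, Vector3f si,
                                Color3f lc, Color3f md, Color3f ms,
                                float shin, float atten, float spot) {
        float diff = Math.max(0.0f, n.dot(li));   // diff_i = Li . N
        float spec = 0.0f;
        if (diff > 0.0f) {                        // if (Li . N) <= 0, diff_i and spec_i are 0
            spec = (float) Math.pow(Math.max(0.0f, n.dot(si)), shin); // spec_i = (Si . N)^shin
        }
        // atten_i * spot_i * Lci * (Md * diff_i + Ms * spec_i), per channel
        return new Color3f(
            atten * spot * lc.x * (md.x * diff + ms.x * spec),
            atten * spot * lc.y * (md.y * diff + ms.y * spec),
            atten * spot * lc.z * (md.z * diff + ms.z * spec));
    }
}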

Fallbacks and Approximations

E.3 Sound Equations

There are different sets of sound equations, depending on whether the application uses headphones or speakers.

E.3.1 Headphone Playback Equations

For each sound source, Java 3D calculates a separate left and right output signal. Each left and right sound image includes differences in interaural intensity and an interaural delay. The calculation produces a set of direct and indirect (delayed) sound signals that are mixed together before being sent to the audio playback system's left and right transducers.

E.3.1.1 Interaural Time Difference (Delay)

For each PointSound and ConeSound source, the left and right output signals are delayed based on the location of the sound and the orientation of the listener's head. The time difference between these two signals is called the interaural time difference (ITD). The time delay of a particular sound reaching an ear is affected by the arc the sound must travel around the listener's head. Java 3D approximates the ITD using a spherical head model, computing the interaural path difference from the angles α and β, the interaural distance De, and the arc paths P and P' defined below.

The time from the sound source to the closest ear is Ec/S, and the time from the sound source to the farthest ear is Ef/S, where S is the current AuralAttribute region's speed of sound.

If the sound is closest to the left ear, the right output signal is delayed relative to the left by the ITD; if the sound is closest to the right ear, the left output signal is delayed relative to the right by the ITD.

The parameters used in the ITD equations are as follows:

α = The smaller of the angles between Vh (or −Vh) and Va, in radians
β = Angle between Vh and the radius to the tangent point on Vt, in radians
De = Distance between ears (interaural distance)
Dh = Distance from interaural center to sound source
Ec = Distance from sound source to ear closest to sound
Ef = Distance from sound source to ear farthest from sound
P, P' = Arc path around the head an indirect signal must travel to reach an ear
S = Speed of sound for the current AuralAttribute region
Va = Vector from center ear forward parallel to Z axis of head coordinates
Vc = Vector from sound source to ear closest to sound
Vh = Vector from center ear to sound source
Vt = Vector from sound source to tangent point on the listener's head
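A minimal sketch of how these terms combine, assuming Ec, Ef, and S have already been computed from the geometry above (the class and method names are hypothetical):

/** Illustrative sketch of the spherical-head ITD approximation. */
public class HeadphoneItd {
    /** Per-ear propagation times, in seconds. */
    static float[] earDelays(float ec, float ef, float s) {
        float near = ec / s;   // time to the ear closest to the sound
        float far  = ef / s;   // time to the farthest ear (Ef includes the arc path)
        return new float[] { near, far };
    }

    /** Interaural time difference between the two ears, in seconds. */
    static float itd(float ec, float ef, float s) {
        return (ef - ec) / s;
    }
}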

E.3.1.2 Interaural Intensity (Gain) Difference

For each active and playing PointSound and ConeSound source, i, separate calculations for the left and right signals (based on which ear is closest to and which is farthest from the source) are combined with nonspatialized BackgroundSound to create a stereo sound image. Each equation below is calculated separately for the left and right ear.


Note: For BackgroundSound sources, ITDi is an identity function, so no delay is applied to the sample for these sources.


Note: For BackgroundSound sources Gdi = Gai = 1.0. For PointSound sources Gai = 1.0.


Note: For BackgroundSound sources Fdi and Fai are identity functions. For PointSound sources Fai is an identity function.
If the sound source is on the right side of the head, Ec is used for the right-ear G and F calculations and Ef for the left; conversely, if the sound source is on the left side of the head, Ec is used for the left-ear calculations and Ef for the right.

Attenuation
For sound sources with a single distanceGain array defined, the intersection points of Vh (the vector from the sound source position through the listener's position) and the spheres (defined by the distanceGain array) are used to find the index k such that dk ≤ L ≤ dk+1. See Figure E-3.

For ConeSound sources with two distanceGain arrays defined, the intersection points of Vh and the ellipses (defined by both the front and back distanceGain arrays) closest to the listener's position are used to determine the index k. See Figure E-4.

The equation for the distance gain is a linear interpolation between the bracketing array elements:

    Gd = Gk + (Gk+1 − Gk) · (L − dk) / (dk+1 − dk)

where Gk and Gk+1 are the gain scale factors stored with distances dk and dk+1.

Angular attenuation for both the spherical and elliptical cone sounds is identical. The angular distances in the attenuation array closest to θ, the angle between the sound source's direction vector and the vector from the source to the listener, are found and define the index k into the angular attenuation array elements. The equation for the angular gain is the corresponding linear interpolation:

    Ga = Gk + (Gk+1 − Gk) · (θ − θk) / (θk+1 − θk)
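A sketch of the index search and linear interpolation described above, assuming the distanceGain array has been unpacked into parallel distances[] and gains[] arrays (both names are assumptions for illustration). The same pattern applies to the angular gains and, as described under Filtering below, to the cutoff frequencies.

/** Illustrative sketch of the distanceGain lookup and interpolation. */
public class DistanceGain {
    static float gainAt(float[] distances, float[] gains, float listenerDist) {
        // Clamp outside the defined range.
        if (listenerDist <= distances[0]) return gains[0];
        int last = distances.length - 1;
        if (listenerDist >= distances[last]) return gains[last];

        // Find index k such that d_k <= L <= d_{k+1}.
        int k = 0;
        while (listenerDist > distances[k + 1]) k++;

        // Linear interpolation between the bracketing gain scale factors.
        float t = (listenerDist - distances[k]) / (distances[k + 1] - distances[k]);
        return gains[k] + t * (gains[k + 1] - gains[k]);
    }
}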

Filtering
Similarly, the AuralAttributes distance filter and the ConeSound angular attenuation frequency cutoff filter are calculated by linearly interpolating the cutoff frequencies stored in the same pairs of array elements.

An N-pole lowpass filter may be used to perform the simple angular and distance filtering defined in this version of Java 3D. These simple lowpass filters are meant only as an approximation for full FIR filters (to be added in some future version of Java 3D).

Fallbacks and Approximations
The parameters used in the interaural intensity difference (IID) equations are as follows:

A, B = Triples containing DistanceGain linear distance, gain scale factor, and AuralAttribute cutoff frequency
C, D = Triples containing AngularAttenuation angular distance, gain scale factor, and cutoff frequency
γ = Angle between Vh and Va, in radians
Ec = Distance from sound source to ear closest to sound from the ITD equation
Ef = Distance from sound source to ear farthest from sound source from the ITD equation
Fa = Angular filter from ConeSound definition
Fd = Distance filter from AuralAttributes
Ga = Angular gain attenuation scale factor
Gd = Distance gain attenuation scale factor
Gi = Initial gain scale factor
Gr = Current AuralAttribute region's gain scale factor
I = Stereo sound image
L = Listener distance from sound source
maxNumS = Maximum number of sound sources for the audio device that the application is using for playback
numS = Number of sound sources
sample = Sound digital sample with a specific sample rate, bit precision, and an optional encoding and/or compression format
Vh = Vector from center ear to sound source

E.3.1.3 Doppler Effect Equations

Between two snapshots of the head and the sound source positions some delta time apart, the distance between the head and source is compared. If there has been no change in the distance between the head and the sound source over this delta time, the Doppler effect equation is as follows:

    f′ = Af · f

If there has been a change in the distance between the head and the sound, the Doppler effect equation is as follows:

    f′ = Af · f · v

When the head and sound are moving towards each other (the velocity ratio is greater than 1.0), the velocity ratio equation is as follows:

When the head and sound are moving away from each other (the velocity ratio is less than 1.0), the velocity ratio equation is as follows:

The parameters used in the Doppler effect equations are as follows:

Af = AuralAttribute frequency scale factor
Ar = AuralAttribute rolloff scale factor
Av = AuralAttribute velocity scale factor
Δv = Delta velocity
f = Frequency of sound
f′ = Doppler-adjusted frequency of sound
h = Listener's head position
v = Ratio of delta velocities
Vh = Vector from center ear to sound source
s = Sound source position
S = Speed of sound
t = Time


Note: If the adjusted velocity of the head or the adjusted velocity of the sound is greater than the adjusted speed of sound, the velocity ratio, v, is undefined.
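For example, assuming Af = 1.0, a 440 Hz source heard with a velocity ratio v = 1.1 (head and source approaching) would be adjusted to 440 · 1.1 = 484 Hz, while v = 0.9 (head and source receding) would yield 396 Hz.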

E.3.1.4 Reverberation Equations

The overall reverberant sound, used to give the impression of the aural space in which the active/enabled sound sources are playing, is added to the stereo sound image output by equation E.14.

Reverberation for each sound is approximated by a feedback delay loop, in which each pass through the loop delays the sample by the reverberation delay time, Tr, and attenuates it by the reverberation coefficient, Gr:

    R(t) = Σ (j = 1 to fLoop) Gr^j · sample(t − j · Tr)

Note that the reverberation calculation outputs the same image to both left and right output signals (thus there is a single monaural calculation for each reverberated sound). Correct first-order (early) reflections, based on the location of the sound source, the listener, and the active AuralAttribute's bounds, are not required for this version of Java 3D. Approximations based on the reverberation delay time, either supplied by the application or calculated as the average delay time within the selected AuralAttribute's application region, will be used.

The feedback loop is repeated until the AuralAttribute's reverberation feedback loop count is reached or until Gr^j ≤ 0.000976 (effectively zero amplitude, a −60 dB drop, using the measure of a −6 dB drop for every doubling of distance).
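A sketch of this loop over a buffer of samples, assuming Tr has already been converted to a whole number of samples (all names here are hypothetical). It sums successively delayed and attenuated copies of the input, which is the feedback loop above unrolled.

/** Illustrative sketch of the feedback-delay reverberation. */
public class SimpleReverb {
    static float[] reverberate(float[] sample, int delaySamples, float gr, int fLoop) {
        float[] out = sample.clone();
        float gain = gr;                                        // Gr^j, starting at j = 1
        for (int j = 1; j <= fLoop && gain > 0.000976f; j++) {  // stop at -60 dB
            int offset = j * delaySamples;
            for (int t = offset; t < out.length; t++) {
                out[t] += gain * sample[t - offset];            // delayed, attenuated copy
            }
            gain *= gr;
        }
        return out;
    }
}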

Fallbacks and Approximations
The parameters used in the reverberation equations are as follows:

D = Delay function
fLoop = Reverberation feedback loop count
Gr = Reverberation coefficient acting as a gain scale-factor
I = Stereo image of unreflected sound sources
R = Reverberation for each sound source
Sample = Sound digital sample with a specific sample rate, bit precision, and an optional encoding and/or compression format
t = Time
Tr = Reverberation delay time (approximating first-order delay in the AuralAttribute region)

E.3.2 Speaker Playback Equations

Different speaker playback equations are used depending on whether the system uses monaural or stereo speakers.

E.3.2.1 Monaural Speaker Output

The equations for headphone playback need only be modified to output a single signal, rather than two signals for left and right transducers. Although there is only one speaker, distance and filter attenuation, Doppler effect, elevation, and front and back cues can be distinguished by the listener and should be included in the sound image generated.

E.3.2.2 Stereo Speaker Output

In a two-speaker playback system, the signal from one speaker is actually heard by both ears and this affects the spectral balance and interaural intensity and time differences heard by each of the listener's ears. Cross-talk cancellation must be performed on the right and left signal to compensate for the delayed attenuated signal heard by the ear opposite the speaker. Thus a delayed attenuated signal for each of the stereo signals must be added to the output from the equations for headphone playback.

The equations for stereo speaker playback assume that the two speakers are placed symmetrically about the listener (at the same off-axis angle from the viewing axis at an equal distance from the center of the listener's head).

The parameters used in the cross-talk equations, expanding on the terms used for the equations for headphone playback, are as follows:

φ = Angle between vectors from the speaker to the near and far ears
D = Delay function of signal variant over time
G = Gain attenuation scale factors function taking initial distance and angular gain scale factors into account
I = Sound image for left and right stereo signals calculated as for headphone output
P = Distance difference between the near and far ears, as defined for the ITD equations, with the speaker substituted for the sound source
t = Time
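A first-order sketch of that compensation, assuming the path difference P has been converted to a delay in samples and a single broadband attenuation factor g stands in for the full G and D functions (both simplifications for illustration):

/** Illustrative first-order cross-talk compensation for stereo speakers. */
public class CrossTalk {
    static void compensate(float[] left, float[] right, int delaySamples, float g) {
        float[] l0 = left.clone();
        float[] r0 = right.clone();
        for (int t = delaySamples; t < left.length; t++) {
            // Subtract a delayed, attenuated copy of the opposite channel to
            // cancel the signal that reaches the far ear from each speaker.
            left[t]  -= g * r0[t - delaySamples];
            right[t] -= g * l0[t - delaySamples];
        }
    }
}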

E.4 Texture Mapping Equations

Texture mapping can be divided into two steps. The first step takes the transformed s and t (and possibly r) texture coordinates, the current texture image, and the texture filter parameters, and computes a texture color based on looking up the texture coordinates in the texture map. The second step applies the computed texture color to the incoming pixel color using the specified texture mode function.

E.4.1 Texture Lookup

The texture lookup stage maps a texture image onto a geometric polygonal primitive. The most common method for doing this is to reverse map the s and t coordinates from the primitive back onto the texture image, then filter and resample the image. In the simplest case, a point in s, t space is transformed into a u, v address in the texture image space (E.29):

    u = s · width        v = t · height
    i = ⌊u⌋        j = ⌊v⌋

This address is then used to look up the nearest texel value in the image, T(i, j). This method, used when the selected texture filter function is BASE_LEVEL_POINT, is called nearest-neighbor sampling or point sampling.

If the texture boundary mode is REPEAT, then only the fractional bits of s and t are used, ensuring that both s and t are less than 1.

If the texture boundary mode is CLAMP, then the s and t values are clamped to be in the range [0, 1] before being mapped into u and v values. Further, if s = 1, then i is set to width − 1; if t = 1, then j is set to height − 1.

The parameters in the point-sampled texture lookup equations are as follows:

width = Width, in pixels, of the texture image
height = Height, in pixels, of the texture image
s = Interpolated s coordinate at the pixel being textured
t = Interpolated t coordinate at the pixel being textured
u = u coordinate in texture image space
v = v coordinate in texture image space
i = Integer row address into texture image
j = Integer column address into texture image
T = Texture image
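The following Java sketch implements point sampling with the REPEAT and CLAMP boundary modes described above. It is illustrative only; the texture is assumed to be unpacked into an int[height][width] array of packed RGBA texels.

/** Illustrative sketch of point-sampled (nearest-neighbor) texture lookup. */
public class PointSample {
    static int lookupRepeat(int[][] texture, float s, float t, int width, int height) {
        s = s - (float) Math.floor(s);           // keep only the fractional bits
        t = t - (float) Math.floor(t);
        int i = (int) (s * width);               // u = s * width, i = floor(u)
        int j = (int) (t * height);              // v = t * height, j = floor(v)
        return texture[j][i];
    }

    static int lookupClamp(int[][] texture, float s, float t, int width, int height) {
        s = Math.min(1.0f, Math.max(0.0f, s));   // clamp s and t to [0, 1]
        t = Math.min(1.0f, Math.max(0.0f, t));
        int i = Math.min((int) (s * width), width - 1);   // s = 1 maps to width - 1
        int j = Math.min((int) (t * height), height - 1); // t = 1 maps to height - 1
        return texture[j][i];
    }
}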

The above equations are used when the selected texture filter function (either the minification or the magnification filter function) is BASE_LEVEL_POINT. Java 3D selects the appropriate texture filter function based on whether the texture image is minified or magnified when it is applied to the polygon. If the texture is applied to the polygon such that more than one texel maps onto a single pixel, the texture is said to be minified and the minification filter function is selected. If the texture is applied such that a single texel maps onto more than one pixel, the texture is said to be magnified and the magnification filter function is selected. The selected function is one of the following: BASE_LEVEL_POINT, BASE_LEVEL_LINEAR, MULTI_LEVEL_POINT, or MULTI_LEVEL_LINEAR. In the case of magnification, the filter will always be one of the two base level functions (BASE_LEVEL_POINT or BASE_LEVEL_LINEAR).

If the selected filter function is BASE_LEVEL_LINEAR, then a weighted average of the four texels that are closest to the sample point in the base level texture image is computed.
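One standard formulation of that weighted average, as used by OpenGL-style renderers (assumed here for illustration), takes the four texels bracketing the sample point and weights them by the fractional offsets:

    i0 = ⌊u − 1/2⌋    i1 = i0 + 1    α = frac(u − 1/2)
    j0 = ⌊v − 1/2⌋    j1 = j0 + 1    β = frac(v − 1/2)

    T′ = (1 − α)(1 − β) · T(i0, j0) + α(1 − β) · T(i1, j0) + (1 − α)β · T(i0, j1) + αβ · T(i1, j1)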

If the selected filter function is MULTI_LEVEL_POINT or MULTI_LEVEL_LINEAR, the texture image needs to be sampled at multiple levels of detail. If multiple levels of detail are needed and the texture object only defines the base level texture image, Java 3D will compute multiple levels of detail as needed.

Mipmapping is the most common filtering technique for handling multiple levels of detail. If the implementation uses mipmapping, the equations for computing a texture color based on texture coordinates are simply those used by the underlying rendering API (such as OpenGL or PEX). Other filtering techniques are possible as well.

Fallbacks and Approximations

E.4.2 Texture Application

Once a texture color has been computed, this color is applied to the incoming pixel color. If lighting is enabled, only the emissive, ambient, and diffuse components of the incoming pixel color are modified. The specular component is added into the modified pixel color after texture application.

The equations for applying that color to the original pixel color are based on the texture mode, as follows:

REPLACE Texture Mode

    C′rgb = Ctrgb        C′α = Ctα

MODULATE Texture Mode

    C′rgb = Crgb · Ctrgb        C′α = Cα · Ctα

DECAL Texture Mode

    C′rgb = Crgb · (1 − Ctα) + Ctrgb · Ctα        C′α = Cα

Note that the texture format must be either RGB or RGBA.

BLEND Texture Mode

    C′rgb = Crgb · (1 − Ctrgb) + Cbrgb · Ctrgb        C′α = Cα · Ctα

Note that if the texture format is INTENSITY, alpha is computed identically to red, green, and blue:

    C′α = Cα · (1 − Ctα) + Cbα · Ctα

The parameters used in the texture mapping equations are as follows:

C = Color of the pixel being texture mapped (if lighting is enabled, then this does not include the specular component)
Ct = Texture color
Cb = Blend color

Note that Crgb indicates the red, green, and blue channels of color C and that Cα indicates the alpha channel of color C. This convention applies to the other color variables as well.

If there is no alpha channel in the texture, a value of 1 is used for Ctα in BLEND and DECAL modes.

When the texture mode is one of REPLACE, MODULATE, or BLEND, only certain of the red, green, blue, and alpha channels of the pixel color are modified, depending on the texture format, as described below.
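The texture application equations can be sketched per channel as follows, for an RGBA texture with components in [0, 1]. This is an illustrative reconstruction, not Java 3D library code; javax.vecmath.Color4f is a real class whose x, y, z, and w fields hold red, green, blue, and alpha.

import javax.vecmath.Color4f;

/** Illustrative sketch of the texture application equations. */
public class TextureApply {
    static Color4f modulate(Color4f c, Color4f ct) {
        return new Color4f(c.x * ct.x, c.y * ct.y, c.z * ct.z, c.w * ct.w);
    }

    static Color4f decal(Color4f c, Color4f ct) {
        float a = ct.w;                       // texture alpha blends rgb only
        return new Color4f(
            c.x * (1 - a) + ct.x * a,
            c.y * (1 - a) + ct.y * a,
            c.z * (1 - a) + ct.z * a,
            c.w);                             // pixel alpha is unchanged
    }

    static Color4f blend(Color4f c, Color4f ct, Color4f cb) {
        return new Color4f(
            c.x * (1 - ct.x) + cb.x * ct.x,   // blend color weighted by texture
            c.y * (1 - ct.y) + cb.y * ct.y,
            c.z * (1 - ct.z) + cb.z * ct.z,
            c.w * ct.w);                      // alpha is modulated
    }
}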

Fallbacks and Approximations
An implementation may apply the texture to all components of the lit color, rather than separating out the specular component. Alternatively, an implementation may separate out the emissive and ambient components in addition to the specular component, potentially applying the texture to the diffuse component only.





Copyright © 1999, Sun Microsystems, Inc. All rights reserved.