
Java 3D API Specification


A P P E N D I X C

View Model Details




An application programmer writing a 3D graphics program that will deploy on a variety of platforms must anticipate the likely end-user environments and must carefully construct the view transformations to match those characteristics using a low-level API. This appendix addresses many of the issues an application must face and describes the sophisticated features that Java 3D's advanced view model provides.

C.1 An Overview of the Java 3D View Model

Both camera-based and Java 3D-based view models allow a programmer to specify the shape of a view frustum and, under program control, to place, move, and re-orient that frustum within the virtual environment. However, how they do this varies enormously. Unlike the camera-based system, the Java 3D view model allows slaving the view frustum's position and orientation to that of a six-degrees-of-freedom tracking device. By slaving the frustum to the tracker, Java 3D can automatically modify the view frustum so that the generated images match the end-user's viewpoint exactly.

Java 3D must handle two rather different head-tracking situations. In one case, we rigidly attach a tracker's base, and thus its coordinate frame, to the display environment. This corresponds to placing a tracker base in a fixed position and orientation relative to a projection screen within a room, relative to a computer display on a desk, or relative to the walls of a multiple-wall projection display. In the second head-tracking situation, we rigidly attach a tracker's sensor, not its base, to the display device. This corresponds to rigidly attaching one of that tracker's sensors to a head-mounted display and placing the tracker base somewhere within the physical environment.

C.2 Physical Environments and Their Effects

Imagine an application where the end user sits on a magic carpet. The application flies the user through the virtual environment by controlling the carpet's location and orientation within the virtual world. At first glance, it might seem that the application also controls what the end user will see, and it does, but only superficially.

The following two examples show how end-user environments can significantly affect how an application must construct viewing transformations.

C.2.1 A Head-mounted Example

Imagine that the end user sees the magic carpet and the virtual world with a head-mounted display and head tracker. As the application flies the carpet through the virtual world, the user may turn to look to the left, right, or even toward the rear of the carpet. Because the head tracker keeps the renderer informed of the user's gaze direction, it might not need to draw the scene directly in front of the magic carpet. The view that the renderer draws on the head-mount's display must match what the end user would see had the experience occurred in the real world.

C.2.2 A Room-mounted Example

Imagine a slightly different scenario, where the end user sits in a darkened room in front of a large projection screen. The application still controls the carpet's flight path; however, the position and orientation of the user's head barely influences the image drawn on the projection screen. If a user looks left or right, then he or she only sees the darkened room. The screen does not move. It's as if the screen represents the magic carpet's "front window" and the darkened room represents the "dark interior" of the carpet.

By adding a left and right screen, we give the magic carpet rider a more complete view of the virtual world surrounding the carpet. Now our end user sees the view to the left or right of the magic carpet by turning left or right.

C.2.3 Impact of Head Position and Orientation on the Camera

In the head-mounted example, the user's head position and orientation significantly affects a camera model's camera position and orientation but has hardly any effect on the projection matrix. In the room-mounted example, the user's head position and orientation contributes little to a camera model's camera position and orientation; however, it does affect the projection matrix.

From a camera-based perspective, the application developer must construct the camera's position and orientation by combining the virtual-world component (the position and orientation of the magic carpet) and the physical-world component (the user's instantaneous head position and orientation).

Java 3D's view model incorporates the appropriate abstractions to compensate automatically for such variability in end-user hardware environments.

C.3 The Coordinate Systems

The basic view model consists of eight or nine coordinate systems, depending on whether the end-user environment consists of a room-mounted display or a head-mounted display. First we define the coordinate systems used in a room-mounted display environment. Next we define the added coordinate system introduced when using a head-mounted display system.

C.3.1 Room-mounted Coordinate Systems

The room-mounted coordinate system is divided into the virtual coordinate system and the physical coordinate system. Figure C-1 shows these coordinate systems graphically. The coordinate systems within the grayed area exist in the virtual world; those outside exist in the physical world. Note that the coexistence coordinate system exists in both worlds.

C.3.1.1 The Virtual Coordinate Systems

The Virtual World Coordinate System
The virtual world coordinate system encapsulates the unified coordinate system for all scene graph objects in the virtual environment. For a given View, the virtual world coordinate system is defined by the Locale object that contains the ViewPlatform object attached to the View. It is a right-handed coordinate system with +x to the right, +y up, and +z toward the viewer.

The ViewPlatform Coordinate System
The ViewPlatform coordinate system is the local coordinate system of the ViewPlatform leaf node to which the View is attached.

The Coexistence Coordinate System
A primary implicit goal of any view model is to map a specified local portion of the physical world onto a specified portion of the virtual world. Once established, one can legitimately ask where the user's head or hand is located within the virtual world, or where a virtual object is located in the local physical world. In this way the physical user can interact with objects inhabiting the virtual world, and vice versa. To establish this mapping, Java 3D defines a special coordinate system, called coexistence coordinates, that is defined to exist in both the physical world and the virtual world.

The coexistence coordinate system exists half in the virtual world and half in the physical world. The two transforms that go from the coexistence coordinate system to the virtual world coordinate system and back again contain all the information needed to expand or shrink the virtual world relative to the physical world, as well as the information needed to position and orient the virtual world relative to the physical world.

Modifying the transform that maps the coexistence coordinate system into the virtual world coordinate system changes what the end user can see. The Java 3D application programmer moves the end user within the virtual world by modifying this transform.

C.3.1.2 The Physical Coordinate Systems

The Head Coordinate System
The head coordinate system allows an application to import its user's head geometry. The coordinate system provides a simple consistent coordinate frame for specifying such factors as the location of the eyes and ears.

The Image Plate Coordinate System
The image plate coordinate system corresponds with the physical coordinate system of the image generator. The image plate is defined as having its origin at the lower left-hand corner of the display area and as lying in the display area's XY plane. Note that image plate is a different coordinate system than either left image plate or right image plate. These last two coordinate systems are defined in head-mounted environments only (see Section C.3.2, "Head-mounted Coordinate Systems").

The Head Tracker Coordinate System
The head tracker coordinate system corresponds to the six-degrees-of-freedom tracker's sensor attached to the user's head. The head tracker's coordinate system describes the user's instantaneous head position.

The Tracker Base Coordinate System
The tracker base coordinate system corresponds to the emitter associated with absolute position/orientation trackers. For those trackers that generate relative position/orientation information, this coordinate system is that tracker's initial position and orientation. In general, this coordinate system is rigidly attached to the physical world.

C.3.2 Head-mounted Coordinate Systems

Head-mounted coordinate systems divide into the same two categories: virtual coordinate systems and physical coordinate systems. Figure C-2 shows these coordinate systems graphically. As with the room-mounted coordinate systems, the coordinate systems within the grayed area exist in the virtual world; those outside exist in the physical world. Once again, the coexistence coordinate system exists in both worlds. The arrangement of the coordinate systems differs from that for a room-mounted display environment. The head-mounted version of Java 3D's coordinate systems differs in another way as well: it includes two image plate coordinate systems, one for each of an end-user's eyes.

The Left Image Plate and Right Image Plate Coordinate Systems
The left image plate and right image plate coordinate systems correspond with the physical coordinate system of the image generator associated with the left and right eye, respectively. The image plate is defined as having its origin at the lower left-hand corner of the display area and lying in the display area's XY plane. Note that the left image plate's XY plane does not necessarily lie parallel to the right image plate's XY plane. Note that left image plate and right image plate are different coordinate systems than the room-mounted display environment's image plate coordinate system.

C.4 The ViewPlatform Object

The ViewPlatform object is a leaf object within the Java 3D scene graph. The ViewPlatform object is the only portion of Java 3D's viewing model that resides as a node within the scene graph. Changes to TransformGroup nodes in the scene graph hierarchy above a particular ViewPlatform object move the view's location and orientation within the virtual world (see Section 8.4, "ViewPlatform: A Place in the Virtual World"). The ViewPlatform object also contains a ViewAttachPolicy and an ActivationRadius (see Section 5.10, "ViewPlatform Node," for a complete description of the ViewPlatform API).
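The mechanism described above can be sketched as a minimal viewing branch (assuming the javax.media.j3d and javax.vecmath packages; the Canvas3D, PhysicalBody, and PhysicalEnvironment that a complete program needs are omitted):

```java
import javax.media.j3d.*;
import javax.vecmath.*;

public class ViewingBranch {
    public static void main(String[] args) {
        // A TransformGroup above the ViewPlatform controls the view's
        // location and orientation within the virtual world.
        BranchGroup viewBranch = new BranchGroup();
        TransformGroup tg = new TransformGroup();
        tg.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
        ViewPlatform vp = new ViewPlatform();
        tg.addChild(vp);
        viewBranch.addChild(tg);

        // The View object attaches to the ViewPlatform leaf node.
        View view = new View();
        view.attachViewPlatform(vp);

        // Moving the user: write a new transform into the group above
        // the ViewPlatform (here, 10 meters back along +Z).
        Transform3D t = new Transform3D();
        t.setTranslation(new Vector3d(0.0, 0.0, 10.0));
        tg.setTransform(t);
    }
}
```

In a running application, the same setTransform call would typically be issued from a Behavior each frame to fly the viewer through the scene.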

C.5 The View Object

The View object is the central Java 3D object for coordinating all aspects of a viewing situation. All parameters that determine the viewing transformation to be used in rendering on a collected set of canvases in Java 3D are either directly contained within the View object, or within objects pointed to by a View object (or pointed to by these, etc.). Java 3D supports multiple simultaneously active View objects, each of which controls its own set of canvases.

The Java 3D View object has several instance variables and methods, but most are calibration values or convenience functions.

Methods
public final void setTrackingEnable(boolean flag)
public final boolean getTrackingEnable()
These methods set and retrieve a flag specifying whether to enable the use of six-degrees-of-freedom tracking hardware.

public final void getUserHeadToVworld(Transform3D t)
This method retrieves the user-head-to-vworld coordinate system transform. This Transform3D object takes points in the user's head coordinate system and transforms them into points in the virtual world coordinate system. This value is read-only. Java 3D continually generates it, but only if enabled by using the setUserHeadToVworldEnable method.

public final void setUserHeadToVworldEnable(boolean flag)
public final boolean getUserHeadToVworldEnable()
These methods set and retrieve a flag that specifies whether or not to repeatedly generate the user-head-to-vworld transform (initially false).

public String toString()
This method returns a string that contains the values of this View object.

C.5.1 View Policy

The view policy informs Java 3D whether it should generate the view using the head-tracked system of transformations or the head-mounted system of transformations. These policies are attached to the Java 3D View object.

Methods
public final void setViewPolicy(int policy)
public final int getViewPolicy()
These two methods set and retrieve the current policy for view computation. The policy variable specifies how Java 3D uses its transforms in computing new viewpoints: SCREEN_VIEW selects the room-mounted system of transformations, and HMD_VIEW selects the head-mounted system of transformations.

C.5.2 Screen Scale Policy

The screen scale policy specifies where the screen scale comes from when the view attach policy is NOMINAL_SCREEN_SCALED (see Section 8.4.3, "View Attach Policy"). The scale can either be derived from the physical screen size or, when the policy is SCALE_EXPLICIT, taken from the value set by the setScreenScale method.

public final void setScreenScalePolicy(int policy)
public final int getScreenScalePolicy()
These methods set and retrieve the current screen scale policy.

public final void setScreenScale(double scale)
public final double getScreenScale()
These methods set and retrieve the screen scale value. This value is used when the view attach policy is NOMINAL_SCREEN_SCALED and the screen scale policy is SCALE_EXPLICIT.

C.5.3 Window Eyepoint Policy

The window eyepoint policy comes into effect in a non-head-tracked environment. The policy tells Java 3D how to construct a new view frustum based on changes in the field of view and in the Canvas3D's location on the screen. The policy only comes into effect when the application changes a parameter that can change the placement of the eyepoint relative to the view frustum.

Constants
public static final int RELATIVE_TO_FIELD_OF_VIEW
This variable tells Java 3D that it should modify the eyepoint position so it is located at the appropriate place relative to the window to match the specified field of view. This implies that the view frustum will change whenever the application changes the field of view. In this mode, the eye position is read-only. This is the default setting.

public static final int RELATIVE_TO_SCREEN
This variable tells Java 3D to interpret the eye's position relative to the entire screen. No matter where an end user moves a window (a Canvas3D), Java 3D continues to interpret the eye's position relative to the screen. This implies that the view frustum changes shape whenever an end user moves the location of a window on the screen. In this mode, the field of view is read-only.

public static final int RELATIVE_TO_WINDOW
This variable specifies that Java 3D should interpret the eye's position information relative to the window (Canvas3D). No matter where an end user moves a window (a Canvas3D), Java 3D continues to interpret the eye's position relative to that window. This implies that the frustum remains the same no matter where the end user moves the window on the screen. In this mode, the field of view is read-only.

Methods
public final int getWindowEyepointPolicy()
public final void setWindowEyepointPolicy(int policy)
These methods set and retrieve the window eyepoint policy, which specifies how Java 3D handles the predefined eyepoint in a non-head-tracked application. The policy can be one of three values: RELATIVE_TO_FIELD_OF_VIEW, RELATIVE_TO_SCREEN, or RELATIVE_TO_WINDOW. The default value is RELATIVE_TO_FIELD_OF_VIEW.

C.5.4 Monoscopic View Policy

This policy specifies how Java 3D generates a monoscopic view.

Constants
public final static int LEFT_EYE_VIEW
public final static int RIGHT_EYE_VIEW
public final static int CYCLOPEAN_EYE_VIEW
These constants specify the monoscopic view policy. The first constant specifies that the monoscopic view should be the view as seen from the left eye. The second constant specifies that the monoscopic view should be the view as seen from the right eye. The third constant specifies that the monoscopic view should be the view as seen from the "center eye," the fictional eye half-way between the left and right eyes. This is the default setting.

Methods
public final void setMonoscopicViewPolicy(int policy)
public final int getMonoscopicViewPolicy()
These methods set and retrieve the monoscopic view policy.

C.5.5 Sensors and Their Location in the Virtual World

public final void getSensorToVworld(Sensor sensor, Transform3D t)
public final void getSensorHotSpotInVworld(Sensor sensor, 
       Point3d  position)
public final void getSensorHotSpotInVworld(Sensor sensor, 
       Point3f  position)
The first method takes the sensor's last reading and generates a sensor-to-vworld coordinate system transform. This Transform3D object takes points in that sensor's local coordinate system and transforms them into virtual world coordinates. The next two methods retrieve the specified sensor's last hotspot location in virtual world coordinates.

C.6 The Screen3D Object

A Screen3D object represents one independent display device. The most common environment for a Java 3D application is a desktop computer with or without a head tracker. Figure C-3 shows a scene graph fragment for a display environment designed for such an end-user environment. Figure C-4 shows a display environment that matches the scene graph fragment in Figure C-3.

A multiple-projection wall display presents a more exotic environment. Such environments have multiple screens, typically three or more. Figure C-5 shows a scene graph fragment representing such a system and Figure C-6 shows the corresponding display environment.

A multiple-screen environment requires more care during the initialization and calibration phase. Java 3D must know how the Screen3D objects are placed with respect to one another, the tracking device, and the physical portion of the coexistence coordinate system.

C.6.1 Screen3D Calibration Parameters

The Screen3D object is the 3D version of AWT's screen object (see Section 8.8, "The Screen3D Object"). To use a Java 3D system, someone or some program must calibrate the Screen3D object with the coexistence volume. These methods allow that person or program to inform Java 3D of those calibration parameters.

Measured Parameters
These calibration parameters are set once, typically by a browser, calibration program, system administrator, or system calibrator, not by an applet.

public final void setPhysicalScreenWidth(double width)
public final void setPhysicalScreenHeight(double height)
These methods store the screen's (image plate's) physical width and height in meters. The system administrator or system calibrator must provide these values by measuring the display's active image width and height. In the case of a head-mounted display, this should be the display's apparent width and height at the focal plane.
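When a measured value is unavailable, a rough stand-in can be derived from the display's resolution and dot pitch. This is only an approximation of the measurement the text calls for, and the resolution and DPI figures below are assumed example values:

```java
// Sketch: deriving approximate physical screen dimensions from pixel
// resolution and dots-per-inch, for use with setPhysicalScreenWidth and
// setPhysicalScreenHeight. Measuring the active image area is preferred.
public class ScreenCalibration {
    static final double METERS_PER_INCH = 0.0254;

    // Convert a pixel extent at a given DPI into meters.
    static double pixelsToMeters(int pixels, double dpi) {
        return (pixels / dpi) * METERS_PER_INCH;
    }

    public static void main(String[] args) {
        double width  = pixelsToMeters(1280, 96.0);  // assumed 96-DPI monitor
        double height = pixelsToMeters(1024, 96.0);
        // screen3D.setPhysicalScreenWidth(width);   // Screen3D calls from the text
        // screen3D.setPhysicalScreenHeight(height);
        System.out.printf("%.4f x %.4f meters%n", width, height);
    }
}
```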

C.6.2 Accessing and Changing Head Tracker Coordinates

public void setTrackerBaseToImagePlate(Transform3D t)
public void getTrackerBaseToImagePlate(Transform3D t)
These methods set and get the tracker-base-to-image-plate coordinate system transform. If head tracking is enabled, this transform is a calibration constant. If head tracking is not enabled, this transform is not used. This is used only in SCREEN_VIEW mode. Users must recalibrate whenever the image plate moves relative to the tracker.

public void setHeadTrackerToLeftImagePlate(Transform3D t)
public void getHeadTrackerToLeftImagePlate(Transform3D t)
public void setHeadTrackerToRightImagePlate(Transform3D t)
public void getHeadTrackerToRightImagePlate(Transform3D t)
These methods set and get the head-tracker-to-left-image-plate and head-tracker-to-right-image-plate coordinate system transforms, respectively. If head tracking is enabled, these transforms are calibration constants. If head tracking is not enabled, these transforms are not used. They are used only in HMD_VIEW mode.

C.7 The Canvas3D Object

Java 3D provides special support for those applications that wish to manipulate an eye position even in a non-head-tracked display environment. One situation where such a facility proves useful is an application that wishes to generate a very high-resolution image composed of lower-resolution tiled images. The application must generate each tiled component of the final image from a common eye position with respect to the composite image but a different eye position from the perspective of each individual tiled element.

C.7.1 Scene Antialiasing

public final boolean getSceneAntialiasingAvailable()
This method returns a status flag indicating whether scene antialiasing is available.

C.7.2 Accessing and Modifying an Eye's Image Plate Position

A Canvas3D object provides sophisticated applications with access to the eye's position information in head-tracked, room-mounted runtime environments. It also allows applications to manipulate the position of an eye relative to an image plate in non-head-tracked runtime environments.

public final void setLeftManualEyeInImagePlate(Point3d position)
public final void setRightManualEyeInImagePlate(Point3d position)
public final void getLeftManualEyeInImagePlate(Point3d position)
public final void getRightManualEyeInImagePlate(Point3d position)
These methods set and retrieve the position of the manual left and right eyes in image plate coordinates. These values determine eye placement when a head tracker is not in use and the application is directly controlling the eye position in image plate coordinates. In head-tracked mode or when the windowEyepointPolicy is RELATIVE_TO_FIELD_OF_VIEW, this value is ignored. When the windowEyepointPolicy is RELATIVE_TO_WINDOW, only the Z value is used.
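The tiled-rendering idea from the start of Section C.7 can be sketched numerically: the one common eye position must be re-expressed in each tile's own image-plate coordinates before being handed to the manual-eye methods above (with a windowEyepointPolicy of RELATIVE_TO_SCREEN, so the full manual position is honored). The tile layout and sizes below are assumed example values:

```java
import java.util.Arrays;

// Sketch: per-tile manual eye positions for composing a high-resolution
// image from lower-resolution tiles. Each tile is w x h meters, the tiles
// form a tilesX x tilesY grid, and the common eye sits centered on the
// composite image at distance d in front of it.
public class TileEyePositions {
    // Eye position in the image-plate frame of tile (col, row), whose
    // plate origin is that tile's lower-left corner.
    static double[] eyeInTilePlate(int col, int row, int tilesX, int tilesY,
                                   double w, double h, double d) {
        double ex = (tilesX * w) / 2.0 - col * w;
        double ey = (tilesY * h) / 2.0 - row * h;
        return new double[] { ex, ey, d };
    }

    public static void main(String[] args) {
        // 2x2 composite of 0.3m x 0.2m tiles, eye 0.5m in front.
        for (int row = 0; row < 2; row++)
            for (int col = 0; col < 2; col++)
                System.out.println(Arrays.toString(
                        eyeInTilePlate(col, row, 2, 2, 0.3, 0.2, 0.5)));
    }
}
```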

public final void getLeftEyeInImagePlate(Point3d position)
public final void getRightEyeInImagePlate(Point3d position)
public final void getCenterEyeInImagePlate(Point3d position)
These methods retrieve the actual position of the left eye, right eye, and center eye in image plate coordinates and copy that value into the object provided. The center eye is the fictional eye half-way between the left and right eye. These three values are a function of the windowEyepointPolicy, the tracking enable flag, and the manual left, right, and center eye positions.

public final void getPixelLocationInImagePlate(int x, int y, 
       Point3d position)
This method computes the position of the specified AWT pixel value in image plate coordinates and copies that value into the object provided.
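A sketch of the computation this method performs, under the conventions stated earlier (image plate origin at the lower-left corner, AWT pixel origin at the upper-left). The resolution and physical dimensions are assumed example values; the real method uses the Screen3D's calibrated sizes:

```java
// Sketch: mapping an AWT pixel coordinate onto the image plate's XY plane.
public class PixelToPlate {
    static double[] pixelToPlate(int x, int y,
                                 int resX, int resY,
                                 double physW, double physH) {
        double mx = physW / resX;          // meters per pixel, horizontally
        double my = physH / resY;          // meters per pixel, vertically
        return new double[] {
            x * mx,                        // X grows rightward in both frames
            (resY - 1 - y) * my,           // flip Y: AWT counts top-down
            0.0                            // the point lies on the plate
        };
    }

    public static void main(String[] args) {
        // Bottom-left pixel of an assumed 1280x1024 display.
        double[] p = pixelToPlate(0, 1023, 1280, 1024, 0.3387, 0.2709);
        System.out.printf("(%.4f, %.4f, %.4f)%n", p[0], p[1], p[2]);
    }
}
```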

public final void getVworldToImagePlate(Transform3D t)
This method retrieves the current virtual-world-to-image-plate coordinate system transform and places it into the specified object.

public final void getImagePlateToVworld(Transform3D t)
This method retrieves the current image-plate-to-virtual-world coordinate system transform and places it into the specified object.

C.7.3 Canvas Width and Height

public final double getPhysicalWidth()
public final double getPhysicalHeight()
These methods retrieve the physical width and height of this canvas window, in meters.

C.8 The PhysicalBody Object

The PhysicalBody object contains information concerning the physical characteristics of the end-user's body. The head parameters allow end users to specify their own head's characteristics and thus to customize any Java 3D application so that it conforms to their unique geometry. The PhysicalBody object defines head parameters in the head coordinate system. It provides a simple and consistent coordinate frame for specifying such factors as the location of the eyes and thus the interpupillary distance.

The Head Coordinate System
The head coordinate system has its origin on the head's bilateral plane of symmetry, roughly half-way between the left and right eyes. The origin of the head coordinate system is known as the center eye. The positive X-axis extends to the right. The positive Y-axis extends up. The positive Z-axis extends into the skull. Values are in meters.

Constructors
public PhysicalBody()
Constructs a default user PhysicalBody object with the following default eye and ear positions:

Parameter Default Value
leftEyePosition (-0.033, 0.0, 0.0)
rightEyePosition (0.033, 0.0, 0.0)
leftEarPosition (-0.080, -0.030, 0.095)
rightEarPosition (0.080, -0.030, 0.095)
nominal eye height from ground 1.68
nominal eye offset from nominal screen 0.4572
head to head tracker transform identity

public PhysicalBody(Point3d leftEyePosition, 
       Point3d  rightEyePosition)
public PhysicalBody(Point3d leftEyePosition, 
       Point3d  rightEyePosition, Point3d leftEarPosition, 
       Point3d  rightEarPosition)
These methods construct a PhysicalBody object with the specified eye and ear positions.
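The default eye positions in the table above imply an interpupillary distance of 0.066 meters, with the center eye at the head-coordinate origin. A small plain-Java sketch of that geometry (the real values live in PhysicalBody's Point3d fields):

```java
// Sketch: head geometry implied by the PhysicalBody defaults.
public class HeadGeometry {
    static double interpupillaryDistance(double[] leftEye, double[] rightEye) {
        double dx = rightEye[0] - leftEye[0];
        double dy = rightEye[1] - leftEye[1];
        double dz = rightEye[2] - leftEye[2];
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    // The fictional "center eye" is the midpoint of the two eyes.
    static double[] centerEye(double[] leftEye, double[] rightEye) {
        return new double[] {
            (leftEye[0] + rightEye[0]) / 2.0,
            (leftEye[1] + rightEye[1]) / 2.0,
            (leftEye[2] + rightEye[2]) / 2.0 };
    }

    public static void main(String[] args) {
        double[] left  = { -0.033, 0.0, 0.0 };   // default leftEyePosition
        double[] right = {  0.033, 0.0, 0.0 };   // default rightEyePosition
        System.out.println(interpupillaryDistance(left, right)); // approx. 0.066 m
        System.out.println(java.util.Arrays.toString(centerEye(left, right)));
    }
}
```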

Methods
public void getLeftEyePosition(Point3d position)
public void setLeftEyePosition(Point3d position)
public void getRightEyePosition(Point3d position)
public void setRightEyePosition(Point3d position)
These methods set and retrieve the position of the center of rotation of a user's left and right eyes in head coordinates.

public void getLeftEarPosition(Point3d position)
public void setLeftEarPosition(Point3d position)
public void getRightEarPosition(Point3d position)
public void setRightEarPosition(Point3d position)
These methods set and retrieve the position of the user's left and right ear positions in head coordinates.

public double getNominalEyeHeightFromGround()
public void setNominalEyeHeightFromGround(double height)
These methods set and retrieve the user's nominal eye height as measured from the ground to the center eye in the default posture. In a standard computer monitor environment, the default posture would be seated. In a multiple-projection display room environment or a head-tracked environment, the default posture would be standing.

public double getNominalEyeOffsetFromNominalScreen()
public void setNominalEyeOffsetFromNominalScreen(double offset)
These methods set and retrieve the offset from the center eye to the center of the display screen. This offset distance allows an "over the shoulder" view of the scene as seen by the end user.

public void setHeadToHeadTracker(Transform3D t)
public void getHeadToHeadTracker(Transform3D t)
These methods set and retrieve the head-to-head-tracker coordinate system transform. If head tracking is enabled, this transform is a calibration constant. If head tracking is not enabled, this transform is not used. This transform is used in both SCREEN_VIEW and HMD_VIEW modes.

public String toString()
This method returns a string that contains the values of this PhysicalBody object.

C.9 The PhysicalEnvironment Object

The PhysicalEnvironment object contains information about the end-user's local physical environment. This includes information about audio output devices and tracking sensor hardware, if present.

Constructors
public PhysicalEnvironment()
Constructs and initializes a new PhysicalEnvironment object with default parameters:

Parameter Default Value
sensorCount 3
sensors null (for all array elements)
headIndex 0
rightHandIndex 1
leftHandIndex 2
dominantHandIndex 1
nondominantHandIndex 2
tracking available false
audio device null
input device list empty
coexistence to tracker base transform identity
coexistence center in pworld policy View.NOMINAL_SCREEN

public PhysicalEnvironment(int sensorCount)
Constructs and initializes a new PhysicalEnvironment object with the specified sensor count.

The sensor information provides real-time access to continuous-input devices such as joysticks and trackers. It also contains two-degrees-of-freedom joystick and six-degrees-of-freedom tracker information. See Section 10.2, "Sensors," for more information. Java 3D uses Java AWT's event model for noncontinuous input devices such as keyboards (see Chapter 10, "Input Devices and Picking").

Audio device information associated with the PhysicalEnvironment object allows the application a mechanism to choose a particular audio device (if more than one is available) and explicitly set the type of audio playback for sound rendered using this device. See Chapter 11, "Audio Devices," for more details on the fields and methods that set and initialize the device driver and output playback associated with the audio device.

Methods
The PhysicalEnvironment object specifies the following methods pertaining to audio output devices and input sensors.

public void setAudioDevice(AudioDevice device)
This method selects the specified AudioDevice object as the device through which audio rendering for this PhysicalEnvironment will be performed.

public AudioDevice getAudioDevice()
This method retrieves the currently selected AudioDevice object.

public final void addInputDevice(InputDevice device)
public final void removeInputDevice(InputDevice device)
These methods add and remove an input device to or from the list of input devices.

public final Enumeration getAllInputDevices()
This method creates an enumerator that produces all input devices.

public void setSensorCount(int count)
public int getSensorCount()
These methods set and retrieve the count of the number of sensors stored within the PhysicalEnvironment object. It defaults to a small number of sensors. It should be set to the number of sensors available in the end-user's environment before initializing the Java 3D API.

public void setCoexistenceToTrackerBase(Transform3D t)
public void getCoexistenceToTrackerBase(Transform3D t)
These methods set and get the coexistence-to-tracker-base coordinate system transform. If head tracking is enabled, this transform is a calibration constant. If head tracking is not enabled, this transform is not used. This is used in both SCREEN_VIEW and HMD_VIEW modes.

public boolean getTrackingAvailable()
This method returns a status flag indicating whether or not tracking is available.

public void setSensor(int index, Sensor sensor)
public Sensor getSensor(int index)
The first method sets the sensor specified by the index to the sensor provided. The second method retrieves the specified sensor.

public void setDominantHandIndex(int index)
public int getDominantHandIndex()
These methods set and retrieve the index of the dominant hand.

public void setNonDominantHandIndex(int index)
public int getNonDominantHandIndex()
These methods set and retrieve the index of the nondominant hand.

public void setHeadIndex(int index)
public int getHeadIndex()
public void setRightHandIndex(int index)
public int getRightHandIndex()
public void setLeftHandIndex(int index)
public int getLeftHandIndex()
These methods set and retrieve the index of the head, right hand, and left hand. The index parameter refers to the sensor index.
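Putting the sensor methods together, a typical configuration step might look like the following sketch (assuming the javax.media.j3d package; the Sensor objects would come from a real InputDevice implementation, which is elided here):

```java
import javax.media.j3d.*;

// Sketch: sizing the sensor array first, then installing sensors and
// naming their roles by index, before Java 3D starts using them.
public class EnvironmentSetup {
    static PhysicalEnvironment configure(Sensor head, Sensor rightHand,
                                         Sensor leftHand) {
        PhysicalEnvironment env = new PhysicalEnvironment();
        env.setSensorCount(3);           // set before initializing Java 3D
        env.setSensor(0, head);
        env.setSensor(1, rightHand);
        env.setSensor(2, leftHand);
        env.setHeadIndex(0);
        env.setRightHandIndex(1);
        env.setLeftHandIndex(2);
        env.setDominantHandIndex(1);     // assume a right-handed user
        env.setNonDominantHandIndex(2);
        return env;
    }
}
```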

Physical Coexistence Policy
public int getCoexistenceCenterInPworldPolicy()
public void setCoexistenceCenterInPworldPolicy(int policy)
These methods set and retrieve the physical coexistence policy used in this physical environment. This policy specifies how Java 3D will place the user's eyepoint as a function of current head position during the calibration process. Java 3D permits one of three values: NOMINAL_HEAD, NOMINAL_FEET, or NOMINAL_SCREEN. Note: NOMINAL_SCREEN_SCALED is not allowed for this policy.

C.10 Viewing in Head-tracked Environments

Section 8.5, "Generating a View," describes how Java 3D generates a view for a standard flat-screen display with no head tracking. In this section, we describe how Java 3D generates a view in a room-mounted, head-tracked display environment: either a computer monitor with shutter glasses and head tracking, or a multiple-wall display with head-tracked shutter glasses. Finally, we describe how Java 3D generates view matrices in a head-mounted and head-tracked display environment.

C.10.1 A Room-mounted Display with Head Tracking

When head tracking combines with a room-mounted display environment (for example, a standard flat-screen display), the ViewPlatform's origin and orientation serve as a base for constructing the view matrices. Additionally, Java 3D uses the end-user's head position and orientation to compute where an end-user's eyes are located in physical space. Each eye's position serves to offset the corresponding virtual eye's position relative to the ViewPlatform's origin. Each eye's position also serves to specify that eye's frustum, since the eye's position relative to a Screen3D uniquely specifies that eye's view frustum. Note that Java 3D will access the PhysicalBody object to obtain information describing the user's interpupillary distance and tracking hardware, values it needs to compute the end-user's eye positions from the head position information.
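The relationship between a tracked eye position and its view frustum can be sketched with similar triangles: the frustum's edges pass through the eye and the screen's corners. The screen geometry and eye positions below are assumed example values; Java 3D performs the equivalent computation internally from the Screen3D calibration and head-tracker data:

```java
// Sketch: an off-axis view frustum determined by an eye position given in
// image-plate coordinates. The plate spans (0,0)-(w,h) at z = 0, and the
// eye sits at (ex, ey, ez) with ez > 0 in front of it.
public class OffAxisFrustum {
    // Returns {left, right, bottom, top} measured at the near plane.
    static double[] frustumAtNear(double ex, double ey, double ez,
                                  double w, double h, double near) {
        double s = near / ez;              // similar-triangles scale factor
        return new double[] {
            (0 - ex) * s, (w - ex) * s,    // left, right
            (0 - ey) * s, (h - ey) * s     // bottom, top
        };
    }

    public static void main(String[] args) {
        // Eye centered on a 0.4m x 0.3m screen, 0.5m away, near plane 0.1m.
        System.out.println(java.util.Arrays.toString(
                frustumAtNear(0.2, 0.15, 0.5, 0.4, 0.3, 0.1)));
        // As the head moves off-center, the frustum becomes asymmetric.
        System.out.println(java.util.Arrays.toString(
                frustumAtNear(0.1, 0.15, 0.5, 0.4, 0.3, 0.1)));
    }
}
```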

C.10.2 A Head-mounted Display with Head Tracking

In a head-mounted environment, the ViewPlatform's origin and orientation also serve as a base for constructing view matrices. And, as in the head-tracked, room-mounted environment, Java 3D also uses the end-user's head position and orientation to further modify the ViewPlatform's position and orientation. In a head-tracked, head-mounted display environment, an end-user's eyes do not move relative to their respective display screens; rather, the display screens move relative to the virtual environment. A rotation of the head by an end user can radically affect the final view's orientation. In this situation, Java 3D combines the position and orientation from the ViewPlatform with the position and orientation from the head tracker to form the view matrix. The view frustum, however, does not change, since the user's eyes do not move relative to their respective display screens, so Java 3D can compute the projection matrix once and cache the result.
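The combination of ViewPlatform pose and head-tracker pose described above is, at its core, a matrix product. The following plain-Java sketch (using row-major 4 × 4 arrays rather than the actual Transform3D class, and with an illustrative multiplication order) shows the idea:

```java
// Illustrative sketch only: composing a ViewPlatform pose with a
// head-tracker pose by 4x4 matrix multiplication. Java 3D performs
// this composition internally using Transform3D objects.
public class ViewCompose {
    // Standard row-major 4x4 matrix multiplication: c = a * b.
    static double[][] mul(double[][] a, double[][] b) {
        double[][] c = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    // A pure translation pose, the simplest case of a rigid transform.
    static double[][] translation(double x, double y, double z) {
        return new double[][] {
            { 1, 0, 0, x },
            { 0, 1, 0, y },
            { 0, 0, 1, z },
            { 0, 0, 0, 1 }
        };
    }
}
```

For example, a ViewPlatform placed at (0, 0, 10) combined with a tracked head offset of (0, 1.8, 0) yields a combined pose whose translation is (0, 1.8, 10).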

If any of the parameters of a View object are updated, this will effect a change in the implicit viewing transform (and thus image) of any Canvas3D that references that View object.

C.11 Compatibility Mode

A camera-based view model allows application programmers to think about the images displayed on the computer screen as if a virtual camera took those images. Such a view model allows application programmers to position and orient a virtual camera within a virtual scene, to manipulate some parameters of the virtual camera's lens (for example, its field of view), and to specify the locations of the near and far clipping planes.

Java 3D allows applications to enable compatibility mode for room-mounted, non-head-tracked display environments, or to disable compatibility mode using the following methods. Camera-based viewing functions are only available in compatibility mode.

Methods
public final void setCompatibilityModeEnable(boolean flag)
public final boolean getCompatibilityModeEnable()
This flag turns compatibility mode on or off. Compatibility mode is disabled by default.


Note: Use of these view-compatibility functions will disable some of Java 3D's view model features and limit the portability of Java 3D programs. These methods are primarily intended to help jump-start porting of existing applications.

C.11.1 Overview of the Camera-based View Model

The traditional camera-based view model, shown in Figure C-7, places a virtual camera inside a geometrically specified world. The camera "captures" the view from its current location, orientation, and perspective. The visualization system then draws that view on the user's display device. The application controls the view by moving the virtual camera to a new location, by changing its orientation, by changing its field of view, or by controlling some other camera parameter.

The various parameters that users control in a camera-based view model specify the shape of a viewing volume (known as a frustum because of its truncated pyramidal shape) and locate that frustum within the virtual environment. The rendering pipeline uses the frustum to decide which objects to draw on the display screen. The rendering pipeline does not draw objects outside the view frustum and it clips (partially draws) objects that intersect the frustum's boundaries.

Though a view frustum's specification may have many items in common with those of a physical camera, such as placement, orientation, and lens settings, some frustum parameters have no physical analog. Most noticeably, a frustum has two parameters not found on a physical camera: the near and far clipping planes.

The locations of the near and far clipping planes allow the application programmer to specify which objects Java 3D should not draw. Objects too far away from the current eyepoint usually do not result in interesting images. Those too close to the eyepoint might obscure the interesting objects. By carefully specifying near and far clipping planes, an application programmer can control which objects the renderer will not draw.

From the perspective of the display device, the virtual camera's image plane corresponds to the display screen. The camera's placement, orientation, and field of view determine the shape of the view frustum.

C.11.2 Using the Camera-based View Model

The camera-based view model allows Java 3D to bridge the gap between existing 3D code and Java 3D's view model. By using the camera-based view model methods, a programmer retains the familiarity of the older view model but gains some of the flexibility afforded by Java 3D's new view model.

The traditional camera-based view model is supported in Java 3D by helping methods in the Transform3D object. These methods were explicitly designed to resemble as closely as possible the view functions of older packages, and thus should be familiar to most 3D programmers. The resulting Transform3D objects can be used to set compatibility-mode transforms in the View object.

C.11.2.1 Creating a Viewing Matrix

The Transform3D object provides the following method to create a viewing matrix.

public void lookAt(Point3d eye, Point3d center, Vector3d up)
This is a utility method that specifies the position and orientation of a viewing transform. It works very similarly to the equivalent gluLookAt utility function in OpenGL. The inverse of this transform can be used to control the ViewPlatform object within the scene graph. Alternatively, this transform can be passed directly to the View's VpcToEc transform via the compatibility-mode viewing functions (see Section C.11.2.3, "Setting the Viewing Transform").
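The math behind lookAt can be made concrete. The following self-contained sketch builds the viewing matrix with plain Java arrays, assuming gluLookAt-style conventions (the real method lives on Transform3D and is not reproduced here):

```java
// Illustrative sketch of the lookAt math using plain row-major 4x4 arrays,
// assuming gluLookAt-style conventions: rotate the world into eye space,
// then translate the eye to the origin.
public class LookAtSketch {
    static double[][] lookAt(double[] eye, double[] center, double[] up) {
        double[] f = normalize(sub(center, eye)); // forward (view direction)
        double[] s = normalize(cross(f, up));     // side (camera right)
        double[] u = cross(s, f);                 // recomputed, orthogonal up
        return new double[][] {
            {  s[0],  s[1],  s[2], -dot(s, eye) },
            {  u[0],  u[1],  u[2], -dot(u, eye) },
            { -f[0], -f[1], -f[2],  dot(f, eye) },
            {  0,     0,     0,     1           }
        };
    }
    static double[] sub(double[] a, double[] b) {
        return new double[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
    }
    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }
    static double[] cross(double[] a, double[] b) {
        return new double[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }
    static double[] normalize(double[] v) {
        double n = Math.sqrt(dot(v, v));
        return new double[] { v[0] / n, v[1] / n, v[2] / n };
    }
}
```

An eye at (0, 0, 10) looking at the origin with +Y up produces a transform that carries the origin to eye-space (0, 0, -10), that is, ten units in front of the camera.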

C.11.2.2 Creating a Projection Matrix

The Transform3D object provides the following three methods for creating a projection matrix. All three map points from eye coordinates (EC) to clipping coordinates (CC). Eye coordinates are defined such that (0, 0, 0) is at the eye and the projection plane is at z = -1.

public void frustum(double left, double right, double bottom, 
double top, double near, double far)
The frustum method establishes a perspective projection with the eye at the apex of the view frustum, which need not be symmetric. The transform maps points from eye coordinates to clipping coordinates. The clipping coordinates generated by the resulting transform are in a right-handed coordinate system (as are all other coordinate systems in Java 3D).

The arguments define the frustum and its associated perspective projection: (left, bottom, -near) and (right, top, -near) specify the points on the near clipping plane that map onto the lower-left and upper-right corners of the window, respectively. The far parameter specifies the distance from the eye to the far clipping plane (at z = -far). See Figure C-8.
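The mapping just described can be verified numerically. The sketch below builds the projection matrix that frustum establishes, written here with plain Java arrays and assuming the usual OpenGL-style perspective matrix (the real method lives on Transform3D):

```java
// Illustrative sketch of the perspective matrix that frustum(...) establishes,
// assuming the standard OpenGL-style formulation (plain row-major 4x4 arrays).
public class FrustumSketch {
    static double[][] frustum(double l, double r, double b, double t,
                              double n, double f) {
        return new double[][] {
            { 2*n/(r-l), 0,          (r+l)/(r-l),   0            },
            { 0,         2*n/(t-b),  (t+b)/(t-b),   0            },
            { 0,         0,         -(f+n)/(f-n),  -2*f*n/(f-n)  },
            { 0,         0,         -1,             0            }
        };
    }

    // Apply the matrix to (x, y, z, 1) and perform the perspective divide.
    static double[] project(double[][] m, double x, double y, double z) {
        double[] p = { x, y, z, 1 };
        double[] q = new double[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                q[i] += m[i][j] * p[j];
        return new double[] { q[0] / q[3], q[1] / q[3], q[2] / q[3] };
    }
}
```

With this matrix, the point (left, bottom, -near) projects to (-1, -1, -1), the lower-left corner of the normalized clipping volume, matching the mapping described above.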

public void perspective(double fovx, double aspect, double zNear, 
       double zFar)
The perspective method establishes a perspective projection with the eye at the apex of a symmetric view frustum, centered about the Z-axis, with a fixed field of view. The resulting perspective projection transform mimics a standard camera-based view model. The transform maps points from eye coordinates to clipping coordinates. The clipping coordinates generated by the resulting transform are in a right-handed coordinate system.

The arguments define the frustum and its associated perspective projection: zNear and zFar specify the distances from the eye to the near and far clipping planes (at z = -zNear and z = -zFar); fovx specifies the field of view in the X dimension, in radians; and aspect specifies the aspect ratio of the window. See Figure C-9.
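The perspective method can be understood as a convenience over frustum: the field of view and aspect ratio determine symmetric frustum bounds on the near plane. The sketch below shows that conversion, assuming aspect = width/height (consistent with fovx being the X field of view):

```java
// Illustrative sketch: deriving symmetric frustum bounds from
// perspective(fovx, aspect, zNear, ...) parameters. Assumes
// aspect = width/height, since fovx is the X field of view.
public class PerspectiveSketch {
    // Returns { left, right, bottom, top } on the near clipping plane.
    static double[] perspectiveBounds(double fovx, double aspect, double zNear) {
        double right = zNear * Math.tan(fovx / 2.0); // half-width on near plane
        double top = right / aspect;                 // half-height from aspect
        return new double[] { -right, right, -top, top };
    }
}
```

For example, a 90-degree X field of view (PI/2 radians) with zNear = 1.0 gives right = 1.0; an aspect ratio of 2.0 then gives top = 0.5. These bounds could be fed to frustum to produce the same projection.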

public void ortho(double left, double right, double bottom, 
       double  top, double near, double far)
The ortho method establishes a parallel projection. The orthographic projection transform mimics a standard camera-based view model. The transform maps points from eye coordinates to clipping coordinates. The clipping coordinates generated by the resulting transform are in a right-handed coordinate system.

The arguments define a rectangular box used for projection: (left, bottom, -near) and (right, top, -near) specify the points on the near clipping plane that map onto the lower-left and upper-right corners of the window, respectively. The far parameter specifies the distance from the eye to the far clipping plane (at z = -far). See Figure C-10.
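As with frustum, the orthographic mapping can be written out directly. The sketch below uses plain Java arrays and assumes the usual OpenGL-style orthographic matrix (the real method lives on Transform3D):

```java
// Illustrative sketch of the parallel projection that ortho(...) establishes,
// assuming the standard OpenGL-style orthographic matrix (row-major 4x4).
public class OrthoSketch {
    static double[][] ortho(double l, double r, double b, double t,
                            double n, double f) {
        return new double[][] {
            { 2/(r-l), 0,        0,        -(r+l)/(r-l) },
            { 0,       2/(t-b),  0,        -(t+b)/(t-b) },
            { 0,       0,       -2/(f-n),  -(f+n)/(f-n) },
            { 0,       0,        0,         1           }
        };
    }
}
```

Because w stays 1, no perspective divide occurs: the point (left, bottom, -near) maps directly to (-1, -1, -1), the lower-left corner of the normalized clipping volume.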

C.11.2.3 Setting the Viewing Transform

The View object provides the following compatibility-mode methods that operate on the viewing transform.

public final void setVpcToEc(Transform3D vpcToEc)
public final void getVpcToEc(Transform3D vpcToEc)
These compatibility-mode methods set and retrieve the ViewPlatform-coordinates (VPC) to eye-coordinates (EC) viewing transform. If compatibility mode is disabled, this transform is derived from other values and is read-only.

C.11.2.4 Setting the Projection Transform

The View object provides the following compatibility-mode methods that operate on the projection transform.

public final void setLeftProjection(Transform3D projection)
public final void getLeftProjection(Transform3D projection)
public final void setRightProjection(Transform3D projection)
public final void getRightProjection(Transform3D projection)
These compatibility-mode methods set and retrieve the viewing frustums for the left and right eyes, each expressed as a transform from eye coordinates to clipping coordinates. If compatibility mode is disabled, a RestrictedAccessException is thrown. In monoscopic mode, only the left eye's projection matrix is used.




Copyright © 1999, Sun Microsystems, Inc. All rights reserved.