Java 3D API Specification
CHAPTER 11

Audio Devices
A Java 3D application running on a particular machine may have several options available for playing the audio image created by the sound renderer. Perhaps the machine on which Java 3D is executing has more than one sound card (for example, a wave table synthesis card and a card with accelerated sound spatialization hardware). Furthermore, suppose there are Java 3D audio device drivers that execute Java 3D audio methods on each of these specific cards. The application would therefore have at least two audio device drivers through which the audio could be produced. In such a case the Java 3D application must choose the audio device driver with which sound rendering is to be performed. Once this audio device is chosen, the application can additionally select the type of audio playback device on which the rendered sound image is to be output. The playback device (headphones or speaker(s)) is physically connected to the port to which the selected device driver outputs.
11.1 AudioDevice Interface

The selection of this device driver is done through methods in the PhysicalEnvironment object (see Section C.9, "The PhysicalEnvironment Object"). The application queries how many audio devices are available. For each device, the user can get the AudioDevice object that describes it and query its characteristics. Once a decision is made about which of the available audio devices to use for a PhysicalEnvironment, the particular device is set into this PhysicalEnvironment's fields. Each PhysicalEnvironment object may use only a single audio device.
The AudioDevice object interface specifies an abstract audio device that creators of Java 3D class libraries would implement for a particular device. Java 3D uses several methods to interact with specific devices. Since all audio devices implement this consistent interface, the user has a portable means of initializing, setting particular audio device elements, and querying generic characteristics for any audio device.
Constants

public final static int HEADPHONES

Specifies that audio playback will be through stereo headphones.

public final static int MONO_SPEAKER

Specifies that audio playback will be through a single speaker some distance away from the listener.

public final static int STEREO_SPEAKERS

Specifies that audio playback will be through stereo speakers some distance away from, and at some angle to, the listener.
11.1.1 Initialization

Each audio device driver must be initialized. The chosen device driver should be initialized before any Java 3D Sound methods are executed because the implementation of the Sound methods, in general, is potentially device-driver dependent.
Methods

public abstract boolean initialize()

Initializes the audio device. Exactly what occurs during initialization is implementation dependent. This method provides explicit control by the user over when this initialization occurs.

public abstract boolean close()

Closes the audio device, releasing resources associated with this device.
11.1.2 Audio Playback

Methods to set and retrieve the audio playback parameters are part of the AudioDevice object. The audio playback information specifies that playback will be through one of the following:
- Stereo headphones.
- A monaural speaker.
- A pair of speakers, equally distant from the listener, both at some angle from the head coordinate system Z axis. It is assumed that the speakers are at the same elevation and oriented symmetrically about the listener.

The type of playback chosen affects the sound image generated. For example, cross-talk cancellation is applied to the audio image if playback over stereo speakers is selected.
Methods

The following methods affect the playback of sound processed by the Java 3D sound renderer.

public abstract void setAudioPlaybackType(int type)
public abstract int getAudioPlaybackType()

These methods set and retrieve the type of audio playback device (HEADPHONES, MONO_SPEAKER, or STEREO_SPEAKERS) used to output the analog audio from rendering Java 3D Sound nodes.

public abstract void setCenterEarToSpeaker(float distance)
public abstract float getCenterEarToSpeaker()

These methods set and retrieve the distance in meters from the center ear (the midpoint between the left and right ears) to one of the speakers in the listener's environment. For monaural speaker playback, a typical distance from the listener to the speaker in a workstation cabinet is 0.76 meters. For stereo speakers placed at the sides of the display, this might be 0.82 meters.

public abstract void setAngleOffsetToSpeaker(float angle)
public abstract float getAngleOffsetToSpeaker()

These methods set and retrieve the angle, in radians, between the vectors from the center ear to each of the speaker transducers and the vectors from the center ear parallel to the head coordinate system's Z axis. Speakers placed at the sides of the computer display typically range between 0.175 and 0.350 radians (between 10 and 20 degrees).

public abstract PhysicalEnvironment getPhysicalEnvironment()

This method returns a reference to the AudioDevice's PhysicalEnvironment object.
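The distance and angle values above follow from simple speaker geometry. The sketch below (plain Java, no Java 3D classes required) computes both for speakers placed at the sides of a display; the lateral offset and forward distance used here are illustrative assumptions, not values from the specification.

```java
public class SpeakerGeometry {
    /** Angle in radians between the head coordinate Z axis and the vector
        from the center ear to a speaker offset laterally by halfSeparation
        meters at forward meters in front of the listener. */
    static double angleOffset(double halfSeparation, double forward) {
        return Math.atan2(halfSeparation, forward);
    }

    /** Straight-line distance in meters from the center ear to that speaker. */
    static double earToSpeaker(double halfSeparation, double forward) {
        return Math.hypot(halfSeparation, forward);
    }

    public static void main(String[] args) {
        // Assumed layout: speakers 0.25 m either side of the display center,
        // listener 0.78 m in front of the display.
        double half = 0.25, forward = 0.78;
        System.out.printf("distance = %.2f m%n", earToSpeaker(half, forward));
        System.out.printf("angle    = %.3f rad%n", angleOffset(half, forward));
        // An application would pass these values to setCenterEarToSpeaker()
        // and setAngleOffsetToSpeaker() on the chosen AudioDevice.
    }
}
```

With these assumed dimensions the results land near the typical figures quoted above (about 0.82 meters and an angle within the 0.175 to 0.350 radian range).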
11.1.3 Device-Driver-Specific Data

While the sound image created for final output to the playback system is only monaural or stereo (for this version of Java 3D), most device-driver implementations will mix the left and right image signals generated for each rendered sound source before outputting the final playback image. Each sound source will use N input channels of this internal mixer.
Each implemented Java 3D audio device driver will have its own limitations and driver-specific characteristics. These include channel availability and usage (during rendering). Methods for querying these device-driver-specific characteristics are provided below.
Methods

public abstract int getTotalChannels()

This method retrieves the maximum number of channels available for Java 3D sound rendering for all sound sources.

public abstract int getChannelsAvailable()

During rendering, when Sound nodes are playing, this method returns the number of channels still available to Java 3D for rendering additional Sound nodes.

public abstract int getChannelsUsedForSound(Sound node)

This is a deprecated method. This method is now part of the Sound class.
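The relationship between the two channel queries can be sketched as simple bookkeeping over the internal mixer. The toy class below is illustrative only; its names mirror the interface methods above, but it is not part of the Java 3D API, and a real driver would track channels inside its sound engine.

```java
import java.util.HashMap;
import java.util.Map;

/** Toy channel accounting in the spirit of getTotalChannels() and
    getChannelsAvailable(). Illustrative stand-in, not the Java 3D API. */
public class ChannelBudget {
    private final int totalChannels;
    // Maps a sample index to the number of mixer channels it holds.
    private final Map<Integer, Integer> channelsPerSample = new HashMap<>();

    public ChannelBudget(int totalChannels) {
        this.totalChannels = totalChannels;
    }

    public int getTotalChannels() {
        return totalChannels;
    }

    public int getChannelsAvailable() {
        int used = channelsPerSample.values().stream()
                                    .mapToInt(Integer::intValue).sum();
        return totalChannels - used;
    }

    /** Claim channels for a starting sample; fails if the mixer is full. */
    public boolean startSample(int index, int channelsNeeded) {
        if (channelsNeeded > getChannelsAvailable()) return false;
        channelsPerSample.put(index, channelsNeeded);
        return true;
    }

    /** Release the channels a stopped sample was holding. */
    public void stopSample(int index) {
        channelsPerSample.remove(index);
    }
}
```

Note that under this model a muted sample still holds its channels, which is exactly the caveat raised for some AudioDevice3D implementations in Section 11.2.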
11.2 AudioDevice3D Interface

The AudioDevice3D interface extends the AudioDevice interface. The intent is for this interface to be implemented by AudioDevice driver developers (whether a Java 3D licensee or not). Each implementation will use a sound engine of its choice.
The methods in this interface should not be called by an application. They are called by the core Java 3D Sound classes to render live, scheduled sound on the AudioDevice chosen by the application or user.
Methods in this interface give the Java 3D core a generic way to set up and query the audio device on which the application has chosen to perform audio rendering. Methods in this interface include:
- Setting up and clearing a sound as a sample on the device
- Starting, stopping, pausing, unpausing, muting, and unmuting a sample on the device
- Setting parameters for each sample corresponding to the fields in the Sound node
- Setting the current active aural parameters that affect all positional samples
Constants

public static final int BACKGROUND_SOUND
public static final int POINT_SOUND
public static final int CONE_SOUND

These constants specify the sound types. Sound types match the Sound node classes defined for the Java 3D core: BackgroundSound, PointSound, and ConeSound. The type of sound a sample is loaded as determines which methods affect it.

public static final int STREAMING_AUDIO_DATA
public static final int BUFFERED_AUDIO_DATA

These constants specify the sound data types. Samples can be processed as streaming or buffered data. Fully spatializing sound sources may require data to be buffered.
Sound data specified as streaming is not copied by the AudioDevice driver implementation. It is up to the application to ensure that this data is continuously accessible during sound rendering. Furthermore, full sound spatialization may not be possible, for all AudioDevice3D implementations, on unbuffered sound data. Sound data specified as buffered is copied by the AudioDevice driver implementation.
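The practical difference between the two data types can be illustrated with a tiny sketch (plain Java, hypothetical class and method names): a buffered prepare snapshots the data, while a streaming prepare merely keeps a reference, so changes the application later makes to its own array remain visible to the renderer.

```java
import java.util.Arrays;

/** Illustrative-only model of BUFFERED_AUDIO_DATA vs. STREAMING_AUDIO_DATA
    copy semantics; not part of the Java 3D API. */
public class PrepareSemantics {
    private byte[] bufferedCopy;   // device-owned snapshot of the data
    private byte[] streamingRef;   // application-owned data, merely referenced

    public void prepareBuffered(byte[] data) {
        // Buffered data is copied by the driver at prepare time.
        bufferedCopy = Arrays.copyOf(data, data.length);
    }

    public void prepareStreaming(byte[] data) {
        // Streaming data is not copied; the application must keep it valid.
        streamingRef = data;
    }

    public byte[] bufferedData()  { return bufferedCopy; }
    public byte[] streamingData() { return streamingRef; }
}
```

If the application mutates or frees its array after a streaming prepare, the renderer sees (or loses) that data, which is why the text above requires streaming data to remain continuously accessible.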
Methods

public abstract void setView(View reference)

This method accepts a reference to the current View object. The PhysicalEnvironment parameters (such as playback type and speaker placement) and the PhysicalBody parameters (position and orientation of ears) can be obtained from this object, as can the transformations to and from ViewPlatform coordinates (the space the listener's head is in) and virtual world coordinates (the space the sounds are in).

public abstract int prepareSound(int soundType, MediaContainer soundData)

Prepare the sound. This method accepts a reference to the MediaContainer that contains a reference to the sound data and information about the type of data it is. The soundType parameter defines the type of sound associated with this sample (Background, Point, or Cone).
Depending on the type of MediaContainer the sound data is in, and on the implementation of the AudioDevice used, sound data preparation could consist of opening, attaching, or loading sound data into the device. Unless the cache flag is true, this sound data should not be copied, if possible, into host or device memory.
Once this preparation is complete for the sound sample, an AudioDevice-specific index, used to reference the sample in future method calls, is returned. All of the methods described below require this index as a parameter.

public abstract void clearSound(int index)

Clear the sound. This method requests that the AudioDevice free all resources associated with the sample with index id.

public abstract long getSampleDuration(int index)

Query sample duration. If it can be determined, this method returns the duration in milliseconds of the sound sample. For non-cached streams, this method returns Sound.DURATION_UNKNOWN.

public abstract int getNumberOfChannelsUsed(int index)
public abstract int getNumberOfChannelsUsed(int index, boolean muted)

Query the number of channels used by a sound. These methods return the number of channels (on the executing audio device) that this sound is using if it is playing, or is expected to use if it were begun to be played. The first method takes the sound's current state (including whether it is muted or unmuted) into account. The second method uses the muted parameter to make the determination.
For some AudioDevice3D implementations:
- Muted sounds take up channels on the system's mixer (because they are rendered as samples playing with a gain of zero).
- A single sound could be rendered using multiple samples, each taking up mixer channels.

public abstract int startSample(int index)

Start sample. This method begins playing a sound on the AudioDevice and returns a flag indicating whether or not the sample was started.

public abstract int stopSample(int index)

Stop sample. This method stops the sound on the AudioDevice and returns a flag indicating whether or not the sample was stopped.

public abstract long getStartTime(int index)

Query the last start time for this sound on the device. This method returns the system time of when the sound was last "started." Note that this start time will be as accurate as the AudioDevice implementation can make it, but it is not guaranteed to be exact.

public abstract void setSampleGain(int index, float scaleFactor)

Set gain scale factor. This method sets the overall gain scale factor applied to data associated with this source to increase or decrease its overall amplitude. The scaleFactor value passed into this method is the combined value of the Sound node's initial gain and the current AuralAttributes gain scale factors.

public abstract void setDistanceGain(int index, double frontDistance, float frontAttenuationScaleFactor, double backDistance, float backAttenuationScaleFactor)

Set distance gain. This method sets this sound's distance gain elliptical attenuation (not including the filter cutoff frequency) by defining corresponding arrays containing distances from the sound's origin and gain scale factors applied to all active positional sounds. The gain scale factor is applied to the sound based on the distance the listener is from the sound source. These attenuation parameters are ignored for BackgroundSound nodes. The backAttenuationScaleFactor parameter is ignored for PointSound nodes.
For a full description of the attenuation parameters, see Section 5.8.3, "ConeSound Node."

public abstract void setDistanceFilter(int filterType, double distance, float filterCutoff)

Set AuralAttributes distance filter. This method sets the distance filter by defining corresponding arrays containing distances and frequency cutoffs applied to all active positional sounds. The filter is applied to the sound based on the distance the listener is from the sound source. For a full description of this parameter and how it is used, see Section 7.1.15, "AuralAttributes Object."

public abstract void setLoop(int index, int count)

Set loop count. This method sets the number of times the sound is looped during play. For a complete description of this method, see the description of the Sound.setLoop method in Section 5.8, "Sound Node."

public abstract void muteSample(int index)
public abstract void unmuteSample(int index)

These methods mute and unmute a playing sound sample. The first method makes a sample play silently. The second method makes a silently playing sample audible. Ideally, muting a sample is implemented by stopping the sample and freeing channel resources (rather than just setting the gain of the sample to zero). Ideally, unmuting a sample restarts the muted sample, offset from the beginning by the number of milliseconds since the time the sample began playing.

public abstract void pauseSample(int index)
public abstract void unpauseSample(int index)

These methods pause and unpause a playing sound sample. The first method temporarily stops a cached sample from playing without resetting the sample's current pointer back to the beginning of the sound data, so that the sample can be unpaused at a later time from the location where the pause was initiated. The second method restarts the paused sample from the location in the sample where it was paused.

public abstract void setPosition(int index, Point3d position)

Set position. This method sets this sound's location (in local coordinates) from the provided position.

public abstract void setDirection(int index, Vector3d direction)

Set direction. This method sets this sound's direction from the local coordinate vector provided. For a full description of the direction parameter, see Section 5.8.3, "ConeSound Node."

public abstract void setVworldXfrm(int index, Transform3D trans)

Set virtual world transform. This method passes a reference to the concatenated transformation to be applied to the local sound position and direction parameters.

public abstract void setRolloff(float rolloff)

Set AuralAttributes gain rolloff. This method sets the speed-of-sound factor. For a full description of this parameter and how it is used, see Section 7.1.15, "AuralAttributes Object."

public abstract void setAngularAttenuation(int index, int filterType, double angle, float attenuationScaleFactor, float filterCutoff)

Set angular attenuation. This method sets this sound's angular gain attenuation (including filter) by defining corresponding arrays containing angular offsets from the sound's axis, gain scale factors, and frequency cutoffs applied to all active directional sounds. The gain scale factor is applied to the sound based on the angle between the sound's axis and the ray from the sound source origin to the listener. The form of the attenuation parameter is fully described in Section 5.8.3, "ConeSound Node."

public abstract void setReflectionCoefficient(float coefficient)

Set AuralAttributes reverberation coefficient. This method sets the reflective or absorptive characteristics of the surfaces in the region defined by the current Soundscape region. For a full description of this parameter and how it is used, see Section 7.1.15, "AuralAttributes Object."

public abstract void setReverbDelay(float reverbDelay)

Set AuralAttributes reverberation delay. This method sets the delay time between each order of reflection (while reverberation is being rendered), given explicitly in milliseconds. A delay time of 0.0 disables reverberation. For a full description of this parameter and how it is used, see Section 7.1.15, "AuralAttributes Object."

public abstract void setReverbOrder(int reverbOrder)

Set AuralAttributes reverberation order.
This method sets the number of times reflections are added to the reverberation being calculated. A value of -1 specifies an unbounded number of reverberations. For a full description of this parameter and how it is used, see Section 7.1.15, "AuralAttributes Object."

public abstract void setFrequencyScaleFactor(float frequencyScaleFactor)

Set AuralAttributes frequency scale factor. This method specifies a scale factor applied to the frequency (or wavelength). This parameter can also be used to expand or contract the usual frequency shift applied to the sound source due to Doppler effect calculations. Valid values are greater than or equal to 0.0. A value greater than 1.0 will increase the playback rate. For a full description of this parameter and how it is used, see Section 7.1.15, "AuralAttributes Object."

public abstract void setVelocityScaleFactor(float velocityScaleFactor)

Set AuralAttributes velocity scale factor. This method specifies a velocity scale factor applied to the velocity of sound relative to the listener's position and movement in relation to the sound's position and movement. This scale factor is multiplied by the calculated velocity portion of the Doppler effect equation used during sound rendering. For a full description of this parameter and how it is used, see Section 7.1.15, "AuralAttributes Object."

public abstract void updateSample(int index)

Explicit update of a sample. This method is called when a Sound is to be explicitly updated. It is called only when all of a sound's parameters are known to have been passed to the audio device. In this way, an implementation can choose to perform lazy evaluation of a sample, rather than updating the rendering state of the sample after every individual parameter change. This method can be left as a null method if the implementor so chooses.
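Several of the per-sample attenuation methods above reduce, at render time, to a table lookup over corresponding (distance, gain) or (angle, gain) pairs. The sketch below shows one plausible evaluation of such a distance-gain table: the gain scale factor for the listener's distance is linearly interpolated between the surrounding pairs and clamped to the end values outside the table. It is plain Java with illustrative names, a sketch of the idea rather than the normative Java 3D algorithm.

```java
public class DistanceGain {
    /** Piecewise-linear lookup of a gain scale factor for a given listener
        distance, given parallel arrays of strictly increasing distances and
        their associated gain scale factors. */
    static float gainAt(double distance, double[] distances, float[] gains) {
        if (distance <= distances[0]) return gains[0];          // clamp near
        int last = distances.length - 1;
        if (distance >= distances[last]) return gains[last];    // clamp far
        int i = 1;
        while (distances[i] < distance) i++;                    // find segment
        double t = (distance - distances[i - 1])
                 / (distances[i] - distances[i - 1]);
        return (float) (gains[i - 1] + t * (gains[i] - gains[i - 1]));
    }

    public static void main(String[] args) {
        double[] d = {1.0, 10.0, 100.0};   // meters from the sound's origin
        float[]  g = {1.0f, 0.5f, 0.0f};   // gain scale factors at those distances
        System.out.println(gainAt(5.5, d, g));  // prints 0.75
    }
}
```

A driver could apply the resulting scale factor on top of the overall setSampleGain() factor when mixing the sample.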
11.3 Instantiating and Registering a New Device

A browser or application developer must instantiate whatever system-specific audio devices he or she needs and that exist on the system. This device information typically exists in a site configuration file. The browser or application will instantiate the physical environment as requested by the end user.
The API for instantiating devices is site-specific, but it consists of a device object with a constructor and at least all of the methods specified in the AudioDevice interface.
Once instantiated, the browser or application must register the device with the Java 3D sound scheduler by associating this device with a PhysicalEnvironment object. The setAudioDevice method introduces new devices to the Java 3D environment, and the allAudioDevices method produces an enumeration that allows examination of all available devices within a Java 3D environment. See Section C.9, "The PhysicalEnvironment Object," for more details.
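The registration pattern can be sketched with minimal stand-in classes. The real javax.media.j3d PhysicalEnvironment and AudioDevice classes are far richer; only the shape of the setAudioDevice/allAudioDevices interaction described above is modeled here, and the stub device stands in for a vendor's site-specific driver.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;

/** Minimal stand-in for an audio device driver; not the Java 3D interface. */
interface AudioDevice {
    boolean initialize();
}

/** Minimal stand-in for javax.media.j3d.PhysicalEnvironment. */
class PhysicalEnvironment {
    private final List<AudioDevice> devices = new ArrayList<>();
    private AudioDevice current;

    /** Register a device and make it the one used for sound rendering. */
    public void setAudioDevice(AudioDevice device) {
        devices.add(device);
        current = device;
    }

    public AudioDevice getAudioDevice() {
        return current;
    }

    /** Enumerate every device registered with this environment. */
    public Enumeration<AudioDevice> allAudioDevices() {
        return Collections.enumeration(devices);
    }
}

public class RegisterDeviceSketch {
    public static void main(String[] args) {
        PhysicalEnvironment env = new PhysicalEnvironment();
        // A site-specific driver would be constructed here; this stub stands
        // in for, e.g., a vendor's headphone-mixer device implementation.
        AudioDevice device = () -> true;
        env.setAudioDevice(device);
        device.initialize();  // initialize before any Sound methods execute
    }
}
```

Initializing immediately after registration mirrors the requirement in Section 11.1.1 that the chosen driver be initialized before any Sound methods run.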
Copyright © 1999, Sun Microsystems, Inc. All rights reserved.