Sun ONE Portal Server, Mobile Access 6.2 Developer's Manual
Chapter 2
Developing Voice Applications

This chapter provides information about Sun™ ONE Portal Server, Mobile Access 6.2 voice application development. It contains the following sections:
Understanding Voice Applications

Developing voice applications is similar to developing any other Sun™ ONE Portal Server application. The best approach is to develop the voice application in VoiceXML and integrate it with Portal Server software by developing a custom provider. The Notes and Personal Notes voice applications both use this approach.
This section discusses the following topics:
Voice Application Prerequisites
The prerequisites for integrating new voice applications with the Sun™ ONE Portal Server, Mobile Access 6.2 software are:
- Voice components of Mobile Access software are certified against the Nuance Voice Platform, which includes a VoiceXML 2.0-compliant voice browser. Because of differences between VoiceXML interpreters, the voice components may need to be ported to run on other platforms (refer to "Porting The Voice Environment and Applications" for further information). Portal Server software and voice applications should use the same VoiceXML browser.
- Mobile Access software supports VoiceXML applications only. If you have a legacy voice application that was not built using VoiceXML, you must provide a VoiceXML wrapper for it. This wrapper will integrate with Portal Server software and manage the voice session with the non-VoiceXML application.
- If you are not familiar with building voice applications using VoiceXML, consult a book on VoiceXML programming. The W3C VoiceXML 2.0 specification is the definitive reference on the programmatic elements of the language.
A Voice Application Example
The best way to start building voice applications is to examine the Notes application included with Portal Server software. The Notes application consists of a provider (NotesProvider) that uses template files for each type of access device (web browser, wireless device, and voice browser). The voice application template files for the NotesProvider are stored in the following directory:
/etc/opt/SUNWps/desktop/MAP/NotesProvider/vxml/Nuance
These files contain VoiceXML code in addition to template tags. The tags provide dynamic content when the dialogs are accessed. For example, the content.template file uses the [tag:count] tag to retrieve the number of notes and the [tag:note] tag to speak the notes using text-to-speech:
<prompt bargein="true"> [tag:note]</prompt>
The prompts for this voice application are Microsoft Windows audio (.wav) files, stored in the following directory:
/opt/SUNWam/servers/docs/voice/en_US/prompts/gary
This path is constructed at runtime by concatenating the root directory (/voice), the locale identifier (en_US), a prompts sub-directory (prompts), and a persona (gary). The resulting path /voice/en_US/prompts/gary is relative to the Sun™ ONE Identity Server directory /opt/SUNWam/servers/docs.
Finally, each voice application must provide a grammar that allows the application to be selected from the voice desktop channel chooser. The grammar for the Notes application (notes.grammar) is located in the following directory:
/opt/SUNWps/web-src/jsp/default/Notes/vxml/Nuance/grammars
It contains the following grammar expression:
Notes [ (notes ?channel) ]
This allows the user to select the channel by speaking the phrase notes or notes channel.
Building a Voice Application

Creating a new voice application requires building a Portal Server custom provider, or extending an existing provider. For details on creating a custom provider, refer to the Sun™ ONE Portal Server 6.2 Developer's Guide.
Most interactive voice applications use dynamically-generated content. For example, a weather application might report the weather for a particular region when you speak the name of a city or postal code. The dynamic content (the weather in this case) is retrieved from a weather service at runtime. For this reason, voice application dialogs are typically generated dynamically using techniques such as JavaServer™ Pages (JSP™) technology.
Providers could generate the VoiceXML dialogs programmatically, but the simplest approach is to build template files that contain the static dialog code, and use tags that are interpreted at runtime to retrieve the dynamic content.
The following steps describe how to build a provider that implements a voice-enabled weather application:
- Develop a dialog design.
Most voice applications consist of a set of dialogs. Each dialog is responsible for one part of the user interaction.
You can use a flow chart to represent the design of your voice application. It should include phrases spoken by the user, shown as transitions between the dialogs. Or, you can develop a script where the conversational flow between the voice application and a user is listed in chronological order.
Either way, you must handle cases where the user says something that was not understood, or the voice application did not receive any input.
- Build a prototype of your voice application in VoiceXML.
For dynamic content, begin by simply including static text as placeholder content. For example, in a weather application, you might always speak the same report using a <prompt> statement:
<prompt>
Here's the weather forecast for Santa Clara, California. Today it will be mostly sunny, with a high of 75 and a low of 68 degrees Fahrenheit.
</prompt>
The static content will be replaced with dynamic content once the prototype is complete.
- Test your application, exploring all possible dialog interactions.
- Integrate the application with the Portal Server software.
Adding a voice application to the Portal Server software involves building a custom provider. The simplest approach is to use template files like the NotesProvider described in the previous section. The template approach allows you to take the VoiceXML dialogs from the prototype and use them directly with your provider.
- Identify static placeholder text in your VoiceXML prototype.
Review your VoiceXML prototype and identify the places where you currently use static content as placeholders for dynamic content. In each case, you must build a custom tag that can generate the appropriate dynamic content at runtime. For example, the weather report might be implemented using a custom weather tag.
- Build a custom provider using templates.
Refer to the Sun™ ONE Portal Server 6.2 Developer's Guide for information on building a custom provider that uses templates. Follow the instructions for creating a new custom provider. Implement support for the custom tags required for dynamic content in the voice application.
- Edit the VoiceXML files.
While building the custom provider, you must make some changes to the VoiceXML files:
- Replace the static content with tags. For example, the static weather report shown above would be replaced with the following, assuming you have implemented support for a weather tag in your provider:
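Following the pattern of the [tag:note] tag in the Notes application, the replacement might look like this (the [tag:weather] tag name is illustrative; use whichever tag your provider implements):

<prompt bargein="true"> [tag:weather]</prompt>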
- Change the file extensions of the VoiceXML files from .vxml to .template. You must also update any references to other dialogs in the VoiceXML code to use the new file names.
- Move all files to the appropriate directories.
The files comprising your voice application must reside in specific directories. For a discussion of these directories, see "File System Directories for Dialogs, Grammars, and Prompts".
- Complete the installation of the new provider.
The basic steps are as follows. For detailed instructions, refer to the Sun™ ONE Portal Server 6.2 Developer's Guide.
- Compile and install the provider class file in the correct location.
- Install any resource bundle files.
- Create an XML channel entry for the provider and update the Display Profile.
- Add the channel to the VoiceJSPDesktopContainer.
- Add the channel to the Available and Visible list so that users can select it.
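As a sketch, the display profile channel entry for the weather application might look like the following fragment (the Weather channel name and WeatherProvider provider name are illustrative; refer to the Developer's Guide for the exact display profile syntax):

<Channel name="Weather" provider="WeatherProvider">
    <Properties>
        <String name="title" value="Weather"/>
    </Properties>
</Channel>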
The following sample interaction shows a user adding the Weather channel from the voice Portal Desktop:

User:
Add a channel.
System:
Sure;
Here's the list of channels you can add:
E-Mail, Calendar, Weather
That's it.
Tell me which one you want to add, or say cancel.
User:
Weather
System:
All right;
Weather has been added.
Would you like to go there?
User:
Yes
System:
OK, Weather.
When you're done, say main menu.
Here's the weather forecast for Santa Clara, California:
Today it will be mostly sunny, with a high of 75 and a low of 68 degrees Fahrenheit.
OK, we're back at the portal main menu. What’s next?
File System Directories for Dialogs, Grammars, and Prompts

Portal Server software has specific directories for the various provider components. This section discusses the directories for:
Dialogs
The dialogs for each voice application are stored in a separate directory that includes the name of the application, the presentation format (vxml) and the name of the voice browser vendor (Nuance).
For a weather application, the dialog files would be stored in:
/etc/opt/SUNWps/desktop/MAP/weather/vxml/Nuance
where weather is the name of the new application.
Grammars
If your application uses external grammar files, they should be stored in the web server’s document root, or in some other well-known location within the Portal Server web application.
To make the application accessible from the voice Portal Desktop, you must create a second grammar file that allows the user to select the application. The grammar for the channel must be unique across all of the voice-enabled channels. For consistency, the grammar should allow the user to optionally speak the word channel after the name of the channel.
For example, the following grammar allows weather or weather channel:
Weather [ (weather ?channel) ]
Name this file weather.grammar and store it in the following directory:
/opt/SUNWps/web-src/jsp/default/weather/vxml/Nuance/grammars
Prompts
The voice prompts are located in the following directory:
/opt/SUNWam/servers/docs/voice/en_US/prompts/gary
The path element gary is the name of the default persona: the person whose voice appears on the recording. If you record new prompts, you should create a new directory for the new persona. This new directory could be named after the person who recorded the prompts.
For example:
/opt/SUNWam/servers/docs/voice/en_US/prompts/cheryl
Voice prompt file names use the naming convention described in "Re-recording Voice Prompts": the file name is formed from the words of the prompt, separated by underscores.
To use this prompt directory in your voice application, prepend the path /voice/en_US/prompts/cheryl (or the path to your prompts) before the prompt file name.
For example, if your prompts were stored in a prompts/ sub-directory, replace this statement:
<audio src="thats_it.wav" />
with:
<audio src="/voice/en_US/prompts/cheryl/thats_it.wav" />
For localization convenience, you might want to define some VoiceXML variables in your application's root document, and construct the path from this:
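For example, a root document might define the variable like this (the promptPath name matches its use in the following example):

<var name="promptPath" expr="'/voice/en_US/prompts/cheryl/'"/>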
Then use the promptPath variable as follows, using expr= instead of src=:
<audio expr="promptPath + 'thats_it.wav'" />
Porting The Voice Environment and Applications

To provide voice functionality with non-Nuance VoiceXML platforms, the Portal Server voice environment and voice applications must be ported.
The following issues are commonly encountered while porting:
- Each VoiceXML dialog file contains an XML DTD header, and this header must be correct for the voice browser you are using. To determine the correct XML DTD header, refer to your voice browser's documentation.
- Voice browser vendors sometimes use different grammar file formats. These could be proprietary, or based on evolving industry standards. Check to see if the grammar used by your voice browser is compatible with the Nuance format used by the Portal Server software, and modify the grammars if necessary. Some grammars may be in-line in VoiceXML dialogs, or in external grammar files.
- If you develop VoiceXML applications that use client-side scripting, such as ECMAScript or JavaScript, be aware of interpreter differences between voice platforms.
- Some voice browsers may not support the full set of VoiceXML tags. The voice browser documentation usually lists which tags the browser supports. You may need to modify VoiceXML code to remove unsupported tags.
- Some VoiceXML tags behave differently between voice browsers. Although the code may execute, the behavior may change subtly. Thoroughly test all dialog states in your application, even if it appears to execute correctly.
- If pre-recorded prompts are not played correctly, you may need to change the encoding from the default 8-bit, 8 kHz, mu-law sphere encoded WAV audio file format.
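As an illustration of the first issue above, a VoiceXML 2.0 document typically begins with a header such as the following; treat this as a sketch, because the exact DOCTYPE your voice browser requires may differ:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE vxml PUBLIC "-//W3C//DTD VOICEXML 2.0//EN"
    "http://www.w3.org/TR/voicexml20/vxml.dtd">
<vxml version="2.0">
    <!-- dialogs go here -->
</vxml>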
Localizing Voice Applications

Voice applications are more locale-dependent than conventional software applications. Not only must the language that the computer uses to communicate with the user be modified for the user's locale, but the voice interface must also be modified so that it recognizes the language spoken by the user. There may even be differences within the same language based on location. Localizing a voice application therefore requires careful attention to language and regional differences.
Localization of a voice application involves the tasks described in the following sections:
Re-recording Voice Prompts
Mobile Access software uses a naming scheme for recorded prompts, where prompt file names are based on the words in the prompt with underscores between words.
For example, the prompt:
Here are your notes
is named
here_are_your_notes.wav
For long phrases, the file name is truncated at 54 characters with 50 for the file name and 4 for the extension .wav.
The phrase:
Tell me which channel you want to add, or say cancel
would be named:
tell_me_which_channel_you_want_to_add_or_say_cance.wav
Using English phrases for prompt names greatly improves the readability of VoiceXML code for English-speaking developers.
To localize an application that contains prompts, it is not necessary to change the prompt file names. Instead, the prompts can be re-recorded in the new language and saved with the same file name.
The prompt:
Here are your notes
would become:
Voici vos notes
in French, but the file name would remain here_are_your_notes.wav. This approach allows the language used by the application to be changed without editing any of the VoiceXML <audio> tags.
Grammar Translation
VoiceXML applications use grammars to define the phrases that the user can speak in any given dialog. For example, when logging into the Portal Server software, the user is prompted to enter an account number. In this case, the grammar allows the user to speak a sequence of numbers. Once logged in, a user can access a channel by speaking the channel name, such as email.
In VoiceXML applications, grammars can be included in-line in the VoiceXML dialog code, for example:
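For example, a field might contain a small in-line grammar directly (this yes/no fragment is illustrative, written in the Nuance grammar format used elsewhere in this chapter):

<field name="confirm">
    <grammar> [ yes no ] </grammar>
    <prompt>Please say yes or no</prompt>
</field>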
The other option is to store the grammars in a file and reference that file from the VoiceXML dialog.
For example:
<form id="channelcommand">
    <field name="action" slot="action">
        <grammar src="grammars/overview.grammar#NextAction"/>
    </field>
</form>
In this case the grammar is located in the file grammars/overview.grammar.
Voice applications, such as Personal Notes and Bulletin Board, generally use in-line grammars, although external grammar files can also be used.
For external grammar files, localization involves replacing the English words or phrases in the grammar file with their equivalents in the new language. Do not simply replace words with their dictionary equivalents. Instead, choose words or phrases that native speakers would use in everyday conversation.
When several ways of saying the same thing are available, include these alternatives in the grammar. For example, a grammar might allow the user to exit by saying goodbye, exit, or quit.
For in-line grammars, you must first identify which files contain them. Search for files that contain the string <grammar> but do not include a src= attribute (which indicates an external grammar file). Replace the words or phrases as you would for an external grammar file, but be careful not to inadvertently modify other parts of the VoiceXML code.
Modifying Pre-Recorded Prompts to Match Grammar Changes
Some voice prompts contain instructions on how to interact with the system, such as "to end your session, say goodbye." In this case, the application grammar is defined to recognize the word goodbye. When localizing a voice application, you must take care to ensure that when you change a grammar, you also modify any prompts that refer to it. Typically, you will find audio prompts that give instructions to the user in <noinput>, <nomatch>, and <help> VoiceXML tags.
Before the recording artist re-records these prompts for the new language, make a note of any grammar changes, and update the localized prompt phrase to match the grammar.
Updating Concatenated Phrases
Sentences in voice applications are frequently constructed from individual phrases and words. For example, the phrase Today is Tuesday April 8th, 2003 might be constructed from the following eight words and phrases: Today is; Tuesday; April; eighth; two; thousand; and; three. The VoiceXML code plays these prompts in order.
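Such a sentence might be assembled in VoiceXML by playing the individual prompt files in sequence (the file names here are illustrative):

<prompt>
    <audio src="today_is.wav"/>
    <audio src="tuesday.wav"/>
    <audio src="april.wav"/>
    <audio src="eighth.wav"/>
    <audio src="two.wav"/>
    <audio src="thousand.wav"/>
    <audio src="and.wav"/>
    <audio src="three.wav"/>
</prompt>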
Localizing the application may require that words be concatenated in a different order. Recording the prompts in the localized language may not result in a correctly structured sentence.
This problem has two solutions:
- Test the localized application by interacting with it to detect any instances where the phraseology is incorrect. This is the simplest approach for localization teams who are not familiar with VoiceXML.
- Perform a code review, identifying prompt concatenation in the code, and making changes to the prompts as necessary. In some cases you may have to add new prompts to account for significant changes in sentence structure. You may need to go back to the recording studio to record new prompts.
Concatenated phrases may also suffer from cadence issues. Cadence is the way that individual words and phrases flow within a sentence. In some languages, words flow together without pauses. This may require the removal of silence at the beginning or end of recorded prompts, or in some cases, the recording of a single phrase to replace several concatenated words.
Cadence issues are usually discovered during testing and can often be resolved with careful prompt editing. If you edit or re-record a prompt to work well in one concatenation, it may not work correctly if it is used elsewhere in a different part of the sentence. If you make a change to a prompt in one dialog, check all other cases where that prompt is used to ensure that the change does not adversely affect them.
Sometimes the pronunciation of a word changes depending on the words immediately preceding or following it, or, if the language has masculine and feminine forms of words, depending on the gender of the object. Review the prompt phrases before recording and make notes for the recording artist where a particular pronunciation is required. If the article in the sentence is only known at run-time, you may need to add VoiceXML code to select the correct pronunciation depending on the gender of the article.
Translating Text-To-Speech Prompts
In addition to pre-recorded prompts, some voice applications use text-to-speech (TTS) prompts. These prompts appear as English text in VoiceXML code within <prompt> statements. For example:
<prompt>Please say yes or no</prompt>
TTS prompts can also be used in conjunction with pre-recorded prompts:
<prompt><audio src="you_have.wav"/> 5 <audio src="unread_messages.wav"/> </prompt>
In this example, TTS is used for the word five in the phrase you have five unread messages.
Finally, VoiceXML variables may be used in TTS prompts:
<prompt><value expr="num_messages"/></prompt>
In this example, the digit 5 and the variable num_messages are spoken using TTS. No localization work is required because the TTS engine for the new locale automatically speaks the number in the new language. However, variables may also be assigned values that correspond to English words or phrases that the TTS engine will not translate. In such cases you must identify any place in the VoiceXML code where English language strings are assigned to variables. Look for <assign> tags such as:
<assign name="prompt" expr="'OK, got it!'"/>
You must change any embedded English language words that would be spoken using TTS. The easiest way to identify these prompts is to search for <prompt> tags.
Making a URLScraper Channel Accessible By Telephone

To make a URLScraper channel accessible by phone, perform the following steps:
- Create a URLScraper channel with the name TestURLScraper
- Provide a valid URL to this channel, such as http://wap.example.com
- Create a wrapped channel named WrappedTestURL
- Create a WrappedTestURL directory under /etc/opt/SUNWps/desktop/default
- Create an aml directory under WrappedTestURL
- Copy contentWrapper.jsp from RenderingWrappingProvider/aml to WrappedTestURL/aml
- In contentWrapper.jsp add AmlContainer tags as follows:
<%-- Copyright 2001 Sun Microsystems, Inc. All rights reserved.
     PROPRIETARY/CONFIDENTIAL. Use of this product is subject to
     license terms. --%>
<%@ page import="com.sun.portal.wireless.providers.rendering.wrapping.RenderingWrappingProvider" %>
<%@ page session="false" %>
<AmlContainer>
<%
    // Get the container
    RenderingWrappingProvider rwp =
        (RenderingWrappingProvider) pageContext.getAttribute("JSPProvider");
    StringBuffer sb = rwp.getWrappedChannelContent(request, response);
    if (sb == null) sb = new StringBuffer("");
    out.println(sb);
%>
</AmlContainer>