Features

Here are the features that you can configure in the Oracle Web SDK.

Absolute and Relative Timestamps

  • Feature flag: timestampFormat: 'none'

    Note:

    enableTimestamp: true (default: true) has been deprecated.
  • Feature configuration: timestampFormat

You can enable absolute or relative timestamps for chat messages. Absolute timestamps display the exact time for each message. Relative timestamps display only on the latest message and express the time in terms of seconds, minutes, hours, days, months, or years ago relative to the previous message.
Illustration: relative_v_absolute_timestamps.png

The precision afforded by absolute timestamps makes them ideal for archival tasks, but within the limited context of a chat session, this precision detracts from the user experience because users must compare timestamps to gauge the passage of time between messages. Relative timestamps allow users to track the conversation easily through terms like Just Now and A few moments ago that can be immediately understood. Relative timestamps improve the user experience in another way while also simplifying your development tasks: because relative timestamps mark the messages in terms of seconds, minutes, hours, days, months, or years ago, you don't need to convert them for time zones.

How Relative Timestamps Behave

As previously mentioned, a relative timestamp appears only on the latest message. Here's that behavior in a little more detail. When you configure the timestamp (timestampMode: 'relative' or timestampMode: 'default'), an absolute timestamp displays before the first message of the day as a header. This header displays when the conversation has not been cleared and older messages are still available in the history.

A relative timestamp then displays on each new message.
Illustration: most_recent_message_timestamp.png
This timestamp is updated at the following regular intervals (seconds, minutes, and so on) until a new message is received:
  • For the first 10 seconds
  • Between 10 and 60 seconds
  • Every minute between 1 and 60 minutes
  • Every hour between 1 and 24 hours
  • Every day between 1 and 30 days
  • Every month between 1 and 12 months
  • Every year after the first year
When a new message is loaded into the chat, the relative timestamp on the previous message is removed and a new timestamp appears on the new message, displaying the time relative to the previous message. At that point, the relative timestamp updates until the next message arrives.

Add a Relative Timestamp

To add a relative timestamp:
  • Enable timestamps – enableTimestamp: true

    Note:

    This feature flag has been deprecated in Release 22.02 in favor of timestampFormat: 'none'.
  • Enable relative timestamps – timestampMode: 'relative'
  • Optional steps:
    • Set the color for the relative timestamp – timestamp: '<a hexadecimal color value>'
    • For multi-lingual skills, localize the timestamp text using these keys:
      Key Default Text Description
      relTimeNow Now The initial timestamp, which displays for the first 9 seconds. This timestamp also displays when the conversation is reset.
      relTimeMoment a few moments ago Displays for 10 to 60 seconds.
      relTimeMin {0}min ago Updates every minute.
      relTimeHr {0}hr ago Updates every hour.
      relTimeDay {0}d ago Updates every day for the first month.
      relTimeMon {0}mth ago Updates every month for the first twelve months.
      relTimeYr {0}yr ago Updates every year.
    • Use the timestampFormat setting to change the format of the absolute timestamp that displays before the first message of each day.
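For example, here's a minimal settings sketch, assuming the connection values shown are placeholders for your own instance, that enables relative timestamps and localizes a few of the relative-time strings:
var chatWidgetSettings = {
    URI: YOUR_URI,
    channelId: YOUR_CHANNELID,
    timestampMode: 'relative',        // show a relative timestamp on the latest message
    i18n: {
        en: {
            relTimeNow: 'Now',
            relTimeMoment: 'a few moments ago',
            relTimeMin: '{0}min ago'
        }
    }
};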

Autocomplete

  • Feature flag: enableAutocomplete: true (default: false)
  • Enable client-side caching: enableAutocompleteClientCache
Autocomplete minimizes user error by providing effective phrases that can be used both as direct input and as suggestions. To enable this feature, update the widget settings with enableAutocomplete: true and add a set of optimized user messages to the Create Intent page. Once enabled, a popup displays these messages after users enter three or more characters. The words in the suggested messages that match the user input are set off in bold. From there, users can enter their own input or opt for one of the autocomplete messages instead.

Note:

This feature is only available over WebSocket.


When a digital assistant is associated with the Oracle Web channel, all of the sample utterances configured for any of the skills registered to that digital assistant can be used as autocomplete suggestions.
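For example, here's a settings sketch that turns on autocomplete along with client-side caching (the connection values are placeholders, and enabling the cache with true is an assumption):
var chatWidgetSettings = {
    URI: YOUR_URI,
    channelId: YOUR_CHANNELID,
    enableAutocomplete: true,             // show suggested messages after three or more characters
    enableAutocompleteClientCache: true   // cache suggestions on the client
};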

Auto-Submitting a Field

When a field has the autoSubmit property set to true, the client sends a FormSubmissionMessagePayload with the submittedField map containing the valid field values that have been entered so far. Any fields that are not yet set (regardless of whether they are required), or fields that violate a client-side validation, are not included in the submittedField map. If the auto-submitted field itself contains a value that's not valid, then the submission message is not sent and the client error message displays for that particular field. When an auto-submit succeeds, the partialSubmitField in the form submission message is set to the id of the autoSubmit field.
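Purely as an illustration of the behavior described above, a submission message for an auto-submitted field might be shaped roughly as follows; the field name email is hypothetical and the exact wire format may differ:
{
    messagePayload: {
        type: 'formSubmission',             // a FormSubmissionMessagePayload
        submittedField: {
            email: 'jane.doe@example.com'   // only valid, already-entered field values are included
        },
        partialSubmitField: 'email'         // the id of the autoSubmit field that triggered the submission
    }
}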

Replacing a Previous Input Form

When the end user submits the form (for example, because a field has autoSubmit set to true), the skill can send a new EditFormMessagePayload. That message should replace the previous input form message. By setting the replaceMessage channel extension property to true, you enable the SDK to replace the previous input form message with the current input form message.

Automatic RTL Layout

When the host page's base direction is set with <html dir="rtl"> to accommodate right-to-left (RTL) languages, the chat widget automatically renders on the left side. Because the widget is left-aligned for RTL languages, its icons and text elements are likewise repositioned. The icons are in the opposite positions from where they would be in a left-to-right (LTR) rendering. For example, the send, mic, and attachment icons are flipped so that the mic and send icons occupy the left side of the input field (with the directional send icon pointing left) while the attachment icon is on the right side of the input field. The alignment of the text elements, such as inputPlaceholder and chatTitle, is based on whether the text language is LTR or RTL. For RTL languages, the inputPlaceholder text and chatTitle appear on the right side of the input field.

Avatars

By default, none of the messages in the chat are accompanied by avatars. Using the following parameters, however, you can configure avatars for the skill, the user, and, when the skill is integrated with live agent support, the agent.
  • avatarBot - The URL of the image source, or the source string of the SVG image that's displayed alongside the skill messages.
  • avatarUser - The URL of the image source, or the source string of the SVG image that's displayed alongside the user messages. Additionally, if the skill has a live agent integration, the SDK can be configured to show a different icon for agent messages.
  • avatarAgent - The URL of the image source, or the source string of the SVG image that's displayed alongside the agent messages. If this value is not provided, but avatarBot is set, then the avatarBot icon is used instead.

Note:

These settings can only be passed in the initialization settings. They cannot be modified dynamically.
new WebSDK({
    URI: '<URI>',
    //...,
    icons: {
        avatarBot: '../assets/images/avatar-bot.png',
        avatarUser: '../assets/images/avatar-user.jpg',
        avatarAgent: '<svg xmlns="http://www.w3.org/2000/svg" height="24" width="24"><path d="M12 6c1.1 0 2 .9 2 2s-.9 2-2 2-2-.9-2-2 .9-2 2-2m0 9c2.7 0 5.8 1.29 6 2v1H6v-.99c.2-.72 3.3-2.01 6-2.01m0-11C9.79 4 8 5.79 8 8s1.79 4 4 4 4-1.79 4-4-1.79-4-4-4zm0 9c-2.67 0-8 1.34-8 4v3h16v-3c0-2.66-5.33-4-8-4z"/></svg>'
    }
})

Cross-Tab Conversation Synchronization

Feature flag: enableTabsSync: true (default: true)

Users may need to open the website in multiple tabs for various reasons. With enableTabsSync: true, you can synchronize and continue the user's conversation from any tab, as long as the connection parameters (URI, channelId, and userId) are the same across all tabs. This feature ensures that users can view messages from the skill on any tab and respond from the same tab or any other one. Additionally, if the user clears the conversation history in one tab, then it's deleted from the other tabs as well. If the user updates the chat language on one tab, then the chat language gets synchronized to the other tabs.

There are some limitations:
  • When opened, a new tab synchronizes with the existing tab(s) only for new messages between the user and the skill. If you have not configured the SDK to display messages from the conversation history, the chat widget on the new tab will initially appear empty.
  • If you have configured the SDK to display conversation history, the messages from the current chat on existing tabs appear as part of the conversation history on a new tab. Setting disablePastActions to all or postback may prevent interaction with the actions for those messages in the new tab.
  • The Safari browser currently does not support this feature.
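The idea in a sketch: every tab must be initialized with the same connection parameters for synchronization to work (the values shown are placeholders):
var chatWidgetSettings = {
    URI: YOUR_URI,             // must match across tabs
    channelId: YOUR_CHANNELID, // must match across tabs
    userId: YOUR_USERID,       // must match across tabs
    enableTabsSync: true       // default is true
};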

Custom Message Rendering

Feature flag: delegate.render: (message) => boolean (default: undefined)

Use this feature to override the default message rendering with your own custom message template. To use this feature, you need to implement the render delegate function which takes the message model as the input and returns a boolean flag as the output. It must return true to replace the default rendering with your custom rendering for a particular message type. If false is returned, the default message is rendered instead.

Note:

For custom rendering, all of the action click handling, and the disabling or enabling of actions, must be handled explicitly.
You can use any external framework component for your message rendering. Refer to the integration samples included in the SDK's samples directory to see how you can use this feature with frameworks such as React, Angular, and Oracle JavaScript Extension Toolkit (JET).

Default Client Responses

Feature flag: enableDefaultClientResponse: true (default: false)

Use this flag to provide default client-side responses, along with a typing indicator, when the skill's response is delayed or when there's no skill response at all. If the user sends the first message or query but the skill does not respond within the number of seconds set by the defaultGreetingTimeout flag, the client can display a greeting message that's configured using the defaultGreetingMessage translation string. Next, the client checks again for the skill response. The client displays the skill response if it has been received; if it hasn't, then the client displays a wait message (configured with the defaultWaitMessage translation string) at intervals set by defaultWaitMessageInterval. When the wait for the skill response exceeds the threshold set by the typingIndicatorTimeout flag, the client displays a sorry response to the user and stops the typing indicator. You can configure the sorry response using the defaultSorryMessage translation string.
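Here's a hedged sketch of the related settings; the timeout and interval values are examples only, and the translation strings show the kind of text you might provide:
var chatWidgetSettings = {
    URI: YOUR_URI,
    channelId: YOUR_CHANNELID,
    enableDefaultClientResponse: true,
    defaultGreetingTimeout: 5,        // seconds to wait before showing the greeting message (example value)
    defaultWaitMessageInterval: 5,    // seconds between wait messages (example value)
    typingIndicatorTimeout: 20,       // seconds before the sorry response displays (default is 20)
    i18n: {
        en: {
            defaultGreetingMessage: 'Hey, how can I help you?',
            defaultWaitMessage: 'I\'m still working on your request.',
            defaultSorryMessage: 'Sorry, I can\'t get you a response right now.'
        }
    }
};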

Delegation

Feature configuration: delegate

The delegation feature sets a delegate to receive callbacks before certain events in the conversation. To set a delegate, pass the delegate parameter, or use the setDelegate method. The delegate object may optionally contain the beforeDisplay, beforeSend, beforePostbackSend, beforeEndConversation and render delegate functions.
var delegate = {
    beforeDisplay: function(message) {
        return message;
    },
    beforeSend: function(message) {
        return message;
    },
    beforePostbackSend: function(postback) {
        return postback;
    },
    beforeEndConversation: function(message) {
        return new Promise((resolve, reject) => {
            setTimeout(() => {
                resolve(message);
            }, 2000);
        });
    },
    render: function(message) {
        if (message.messagePayload.type === 'card') {
            // Perform custom rendering for card using msgId
            return true;
        }
        return false;
    }
}

beforeDisplay

The beforeDisplay delegate allows a skill's message to be modified before it is displayed in the conversation. The message returned by the delegate displays instead of the original message. If the delegate returns a falsy value like null, undefined, or false, then the message is not displayed. If the delegate errors out, then the original message is displayed instead of the message returned by the delegate. You can use the beforeDisplay delegate to selectively apply the in-widget WebView linking behavior.

beforeSend

The beforeSend delegate allows a user message to be modified before it is sent to the chat server as part of sendMessage. The message returned by the delegate is sent to the skill instead of the original message. If the delegate returns a falsy value like null, undefined, or false, then the message is not sent. If the delegate errors out, the original message is sent instead of the message returned by the delegate.

beforePostbackSend

The beforePostbackSend delegate is similar to beforeSend, just applied to postback messages from the user. The postback returned by the delegate is sent to the skill. If it returns a falsy value, like null, undefined, or false, then no message is sent.

beforeEndConversation

The beforeEndConversation delegate allows an interception at the end of a conversation flow if some pre-exit activity must be performed. The function receives the exit message as its input parameter and it must return a Promise. If this Promise resolves with the exit message, then the CloseSession exit message is sent to the chat server. Otherwise, the exit message is prevented from being sent.
...

beforeEndConversation: function(message) {
    return new Promise((resolve, reject) => {
        setTimeout(() => {
            resolve(message);
        }, 2000);
    });
}

render

The render delegate allows you to override the default message rendering. If the render delegate function returns true for a particular message type, then the WebSDK creates a placeholder slot instead of the default message rendering. To identify the placeholder, add the msgId of the message as the id of the element. In the render delegate function, you can use this identifier to get the reference for the placeholder and render your custom message template. See Custom Message Rendering.
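For example, here's a minimal sketch of custom card rendering. It assumes the placeholder element carries the message's msgId as its id and that the lookup may need to be deferred until the WebSDK has attached the placeholder to the DOM:
var delegate = {
    render: function (message) {
        if (message.messagePayload.type === 'card') {
            // Defer the lookup until the placeholder slot exists in the DOM
            setTimeout(function () {
                var placeholder = document.getElementById(message.msgId);
                if (placeholder) {
                    var custom = document.createElement('div');
                    custom.textContent = 'Custom card rendering goes here';
                    placeholder.appendChild(custom);
                }
            }, 0);
            return true;    // suppress the default rendering for this message
        }
        return false;       // fall back to the default rendering
    }
};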

Draggable Launch Button

Feature flag: enableDraggableButton: true (default: false)

Sometimes, particularly on mobile devices where the screen size is limited, the chat widget's launch button can block content in a web page. By setting enableDraggableButton: true, you can enable users to drag the launch button out of the way when it's blocking the view. This flag only affects the location of the launch button, not the chat widget: the widget will still open from its original location.

Dynamic Typing Indicator

Feature flag: showTypingIndicator: true

A typing indicator tells users to hold off on sending a message because the skill is preparing a response. By default, skills display the typing indicator only for their first response when you initialize the SDK with showTypingIndicator: true. For an optimal user experience, the skill should have a dynamic typing indicator, which is a typing indicator that displays after each skill response. Besides making users aware that the skill has not timed out but is still actively working on a response, displaying the typing indicator after each skill response ensures that users won't attempt to send messages prematurely, as might be the case when the keepTurn property directs the skill to reply with a series of separate messages that don't allow the user to interject a response.

To enable a typing indicator after each skill response:
  • Initialize the SDK with showTypingIndicator set to true.
  • Call the showTypingIndicator API.
The showTypingIndicator API can only enable the display of the dynamic typing indicator when:
  • The widget is connected to the Oracle Chat Server. The dynamic typing indicator will not appear when the connection is closed.
  • The SDK has been initialized with showTypingIndicator set to true.

    Note:

    This API cannot work when the SDK is used in headless mode.
The typing indicator displays for the duration set by the optional property, typingIndicatorTimeout, which has a default setting of 20 seconds. If the API is called while a typing indicator is already displaying, then the timer is reset and the indicator is hidden.

The typing indicator disappears as soon as the user receives the skill's messages. If a user enters a message, uploads an attachment, or sends a location while the typing indicator is displaying, it moves to the bottom of the chat window.
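A hedged usage sketch, assuming the SDK instance is exposed as Bots and that showTypingIndicator takes no arguments:
// Requires initialization with showTypingIndicator: true and an open
// connection to the Oracle Chat Server.
Bots.sendMessage('What is the status of my order?');
Bots.showTypingIndicator();   // displays until the skill responds or typingIndicatorTimeout elapses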

Control Embedded Link Behavior

  • Custom handling: linkHandler: { onclick: <function>, target: '<string>' }
  • In the in-widget webview: linkHandler: { target: 'oda-chat-webview' }
  • In a new window: openLinksInNewWindow: true
In addition to opening links within a new window by setting openLinksInNewWindow: true, or accepting the default behavior of opening links in a new tab, you can also open links so that they overlay the widget's web page. To enable this and other overrides of the linking behavior, initialize the SDK with:
linkHandler: {
    target: '_blank',     // open link in a new page
    onclick: (event) => {
        // some operation
    }
}
Use linkHandler to:
  • Control iframe navigation so that the widget can continue to overlay the page without having to be included in every page or reopened upon navigation, while maintaining the same user ID.

  • Open some links in a new window, while opening others in the same tab.
  • Perform an action when a link is clicked.
  • Prevent a link from opening.
  • Open a link in a webview.
To override the linking behavior set by the openLinksInNewWindow setting, you must define one, or both, of these attributes:
  • target – Names the browsing context where the link opens, such as a tab, a window, or an iframe. The value is set as the target attribute of an anchor element (<a>). You can use the _self (the current browsing context), _blank, _parent, and _top values.
  • onclick - Accepts a callback function that is called when the link is clicked. The callback is passed the MouseEvent that's received on the click, and can be used to perform an action, or even prevent the link from opening.
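For example, here's a sketch that opens links in the current tab but intercepts clicks, blocking a hypothetical internal host and logging everything else; it assumes the click lands on or inside an anchor element:
linkHandler: {
    target: '_self',    // open links in the current browsing context
    onclick: function (event) {
        var link = event.target.closest('a');
        if (link && link.hostname === 'internal.example.com') {
            event.preventDefault();    // prevent this link from opening
        } else if (link) {
            console.log('Link clicked:', link.href);
        }
    }
}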

Embedded Mode

  • Feature flag: embedded: true (default: false)
  • Pass the ID of the target container element: targetElement
In addition to the other settings that customize the look and feel of the widget that runs the chat, you can embed the widget itself in the Web page by:
  • Adding embedded: true.
  • Defining the targetElement property with the ID of the DOM element (an HTML component) that's used as the widget's container (such as 'container-div' in the following snippet).
<head>
    <meta charset="utf-8">
    <title>Oracle Web SDK Sample</title>
    <script src="scripts/settings.js"></script>
     <script>
        var chatWidgetSettings = {
            URI: YOUR_URI,
            channelId: YOUR_CHANNELID,
            embedded: true,
            targetElement: 'container-div'
...

    </script> 
</head>
<body>
    <h3 align="center">The Widget Is Embedded Here!</h3>
    <div id="container-div"
         style="height: 600px; width: 380px; padding: 0; text-align: initial">
    </div>
</body>

Note:

The widget occupies the full width and height of the container. If it can't be accommodated by the container, then the widget won't display in the page.

End the Conversation Session

Feature flag: enableEndConversation: true (default: true)

Starting with Version 21.12, the SDK adds a close button to the chat widget header by default (enableEndConversation: true) that enables users to end the current session.

After users click this button, the SDK presents them with a confirmation prompt whose text ("Are you sure you want to end the conversation? This will also clear your conversation history.") you can customize with the endConversationConfirmMessage and endConversationDescription keys. When a user confirms the prompt by clicking Yes, the SDK sends the skill an event message that marks the current conversation session as ended. The instance then disconnects from the skill, collapses the chat widget, and erases the current user's conversation history. It also raises a chatend event that you can register for:
Bots.on('chatend', function() {
    console.log('The conversation is ended.');
});
Opening the chat widget afterward starts a new conversation session.

Note:

You can also end a session by calling the Bots.endChat() method (described in the reference that accompanies the Oracle Web SDK that's available from the Downloads page). Calling this method may be useful when the SDK is initialized in headless mode.

Focus on the First Action in a Message

Feature flag: focusOnNewMessage: 'action' (default: 'input')

For users who prefer keyboard-based navigation (which includes power users), you can shift the focus from the user input field to the first (or leftmost) action button in a message. By default, the chat widget sets the focus back to the user input field with each new message (focusOnNewMessage: 'input'). This works well for dialog flows that expect a lot of textual input from the user, but when the dialog flow contains a number of messages with actions, users can only select these actions through mousing or reverse tab navigation. For this use case, you can change the focus to the first action button in the skill message as it's received by setting focusOnNewMessage: 'action'. If the message does not contain any actions, the focus is set to the user input field.

Keyboard Shortcuts and Hotkeys

By defining the hotkeys object, you can create Alt Key combination shortcuts that activate, or shift focus to, UI elements in the chat widget. Users can execute these shortcuts in place of using the mouse or touch gestures. For example, users can enter Alt + L to launch the chat widget and Alt + C to collapse it. You assign the keyboard keys to elements using the hotkeys object's key-value pairs. For example:
var settings = {
    // ...,
    hotkeys: {
        collapse: 'c',  // Usage: press Alt + C to collapse the chat widget when chat widget is expanded
        launch: 'l'     // Usage: press Alt + L to launch the chat widget when chat widget is collapsed
    }
};
When creating these key-value pairs:
  • You can pass only a single letter or digit for a key.
  • You can use only keyboard keys a-z and 0-9 as values.
You can pass the hotkey attribute by defining the following keys.

Note:

The attribute is not case-sensitive.
Key Element
clearHistory The button that clears the conversation history.
close The button that closes the chat widget and ends the conversation.
collapse The button that collapses the expanded chat widget.
input The text input field in the chat footer.
keyboard The button that switches the input mode from voice to text.
language The select menu that shows the language selection list.
launch The chat widget launch button.
mic The button that switches the input mode from text to voice.
send The button that sends the input text to the skill.
shareMenu The share menu button in the chat footer.
shareMenuAudio The menu item in the share menu popup that selects an audio file for sharing.
shareMenuFile The menu item in the share menu popup that selects a generic file for sharing.
shareMenuLocation The menu item in the share menu popup that selects the user location for sharing.
shareMenuVisual The menu item in the share menu popup that selects an image or video file for sharing.

Headless SDK

Feature flag: enableHeadless: true (default: false)

Similar to headless browsers, the SDK can also be used without its UI. The SDK maintains the connection to the server and provides APIs to send messages, receive messages, and get updates on the network status. You can use these APIs to interact with the SDK and to update the UI. To enable this feature, pass enableHeadless: true in the initial settings. The communication can be implemented as follows:
  • Sending messages - Call Bots.sendMessage(message) to pass any payload to the server.
  • Receiving messages - Listen for responses using Bots.on('message:received', <messageReceivedCallbackFunction>).
  • Getting connection status updates - Listen for updates on the status of the connection using Bots.on('networkstatuschange', <networkStatusCallbackFunction>). The callback has a status parameter that is updated with values from 0 to 3, each of which maps to a WebSocket state:
    • 0 : WebSocket.CONNECTING
    • 1: WebSocket.OPEN
    • 2: WebSocket.CLOSING
    • 3: WebSocket.CLOSED
  • Returning suggestions for a query – Call Bots.getSuggestions(utterance), which returns a Promise that resolves to the suggestions for the given query string. The Promise is rejected if it takes too long (approximately 10 seconds) to fetch the suggestions.
      Bots.getSuggestions(utterance)
          .then((suggestions) => {
              const suggestionString = suggestions.toString();
              console.log('The suggestions are: ', suggestionString);
          })
          .catch((reason) => {
              console.log('Suggestion request failed', reason);
          });

    Note:

    To use this API, you need to enable autocomplete (enableAutocomplete: true) and configure autocomplete for the intents.
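Putting these pieces together, here's a minimal headless sketch. It assumes the instance is connected with connect(), which returns a Promise, and uses placeholder connection values:
var Bots = new WebSDK({
    URI: YOUR_URI,
    channelId: YOUR_CHANNELID,
    enableHeadless: true
});

Bots.connect().then(function () {
    // Render skill responses with your own UI
    Bots.on('message:received', function (message) {
        console.log('Skill says:', message);
    });

    // Watch the WebSocket connection state (0-3)
    Bots.on('networkstatuschange', function (status) {
        console.log('Connection status:', status);
    });

    // Send a plain text message to the skill
    Bots.sendMessage('Hello');
});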

Multi-Lingual Chat

The Web SDK's native language support enables the chat widget to detect a user's language or allow users to select the conversation language. Users can switch between languages, but only between conversations, not during a conversation, because the conversation gets reset whenever a user selects a new language.

Enable the Language Menu

You can enable a menu that allows users to select a preferred language from a dropdown menu by defining the multiLangChat property with an object containing the supportedLangs array, which is comprised of language tags (lang) and optional display labels (label). Outside of this array, you can optionally set the default language with the primary key (primary: 'en' in the following snippet).
multiLangChat: {
    supportedLangs: [{
        lang: 'en'
    }, {
        lang: 'es',
        label: 'Español'
    }, {
        lang: 'fr',
        label: 'Français'
    }, {
        lang: 'hi',
        label: 'हिंदी'
    }],
    primary: 'en'
}
The chat widget displays the passed-in supported languages in a dropdown menu that's located in the header. In addition to the available languages, the menu also includes a Detect Language option. When a user selects a language from this menu, the current conversation is reset, and a new conversation is started with the selected language. The language selected by the user persists across sessions in the same browser, so the user's previous language is automatically selected when the user revisits the skill through the page containing the chat widget.

Tip:

You can add an event listener for the chatlanguagechange event (described in the reference that accompanies the Oracle Web SDK that's available from the Downloads page), which is triggered when a chat language has been selected from the dropdown menu or has been changed.
Bots.on('chatlanguagechange', function(language) {
    console.log('The selected chat language is', language);
});
Here are some things to keep in mind when configuring the language dropdown menu:
  • You need to define a minimum of two languages to enable the dropdown menu to display.
  • The label key is optional for the natively supported languages: fr displays as French in the menu, es displays as Spanish, and so on.
  • Labels for the languages can be set dynamically by passing the labels with the i18n setting. You can set the label for any language by passing it to its language_<languageTag> key. This pattern allows setting labels for any language, supported or unsupported, and also allows translations of the label itself in different locales. For example:
    i18n: {
        en: {
            language_de: 'German',
            language_en: 'English',
            language_sw: 'Swahili',
            language_tr: 'Turkish'
        },
        de: {
            language_de: 'Deutsch',
            language_en: 'Englisch',
            language_sw: 'Swahili',
            language_tr: 'Türkisch'
        }
    }
    If the i18n property includes translation strings for the selected language, then the text for fields like the input placeholder, the chat title, the hover text for buttons, and the tooltip titles automatically switch to the selected language. The field text can only be switched to a different language when there are translation strings for the selected language. If no such strings exist, then the language for the field text remains unchanged.
  • The widget automatically detects the language in the user profile and activates the Detect Language option if you omit the primary key.
  • While label is optional, if you've added a language that's not one of the natively supported languages, then you should add a label to identify the tag, especially when there is no i18n string for the language. For example, if you don't define label: 'हिंदी' for lang: 'hi', then the dropdown displays hi instead, contributing to a suboptimal user experience.

Disable Language Menu

Starting with Version 21.12, you can configure and update the chat language without having to configure the language selection dropdown menu by passing multiLangChat.primary in the initial configuration without a multiLangChat.supportedLangs array. The value passed in the primary variable is set as the chat language for the conversation.

Language Detection

In addition to the passed-in languages, the chat widget displays a Detect Language option in the dropdown. Selecting this option tells the skill to automatically detect the conversation language from the user's message and, when possible, to respond in the same language.

Note:

If you omit the primary key, the widget automatically detects the language in the user profile and activates the Detect Language option in the menu.

You can dynamically update the selected language by calling the setPrimaryChatLanguage(lang) API. If the passed lang matches one of the supported languages, then that language is selected. When no match can be found, Detect Language is activated. You can also activate the Detect Language option by calling the setPrimaryChatLanguage('und') API, where 'und' indicates undetermined, or by passing either multiLangChat: {primary: null} or multiLangChat: {primary: 'und'}.

You can update the chat language dynamically using the setPrimaryChatLanguage(lang) API even when the dropdown menu has not been configured. For example:
Bots.setPrimaryChatLanguage('fr')
You can dynamically update the language irrespective of whether the chat language is initially configured or not.

Note:

Voice recognition, when configured, is available when users select a supported language. It is not available when the Detect Language option is set. Selecting a language that is not supported by voice recognition disables the recognition functionality until a supported language has been selected.

Multi-Lingual Chat Quick Reference

To do this... ...Do this
Display the language selection dropdown to end users. Pass multiLangChat.supportedLangs.
Set the chat language without displaying the language selection dropdown menu to end users. Pass multiLangChat.primary.
Set a default language. Pass multiLangChat.primary with multiLangChat.supportedLangs. The primary value must be one of the supported languages included in the array.
Enable language detection. Pass primary: null or primary: 'und' with multiLangChat.
Dynamically update the chat language. Call the setPrimaryChatLanguage(lang) API.

In-Widget Webview

You can configure the link behavior in chat messages to allow users to access web pages from within the chat widget. Instead of having to switch from the conversation to view a page in a tab or separate browser window, a user can remain in the chat because the chat widget opens the link within a Webview.

Configure the Linking Behavior to the Webview

You can apply the webview to all links or, more typically, only to select links. You can also customize the webview itself.
  • To open all links in the webview, pass linkHandler: { target: 'oda-chat-webview' } in the settings. This sets the target of all links to oda-chat-webview, which is the name of the iframe in the webview.
  • To open only certain links in the webview while ensuring that other links open normally in other tabs or windows, use the beforeDisplay delegate. To open a specific message URL action in the webview, replace the action.type field’s 'url' value with 'webview'. When the action type is 'webview' in the beforeDisplay function, the action button will open the link in the webview when clicked.
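Here's a minimal beforeDisplay sketch that switches only certain URL actions to the webview. It assumes that URL actions carry their address in a url field and that example.com stands in for the links you want to keep inside the widget:
var delegate = {
    beforeDisplay: function (message) {
        var actions = (message.messagePayload && message.messagePayload.actions) || [];
        actions.forEach(function (action) {
            // Open only links that point at example.com in the in-widget webview
            if (action.type === 'url' && action.url && action.url.indexOf('example.com') !== -1) {
                action.type = 'webview';
            }
        });
        return message;
    }
};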

Open Links from Within the Webview

Links that are embedded within a page that displays within the WebView can only be opened within the WebView when they are converted into an anchor element (<a>), with a target attribute defined as target="oda-chat-webview".

Customize the WebView

You can customize the WebView with the webViewConfig setting, which accepts an object. For example:
{ referrerPolicy: 'no-referrer-when-downgrade', closeButtonType: 'icon', size: 'tall' }
The fields within this object are optional.

Note:

The configuration can also be updated dynamically by passing a webViewConfig object to the setWebViewConfig method. Every property in the object is optional.
Field Value Description
accessibilityTitle String The name of the WebView frame element for Web Accessibility.
closeButtonIcon String The image URL/SVG string that is used to display the close button icon.
closeButtonLabel String Text label/tooltip title for the close button.
closeButtonType
  • 'icon'
  • 'label'
  • 'iconWithLabel'
Sets how the close button is displayed in the WebView.
referrerPolicy ReferrerPolicy Indicates which referrer to send when fetching the frame's resource. The referrerPolicy value must be a valid directive. The default policy applied is 'no-referrer-when-downgrade'.
sandbox A String array An array of valid restriction strings that allows for the exclusion of certain actions inside the frame. The restrictions that can be passed to this field are included in the description of the sandbox attribute in MDN Web Docs.
size
  • 'tall'
  • 'full'
The height of the WebView compared to the height of the chat widget. When set to 'tall', it is 80% of the widget's height; when set to 'full', it equals the widget's height.
title String The title that's displayed in the header of the WebView container.
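For example, here's a hedged sketch of updating the webview dynamically, assuming setWebViewConfig is called on the SDK instance (all fields are optional; the values are illustrative):
Bots.setWebViewConfig({
    title: 'Details',
    size: 'tall',                    // 80% of the widget's height
    closeButtonType: 'icon',
    accessibilityTitle: 'In-widget webview'
});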
Not all links may be able to open inside the WebView. Here are some reasons why:
  • Pages which provide response header X-frame-options: deny or X-frame-options: sameorigin may not open in the WebView due to server-side restrictions that prevent the page from being opened inside iframes. In such cases, the WebView presents the link back to the user so that they can open it in a new window or tab.
  • Due to server-side restrictions, authorization pages like IDCS, Google Login, and Facebook Login cannot be opened inside the WebView, as authorization pages always return X-frame-options: deny to prevent clickjacking attacks.
  • External links can't open correctly within the WebView. Only links embedded in the conversation messages can be opened in the WebView.

    Note:

    Because external links are incompatible with the WebView, do not target any external link to be opened in the WebView.
When a link can't open in the WebView, the widget presents the user with some informational text and a link to the WebView, which opens the page in a new tab when clicked. You can customize this text using the webViewErrorInfoText i18n translation string:
settings = {
    URI: 'instance',
    //...,
    i18n: {
        en: {
            webViewErrorInfoText: 'This link can not be opened here. You can open it in a new page by clicking {0}here{/0}.'
        }
    }
}

Long Polling

Feature flag: enableLongPolling: true (default: false)

The SDK uses WebSockets to connect to the server and converse with skills. If for some reason the WebSocket is disabled over the network, traditional HTTP calls can be used to chat with the skill. This feature is known as long polling because the SDK must continuously call, or poll, the server to fetch the latest messages from the skill. This fallback feature can be enabled by passing enableLongPolling: true in the initial settings.

Typing Indicator for User-Agent Conversations

Feature flag: enableSendTypingStatus: boolean (default: false)

This feature allows agents to ascertain if users are still engaged in the conversation by sending the user status to the live agent. When enableSendTypingStatus is set to true, the SDK sends a RESPONDING typing status event along with the text that is currently being typed by the user to Oracle B2C Service or Oracle Fusion Service. This, in turn, displays a typing indicator on the agent console. When the user has finished typing, the SDK sends a LISTENING event to the service to hide the typing indicator on the agent console.

The typingStatusInterval configuration, which has a minimum value of three seconds, throttles the typing status update.

To send an Oracle B2C Service agent both the text as it's being typed by the user and the typing status, enableAgentSneakPreview (which by default is false) must be set to true and Sneak Preview must be enabled in Oracle B2C Service chat configuration.
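A hedged configuration sketch; the interval value is an example, and remember that Sneak Preview must also be enabled on the Oracle B2C Service side for text previews to reach the agent:
var chatWidgetSettings = {
    URI: YOUR_URI,
    channelId: YOUR_CHANNELID,
    enableSendTypingStatus: true,   // send RESPONDING/LISTENING status to the agent console
    typingStatusInterval: 3,        // throttle status updates; the minimum is three seconds
    enableAgentSneakPreview: true   // also send the in-progress text (Oracle B2C Service only)
};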

Note:

You do not have to configure live typing status on the user side. The user can see the typing status of the agent by default. When the agent is typing, the SDK receives a RESPONDING status message which results in the display of a typing indicator in the user's view. Similarly, when the agent is idle, the SDK receives a LISTENING status message which hides the typing indicator.

Voice Recognition

Feature flag: enableSpeech: true (default: false)

Setting enableSpeech: true enables the microphone button to display in place of the send button whenever the user input field is empty.

Your skill can also utilize voice recognition with the startVoiceRecording(onSpeechRecognition, onSpeechNetworkChange) method to start recording and the stopVoiceRecording method to stop recording. (These methods are described in the User's Guide that's included with the SDK.)

Using the enableSpeechAutoSend flag, you can configure whether or not to send the text that’s recognized from the user’s voice directly to the chat server with no manual input from the user. By setting this property to true (the default), you allow the user’s speech response to be automatically sent to the chat server. By setting it to false, you allow the user to edit the message before it's sent to the chat server, or delete it.
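Here's a sketch of driving recognition directly, assuming the first callback receives recognition results and the second receives speech-connection status updates (their exact argument shapes are described in the SDK reference):
// Start listening for the user's voice
Bots.startVoiceRecording(
    function onSpeechRecognition(result) {
        console.log('Recognized speech:', result);
    },
    function onSpeechNetworkChange(status) {
        console.log('Speech connection status:', status);
    }
);

// Later, stop recording explicitly
Bots.stopVoiceRecording();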

Voice Visualizer

Feature configuration: enableSpeechAutoSend

When users click the voice icon, the chat widget displays a voice visualizer, which indicates whether the audio level is high enough for the SDK to capture the user's voice. The user's message, as it is recognized as text, displays below the visualizer.

Note:

Voice mode is indicated when the keyboard icon appears.
Illustration: voice_visualizer.png
Because the default setting for enableSpeechAutoSend is true (enableSpeechAutoSend: true), messages are sent automatically after they're recognized. Setting enableSpeechAutoSend: false switches the input mode to text after the voice message is recognized, allowing users to edit or complete their messages using text before sending them manually. Alternatively, users can complete their message with voice through a subsequent click of the voice icon before sending it manually.

Note:

The voice visualizer is created using AnalyserNode. You can implement the voice visualizer in headless mode using the startVoiceRecording method. Refer to the SDK to find out more about AnalyserNode and frequency levels.