Oracle® Communications WebRTC Session Controller Web Application Developer's Guide
Release 7.0

E40978-01

4 Setting Up Audio Calls in Your Applications

This chapter shows how you can use the Oracle Communications WebRTC Session Controller JavaScript application programming interface (API) library to enable your application's users to make and receive audio calls when your applications run on WebRTC-enabled browsers.

Note:

See WebRTC Session Controller JavaScript API Reference for more information on the individual WebRTC Session Controller JavaScript API classes.

About Implementing the Audio Call Feature in Your Applications

The WebRTC Session Controller JavaScript API for audio calls enables your web applications to support audio calls made to and received from applications hosted on other WebRTC-enabled browsers, Session Initiation Protocol (SIP)-based applications, and public switched telephone network (PSTN) phones.

To support audio calls in your application, implement the logic to do the following:

  • For calls made by your application user, obtain the callee information and start the process to set up the call session between the caller and callee.

  • For calls received by your application user, obtain the user's response to the incoming call request and accept or reject the incoming call invitation accordingly.

  • Monitor the call session, taking action to respond to any change in the state of the application session, call session, or media stream.

  • Take appropriate action when one of the parties ends the call.

This basic logic can be used to support calls with video and data transfers.

About the WebRTC Session Controller JavaScript API Used in Implementing Audio Calls

The following WebRTC Session Controller JavaScript API classes are used to implement audio calls in your web applications:

  • wsc.Session for the session object

  • wsc.CallPackage for the call package object

  • wsc.Call for the call object

  • wsc.CallConfig for the media configuration in the calls

  • The constants defined in the following enumerators:

    • wsc.SESSIONSTATE

    • wsc.CALLSTATE

    • wsc.MEDIADIRECTION

    • wsc.MEDIASTREAMEVENT

    • wsc.ERRORCODE

    • wsc.LOGLEVEL

You can extend the audio call feature in your application to perform custom tasks by extending these API classes.
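As a sketch of that extension pattern, the following hypothetical JavaScript subclasses a stubbed CallPackage to record incoming calls before delegating to the default handler. The stub namespace and the AuditingCallPackage name are illustrations only, not part of the wsc.js API:

```javascript
// Stand-in for the wsc namespace; in a real application these
// definitions come from wsc.js.
var wsc = {};
wsc.CallPackage = function (session) {
  this.session = session;
};
wsc.CallPackage.prototype.onIncomingCall = function (call) {};

// Hypothetical custom package: records each incoming caller
// before handing the call to the default handler.
function AuditingCallPackage(session) {
  wsc.CallPackage.call(this, session);
  this.audited = [];
}
AuditingCallPackage.prototype = Object.create(wsc.CallPackage.prototype);
AuditingCallPackage.prototype.onIncomingCall = function (call) {
  this.audited.push(call.caller);
  wsc.CallPackage.prototype.onIncomingCall.call(this, call);
};

var pkg = new AuditingCallPackage({});
pkg.onIncomingCall({ caller: "bob1@example.com" });
// pkg.audited is now ["bob1@example.com"]
```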

Setting Up Audio Calls in Your Applications

Use the WebRTC Session Controller JavaScript API library to set up the audio call feature in your application to suit your deployment environment. The specific logic, web application elements, and controls you implement for the audio call feature in your applications are predicated upon how the audio call feature is used in your web application.

To illustrate the basic logic in setting up call capability in web applications using the WebRTC Session Controller JavaScript API library, this section uses a sample application in which the audio call is the sole feature.

About the Sample Audio Call Application

The sample audio call application referenced in this chapter provides the logic necessary to enable two users to place a call to each other. It uses WebRTC Session Controller JavaScript API and supports audio calls only. The sample audio call application obtains the call information from an input field it provides on the application page. The steps in the development process described below refer to this sample audio call application. See "Sample Audio Call Application" to view the complete code.

Overview of Setting Up the Audio Call Feature in Your Application

Setting up an audio call feature in your application requires implementing logic for the following tasks:

  1. Setting Up the General Elements for the Audio Call Feature

  2. Enabling Users to Make Audio Calls From Your Application

  3. Implementing the Logic to Set Up the Call Session

  4. Enabling Your Application Users to Receive Calls

  5. Monitoring the Call

  6. Ending the Call

Setting Up the General Elements for the Audio Call Feature

To set up the audio call feature in your application, include the following in the <head> section of your application:

  • The <audio> element

    Set up the <audio> element for the local and remote audio according to your browser.

  • The WebRTC Session Controller JavaScript API library (wsc.js)

    Reference the wsc.js file. If your application uses other supporting libraries, reference them, as well.

See "Sample Audio Call Application".
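The markup below is a minimal sketch of these general elements. The wsc.js path and the selfAudio and remoteAudio element ids are assumptions; match them to your deployment and to the ids your script references:

```html
<!-- Sketch only: adjust the script paths and element ids
     to your deployment and your application code. -->
<head>
  <script type="text/javascript" src="/api/wsc.js"></script>
  <script type="text/javascript" src="app.js"></script>
</head>
<body>
  <audio id="selfAudio" autoplay muted></audio>
  <audio id="remoteAudio" autoplay></audio>
</body>
```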

Setting Up the Main Objects and Values

Use the WebRTC Session Controller JavaScript API to set up the main objects and values at the start of your application:

  • Declare a Session object, a CallPackage object, and a variable for the user name.

  • Set the log level as required as described in "Debugging Your Application with wsc.LOGLEVEL".

  • Set up the WebSocket uniform resource identifier (URI) for the WebLogic Server and the login and logout URIs, if your application uses them. The WebSocket URI is required when you create a session object in your application.

Example 4-1 shows how the sample application described in this chapter sets up the WebSocket URI and global variables.

Example 4-1 Sample Setup of Global Variables and WebSocket URI

var wscSession, callPackage, userName, caller, callee;
wsc.setLogLevel(wsc.LOGLEVEL.DEBUG);
 
// Save the location from where the user accessed this application.
var savedUrl = window.location;
 
// This application is deployed on WebRTC Session Controller (WSC).
var wsUri = "ws://" + window.location.hostname  + ":" + window.location.port + "/ws/webrtc/sample";
 
//  loginURI is the location from where the user accesses the application.
//  logoutURI is the location to which the user is redirected after logout.
   ...

Here:

  • window.location.hostname and window.location.port define the location of the Signaling Engine associated with the audio call application.

  • /ws/webrtc/sample indicates that the sample application is deployed in WebRTC Session Controller.
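The URI construction in Example 4-1 can be sketched outside the browser by standing in for window.location with a plain object; the host and port values here are illustrative:

```javascript
// Stand-in for the browser's window.location object.
var location = { hostname: "apps.example.com", port: "7001" };

// Same concatenation as Example 4-1.
var wsUri = "ws://" + location.hostname + ":" + location.port
          + "/ws/webrtc/sample";
// wsUri is now "ws://apps.example.com:7001/ws/webrtc/sample"
```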

Current Stage in the Development of the Audio Call Feature

At this point, you have completed the setup for the general elements required for an audio call application. You now need to enable users to make a call from the audio call application.

Enabling Users to Make Audio Calls From Your Application

To enable users to make a call from your application, complete the following tasks:

Setting Up the Configuration for Calls Supported by the Application

The WebRTC Session Controller JavaScript API library provides the CallConfig class object to define the audio, video, and data channel capabilities of a call. To create a CallConfig class object, use the syntax:

wsc.CallConfig(audioMediaDirection, videoMediaDirection, dataChannelConfig)

When you create your application's CallConfig class object, specify the configuration for the local audio media stream in audioMediaDirection and video media stream in videoMediaDirection as described in "Specifying the Configuration for Calls with wsc.CallConfig".

The dataChannelConfig parameter is used to define data transfers (as in text messaging sessions), and is an array of JavaScript Object Notation (JSON) objects that describe the configuration of the data channel. See "Setting Up the Configuration for Data Transfers in Chat Sessions" for more information on setting up the configuration for data transfers.

Set the local audio, video stream, and data transfer configuration for calls in your application based on your browser properties and your web application's requirements.

The sample audio call application supports audio calls in both directions and creates a call configuration object called callConfig, as shown below in Example 4-2:

Example 4-2 Sample Call Configuration Object

// Create a CallConfig object.
var audioMediaDirection = wsc.MEDIADIRECTION.SENDRECV;
var videoMediaDirection = wsc.MEDIADIRECTION.NONE;
var callConfig = new wsc.CallConfig(audioMediaDirection, videoMediaDirection);

Setting Up the Session Object

The WebRTC Session Controller JavaScript API library provides the wsc.Session class object to encapsulate the session between your web application and WebRTC Session Controller Signaling Engine. To create an instance of the Session class, use the following syntax:

wsc.Session(userName, webSocketUri, successCallback, failureCallback, sessionId) 

Where:

  • userName is the user name.

  • webSocketUri is the WebSocket connection defined earlier in Example 4-1.

  • successCallback is the function to call if the session object was created successfully.

  • failureCallback is the function to call if the session object was not created.

  • sessionId is the Session Id. It is needed if you are refreshing an existing session.

To set up a session object in your application:

  • Create an instance of the wsc.Session object.

  • Set up the logic for the successCallback and failureCallback functions.

  • If your application authenticates its users before allowing them to make calls:

    • Set up an authentication handler for that session. Input the session object when you instantiate the wsc.AuthHandler class.

    • Assign the callback function to the refresh field of your application's authentication handler.

    • Set up the logic for the callback function. See Example 4-8.

  • Specify the values for busyPingInterval, idlePingInterval, and reconnectTime. These settings determine how your application's session is monitored. See "About Monitoring Your Application WebSocket Connection".

  • Manage the changes in the state of your application session in the following way:

    • Assign a callback function to your application's Session.onSessionStateChange event handler.

    • Set up the actions to be performed by the callback function. See "Handling Session State Changes".

The sample audio call application performs these tasks inside a function called setSessionUp. When the sample audio call application page loads, the JavaScript onPageLoad function runs and it calls the setSessionUp function as shown below.

// The onPageLoad event handler.
function onPageLoad() {
    setSessionUp();
}

Within the setSessionUp function, the sample audio call application:

  • Creates an instance of the Session class object called wscSession, with:

    • wsURI as its WebSocket connection.

    • sessionSuccessHandler as the callback function for a successful creation of the session.

    • sessionErrorHandler as the callback function for a failed creation of the session.

  • Registers an authentication handler called authHandler with wscSession.

  • Configures the monitoring time intervals for wscSession.

  • Assigns a callback function called sessionStateChangeHandler to the application's onSessionStateChange event handler. This callback function manages the changes in the application's session state.

Example 4-3 shows the setSessionUp function implemented in the sample audio call application:

Example 4-3 Sample Session Object Setup

// This function sets up and configures the WebSocket connection.
function setSessionUp() {
    console.log("In setSessionUp().");
 
    // Create the session. Here, userName is null. 
    // WSC can determine it using the cookie of the request.
    wscSession = new wsc.Session(null, wsUri, sessionSuccessHandler, sessionErrorHandler);
    // Register a wsc.AuthHandler with session.
    // It provides customized authentication information, such as
    // username and password.
    var authHandler = new wsc.AuthHandler(wscSession);
    authHandler.refresh = refreshAuth;
 
    // Configure the session.
    wscSession.setBusyPingInterval(2 * 1000);
    wscSession.setIdlePingInterval(6 * 1000);
    wscSession.setReconnectTime(2 * 1000);
    wscSession.onSessionStateChange = sessionStateChangeHandler;
    console.log("Session configured with authhandler, intervals and sessionStateChange handler.\n");
}

Setting Up the Call Package for the Session

The WebRTC Session Controller JavaScript API library provides the CallPackage class to manage the calls and all the messaging workflow with WebRTC Session Controller Signaling Engine. To create an instance of the CallPackage class, use the following syntax:

wsc.CallPackage(session)

Where session is the instance of the Session object in your application.

To configure the call package to manage the audio calls in your application:

  • Create an instance of the CallPackage class object for the application session.

  • Implement your application logic for incoming calls in the following way:

    • Assign a callback function for the CallPackage.onIncomingCall event handler.

    • Set up the actions to be performed by the callback function.

  • Implement your application logic to refresh a call that was dropped momentarily:

    • Assign a callback function for the CallPackage.onResurrect event handler.

    • Set up the actions to be performed by the callback function.
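A minimal sketch of the onResurrect wiring, using a plain stub object in place of the real wsc.CallPackage; the handler body here only records the resurrected call's id for illustration:

```javascript
// Record of calls the application has re-established.
var resurrectedCalls = [];

// Stand-in for the wsc.CallPackage instance created from wsc.js.
var callPackage = {};

// Callback invoked by the library when a momentarily dropped
// call is restored; re-attach handlers and resume here.
callPackage.onResurrect = function (call) {
  resurrectedCalls.push(call.id);
};

// Simulated invocation with a hypothetical call object.
callPackage.onResurrect({ id: "call-1" });
// resurrectedCalls is now ["call-1"]
```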

The sample audio call application sets up a call package called callPackage. It sets up the call package within a callback function called sessionSuccessHandler, which is called when the application session is created. To process incoming calls, the sample audio call application assigns a function named onIncomingCall to the CallPackage.onIncomingCall event handler. This callback function is described later in "Responding to Your User's Actions on an Incoming Call". Additionally, the sample audio call application retrieves the name of the user.

Example 4-4 shows the sessionSuccessHandler callback function.

Example 4-4 Sample CallPackage Setup

function sessionSuccessHandler() {
    console.log(" In sessionSuccesshandler.");

    // Create a CallPackage.
    callPackage = new wsc.CallPackage(wscSession);
    // Bind the event handler of incoming call.
    if(callPackage){
        callPackage.onIncomingCall = onIncomingCall;
    }
    console.log(" Created CallPackage..\n");
    // Get user Id.
    userName = wscSession.getUserName();
    console.log (" Our user is " + userName);
}

Handling Session State Changes

When your application's session state changes, the WebRTC Session Controller JavaScript API Library invokes the application session object's Session.onSessionStateChange event handler. The new session state is provided as input to your application.

Monitor the different states in the callback function you assigned to your application session object's Session.onSessionStateChange event handler. Specify the actions your application must take for each of the state changes you include.

The wsc.SESSIONSTATE enumerator contains the different states of a session defined as constants such as NONE when the session is created, CONNECTED when the session connects with the server, CLOSED when the session closes normally, and so on. See WebRTC Session Controller JavaScript API Reference for more information.

The sample audio call application assigns a callback function named sessionStateChangeHandler to its application session object's Session.onSessionStateChange event handler. In that callback function, the sample audio call application monitors and implements logic for three session states, CONNECTED, FAILED, and RECONNECTING. When the session state is CONNECTED, the sample audio call application calls a function named displayInitialControls to obtain the callee's name.

Example 4-5 shows the sessionStateChangeHandler callback function.

Example 4-5 Sample Session State Handler Callback Function

function sessionStateChangeHandler(sessionState) {
    console.log("sessionState : " + sessionState);
    switch (sessionState) {
        case wsc.SESSIONSTATE.RECONNECTING:
        setControls("<h1>Network is unstable, please wait...</h1>");
        break;
        case wsc.SESSIONSTATE.CONNECTED:
        if (wscSession.getAllSubSessions().length == 0) {
            displayInitialControls();
        }
        break;
        case wsc.SESSIONSTATE.FAILED:
        setControls("<h1>Session Failed, please logout and try again.</h1>");
        break;
    }
}

Obtaining the Callee Information

Your application can obtain the callee information in a number of ways. Ensure that, if the user is given a choice of controls such as canceling the operation or logging out, the corresponding callback functions are invoked in your application.

The sample audio call application uses a function called displayInitialControls to obtain the callee information. In it, the sample audio call application defines a simple user interface consisting of input fields and control buttons to receive the callee's name. The onclick attribute of each button names the function that handles the next step for that event. For example, the onCallSomeOne() function is invoked when the Call button is clicked.

Example 4-6 shows the displayInitialControls callback function.

Example 4-6 Sample displayInitialControls Function

function displayInitialControls() {
    console.log ("In displayControls().");
    var controls = "Enter Your Callee: <input type='text' name='callee' id='callee'/><br><hr>"
                    + "<input type='button' name='callButton' id='btnCall'  value='Call' onclick='onCallSomeOne()'/>"
                 + "<input type='button' name='cancelButton' id='btnCancel'  value='Cancel' onclick='' disabled ='true'/><br><br><hr>"
                 + "<input type='button' name='logoutButton' id='Logout'  value='Logout' onclick='logout()'/>"
                 + "<br><br><hr>";
    setControls(controls);
    var calleeInput = document.getElementById("callee");
 
    if (calleeInput) {
        console.log (" Waiting for Callee Input.");
        console.log (" ");
        if(userName != calleeInput.value) {
            calleeInput.focus();
        }
 
    }
}

Current Stage in the Development of the Audio Call Feature in Your Application

At this stage in the development of the audio call feature in your application:

  • The general elements required for audio calls are set.

  • Your application can obtain the callee information.

  • The application logic for the following functions is implemented:

    • successCallback function invoked when the application's session object is created

    • failureCallback function invoked when the application's session object is not created

    • The callback function assigned to the Session.onSessionStateChange event handler

    • The callback function assigned to the CallPackage.onIncomingCall event handler

    • The callback function assigned to the CallPackage.onResurrect event handler

Your application now needs the logic to handle both end points: the caller's side, which must handle connecting the caller to the callee, and the callee's side, which must respond to the callee accepting or declining the incoming call.

Initial Actions of the Sample Audio Call Application

Table 4-1 reports on the sample audio call's actions in enabling a user to make a call from the application. It describes the events that occur on the sample audio call application page, the actions taken by the sample audio call application, and the messages logged by the console.log method for this segment of the application code.

Table 4-1 Initial Actions Performed by the Sample Audio Call Application

Sample Audio Call Application Page Events

When the page loads, the page displays the control buttons and input fields to allow the user to make a call.

Actions Taken by the Sample Audio Call Application

The initial actions taken by the audio call application before the user starts the call or receives a call:

  • CallConfig, which defines the calling capability, is configured.

  • When the page loads, the wscSession object is created and configured.

  • The session is now in a CONNECTED state.

  • Controls are displayed on the application page. For the audio call, they consist of a callee input field and Call, Cancel, and Logout buttons.

  • The call package is created inside the callback for the session success event handler.

The example code retrieves the user Id for debugging purposes.

Console Log for the Caller (bob1)

Created CallConfig with audio stream only.

Page has loaded. Setting up the Session.

In setSessionUp(). 
Session configured with authhandler,
 intervals and sessionStateChange handler.

sessionState : CONNECTED

In displayControls().
Waiting for Callee Input.

In sessionSuccesshandler.
Created CallPackage..

Our user is bob1@example.com

Console Log for the Callee (bob2)

The callee's console log is identical to the caller's, except for the final line:

Our user is bob2@example.com 

Implementing the Logic to Set Up the Call Session

When your application has obtained the callee information, it can start the process to establish a call session between the caller and the callee.

To implement the logic to start a call from your application, complete the following tasks:

Starting a Call From Your Application

The WebRTC Session Controller JavaScript API library provides the wsc.Call class object to represent a call with any combination of audio/video/data channel capability. Use the createCall method of the CallPackage class to create your application's call object. The syntax to create your application's Call object is:

callPackage.createCall(target, callConfig, errorCallback) 

Where:

  • target is the callee.

  • callConfig is the audio/video/data channel capability of calls, defined earlier in Example 4-2.

  • errorCallback is the function to call if the call was not created.

When you obtain the callee information, implement the logic to start the call in the following way:

  • Create an instance of the wsc.Call object.

  • To handle changes in the call session state:

    • Assign a callback function for the Call.onCallStateChange event handler.

    • Set up the actions to be performed by the callback function.

  • To handle changes in the state of the media stream:

    • Assign a callback function for the Call.onMediaStreamEvent event handler.

    • Set up the actions to be performed by the callback function.

  • To handle any updates to the call:

    • Assign a callback function for the Call.onUpdate event handler.

    • Set up the actions to be performed by the callback function.

  • To handle any error in the call creation:

    • Set up the actions to be performed by your application's errorCallback function.

  • Start the call with the Call.start method.

  • Set up other actions as dictated by the environment in which your application is deployed.

The sample audio call application invokes a function called onCallSomeOne when it receives the callee information. In this onCallSomeOne function, the sample audio call application does the following:

  • Sets up a call object named call.

  • Configures a function called setEventHandlers, which handles the changes to the call states and the media stream states in its call object.

    The setEventHandlers function invokes callStateChangeHandler for changes in the call state and mediaStreamEventHandler for media stream or data transfer changes in the call. See "Sample Audio Call Application" for more information on the setEventHandlers function.

  • Starts the call using the start method of the call object.

  • Sets up the controls which allow the user to hang up or cancel the call.

  • Ends the call using the end method of its Call object if the user prematurely ends the call.

Example 4-7 Sample Function to Set Up Call for Caller

function onCallSomeOne() {

    // Need the caller callee name. Also storing caller.
    callee = document.getElementById("callee").value;
    caller = userName;
    console.log ("Name entered is " + callee);
    
    // Check to see if user gave a valid input. Omitted here. See "Sample Audio Call Application".
    ... 
    // To call someone, create a Call object first.
    var call = callPackage.createCall(callee, callConfig, doCallError);
    console.log ("Created the call.");
    console.log (" ");

    if (call != null) {
        console.log ("Calling setEventHandlers from onCallSomeOne() with call data.");
        console.log (" ");
        setEventHandlers(call);
        // Then start the call.
        console.log ("In onCallSomeOne(). Starting Call. ");
        call.start();
        ...
    }
}

Retrieving the Appropriate Authentication Headers

This section applies to your application if it uses an authentication mechanism before allowing users access to its audio call feature.

If an authentication handler has been assigned to your application's Session object and your application starts a call or receives a call, the authentication function assigned to the AuthHandler.refresh event is called. See Example 4-3.

Set up logic in the callback function assigned to your application's AuthHandler.refresh event.

The sample audio call application uses Representational State Transfer (REST) based authentication. The refreshAuth function shown in Example 4-8 is for your reference. See "Setting Up Security" for more information on the SERVICE and Traversal Using Relays around Network address translation (TURN) authentication seen in the code below.

Example 4-8 Template for the refreshAuth Function()

function refreshAuth(authType, authHeaders) {
    //Set up the response object by calling a function.
    var authInfo = null;

    if(authType==wsc.AUTHTYPE.SERVICE){
        //Return JSON object according to the content of the "authHeaders".
        // For the digest authentication implementation, refer to RFC2617.
        authInfo = getSipAuth(authHeaders);
 
    } else if(authType==wsc.AUTHTYPE.TURN){
 
        //Return JSON object in this format:
        // {"iceServers" : [ {"url":"turn:test@<aHost>:<itsPort>", "credential":"nnnn"} ]}.
        authInfo = getTurnAuth();
    }
    return authInfo;
};

If your application uses Digest access authentication, ensure that it sets up the response using the headers in the authHeaders object it retrieves. For more information on Digest access authentication, see http://www.ietf.org/rfc/rfc2617.txt.

About Digest Access Authentication

If a Session Initiation Protocol (SIP) network does not support an identity mapping between a web identity and a SIP identity, it might choose to challenge the messages from the application using a WWW-authenticate header as stipulated by RFC 2617. On receiving the WWW-authenticate header, WebRTC Session Controller Signaling Engine sends a JavaScript Object Notation (JSON) form of this header to the WebRTC Session Controller JavaScript API library. In turn, the WebRTC Session Controller JavaScript API library invokes the callback function assigned to your application's AuthHandler.refresh event handler.

To provide the appropriate challenge response, do the following in the callback function assigned to your application's AuthHandler.refresh event handler:

  • Retrieve the appropriate credentials from the user, using your application-specific logic.

  • Create your application's challenge response in JSON format, constructed as stipulated by RFC 2617.

  • Return the challenge response to the WebRTC Session Controller JavaScript API library.

This challenge response is used to authenticate your application user with the SIP network.

Example 4-9 shows a sample authHeader received by an application that uses Digest authentication. The authHeader object is in JSON format.

Example 4-9 Digest Authentication Header Received by an Application

{
    "scheme": "Digest",
    "nonce": "a12e8f74-af01-4e74-9714-4d65bae4e024",
    "realm": "example.com",
    "qop": "auth",
    "challenge_code": "407",
    "opaque": "YXBwLTNjOHFlaHR2eGRhcnxiYWNkMTIxMWFmZDlkNmUyMThmZmI0ZDc4ZmY3ZmY1YUAxMC4xODIuMTMuMTh8Mzc3N2E3Nzc0ODYyMGY4",
    "charset": "utf-8",
    "method": "REGISTER",
    "uri": "sip:<host>:<port>"
}

Where:

  • <host> is the host name for the SIP registrar.

  • <port> is the listening port for the SIP registrar.

Creating the authHeader Object for the Response

Example 4-10 shows a sample function used by an application to set up the authHeaders in its response.

Example 4-10 Sample createResponseHeaders Function

function createResponseHeaders(authHeaders) {
// cnonce is the string provided by the client.
//The application MUST implement the MD5 algorithm.
    var 
        userName = "alice@example.com",
        password = "********",
        realm = authHeaders['realm'],
        method = authHeaders['method'],
        uri = authHeaders['uri'],
        qop = authHeaders['qop'],
        nc = '00000001',
        nonce = authHeaders['nonce'],
        cnonce = "", 
        ha1 = hex_md5(userName + ":" + realm + ":" + password), 
        ha2 = hex_md5(method + ":" + uri),
        response;
 
    if(!qop){
        response = hex_md5(ha1 + ":" + nonce + ":" + ha2);
    } else if(qop=="auth") {
        response = hex_md5(ha1 + ":" + nonce + ":" + nc + ":" + cnonce + ":" + qop + ":" + ha2);
    }
           
    // Add the client-calculated headers to the headers object.
    authHeaders['username'] = userName;
    authHeaders['cnonce'] = cnonce;
    authHeaders['response'] = response;
    authHeaders['nc'] = nc;
    return authHeaders;
};

Setting Up the Event Handler for Call State Changes

When your application's call state changes, the WebRTC Session Controller JavaScript API Library invokes your application's Call.onCallStateChange event handler. The new state for the call is provided as input to your application.

The many states of a call, such as ESTABLISHED, ENDED, and FAILED are defined as constants in the wsc.CALLSTATE enumerator. See WebRTC Session Controller JavaScript API Reference for more information.

Use as many of the constants in wsc.CALLSTATE as your application needs. Specify the actions your application must take for each of the state changes you include in the callback function you assigned to your application's Call.onCallStateChange event handler, as described in "Starting a Call From Your Application".

Example 4-11 shows how the sample audio call application handles call state changes. It sets up a callback function called callStateChangeHandler to monitor for three call states, wsc.CALLSTATE.ESTABLISHED, wsc.CALLSTATE.ENDED, and wsc.CALLSTATE.FAILED. When the sample audio call application's callback function is invoked with wsc.CALLSTATE.ESTABLISHED as the new call state, it calls a function called callMonitor to monitor the call. See Example 4-14. For the remaining two states, this callback function merely displays the user interface required to place a call.

Example 4-11 Sample Call State Change Handler

function callStateChangeHandler(callObj, callState) {
    console.log (" In callStateChangeHandler().");
    console.log("callstate : " + JSON.stringify(callState));
    if (callState.state == wsc.CALLSTATE.ESTABLISHED) {
        console.log (" Call is established. Calling callMonitor. ");
        console.log (" ");
        callMonitor(callObj);
    } else if (callState.state == wsc.CALLSTATE.ENDED) {
        console.log (" Call ended. Displaying controls again.");
        console.log (" ");
        displayInitialControls();
    } else if (callState.state == wsc.CALLSTATE.FAILED) {
        console.log (" Call failed. Displaying controls again.");
        console.log (" ");
        displayInitialControls();
    }
}

Setting Up the Event Handler for the Media Streams

When there is a change in the state of the local or remote media stream, the WebRTC Session Controller JavaScript API Library invokes your application's Call.onMediaStreamEvent event handler. The new state for the media stream is provided as input to your application.

The wsc.MEDIASTREAMEVENT enumerator defines the states of the local or remote media stream as LOCAL_STREAM_ADDED, REMOTE_STREAM_REMOVED, LOCAL_STREAM_ERROR, and so on. See WebRTC Session Controller JavaScript API Reference for more information.

Use as many of the constants in wsc.MEDIASTREAMEVENT as your application needs. Specify the actions your application must take for each of the state changes you include in the callback function assigned to your application's Call.onMediaStreamEvent event handler. Whenever this callback function is invoked with a new state for the media stream, your application logic should perform the action required for the new state.

Example 4-12 shows how the sample audio call application handles media stream state changes using a callback function called mediaStreamEventHandler.

Example 4-12 Sample Media Stream Event Handler

// This event handler is invoked when a media stream event is fired.
// Attach media stream to HTML5 audio element.
function mediaStreamEventHandler(mediaState, stream) {
    console.log (" In mediaStreamEventHandler.");
    console.log("mediastate : " + mediaState);
    console.log (" ");
 
    if (mediaState == wsc.MEDIASTREAMEVENT.LOCAL_STREAM_ADDED) {
        attachMediaStream(document.getElementById("selfAudio"), stream);
    } else if (mediaState == wsc.MEDIASTREAMEVENT.REMOTE_STREAM_ADDED) {
        attachMediaStream(document.getElementById("remoteAudio"), stream);
    }
}

Current Stage in the Development of the Audio Call Feature in Your Application

At this stage in the development of the audio call feature in your application:

  • The general elements required for audio calls are set.

  • Your application can obtain the callee information.

  • Your application can retrieve the call information and start a call.

  • The application logic for the following functions is implemented:

    • errorCallback function invoked when the call is not created

    • The callback function assigned to the Call.onCallStateChange event handler

    • The callback function assigned to the Call.onMediaStreamEvent event handler

    • The callback function assigned to the Call.onDataTransfer event handler

    • The callback function assigned to the Call.onUpdate event handler

You can now provide the logic to handle an incoming call.

How the Sample Audio Call Application Starts a Call

Table 4-2 reports on the sample audio call application's actions in setting up a call session. It describes the events that occur on the sample audio call application page, the actions taken by the sample audio call application, and the messages logged by the console.log method for this segment of the application code. The focus of actions for this part of the application is the caller.

Table 4-2 Sample Audio Call Application Actions in Setting Up a Call

Sample Audio Call Application Page Events | Actions Taken by the Sample Audio Call Application | Console Log for the Caller (bob1)

Signaling Engine asks the user for permission to use the microphone.

The call workflow starts.

For the caller (bob1) side, the application does the following in the onCallSomeOne() callback function:

  • Creates a call object with the callee's id, the configuration for calls in this browser, and the necessary call error handler function.

  • Sets up the general event handler to handle changes in the call.

  • Issues the command call.start.

  • Enables the controls to cancel the call before it is set up.

  • Defines the call and media state change handlers.

The browser requests the user to allow access to audio media. If the user gives permission, the local media stream is added.

In onCallSomeOne()
Name entered is bob2 
Adding string to name 
Caller, bob1@example.com, wants to call bob2@example.com, the Callee.
Creating call object to call bob2@example.com
 Created the call.
 
Calling setEventHandlers from onCallSomeOne() with call data.
In setEventHandlers
 
In onCallSomeOne(). Starting Call.
Enabled bob1@example.com to cancel call.
 
In mediaStreamEventHandler.
mediastate : LOCAL_STREAM_ADDED 
 
In callStateChangeHandler().
callstate : {"state":"STARTED","status":
{"code":null,"reason":"start call"}}
In callStateChangeHandler()
callstate : {"state":"RESPONSED","status":
{"code":180,"reason":"Ringing"}}
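The caller-side steps described in Table 4-2 can be sketched as follows. This is a hedged illustration, not the sample application's actual code: the callPackage parameter, the createCall signature, and the doCallError handler are assumptions modeled on the sample's naming; consult the WebRTC Session Controller JavaScript API Reference for the exact calls.

```javascript
// Hypothetical sketch of the caller-side flow: create the call object,
// bind the event handlers, and start the call.
function onCallSomeOne(callPackage, callConfig, calleeName) {
    // Normalize the callee name into a full address, as the sample does.
    var callee = calleeName.indexOf("@") < 0
        ? calleeName + "@example.com"
        : calleeName;

    // Create the call object with the callee id, the call configuration,
    // and an error handler invoked if the call cannot be created.
    var call = callPackage.createCall(callee, callConfig, doCallError);
    if (call == null) {
        return null;
    }

    // Bind the state-change and media-stream handlers before starting,
    // so no early event is missed.
    call.onCallStateChange = function(state) { /* see Example 4-11 */ };
    call.onMediaStreamEvent = function(evt, stream) { /* see Example 4-12 */ };

    // Start the call; Signaling Engine routes the INVITE to the callee.
    call.start();
    return call;
}

// Assumed error handler, following the sample's naming.
function doCallError(error) {
    console.log("Call could not be created: " + error);
}
```

The important ordering point, visible in the console log above, is that the event handlers are assigned before call.start() is issued.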

Enabling Your Application Users to Receive Calls

The focus of the actions taken in this section is the callee.

To enable application users to receive calls, do the following:

  1. Provide the logic to respond to the callee's actions with respect to the incoming call. See "Responding to Your User's Actions on an Incoming Call".

  2. Verify that you have defined the logic for the call and media stream event handlers with respect to the callee, as described in "Starting a Call From Your Application".

Responding to Your User's Actions on an Incoming Call

When a user is logged in to your application and WebRTC Session Controller Signaling Engine receives a call for the user, the WebRTC Session Controller JavaScript API library invokes the CallPackage.onIncomingCall event handler in your application. It sends the incoming call object and the call configuration for that incoming call object as parameters to the CallPackage.onIncomingCall event handler.

Define the actions to process the incoming call in the callback function assigned to the onIncomingCall event handler in the following way:

  • Provide the interface and logic necessary for the callee to accept or decline the call.

  • Provide logic for the following events in association with the incoming call object:

    • User accepts the call. Run the accept method for the incoming call object. This will return the success response to the caller.

    • User declines the call. Run the decline method for the incoming call object. This will return the failure response to the caller.

  • Assign the callback functions to the event handlers of the incoming call object. These should already have been defined earlier. See "Starting a Call From Your Application".

Example 4-13 shows the onIncomingCall callback function used by the sample audio call application:

Note:

Example 4-13 uses the simplest set of controls embedded in the onIncomingCall() function to inform the user that there is an incoming call.

You can set up your application to filter the information in the remote call object and its configuration to determine how to handle the incoming call, prior to informing the user about the call.

Example 4-13 Sample onIncomingCall Function

function onIncomingCall(callObj, callConfig) {

// Draw two buttons for users to accept or decline the incoming call.
// Attach onclick event handlers to these two buttons.
    console.log ("In onIncomingCall(). Drawing up Control buttons to accept or decline the call.");
    var controls = "<input type='button' name='acceptButton' id='btnAccept' value='Accept "
    + callObj.getCaller()
    + " Incoming Audio Call' onclick=''/><input type='button' name='declineButton' id='btnDecline' value='Decline Incoming Audio Call' onclick=''/>"
    + "<br><br><hr>";
    setControls(controls);

    document.getElementById("btnAccept").onclick = function() {
        // User accepted the call.

        // Store the caller and callee names.
        callee = userName;
        caller = callObj.getCaller();
        console.log (callee + " accepted the call from caller " + caller);
        console.log (" ");

        // Send the success response back.
        callObj.accept(callConfig);
    }
    document.getElementById("btnDecline").onclick = function() {
        // User declined the call. Send a message back.

        // Get the caller name.
        callee = userName;
        caller = callObj.getCaller();
        console.log (callee + " declined the call from caller, " + caller);
        console.log (" ");

        // Send the failure response back.
        callObj.decline();
    }

    // Bind the event handlers for the call and media stream.
    console.log ("Calling setEventHandlers from onIncomingCall() with remote call object ");
    setEventHandlers(callObj);
}
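As the note above suggests, your application can screen an incoming call before presenting any controls to the user. The following is a hypothetical sketch: the blockedCallers list and the screenIncomingCall function are application-side assumptions, and only the getCaller and decline methods come from the documented Call API.

```javascript
// Hypothetical application-defined block list; not part of the
// WebRTC Session Controller API.
var blockedCallers = ["spam@example.com"];

// Screen an incoming call before informing the user. Returns true when
// the call should proceed to the normal onIncomingCall handling.
function screenIncomingCall(callObj) {
    var caller = callObj.getCaller();
    if (blockedCallers.indexOf(caller) >= 0) {
        // Decline silently without drawing any controls.
        callObj.decline();
        return false;
    }
    return true;
}
```

An application might call screenIncomingCall(callObj) at the top of its onIncomingCall callback and return early when it reports false.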

Current Stage in the Development of the Audio Call Feature in Your Application

At this stage in the development of the audio call feature in your application:

  • The general elements required for audio calls are set.

  • Your application can obtain the callee information.

  • Your application can retrieve the call information and start a call.

  • Your application can alert the user about an incoming call and respond appropriately to the user accepting or declining the incoming call.

  • The application logic for the following functions is implemented:

    • Callback functions assigned to the Session Object's event handlers

    • The success and error callback functions invoked when a Session object is not created

    • Callback functions assigned to the CallPackage Object's event handlers

    • Callback functions assigned to the Call Object's event handlers

    • The error callback function invoked when a Call object is not created

How the Sample Audio Call Application Handles Incoming Calls

Table 4-3 reports on the sample audio call application's actions in enabling a user to receive a call. It describes the events that occur on the sample audio call application page, the actions taken by the sample audio call application, and the messages logged by the console.log method for this segment of the application code. The focus here is on the callee.

Table 4-3 A breakdown of the Application Actions Needed to Receive a Call

Sample Audio Call Application Page Events | Actions Taken by the Sample Audio Call Application | Console Log for the Callee (bob2)

A call is received.

If the user accepts the call, Signaling Engine asks the user for permission to use the microphone.

When permission is given, the local and remote streams are added.

For the callee (bob2) side:

Signaling Engine, on receiving the call invitation from the caller, triggers the function configured in the application to handle incoming calls.

This is the CallPackage object's onIncomingCall() callback function that was assigned in Example 4-4.

The application does the following:

  • Sets up the actions in the callback function to handle changes in the call.

  • Displays control buttons to enable the callee to accept or decline the call.

In onIncomingCall(). Drawing up Control buttons to accept or decline the call. 
Calling setEventHandlers from onIncomingCall() with callObj 
In setEventHandlers
  
User Accepted the call.
In callStateChangeHandler(). 
callstate : {"state":"STARTED","status":
{"code":null,"reason":"receive call"}}
Invoking getTurnAuthInfo 
 In mediaStreamEventHandler. 
mediastate : LOCAL_STREAM_ADDED
  
 In mediaStreamEventHandler. 
mediastate : REMOTE_STREAM_ADDED 

How a Call is Established in the Sample Audio Call Application

This section uses the sample audio call application as an example to describe what happens between the time the caller requests the call and the callee accepts it, and the time the call actually starts.

At the start of the flow, the sample audio call application on the caller's side sends the START or INVITE message to WebRTC Session Controller Signaling Engine, which routes the message through the network to the receiving end point. For more information, see WebRTC Session Controller Extension Developer's Guide.

At appropriate points in the message flow, the caller and callee are requested to allow access to the audio element in the browser.

The log output taken from the console when the sample audio call application was run is shown in Table 4-4. Note the log output from the call state and media stream event handlers. All the action is performed by Signaling Engine; the sample audio call application merely receives the final state (ESTABLISHED or FAILED).

Table 4-4 A Log of the Call Flow

Sample Audio Call Application Page Events | Actions Taken by the Sample Audio Call Application | Console Log for the Caller (bob1) | Console Log for the Callee (bob2)

(Activity that takes place behind the browser activity)

For the caller, the media state changes to include the remote media stream only after the call is established.

The console log describes the flow of the call to the point where the two parties are connected and can hear each other.

The application displays the control button enabling either party to conclude the call.

In callStateChangeHandler(). 
callstate : {"state":"RESPONSED","status":
{"code":200,"reason":"got success response"}}
In callStateChangeHandler().    
callstate : {"state":"ESTABLISHED","status":
{"code":null,"reason":"sent complete"}}
 In callStateChangeHandler(). 
callstate : {"state":"RESPONSED","status":
{"code":200,"reason":"sent success response"}}

 In callStateChangeHandler(). 
callstate : {"state":"ESTABLISHED","status":
{"code":null,"reason":"got complete"}}

Monitoring the Call

The call is established when the callee accepts the call. However, your application needs to provide some way for both parties to end the call.

Note:

A call can be ended by either party (caller/callee).

When a call is ended by one party, the other party receives a message from the browser that the call has ended, and this ENDED state triggers the media stream event handler to release the local media stream.

See the Console Log for the Caller and Console Log for the Callee columns in Table 4-6.

To monitor the call and take action, do the following in your application:

  • Display the user interface necessary for the user to end the call.

  • Provide the logic for the caller or the callee to end the call.

  • Take appropriate actions for the following events:

    • A user actively ends the call.

    • The other party ends the call.

As shown in Example 4-14, the sample audio call application does the following:

  • Displays two control buttons for the users: "Hang Up" and "Logout".

  • Responds to the selection:

    • If Hang Up is clicked, ends the call (which ends the call session and releases the call resources).

    • If Logout is selected, ends the session (which ends the call and releases the session's resources).

Example 4-14 Monitoring the Established Call

function callMonitor(callObj) {
    console.log ("In callMonitor");
    console.log ("Monitoring the call. Setting up controls to Hang Up.");
    console.log (" ");

    // Draw 2 buttons.
    // "Hang Up" button ends the call, but user stays on the application page.
    // "Logout" button ends the session, and user leaves the application.
    // For the complete code, see "Sample Audio Call Application".
    ...
    document.getElementById("btnHangup").onclick = function() {
        ....
        callObj.end();
    };
}

How the Sample Audio Call Application Monitors a Call

Table 4-5 reports on the sample audio call application's actions in monitoring a call session. It describes the events that occur on the sample audio call application page, the actions taken by the sample audio call application, and the messages logged by the console.log method for this segment of the application code.

Table 4-5 How the Sample Audio Call Application Monitors the Call

Sample Audio Call Application Page Events | Actions Taken by the Sample Audio Call Application | Console Log for the Caller (bob1) | Console Log for the Callee (bob2)

The remote stream is added for the caller.

The call takes place.

Control buttons are displayed to enable either party to end the call.

When the call state is ESTABLISHED, the application does the following on the caller's (bob1) side:

  • Sets up the controls to enable the caller to end the call.

  • Adds the remote media stream enabling the caller to hear the "Hello?"

On the callee's (bob2) side:

Sets up the controls to enable the callee to end the call.

In callStateChangeHandler(). 
callstate : {"state":"ESTABLISHED","status":
{"code":null,"reason":"sent complete"}}
Call is established. Calling callMonitor.

In callMonitor.
Monitoring the call. Setting up controls to Hang Up.  
 In mediaStreamEventHandler.  
mediastate : REMOTE_STREAM_ADDED
 In callStateChangeHandler(). 
callstate : {"state":"ESTABLISHED","status":
{"code":null,"reason":"got complete"}}
 Calling callMonitor.

Call established. Setting up controls to Hang Up.  

Ending the Call

When either the callee or caller ends the call, the call state goes to ENDED which triggers the browser to stop the call. The local media stream is removed from each browser application.

Set up the next action according to your application's requirements.

In the sample audio call application as shown in Example 4-11, the application calls the displayInitialControls() function which renders the controls to make calls.

Table 4-6 reports on the sample audio call application's actions in ending a call session. It describes the events that occur on the sample audio call application page, the actions taken by the sample audio call application, and the messages logged by the console.log method for this segment of the application code.

Table 4-6 A Breakdown of How the Sample Audio Call Ends

Sample Audio Call Application Page Events | Actions Taken by the Sample Audio Call Application | Console Log for the Caller (bob1) | Console Log for the Callee (bob2)

One or the other party can end the call.

In this example, bob1, the caller, ended the call.

The console log for the caller from the callMonitor() function specifies who ended the call.

At this point note the differences in the console log entries for the caller and callee.

The example code also once again displays the input buttons for the user to make a call.

  • Either the caller or the callee clicks the control button to end the call.

  • The state of the call changes to ENDED.

  • The local media stream for the browser is disconnected.

  • At this point, your application's logic may vary.

  • In this example, the controls to make a call are displayed once again.

In callMonitor.
Caller, bob1@example.com, clicked the Hang Up button.
Calling call.end now.
 
In callStateChangeHandler(). 
callstate : {"state":"ENDED","status":
{"code":null,"reason":"stop call"}}
 Call ended. Displaying controls again.
 
In displayControls().
 Waiting for Callee Input. 
 
In mediaStreamEventHandler. 
mediastate : LOCAL_STREAM_REMOVED 
In callStateChangeHandler().
callstate : {"state":"ENDED","status":
{"code":null,"reason":"stop call"}}
 Call ended. Displaying controls again.
 
In displayControls().
 Waiting for Callee Input. 
 
 In mediaStreamEventHandler.
mediastate : LOCAL_STREAM_REMOVED

Current Stage in the Development of the Audio Call Feature in Your Application

At this stage in the development of the audio call feature in your application:

  • The general elements required for audio calls are set.

  • Your application can obtain the callee information.

  • Your application can retrieve the call information and start a call.

  • Your application can alert the user about an incoming call and respond appropriately to the user accepting or declining the incoming call.

  • The application logic for the following functions should be implemented:

    • Callback functions assigned to the Session Object's event handlers

    • The success and error callback functions invoked when a Session object is not created

    • Callback functions assigned to the CallPackage Object's event handlers

    • Callback functions assigned to the Call Object's event handlers

    • The error callback function invoked when a Call object is not created

  • Your application can monitor the established call, taking action as necessary when the call changes in any way.

  • When one user ends the call, your application can close the call connection successfully.

Closing the Session When the User Logs Out

The close() method of the Session API is used to close a session with WebRTC Session Controller Signaling Engine. The syntax is:

wscSession.close();

Set up the logic to close the session according to your application's requirements.

In the sample audio call application, when the user clicks the Logout button, the application calls the logout function to close the session as shown in Example 4-15. Additionally, the user is sent back to the location specified in logoutUri (which was defined in Example 4-1 at the start of this sample code).

Example 4-15 Sample Logout Function

function logout() {
    if (wscSession) {
        wscSession.close();
    }
    // Send the user back to where he came from.
    window.location.href = logoutUri;
}

In your environment, the call feature may be one of many features of your application. For this example, and at this point, the sample audio call application has completed its task. All that remains is to provide the closing tags for the HTML elements.

The code for the sample audio call application discussed in this chapter can be seen under "Sample Audio Call Application".

Other Actions on Calls

This section describes some of the other actions your application can take on calls.

Gathering Information on the Current Call

You can obtain the following data about the current call by using the methods of the Call object:

  • The caller or the callee by using the Call.getCaller or Call.getCallee method respectively.

  • The call configuration by using the Call.getCallConfig method.

  • The call state by using the Call.getCallState method.

  • The data transfer object by its label using the Call.getDataTransfer(label) method.

  • The RTCPeerConnection (peer-to-peer connection) of the current call by using the Call.getPeerConnection method. For example, when the call employs dual-tone multi-frequency (DTMF) signal tones, use its getPeerConnection method to perform operations directly on the WebRTC PeerConnection connection.

    Note:

    The peer connection for the current call may change. Always retrieve its current value using the getPeerConnection method for your call object, and then use the result.
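A small helper can gather these accessors in one place for logging or debugging. The describeCall helper itself is hypothetical; this sketch assumes only the getter names listed above.

```javascript
// Hypothetical helper that snapshots the documented Call accessors
// into a plain object, suitable for passing to console.log or
// JSON.stringify when diagnosing a call.
function describeCall(callObj) {
    return {
        caller: callObj.getCaller(),
        callee: callObj.getCallee(),
        state:  callObj.getCallState(),
        config: callObj.getCallConfig()
    };
}
```

Note that the peer connection is deliberately omitted from the snapshot: per the note above, it should be fetched with getPeerConnection only at the moment it is used.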

Supporting Multiple Calls Using CallPackage

Because the CallPackage class object can handle an array of calls, you can configure your application to set up and manage multiple calls (both incoming and outgoing). The basic logic outlined in "Overview of Setting Up the Audio Call Feature in Your Application" can be used in this scenario. Update this logic so that your application properly manages each call session in the array: maintaining each call's details and handling changes to the call, media, or session states.
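One way to track such an array of calls is a map keyed by the remote party. The activeCalls map and trackCall helper below are application-side assumptions; only the onCallStateChange event handler and the CALLSTATE constant names come from the documented API.

```javascript
// Hypothetical application-side registry of in-progress calls,
// keyed by the remote party's address.
var activeCalls = {};

// Register a call and remove it from the registry once it ends or
// fails, so the application's per-call state cannot leak.
function trackCall(remoteParty, callObj, CALLSTATE) {
    activeCalls[remoteParty] = callObj;
    callObj.onCallStateChange = function(state) {
        if (state.state === CALLSTATE.ENDED ||
            state.state === CALLSTATE.FAILED) {
            delete activeCalls[remoteParty];
        }
    };
}
```

In a real application the registered handler would also perform the per-state actions shown in Example 4-11 before updating the registry.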

See "Extending Your Applications Using WebRTC Session Controller JavaScript API" for more information on extending the Call and CallPackage API.

Managing Interactive Connectivity Establishment Interval

Your application can configure the time period within which the WebRTC Session Controller JavaScript API library uses the Interactive Connectivity Establishment (ICE) protocol to set up the call session. This procedure comes into play when your application is the caller and your application starts the call setup with its Call.start command.

About the Use of ICE and ICE Candidate Trickling

ICE is a technique that determines the best possible pairing of the local IP address and the remote IP address that can be used to establish the call session between the two applications associated with the caller and the callee. Each user agent (the caller's or callee's browser) has an entity (such as WebRTC Session Controller Signaling Engine) that acts as the ICE agent and collects and shares possible IP addresses. The final pair of IP addresses is selected after gathering and checking possible candidates (IP addresses) and taking into account the security of the end point applications and of the call connection. The media connection is established only after the ICE procedure finds an appropriate pair of IP addresses with which to communicate.

ICE candidate trickling is an extension of ICE. In this technique, a caller's ICE agent may incrementally provide candidates to the callee's ICE agent after the initial offer (the request which requires a response) has been dispatched. This ICE candidate trickling process allows the callee's application to begin acting upon the call and setting up the necessary protocol connections immediately, without waiting for the caller to gather all possible candidates. Doing so results in faster call startup in cases where gathering is not performed prior to initiating the call.

For more information on Interactive Connectivity Establishment, see http://tools.ietf.org/html/draft-rescorla-mmusic-ice-trickle

About WebRTC Session Controller Signaling Engine and the ICE Interval

WebRTC Session Controller Signaling Engine enables your applications to limit the time taken by the ICE agent to set up a call session by enabling you to specify the ICE interval your application allows for this deliberation process.

The default ICE interval for a call setup is 2000 milliseconds.

Signaling Engine checks the status of the ICE candidates periodically. If new candidates are gathered, the ICE agent attempts to send this information in JSON format in the START message to the other peer.

Retrieving the Current ICE Interval for the Call

To retrieve the current ICE interval, use the getIceCheckInterval method of your application's call object. The interval is returned in milliseconds.

Setting Up the ICE Interval for the Call

To set the current ICE interval, provide the time interval in milliseconds when you call the setIceCheckInterval method of your application's Call object.
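A minimal sketch combining the two accessors described above; the tightenIceInterval helper and the 1000-millisecond value are arbitrary illustrations, while getIceCheckInterval and setIceCheckInterval are the documented Call methods.

```javascript
// Hypothetical helper: cap the ICE check interval at 1000 ms if the
// current value (2000 ms by default) is larger, and return the
// interval now in effect.
function tightenIceInterval(callObj) {
    var current = callObj.getIceCheckInterval();   // milliseconds
    if (current > 1000) {
        callObj.setIceCheckInterval(1000);
    }
    return callObj.getIceCheckInterval();
}
```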

Updating a Call

When a call is in an ESTABLISHED state, the caller or the callee may wish to update the call in one of a set of supported or configured ways. For example, one or the other party may select or deselect the mute button on a call, or move from an audio to a video format for the call. As a result, your application may need to update the call for the specific reason.

To handle this scenario, do the following:

  • Set up the necessary interface to capture the information your application user provides on:

    • The type of update the user wishes to make

    • The accept or decline response to the update request

  • From the point of view of the person initiating the update:

    • Set up the callback function to invoke when your application user requests the update.

    • Configure the parameters (CallConfig, and localStreams) required for the update.

    • Invoke the Call.update method with the CallConfig, and localStreams parameters.

    • Provide the required logic in the callback function assigned to your application's Call.onCallStateChange event handler for each of the possible call state changes relating to updates, wsc.CALLSTATE.UPDATED and wsc.CALLSTATE.UPDATE_FAILED.

    • Save any data specific to your application.

    • Set up the actions in response to the other party declining the update.

  • From the point of view of the person receiving the update:

    • Set up the callback function you assign to the Call.onUpdate event handler when your application receives the update request from Signaling Engine.

    • Process the parameters (CallConfig, and localStreams) required for the update.

    • Invoke the Call.accept method with CallConfig, and localStreams parameters.

    • Set up the required logic in the callback function assigned to your application's Call.onCallStateChange for each of the possible call state changes relating to updates, wsc.CALLSTATE.UPDATED and wsc.CALLSTATE.UPDATE_FAILED.

    • Save any data specific to your application.
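The initiator-side steps above can be sketched as follows. The requestVideoUpgrade helper is hypothetical; the Call.update method, the onCallStateChange event handler, and the UPDATED and UPDATE_FAILED states are those described in this section.

```javascript
// Hypothetical sketch of the update initiator's side: register the
// update-related state handling, then send the update request with the
// new call configuration and local streams.
function requestVideoUpgrade(callObj, callConfig, localStreams, CALLSTATE) {
    callObj.onCallStateChange = function(state) {
        if (state.state === CALLSTATE.UPDATED) {
            console.log("Update accepted by the other party.");
        } else if (state.state === CALLSTATE.UPDATE_FAILED) {
            console.log("Update declined; reverting to audio only.");
        }
    };
    // Ask Signaling Engine to propose the updated call to the other party.
    callObj.update(callConfig, localStreams);
}
```

The receiving side mirrors this flow, except that its callback is assigned to Call.onUpdate and it answers with Call.accept rather than Call.update.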

Reconnecting Dropped Calls

At times, a drop in reception quality or some other event may cause a call that is in progress to be momentarily dropped and reconnected. When a call has been recovered, the WebRTC Session Controller JavaScript API library invokes your application's CallPackage.onResurrect event handler with the rehydrated call as the parameter. Your application can handle this scenario by providing the logic in the callback function assigned to the CallPackage.onResurrect event handler to use the rehydrated call object and resume the call.

Important:

If you create a custom call package, be sure to implement the appropriate logic to resume your application operation and reconnect calls.

To reconnect the call, do the following in your application:

  1. If callPackage is the name of your application's CallPackage object, add the following statement to assign a callback function to its onResurrect event handler:

    callPackage.onResurrect = onResurrect;
    
  2. Set up the callback function (onResurrect in this case).

    In this callback function, be sure to resume the call after you perform any necessary actions. For example,

    function onResurrect(resurrectedCall) {
        ...
        resurrectedCall.resume(onResumeCallSuccess, doCallError);
    }
    
  3. Set up the onResumeCallSuccess success callback for the Call.resume method.

    For example,

    function onResumeCallSuccess(callObj) {
        // Is the call in an established state?
        if (callObj.getCallState().state == wsc.CALLSTATE.ESTABLISHED) {
            // Call is in established state. Take action.
            ...
        } else {
            // Call is not in established state. Take action.
            ...
        }
    }  
    

    The doCallError callback function should have been defined earlier when the application's Call object was created.