5 Setting Up Video Calls in Your Applications

This chapter shows how you can use the Oracle Communications WebRTC Session Controller JavaScript application programming interface (API) library to enable your application's users to make and receive video calls when your applications run on WebRTC-enabled browsers.

Note:

See Oracle Communications WebRTC Session Controller JavaScript API Reference for more information on the individual WebRTC Session Controller JavaScript API classes.

About Implementing the Video Call Feature in Your Applications

The WebRTC Session Controller JavaScript API associated with video calls enables your web applications to support video calls made to and received from other WebRTC-enabled browsers and Session Initiation Protocol (SIP)-based applications.

To support the video call feature in your application, update the application logic you used to set up audio calls by doing the following:

  • Setting up the <video> element for the video stream to display optimally on the application page.

  • Enabling a user to make or receive a video call.

  • Monitoring the video display for the duration of the video call.

  • Adjusting the display element when the video call ends.

About the WebRTC Session Controller JavaScript API Used in Implementing Video Calls

The WebRTC Session Controller JavaScript API objects and methods you use in implementing video calls are the same API objects you would use to implement the audio call feature in your applications. See "About the WebRTC Session Controller JavaScript API Used in Implementing Audio Calls". You can extend the video call feature in your application to perform custom tasks by extending these APIs.

Setting Up Video Calls in Your Applications

You can use the WebRTC Session Controller JavaScript API to set up the video call feature in your application to suit your deployment environment. The specific logic, web application elements, and controls you implement depend on how the video call feature is used in your web application.

The logic to set up video calls in your applications is based on the basic logic described in "Overview of Setting Up the Audio Call Feature in Your Application". Supporting video calls becomes a matter of modifying that basic logic to set up, manage, and close video calls using the WebRTC Session Controller JavaScript API library and providing the associated display elements and controls on the application page.

When you have the basic code to place and receive audio calls using the WebRTC Session Controller JavaScript API library, update that application logic by doing the following:

Setting Up the Video Display

After assessing your browser's support for video, set up the video display settings based on the requirements of your application and the deployment environment.
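One hedged way to sketch that support check is a small helper that inspects the browser's media capture API. The helper name is an assumption, and it takes the navigator object as a parameter so it can be exercised outside a browser; older browsers that only expose prefixed getUserMedia variants are not covered.

```javascript
// Hedged sketch: report whether the browser can capture video.
// Pass in the global navigator object; the check looks for the
// standard navigator.mediaDevices.getUserMedia entry point only.
function browserSupportsVideo(nav) {
    return !!(nav &&
              nav.mediaDevices &&
              typeof nav.mediaDevices.getUserMedia === "function");
}
```

In the browser, you would call browserSupportsVideo(navigator) before wiring up the video elements, and fall back to an audio-only layout when it returns false.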

In Example 5-1, an application sets up the video interface with two HTML <video> elements placed in a table. The width attribute of each table cell sizes the display area as a percentage of the page, and the autoplay attribute on each <video> element specifies that the video should start playing as soon as its stream is attached.

Example 5-1 Sample Video Display Settings

<table>
 ...
    <!-- HTML5 video elements. -->
    <tr>
        <td width="15%"><video id="selfVideo" autoplay></video></td>
        <td width="15%"><video id="remoteVideo" autoplay></video></td>
    </tr>
</table>

Specifying the Video Direction in the Call Configuration

The WebRTC Session Controller JavaScript API library provides the videoMediaDirection parameter to specify the video capability for calls in the CallConfig class object.

Enable the video stream in your application when you create the CallConfig object by setting the video media direction variable (videoMediaDirection). See "Setting Up the Configuration for Calls Supported by the Application".

In Example 5-2, an application enables the user to send and receive video streams by setting the video media direction variable to wsc.MEDIADIRECTION.SENDRECV when it creates its CallConfig object.

Example 5-2 Call Configuration Updated to Include Video

// Create a CallConfig object.
var audioMediaDirection = wsc.MEDIADIRECTION.SENDRECV;
var videoMediaDirection = wsc.MEDIADIRECTION.SENDRECV;
var callConfig = new wsc.CallConfig(audioMediaDirection, videoMediaDirection);
console.log("Created CallConfig with video stream.");
console.log(" ");
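With the CallConfig in place, the call itself is created and started the same way as in the audio-call logic. The sketch below is hedged: wireVideoCall is a hypothetical wrapper, and the callPackage object, callee address, and handler names are assumptions carried over from the audio-call setup, not part of Example 5-2.

```javascript
// Hedged sketch: create and start a video call with the CallConfig
// from Example 5-2. callPackage and the handler functions are assumed
// to exist already, as in the audio-call application logic.
function wireVideoCall(callPackage, callee, callConfig, handlers) {
    var call = callPackage.createCall(callee, callConfig, handlers.onError);
    if (call !== null) {
        // Register the state and media handlers before starting the call.
        call.onCallStateChange = handlers.onCallStateChange;
        call.onMediaStreamEvent = handlers.onMediaStreamEvent;
        call.start();
    }
    return call;
}
```

The wrapper keeps the handler registration in one place, so the same function can serve both audio-only and video call configurations.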

Managing the Video Display on Your Application Page

Set up the video to display or be hidden as required by your application and your deployment environment. One way to manage your application page is to show the video element only while the call is in the required state. When your application handles a new call state, set the hidden attribute of the media element to the display state the video media requires.

In Example 5-3, an application has a callback function called callStateChangeHandler assigned to its Call.onCallStateChange event handler. The application uses this callback function to manage the video display based on the call state changes. The application sets the media.hidden value to:

  • false when the call is established

  • true for all other call states

Example 5-3 Including Video Display State

function callStateChangeHandler(callObj, callState) {
    console.log (" In callStateChangeHandler().");
    console.log("callstate : " + JSON.stringify(callState));
    if (callState.state == wsc.CALLSTATE.ESTABLISHED) {
        console.log (" Call is established. Calling callMonitor. ");
        console.log (" ");
        callMonitor(callObj);
        media.hidden = false;
    } else if (callState.state == wsc.CALLSTATE.ENDED) {
        console.log (" Call ended. Displaying controls again.");
        console.log (" ");
        displayInitialControls();
        media.hidden = true;
    } else if (callState.state == wsc.CALLSTATE.FAILED) {
        console.log (" Call failed. Displaying controls again.");
        console.log (" ");
        displayInitialControls();
        media.hidden = true;
    }
}
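Example 5-3 toggles a media variable that is not defined in the snippet; it is assumed to reference the element (or container) holding the <video> tags. The visibility rule itself can be factored into a small helper. This is a sketch: the videoHidden name is hypothetical, and the established-state value is passed in so the helper does not depend on the wsc library being loaded.

```javascript
// Hedged helper: the video area is visible only while the call is
// established; every other call state hides it.
function videoHidden(currentState, establishedState) {
    return currentState !== establishedState;
}
```

In callStateChangeHandler, media.hidden = videoHidden(callState.state, wsc.CALLSTATE.ESTABLISHED) would then replace the per-branch assignments.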

Managing the Video Streams in the Media Stream Event Handler

When the media state changes, the WebRTC Session Controller JavaScript API library invokes the event handler you assigned to Call.onMediaStreamEvent in your application and provides it with the new media state. Use this new state to take action on the media stream, attaching or removing it as required.

In Example 5-4, an application has a callback function called mediaStreamEventHandler assigned to its Call.onMediaStreamEvent event handler. The application uses this callback function to manage the video media stream based on the value in mediaState, the new media state the application receives from the WebRTC Session Controller JavaScript API library. The callback function retrieves the appropriate video element from the Document Object Model (DOM) document object and attaches the stream to that video element using the WebRTC attachMediaStream function.

Example 5-4 Attaching Video Streams in the Media Stream Event Handler

// Attach media streams to the HTML5 video elements.
function mediaStreamEventHandler(mediaState, stream) {
    console.log (" In mediaStreamEventHandler.");
    console.log("mediastate : " + mediaState);
    console.log (" ");
 
    if (mediaState == wsc.MEDIASTREAMEVENT.LOCAL_STREAM_ADDED) {
        attachMediaStream(document.getElementById("selfVideo"), stream);
    } else if (mediaState == wsc.MEDIASTREAMEVENT.REMOTE_STREAM_ADDED) {
        attachMediaStream(document.getElementById("remoteVideo"), stream);
    }
}
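The attachMediaStream function in Example 5-4 is a helper from the WebRTC adapter shim bundled with older samples, not a built-in browser API. If that shim is not present, one hedged fallback (the attachStream name is an assumption) is to assign the stream to the element's srcObject property, which modern browsers support natively:

```javascript
// Hedged fallback: use the adapter shim's attachMediaStream helper
// when it is loaded, otherwise assign the stream directly to the
// video element's srcObject property.
function attachStream(videoElement, stream) {
    if (typeof attachMediaStream === "function") {
        attachMediaStream(videoElement, stream);
    } else {
        videoElement.srcObject = stream;
    }
}
```

Using attachStream in place of attachMediaStream in mediaStreamEventHandler keeps the handler working whether or not the shim is loaded.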