13 Developing WebRTC-Enabled iOS Applications

This chapter shows how you can use the Oracle Communications WebRTC Session Controller iOS application programming interface (API) library to develop WebRTC-enabled iOS applications. The library is delivered in a WebRTC Session Controller SDK framework.

About the iOS SDK

The WebRTC Session Controller iOS SDK enables you to integrate your iOS applications with core WebRTC Session Controller functions. You can use the iOS SDK to implement the following features:

  • Audio calls between an iOS application and any other WebRTC-enabled application, a Session Initiation Protocol (SIP) endpoint, or a public switched telephone network endpoint using a SIP trunk.

  • Video calls between an iOS application and any other WebRTC-enabled application, with suitable video conferencing support.

  • Seamless upgrading of an audio call to a video call and downgrading of a video call to an audio call.

  • Peer-to-peer data transfers between an iOS application and any other WebRTC-enabled application.

  • Support for client notifications when the device goes into hibernation. When the server sends a notification to wake up the client, the application can reestablish the connection and rehydrate the session.

  • Support for Interactive Connectivity Establishment (ICE) server configuration, including support for Trickle ICE.

  • Transparent session reconnection following network connectivity interruption.

  • Session rehydration following a device handover.

The WebRTC Session Controller iOS SDK is built upon several additional libraries and modules as shown in Figure 13-1.

Figure 13-1 iOS SDK Architecture


The WebRTC iOS binding gives the WebRTC Session Controller iOS SDK access to the native WebRTC library, which itself provides WebRTC support. The SocketRocket WebSocket library provides the WebSocket access required to communicate with WebRTC Session Controller.

For additional information about any of the APIs used in this document, see Oracle Communications WebRTC Session Controller iOS API Reference.

Supported Architectures

The WebRTC Session Controller iOS SDK is compatible with the iOS 9 (arm64) mobile operating system.

About the iOS SDK WebRTC Call Workflow

The general workflow for using the WebRTC Session Controller iOS SDK to place a call is:

  1. Authenticate against WebRTC Session Controller using the WSCHttpContext class. You initialize the WSCHttpContext with the necessary HTTP headers and an optional SSLContextRef in the following manner:

    1. Send an HTTP GET request to the login URI of WebRTC Session Controller.

    2. Complete the authentication process based on your authentication scheme.

    3. Proceed with the WebSocket handshake on the established authentication context.

  2. Establish a WebRTC Session Controller session using the WSCSession class.

    Two protocols must be implemented:

    • WSCSessionConnectionDelegate: A delegate that reports on the success or failure of the session creation.

    • WSCSessionObserverDelegate: A delegate that signals on various session state changes, including CLOSED, CONNECTED, FAILED, and others.

  3. Once a session is established, create a WSCCallPackage object, which manages WSCCall objects in the WSCSession.

  4. Create a WSCCall using the WSCCallPackage createCall method with a callee ID as its argument, for example, alice@example.com.

  5. To monitor call events such as ACCEPTED, REJECTED, RECEIVED, implement a WSCCallObserver protocol which attaches to the WSCCall.

  6. To determine the nature of the WebRTC call, create a WSCCallConfig object with one of the following settings:

    • Bi-directional or mono-directional audio or video.

    • Bi-directional or mono-directional audio and video.

    • Message exchanges containing raw data.

  7. Create and configure a RTCPeerConnectionFactory object and start the WSCCall using the start method.

  8. When the call is complete, terminate the call using the end method of the WSCCall object.
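The workflow above can be condensed into the following sketch. This is an outline only, not a definitive implementation: object creation details that this chapter does not spell out are elided in comments, and the exact createCall and start argument forms are assumptions to be checked against the Oracle Communications WebRTC Session Controller iOS API Reference.

```
// A hedged outline of steps 3-8, assuming an established WSCSession
// (session) from steps 1 and 2.
WSCCallPackage *callPackage = /* step 3: create the call package for the session */;

// Step 4: create a call to a callee ID.
WSCCall *call = [callPackage createCall:@"alice@example.com"];

// Step 5: attach a WSCCallObserver implementation to monitor call events
// such as ACCEPTED, REJECTED, and RECEIVED (implementation elided).

// Step 6: describe the media for the call with a WSCCallConfig, for
// example bi-directional audio and video (configuration elided).
WSCCallConfig *callConfig = /* configure the audio/video directions */;

// Step 7: create an RTCPeerConnectionFactory and start the call.
RTCPeerConnectionFactory *pcFactory = [[RTCPeerConnectionFactory alloc] init];
[call start]; // exact start arguments: see the API reference

// Step 8: terminate the call when it is complete.
[call end];
```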

Prerequisites

Before continuing, make sure you thoroughly review and understand the JavaScript API discussed in the chapters listed below:

The WebRTC Session Controller iOS SDK is closely aligned in concept and functionality with the JavaScript SDK to ensure a seamless transition.

In addition to an understanding of the WebRTC Session Controller JavaScript API, you are expected to be familiar with:

  • Objective-C and general object-oriented programming concepts.

  • General iOS SDK programming concepts, including event handling, delegates, and views.

  • The functionality and use of Xcode.

For an introduction to programming iOS applications using Xcode and for more background on all areas of iOS application development, see:

https://developer.apple.com/library/ios/referencelibrary/GettingStarted/RoadMapiOS/WhereToGoFromHere.html#/apple_ref/doc/uid/TP40011343-CH12-SW1

iOS SDK System Requirements

To develop applications with the WebRTC Session Controller iOS SDK, you need the following software and hardware:

  • A fully configured WebRTC Session Controller installation. See the Oracle Communications WebRTC Session Controller Installation Guide.

  • A Macintosh computer capable of running Xcode version 5.1 or later.

  • A physical iOS device.

    You can test the general flow and function of your iOS WebRTC Session Controller application using the iOS simulator, but to utilize audio and video functionality fully, a physical iOS device such as an iPhone or iPad is required.

About the Examples in This Chapter

In order to illustrate the functionality of the WebRTC Session Controller iOS SDK API, the examples and descriptions in this chapter are kept intentionally straightforward. The examples assume no pre-existing interface schemas except when necessary, and then, only with the barest minimum of code. For example, if a particular method requires arguments such as a user name, the code examples show a plain string userName such as "bob@example.com" being passed to the method. It is assumed that in a production application, you would interface with the contact manager of the iOS device.

Installing the iOS SDK

To install the WebRTC Session Controller iOS SDK, do the following:

  1. Install Xcode from the Mac App Store:

    https://developer.apple.com/library/ios/referencelibrary/GettingStarted/RoadMapiOS/index.html#/apple_ref/doc/uid/TP40011343-CH2-SW1

    Note:

    The WebRTC Session Controller iOS SDK requires Xcode version 6 or later.
  2. Create an iOS project using Xcode, adding any required targets:

    https://developer.apple.com/library/ios/referencelibrary/GettingStarted/RoadMapiOS/FirstTutorial.html#/apple_ref/doc/uid/TP40011343-CH3-SW1

    Note:

    iOS version 7 is the minimum required by the WebRTC Session Controller iOS SDK for full functionality.
  3. Download and extract the WebRTC Session Controller iOS SDK compressed (.zip) file. The archive contains three subfolders: debug, docs, and release.

    • The debug folder contains the debug frameworks.

    • The docs folder contains the iOS API Reference.

    • The release folder contains the release frameworks.

  4. Add the WebRTC Session Controller SDK frameworks to your project.

    1. Select your application target in the Xcode project navigator.

    2. Select the Build Phases tab at the top of the editor pane.

    3. Expand Link Binary With Libraries.

    4. Depending on your requirements, drag the framework files from either release or debug to Link Binary With Libraries.

      Note:

      The webrtc folder contains a single library that supports iOS devices and the iOS simulator.
  5. Import any other system frameworks you require. The following frameworks are recommended:

  6. If you are targeting iOS version 8 or above, add the libstdc++.6.dylib library to prevent linking errors.

Authenticating with WebRTC Session Controller

You use the WSCHttpContext class to set up an authentication context. The authentication context contains the necessary HTTP headers and SSLContext information, and is used when setting up a WSCSession.

Initialize a URL Object

First, create an NSURL object using the URL of your WebRTC Session Controller endpoint.

Example 13-1 Initializing a URL Object

NSString *urlString = @"http://server:port/login?wsc_app_uri=/ws/webrtc/myapp";
NSURL *authUrl = [NSURL URLWithString:urlString];

Configure Authorization Headers

Configure authorization headers as required by your authentication scheme. The following example uses Basic authentication; OAuth and other authentication schemes are similarly configured.

Example 13-2 Initializing Basic Authentication Headers

NSString *authType = @"Basic ";
NSString *username = @"username";
NSString *password = @"password";
// Basic authentication sends the "username:password" pair Base64 encoded...
NSData *credentials = [[NSString stringWithFormat:@"%@:%@", username, password]
                          dataUsingEncoding:NSUTF8StringEncoding];
NSString *authString = [authType stringByAppendingString:
                          [credentials base64EncodedStringWithOptions:0]];

Note:

If you are using Guest authentication, no headers are required.

Connect to the URL

With your authentication parameters configured, you can now connect to the WebRTC Session Controller URL using sendSynchronousRequest, or using NSURLRequest and NSURLConnection, in which case the error and response are returned in delegate methods.

Example 13-3 Connecting to the WebRTC Session Controller URL

NSHTTPURLResponse *response;
NSError *error;
NSMutableURLRequest *loginRequest = [NSMutableURLRequest requestWithURL:authUrl];
[loginRequest setValue:authString forHTTPHeaderField:@"Authorization"];
[NSURLConnection sendSynchronousRequest:loginRequest returningResponse:&response error:&error];

Configure the SSL Context

If you are using Secure Sockets Layer (SSL), configure the SSL context using the SSLCreateContext function, depending upon whether the URL connection was successful. For more information about SSLCreateContext, see the following Apple developer content:

https://developer.apple.com/library/mac/documentation/Security/Reference/secureTransportRef/index.html#/apple_ref/c/func/SSLCreateContext

Example 13-4 Configuring the SSLContext

if (error) {
  // Handle the error...
  NSLog(@"The following error occurred: %@", error.description);
} else {

  // Create the httpContext builder...
  WSCHttpContextBuilder *builder = [WSCHttpContextBuilder create];
  // Configure the SSLContext if necessary...
  SSLContextRef sslContext = SSLCreateContext(NULL, kSSLClientSide, kSSLStreamType);
  // Copy the SSLContext configuration to the httpContext builder...
  [builder withSSLContextRef:&sslContext];

  ...
}

Retrieve the Response Headers from the Request

Depending upon the results of the authentication request, you retrieve the response headers from the URL request and copy the cookies to the httpContext builder.

Example 13-5 Retrieving the Response Headers from the URL Request

if (error) {
  // Handle the error...
  NSLog(@"The following error occurred: %@", error.description);
} else {

  // Create the httpContext builder...
  WSCHttpContextBuilder *builder = [WSCHttpContextBuilder create];
  // Configure the SSLContext if necessary, from Example 13-4...
  SSLContextRef sslContext = SSLCreateContext(NULL, kSSLClientSide, kSSLStreamType);
  // Copy the SSLContext configuration to the httpContext builder...
  [builder withSSLContextRef:&sslContext];

  // Retrieve all the response headers...
  NSDictionary *respHeaders = [response allHeaderFields];
  // Copy all cookies from respHeaders to the httpContext builder...
  [builder withHeader:key value:headerValue];

  ...
}

Build the HTTP Context

Depending upon the results of the authentication request, you then build the WSCHttpContext using WSCHttpContextBuilder.

Example 13-6 Building the HttpContext

if (error) {
  // Handle the error...
  NSLog(@"The following error occurred: %@", error.description);
} else {

  // Create the httpContext builder...
  WSCHttpContextBuilder *builder = [WSCHttpContextBuilder create];
  // Configure the SSLContext if necessary, from Example 13-4...
  SSLContextRef sslContext = SSLCreateContext(NULL, kSSLClientSide, kSSLStreamType);
  // Copy the SSLContext configuration to the httpContext builder...
  [builder withSSLContextRef:&sslContext];

  // Retrieve all the response headers from Example 13-5...

  // Build the httpContext...
  WSCHttpContext *httpContext = [builder build];

  ...
}

Configure Interactive Connectivity Establishment (ICE)

If you have access to one or more STUN/TURN ICE servers, you can initialize the WSCIceServer class. For details on ICE, see "Managing Interactive Connectivity Establishment Interval".

Example 13-7 Configuring the WSCIceServer Class

WSCIceServer *iceServer1 = [[WSCIceServer alloc] initWithUrl:@"stun:stun-server:port"];
WSCIceServer *iceServer2 = [[WSCIceServer alloc] initWithUrl:@"turn:turn-server:port", 
                                                                 @"admin", @"password"];
WSCIceServerConfig *iceServerConfig = [[WSCIceServerConfig alloc] 
                                              initWithIceServers: iceServer1, iceServer2, nil];

Configuring Support for Notifications

Set up client notifications to enable your applications to operate without unnecessarily impacting the battery life and data consumption of the associated mobile devices.

Whenever a user (for example, Bob) is not actively using your application, your application can hibernate the client session after informing the WebRTC Session Controller server. The WebSocket connecting your application to the WebRTC Session Controller server closes. During that hibernation period, if Bob needs to be alerted of an event (such as a call from Alice on the Call feature of your iOS application), the WebRTC Session Controller server sends a message (about the call invite) to the cloud messaging server.

The cloud messaging server uses a push notification, a short message that it delivers to the specific device (such as a mobile phone). This message contains the registration ID for the application and the payload. On being woken up on that device, your application reconnects with the server, uses the saved session ID to resurrect the session data, and handles the incoming event.

If no event occurs during the specified hibernation period and the period expires, there are no notifications to process and the WebRTC Session Controller server cleans up the session.

The preliminary configuration and registration actions that you perform to support client notifications in your applications provide the WebRTC Session Controller server and the cloud messaging provider with the necessary information about the device, the APIs, the application, and so on. The client application running on the mobile device or browser retrieves a registration ID from its notification provider.

About the WebRTC Session Controller Notification Service

The WebRTC Session Controller Notification Service manages the external connectivity with the respective notification providers. It implements the cloud-messaging-provider-specific protocol, such as the Apple Push Notification service (APNs) protocol. The WebRTC Session Controller Notification Service ensures that all notification messages are transported to the appropriate notification providers.

The WebRTC Session Controller server constructs the payload in the push notification it sends by combining the received message payload from your application with the payload configured in the application settings or the application provider settings you provide to WebRTC Session Controller.

If you plan to use the WebRTC Session Controller server to communicate with the APNs system, then you must register it with Apple. See "The Notification Process Workflow for Your iOS Application".

About Employing Your Current Notification System

At this point, verify whether your current installation has an existing notification server that talks to the cloud messaging system and whether the installation supports applications for your users through this server.

If you currently have such a notification server successfully associated with a cloud messaging system, you can use the pre-existing notification system to send notifications using the REST interface. For more information, see the Oracle Communications WebRTC Session Controller Extension Developer's Guide.

How the Notification Process Works

In its simplest form, the notification process works in this manner:

  1. Bob, a customer, accesses your application on his mobile device. For example, assume this is your iOS Audio Call application.

  2. The client application running on the device/browser fetches a device token from its notification provider.

  3. WebRTC Session Controller iOS SDK sends the information about the client device and the application settings to the WebRTC Session Controller server.

    A WebSocket connection is opened.

  4. When there is no activity on the part of the customer (Bob), your application goes into the background. Your application sends a message to the WebRTC Session Controller server informing the server of its intent to hibernate and specifies a time duration for the hibernation.

    The WebSocket connection closes.

  5. During the hibernation period an event occurs. For example, Alice makes a call to Bob on your iOS Audio Call application.

  6. WebRTC Session Controller server receives this call request from Alice and checks the session state. Since the call invite request came during the time interval set as the hibernation period for that session, the WebRTC Session Controller server uses its notification service to send a notification to the APNs server.

  7. The APNs server delivers the notification to your iOS Call application on the mobile device.

  8. On receiving this notification,

    • Your iOS application reconnects using the last session ID and receives the incoming call.

    • The WebRTC Session Controller iOS SDK once again establishes the connection to the WebRTC Session Controller server.

  9. WebRTC Session Controller sends the appropriate notification to your application. The user interface logic in your application informs Bob appropriately.

  10. Bob accepts the call.

  11. Your application logic manages the call to its completion.

Note:

If the time set for the hibernation period completes with no event (such as the call from Alice to Bob), then, the WebRTC Session Controller Server closes the session.

The session ID and its data are destroyed. Your application must create another session; it cannot use that session ID to restore the data.

Handling Multiple Sessions

If you have defined multiple applications in WebRTC Session Controller, your customer may have accessed more than one such application. As a result, there may be multiple WebRTC Session Controller sessions associated with the applications.

In such a scenario where data for more than one session is involved, all of the associated session data is stored appropriately and can be retrieved by your application instances.

The Notification Process Workflow for Your iOS Application

The process workflow to support notifications in your iOS application is as follows:

  1. The prerequisites to using the notification service are complete. See "About the General Requirements to Provide Notifications".

  2. Your application on the iOS device sends the device token to the WebRTC Session Controller iOS SDK, which then sends it to the WebRTC Session Controller server and saves it locally for future use.

  3. When a notification is to be sent, the WebRTC Session Controller server sends a message with the deviceToken to the appropriate notification provider (APNs).

    Internally, the WebRTC Session Controller iOS SDK passes the device and operating system information about the client to the server to determine what features the client is able to support.

  4. The notification provider (APNs) forwards the notification to the device.

  5. When the notification is tapped on the device, your application is awakened. It re-establishes communication with the WebRTC Session Controller server and handles the event.

Note:

Apple allows a payload of up to 256 bytes for pre-iOS 8 systems and 2 kilobytes for iOS 8 and later systems.

Additionally, the notifications can do one of the following:

  • Display a short text message

  • Play a brief sound

  • Display a number in a badge on the application icon

About the WebRTC Session Controller APIs for Client Notifications

The WebRTC Session Controller iOS API objects that enable your applications to handle notifications related to session hibernation are:

  • hibernate

    The hibernate method of the WSCSession object sends a hibernate request to the WebRTC Session Controller server.

  • WSCSessionStateHibernated

    The enum value of the WSCSessionState object indicating that the session is in hibernation.

  • WSCHibernateParams

    The WSCHibernateParams object stores the parameters for the hibernating session.

  • withDeviceToken parameter of WSCSessionBuilder

    When you provide the withDeviceToken parameter, the session is built with the device token obtained from APNs.

  • withHibernationHandler parameter of WSCSessionBuilder

    When you provide the withHibernationHandler parameter, the session is built to handle hibernation.

  • withSessionId method for the WSCSessionBuilder object

    Used for rehydration. When you provide the withSessionId parameter, the session is built with the input session ID.

  • WSCHibernationHandler.h, a new protocol file

For more on these and other WebRTC Session Controller iOS API classes, see AllClasses at Oracle Communications WebRTC Session Controller iOS API Reference.

About the General Requirements to Provide Notifications

Complete the following tasks as required for your application. Some are performed outside of your application:

Registering with Apple Push Notification Service

Register your WebRTC Session Controller installation with the Apple Push Notification service to set up the following:

  1. An SSL certificate to communicate with APNs.

  2. A provisioning profile for the application.

For information about completing these tasks, refer to the Local and Remote Notification Programming Guide in the iOS Developer Library at

https://developer.apple.com/library/ios/navigation/

Obtaining the Device Token

The device token is similar to a phone number and is used in the push notification. The Apple Push Notification service uses this token to locate the specific device on which your iOS application is installed.

Your application should register with Apple Push Notification service to obtain a deviceToken.

For information about how to register with the notification service and obtain a device token, refer to the Local and Remote Notification Programming Guide in the iOS Developer Library documentation.

Enabling Your Applications to Use the WebRTC Session Controller Notification Service

Access the Notification Service tab in the WebRTC Session Controller Administration Console and enter the information about each application for the use of the WebRTC Session Controller Notification Service. For each application, enter the application settings, such as the application ID, the API key, and the cloud provider for the API service. For more information about completing this task, see "Creating Applications for the Notification Service" in WebRTC Session Controller System Administrator's Guide.

Informing the Device to Deliver Push Notifications to Your Application

Ensure that, after your application launches successfully, your application informs the device that it requires push notifications.

In Example 13-8, the application registers for push notifications.

Example 13-8 Registering for Push Notifications

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
  
// Application lets the device know it wants to receive push notifications
  [[UIApplication sharedApplication] registerForRemoteNotificationTypes:
   (UIRemoteNotificationTypeBadge | UIRemoteNotificationTypeSound | UIRemoteNotificationTypeAlert)];
  
NSDictionary *appDefaults = [NSDictionary
                               dictionaryWithObject:[NSNumber numberWithBool:YES]
                               forKey:@"CacheDataAgressively"];
  [[NSUserDefaults standardUserDefaults] registerDefaults:appDefaults];
  [self registerDefaultsFromSettingsBundle];
  
  if (launchOptions != nil)
  {
    NSDictionary *dictionary = [launchOptions objectForKey:UIApplicationLaunchOptionsRemoteNotificationKey];
    if (dictionary != nil)
    {
      NSLog(@"Launched from push notification: %@", dictionary);
      //[self addMessageFromRemoteNotification:dictionary updateUI:NO];
    }
  }
  return YES;
}
 

Note:

iOS 7.0 and later versions support silent remote notifications, where the silent notification wakes up the application in the background so that it can get new data from the server.

If you remove Sound, Badge, and Alert from the registerForRemoteNotificationTypes call by sending in an empty set, the push service sends your notifications silently.
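For example, assuming the UIKit constant UIRemoteNotificationTypeNone is used to express the empty set, the registration call would be:

```
// Register with no visible notification types to request silent delivery.
[[UIApplication sharedApplication]
    registerForRemoteNotificationTypes:UIRemoteNotificationTypeNone];
```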

Storing the Device Token

After you have successfully registered with Apple Push Notification service, store the device token in your application.

The code excerpt in Example 13-9 follows from Example 13-8 and shows how an application stores the device token and reports any failure to do so.

Example 13-9 Storing the Device Token

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
  // Let the device know we want to receive push notifications
 ... 
return YES;
} 
 ... 
-(void)application:(UIApplication*)application didRegisterForRemoteNotificationsWithDeviceToken:(NSData*)deviceToken {
  DLog(@"My token is: %@", deviceToken);
  [SampleIOSUtils setDeviceToken:deviceToken];
}

- (void)application:(UIApplication*)application didFailToRegisterForRemoteNotificationsWithError:(NSError*)error {
  NSLog(@"Failed to get token, error: %@", error);
}

The following excerpt from an application sets up the logic to get and set device tokens.

...
static NSString * const DEVICE_TOKEN_KEY = @"DeviceToken";
static NSData *deviceToken;
...
+(NSData *)getDeviceToken {
  return [[NSUserDefaults standardUserDefaults] objectForKey:DEVICE_TOKEN_KEY];
}
+(void)setDeviceToken:(NSData *)token {
  [[NSUserDefaults standardUserDefaults] setObject:token forKey:DEVICE_TOKEN_KEY];
  [[NSUserDefaults standardUserDefaults] synchronize];
}

Storing the Session ID

Your application should use the standard storage mechanisms offered by the iOS platform to persist the session ID. Your application can use this session ID to immediately present "Bob" (the customer) with the last known state of Bob's session with your application.

The getSessionId() method of WSCSession object returns the session identifier as an NSString object. For more information, see the description of WSCSession in Oracle Communications WebRTC Session Controller iOS API Reference.
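As a minimal sketch, the session ID can be persisted the same way as the device token shown earlier; the key name and helper methods here are illustrative only, not part of the SDK:

```
static NSString * const SESSION_ID_KEY = @"LastSessionId"; // illustrative key

// Persist the session ID returned by [session getSessionId]...
+ (void)setLastSessionId:(NSString *)sessionId {
  [[NSUserDefaults standardUserDefaults] setObject:sessionId forKey:SESSION_ID_KEY];
  [[NSUserDefaults standardUserDefaults] synchronize];
}

// ...and read it back when rehydrating the session.
+ (NSString *)getLastSessionId {
  return [[NSUserDefaults standardUserDefaults] objectForKey:SESSION_ID_KEY];
}
```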

Implement Session Rehydration

To implement session rehydration in your application:

  • Persist Session Ids

    To provide your customers with a seamless user experience, persist the session ID value in your application. Use the standard storage mechanisms offered by the iOS platform to store user credentials in your iOS applications.

  • Use the appropriate Session ID

    Provide the same session ID that the client last successfully connected with when it hibernated. The WebRTC Session Controller iOS SDK uses the session ID to rehydrate the session. It uses stored credentials to authenticate the client session.

  • Provide for the ability to trigger rehydration for more than one session object.

    This scenario occurs when you have multiple applications defined in WebRTC Session Controller and your customer creates a session with more than one of those applications in their mobile application. In such a scenario, the client application uses more than one WSCSession, each with its own session ID.
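In the excerpt style used elsewhere in this chapter, rehydration might look like the following sketch, which passes the persisted session ID to the documented withSessionId builder method; getLastSessionId is an illustrative helper that returns the stored session ID:

```
...
self.wscSession = [[[WSCSessionBuilder create:url]
...
                      withSessionId:[SampleIOSUtils getLastSessionId]]
...
```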

Handling Hibernation Requests from the Server

At times your application receives a request to hibernate from the WebRTC Session Controller server. To respond to such a request, provide the necessary logic to handle the user interface and other elements in your application.

See "Responding to Hibernation Requests from the Server" for information on how to set up the callbacks to the specific WebRTC Session Controller iOS SDK event handlers.

Tasks that Use WebRTC Session Controller iOS APIs

Use WebRTC Session Controller iOS APIs to do the following:

For information about the WebRTC Session Controller iOS APIs used in the following sections, see Oracle Communications WebRTC Session Controller iOS API Reference.

Associate the Device Token when Building the WebRTC Session

Associate the device token when you build a WebRTC Session Controller session. The withDeviceToken method returns a WSCSessionBuilder object with a device token configuration:

-(WSCSessionBuilder *)withDeviceToken:(NSData *)token;

Pass the stored device token when you initialize the session, as shown in Example 13-17.

In the following sample code excerpt, the application provides a stored device token that it retrieves from its SampleIOSUtils interface.

...
self.wscSession = [[[[[[[[[[[[[[[WSCSessionBuilder create:url]
...
                          withDeviceToken:[SampleIOSUtils getDeviceToken]
...

For information about WSCSessionBuilder, see Oracle Communications WebRTC Session Controller iOS API Reference.

Associate the Hibernation Handler for the Session

Set up the hibernation handling function when you build a WebRTC Session Controller session. The withHibernationHandler method returns a WSCSessionBuilder object with a hibernation handler configuration:

- (WSCSessionBuilder *)withHibernationHandler:(id<WSCHibernationHandler>)handler;

See Example 13-17.

In the following sample code excerpt, the application allocates and initializes a hibernation handler:

...
self.wscSession = [[[[[[[[[[[[[[[WSCSessionBuilder create:url]
...
                    withHibernationHandler:[[WSCHibernationHandler alloc] init]
...

Implement the HibernationHandler Interface

Implement the WSCHibernationHandler protocol to handle the hibernation requests that originate from the server or the client. The protocol has the following event handlers:

  • onFailure: Called when a hibernate request from the client fails.

  • onSuccess: Called when a hibernate request from the client succeeds.

  • onRequest: Called when there is a request from the server to hibernate the client. Returns an instance of WSCHibernateParams.

  • onRequestCompleted: Called when the request from the server end completes. This event handler uses a WSCStatusCode enum value as input parameter.

Example 13-10 Sample HibernationHandler Implementation

-(void)onSuccess {
// Hibernate success. Store the recent sessionId
  DLog(@"Success Hibernating");
  WSCSession *session = [SampleIOSUtils getActiveSession];
  [SampleIOSUtils setLastSessionId:[session getSessionId]];
  [SampleIOSUtils logout];
}
 
-(void)onFailure:(WSCStatusCode)code {
// Hibernate failed. Issue alert
  DLog(@"Error Hibernating, status code: %ld", (long)code);
  //[SampleIOSUtils showAlertBox];
  [SampleIOSUtils logout];
 
}
 
// Return the timetolive param set for the hibernation
-(WSCHibernateParams *)onRequest {
  WSCHibernateParams *params = [[WSCHibernateParams alloc] initWithTTL:12000];
  return params;
}
 
-(void)onRequestCompleted:(WSCStatusCode)code {
  
  if(code == WSCStatusCodeOk ){
// WebRTC Session Controller Status 200.. Store the recent sessionId
    WSCSession *session = [SampleIOSUtils getActiveSession];
    [SampleIOSUtils setLastSessionId:[session getSessionId]];
    DLog(@"Success Hibernating, status code: %ld", (long)code);
  } else {
// WebRTC Session Controller Not OK.. Show Alert
    DLog("Error Hibernating, status code: %ld",(long)code);
    //[SampleIOSUtils showAlertBox];
  }
  [SampleIOSUtils logout];
  
}

For information about WSCHibernationHandler and WSCStatusCode, see Oracle Communications WebRTC Session Controller iOS API Reference.

Implement Session Hibernation

When your iOS application moves to the background, it must send a request to WebRTC Session Controller asking it to hibernate the session.

Use the appropriate method to release shared resources, invalidate timers, and store the state information necessary to restore your application to its current state in case it is terminated later. For information about handling the application life cycle, see the UIApplicationDelegate reference section of the iOS Developer Library.

The WebRTC Session Controller iOS SDK provides the following method in the WSCSession object.

-(void)hibernate:(WSCHibernateParams *) params;

Provide the hibernation period with the WSCHibernateParams object, using its initWithTTL: initializer to set the time-to-live interval. For information about the hibernate method, see Oracle Communications WebRTC Session Controller iOS API Reference.

Example 13-11 illustrates how to initiate a hibernate request to the WebRTC Session Controller server.

Example 13-11 Hibernating the Session

-(void)applicationDidEnterBackground:(UIApplication *)application
{
  WSCSession *session = [SampleIOSUtils getActiveSession]; // get the current active session
  // Hibernate the session with the desired time-to-live
  [session hibernate:[[WSCHibernateParams alloc] initWithTTL:12000]];
}

The WebRTC Session Controller server identifies the client device (going into hibernation) by the deviceToken you provided when building the session object (Example 13-17).

When you invoke the hibernate method, provide the maximum period for which the client session should be kept alive on the server. All notifications received within this period are sent to the client device. The WebRTC Session Controller server enforces a maximum interval, depending on the policy set for each type of client device. If your application requests an interval greater than this maximum, the server uses the policy-supported maximum interval.

When the hibernate method completes, the WSCSessionState for the session is WSCSessionStateHibernated and the session with the WebRTC Session Controller server closes. While the session is hibernated, your application cannot take any session action, such as making a call request.

For information about the hibernate method, see the description of WSCSession in Oracle Communications WebRTC Session Controller iOS API Reference.

Send Notifications to the Callee when the Client Session is in Hibernated State

If the client session for the callee is in a hibernated state, call setup for an incoming event may take some time before the callee can accept the call. In your iOS application, add logic to the callback function to handle an incoming call event while the session for the callee is hibernated.

Note:

This section describes how to use the WebRTC Session Controller notification API to send the notification.

If your application connects to a notification system that exposes a REST API, you can use REST API Callouts instead.

Set up a function to handle the onWSHibernated method provided by the Groovy script library. This method takes a NotificationContext object as a parameter.

The NotificationContext object provides access to information about the event that triggered the notification and, if configured, to the notification services that deliver it. Use this object to do the following:

  • Retrieve:

    • Information about the triggering message (such as the initiator, the target, and the package type)

    • Information about the application (ID, version, platform, and platform version)

    • The device token

    • The incoming message that triggered this notification, as a normalized message

    • The REST client instance for submitting outbound REST requests (synchronous callouts only)

  • Dispatch the messages through the internal notification service, if configured.

For more information about NotificationContext, see All Classes in Oracle Communications WebRTC Session Controller Configuration API Reference.

Example 13-12 shows a sample code excerpt that creates the JSON message in the msg_payload object. It uses the context.dispatch method to dispatch the message payload through the local notification service.

Example 13-12 Using Groovy Method to Define the Notification Payload

/**
 * This function gets invoked when the client end-point is in a hibernated state when an incoming event arrives for it.
 * A typical action would be to send some trigger/Notification to wake up the client.
 *
 * @param context the notification context
 */
void onWSHibernated(NotificationContext context) {
  // Define the notification payload.
  def msg_payload = "{\"data\" : {\"wsc_event\": \"Incoming " + context.getPackageType() +
          "\", \"wsc_from\": \"" + context.getInitiator() + "\"}}"
  if (Log.debugEnabled) {
    Log.debug("Notification Payload: " + msg_payload)
  }
  // Using local notification gateway
  context.dispatch(msg_payload)
}

Provide the Session ID to Rehydrate the Session

To rehydrate an existing session, use the stored session ID. Use the withSessionId method to create a session with a stored session ID (value):

-(WSCSessionBuilder *)withSessionId:(NSString *)value;

Important:

Invoke this method only when attempting to rehydrate an existing session.

As shown in Example 13-13, you can set up a listener for push notifications in your application. Pass the session ID received in the push notification into the session builder. The session builder rehydrates the session by retrieving the hibernated session from persisted storage, using the passed sessionId as the key.

Example 13-13 Rehydrating an Existing Session

-(void)application:(UIApplication*)application didReceiveRemoteNotification:(NSDictionary*)userInfo
{
  NSLog(@"Received notification: %@", userInfo);
  
  NSString *sessionId = //Parse sessionID from userInfo object
  
  ...
  
  // Build a new session object containing the sessionId pulled from the userInfo object
  WSCSessionBuilder *builder = [WSCSessionBuilder create:webSocketURL];
  builder = [[builder withHibernationsHandler:hibernationHandler]
                                 ...
                         withSessionId:sessionId];
  ...
  // Build the new session object
  WSCSession *session = [builder build];

}
 

Responding to Hibernation Requests from the Server

When the server must force your application to hibernate, it calls the onRequest method in your WSCHibernationHandler implementation. When the hibernation request from the server completes, it calls the onRequestCompleted method in that implementation.

Provide the necessary logic in your implementation of WSCHibernationHandler to handle the user interface and other elements in your application, as shown in Example 13-14.

Example 13-14 Handling Server-originated Hibernation Requests

@protocol WSCHibernationHandler <NSObject>
 
-(void)onSuccess;
 
-(void)onFailure:(WSCStatusCode)code;
 
-(WSCHibernateParams *)onRequest;
 
-(void)onRequestCompleted:(WSCStatusCode)code;
 
@end
 

Creating a WebRTC Session Controller Session

Once you have configured your authentication method and connected to your WebRTC Session Controller endpoint, you can instantiate a WebRTC Session Controller session object.

Implement the WSCSessionConnectionDelegate Protocol

You must implement the WSCSessionConnectionDelegate protocol to handle the results of your session creation request. See Example 13-15. The WSCSessionConnectionDelegate protocol has two event handlers:

  • onSuccess: Triggered upon a successful session creation.

  • onFailure: Triggered when session creation fails; returns a failure status code.

Example 13-15 Implementing the WSCSessionConnectionDelegate Protocol

#pragma mark WSCSessionConnectionDelegate
-(void)onSuccess {
  NSLog(@"WebRTC Session Controller session connected.");
    NSLog(@"Connection succeeded. Continuing...");
 }

 -(void)onFailure:(enum WSCStatusCode)code {
   switch (code) {
     case WSCStatusCodeUnauthorized:
       NSLog(@"Unable to connect. Please check your credentials.");
       break;
     case WSCStatusCodeResourceUnavailable:
       NSLog(@"Unable to connect. Please check the URL.");
       break;
     default:
       // Handle other cases as required...
       break;
   }
 }

Implement the WSCSessionConnectionObserver Protocol

Create a WSCSessionConnectionObserver protocol to monitor and respond to changes in session state, as shown in Example 13-16.

Example 13-16 Implementing the WSCSessionConnectionObserver Protocol

#pragma mark WSCSessionConnectionObserver
-(void)stateChanged:(WSCSessionState) sessionState {
   switch (sessionState) {
     case WSCSessionStateConnected:
       NSLog(@"Session is connected.");
       break;
     case WSCSessionStateReconnecting:
       NSLog(@"Session is attempting reconnection.");
       break;
     case WSCSessionStateFailed:
       NSLog(@"Session connection attempt failed.");
       break;
     case WSCSessionStateClosed:
       NSLog(@"Session connection has been closed.");
       break;
     default:
       break;
   }
 }

Build the Session Object and Open the Session Connection

With the connection delegate and connection observer configured, you now build a WebRTC Session Controller session and open a connection with the server, as shown in Example 13-17.

Example 13-17 Building the Session Object and Opening the Session Connection

if (error) {
  // Handle an error...
  NSLog(@"The following error occurred: %@", error.description);
} else {

  // Configure the SSLContext if necessary, from Example 13-4...
  ...

  // Retrieve all the response headers from Example 13-5...
  ...

  // Build the httpContext from Example 13-6...
  ...

  NSString *userName = @"username";
  // self adopts the WSCSessionConnectionDelegate and session observer protocols
  self.wscSession = [[[[[[[[[[WSCSessionBuilder create:urlString]
                                             withConnectionDelegate:self]
                                           withUserName:userName]
                                         withObserverDelegate:self]
                                       withPackage:[[WSCCallPackage alloc] init]]
                                     withHttpContext:httpContext]
                                   withIceServerConfig:iceServerConfig]
                                  withHibernationsHandler:[[WSCHibernationHandler alloc] init]]
                                withDeviceToken:@"MydeviceToken"]
                              build];
  // Open a connection to the server...
  [self.wscSession open];
}

In Example 13-17, note that the withPackage method registers a new WSCCallPackage with the session that will be instantiated when creating voice or video calls.

Configure Additional WSCSession Properties

You can configure additional properties when creating a session using the WSCSessionBuilder withProperty method, as shown in Example 13-18.

Example 13-18 Configuring WSCSession Properties

if (error) {
  // Handle an error...
  NSLog(@"The following error occurred: %@", error.description);
} else {

  // Configure the SSLContext if necessary, from Example 13-4...
  ...

  // Retrieve all the response headers from Example 13-5...
  ...

  // Build the httpContext from Example 13-6...
  ...

  self.wscSession = [[[[WSCSessionBuilder create:urlString]
                ...
              withProperty:WSC_PROP_IDLE_PING_INTERVAL value:[NSNumber numberWithInt:20]]
            withProperty:WSC_PROP_RECONNECT_INTERVAL value:[NSNumber numberWithInt:10000]]
           ...
         build];
  [self.wscSession open];
}

For a complete list of properties, see Oracle Communications WebRTC Session Controller iOS API Reference.

Adding WebRTC Voice Support to your iOS Application

This section describes how you can add WebRTC voice support to your iOS application.

Initialize the CallPackage Object

When you created your session, you registered a new WSCCallPackage object using the withPackage method of the session builder. You now retrieve that WSCCallPackage object, as shown in Example 13-19.

Example 13-19 Initializing the CallPackage

WSCCallPackage *callPackage = (WSCCallPackage*)[wscSession getPackage:PACKAGE_TYPE_CALL];

Note:

Use the default PACKAGE_TYPE_CALL call type unless you have defined a custom call type.

Place a WebRTC Voice Call from Your iOS Application

Once you have configured your authentication scheme, and created a Session, you can place voice calls from your iOS application.

Add the Audio Capture Device to Your Session

Before continuing, to stream audio from your iOS device, initialize a capture session and add an audio capture device, as shown in Example 13-20.

Example 13-20 Adding an Audio Capture Device to Your Session

- (instancetype)initAudioDevice
{
  self = [super init];
  if (self) {
   self.captureSession = [[AVCaptureSession alloc] init];
   [self.captureSession setSessionPreset:AVCaptureSessionPresetLow];

   // Get the audio capture device and add it to our session.
   self.audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
   NSError *error = nil;
   AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput
                                 deviceInputWithDevice:self.audioCaptureDevice error:&error];
   if (audioInput) {
    [self.captureSession addInput:audioInput];
   } else {
    NSLog(@"Unable to find audio capture device : %@", error.description);
   }
  }
  return self;
}

Initialize the Call Object

With the WSCCallPackage object created, you initialize a WSCCall object, passing the callee ID as an argument, as shown in Example 13-21.

Example 13-21 Initializing the Call Object

NSString *callee = @"bob@example.com";
WSCCall *call = [callPackage createCall:callee];

Configure Trickle ICE

To improve ICE candidate gathering performance, you can choose to enable Trickle ICE in your application using the setTrickleIceMode method of the WSCCall object, as shown in Example 13-22.

Example 13-22 Configuring Trickle ICE

// Trickle ICE options: WSCTrickleIceModeOFF, WSCTrickleIceModeHalf, or WSCTrickleIceModeFull
[call setTrickleIceMode:WSCTrickleIceModeFull];

For more information, see "Enabling Trickle ICE to Improve Application Performance".

Create a WSCCallObserverDelegate Protocol

You create a WSCCallObserverDelegate protocol, as shown in Example 13-23, so you can respond to the following WSCCall events:

  • callUpdated: Triggered on incoming and outgoing call update requests.

  • mediaStateChanged: Triggered on changes to the WSCCall media state.

  • stateChanged: Triggered on changes to the WSCCall state.

  • onDataTransfer: Triggered when a WSCDataTransfer object is created.

Example 13-23 Creating a WSCCallObserverDelegate Protocol

#pragma mark WSCCallObserverDelegate

-(void)callUpdated:(WSCCallUpdateEvent)event
                    callConfig:(WSCCallConfig *)callConfig
                    cause:(WSCCause *)cause
{
  NSLog(@"callUpdate request with config: %@", callConfig.description);
  switch (event) {
    case WSCCallUpdateEventSent:
      break;
    case WSCCallUpdateEventReceived:
      NSLog(@"Call Update event received for config: %@", callConfig.description);
      break;
    case WSCCallUpdateEventAccepted:
      NSLog(@"Call Update accepted for config: %@", callConfig.description);
      break;
    case WSCCallUpdateEventRejected:
      NSLog(@"Call Update event rejected for config: %@", callConfig.description);
      break;
    default:
      break;
  }
}

-(void)mediaStateChanged:(WSCMediaStreamEvent)mediaStreamEvent
     mediaStream:(RTCMediaStream *)mediaStream
{
  NSLog(@"mediaStateChanged : %u", mediaStreamEvent);
}

-(void)stateChanged:(WSCCallState)callState
     cause:(WSCCause *)cause
{
   NSLog(@"Call State changed : %u", callState);
  switch (callState) {
    case WSCCallStateNone:
       NSLog(@"stateChanged: %@", @"WSC_CS_NONE");
      break;
    case WSCCallStateStarted:
       NSLog(@"stateChanged: %@", @"WSC_CS_STARTED");
      break;
    case WSCCallStateResponded:
       NSLog(@"stateChanged: %@", @"WSC_CS_RESPONDED");
      break;
    case WSCCallStateEstablished:
       NSLog(@"stateChanged: %@", @"WSC_CS_ESTABLISHED");
      break;
    case WSCCallStateFailed:
       NSLog(@"stateChanged: %@", @"WSC_CS_FAILED");
      break;
    case WSCCallStateRejected:
       NSLog(@"stateChanged: %@", @"WSC_CS_REJECTED");
      break;
    case WSCCallStateEnded:
       NSLog(@"stateChanged: %@", @"WSC_CS_ENDED");
      break;
    default:
      break;
  }
}

Register the WSCCallObserverDelegate Protocol with the Call Object

You register the WSCCallObserverDelegate protocol with the WSCCall object, as shown in Example 13-24.

Example 13-24 Registering a WSCCallObserverDelegate Protocol

call.observerDelegate = self; // self adopts the WSCCallObserverDelegate protocol

Create a WSCCallConfig Object

You create a WSCCallConfig object to determine the type of call you wish to make. The WSCCallConfig constructor takes two parameters, audioMediaDirection and videoMediaDirection. The first parameter configures the audio media direction, while the second configures the video media direction.

The values for audioMediaDirection and videoMediaDirection parameters are:

  • WSCMediaDirectionNone: No direction; media support disabled.

  • WSCMediaDirectionRecvOnly: The media stream is receive only.

  • WSCMediaDirectionSendOnly: The media stream is send only.

  • WSCMediaDirectionSendRecv: The media stream is bi-directional.

Example 13-25 shows the configuration for a bi-directional, audio-only call.

Example 13-25 Creating an Audio CallConfig Object

WSCCallConfig *callConfig = [[WSCCallConfig alloc] initWithAudioVideoDirection:WSCMediaDirectionSendRecv video:WSCMediaDirectionNone];

Configure the Local MediaStream for Audio

With the WSCCallConfig object created, you then configure the local audio MediaStream using the WebRTC PeerConnectionFactory, as shown in Example 13-26.

Example 13-26 Configuring the Local MediaStream for Audio

RTCPeerConnectionFactory *pcf = [call getPeerConnectionFactory];
RTCMediaStream *localMediaStream = [pcf mediaStreamWithLabel:@"ARDAMS"];
[localMediaStream addAudioTrack:[pcf audioTrackWithID:@"ARDAMSa0"]];
NSArray *streamArray = [[NSArray alloc] initWithObjects:localMediaStream, nil];

For information about the WebRTC SDK API, see https://webrtc.org/native-code/native-apis/.

Start the Audio Call

Finally, you start the audio call using the start method of the WSCCall object and passing it the WSCCallConfig object and the streamArray, as shown in Example 13-27.

Example 13-27 Starting the Audio Call

[call start:callConfig streams:streamArray];

Terminating the Audio Call

To terminate the audio call, use the WSCCall object end method:

[call end];

Receiving a WebRTC Voice Call in Your iOS Application

This section describes configuring your iOS application to receive WebRTC voice calls.

Create a WSCCallPackageObserverDelegate

To be notified of an incoming call, create a WSCCallPackageObserverDelegate and attach it to your WSCCallPackage, as shown in Example 13-28.

Example 13-28 Creating a CallPackageObserver Delegate

#pragma mark WSCCallPackageObserverDelegate
-(void)callArrived:(WSCCall *)call
        callConfig:(WSCCallConfig *)callConfig
        extHeaders:(NSDictionary *)extHeaders {

    NSLog(@"Registering a WSCCallObserverDelegate...");
    call.observerDelegate = self; // self adopts the WSCCallObserverDelegate protocol

    NSLog(@"Configuring the media streams...");
    RTCPeerConnectionFactory *pcf = [call getPeerConnectionFactory];
    RTCMediaStream *localMediaStream = [pcf mediaStreamWithLabel:@"ARDAMS"];
    [localMediaStream addAudioTrack:[pcf audioTrackWithID:@"ARDAMSa0"]];
    NSArray *streamArray = [[NSArray alloc] initWithObjects:localMediaStream, nil];

    if (answerTheCall) {
      NSLog(@"Answering the call...");
      [call accept:self.callConfig streams:streamArray];
    } else {
      NSLog(@"Declining the call...");
      [call decline:WSCStatusCodeBusyHere];
    }
}

In Example 13-28, the callArrived event handler processes an incoming call request:

  1. The method registers a WSCCallObserverDelegate for the incoming call. In this case, it uses the same WSCCallObserverDelegate, from the example in "Create a WSCCallObserverDelegate Protocol".

  2. The method then configures the local media stream, in the same manner as "Configure the Local MediaStream for Audio".

  3. The method determines whether to accept or reject the call based on the value of the answerTheCall boolean using the accept or decline methods of the WSCCall object.

    Note:

    The answerTheCall boolean will most likely be set by a user interface element in your application such as a button or link.

Bind the CallPackage Observer to the CallPackage

With the WSCCallPackageObserverDelegate object created, you bind it to your WSCCallPackage object:

[callPackage setObserverDelegate:self]; // self adopts the WSCCallPackageObserverDelegate protocol

Adding WebRTC Video Support to your iOS Application

This section describes how you can add WebRTC video support to your iOS application. While the methods are almost identical to those used to add voice call support, some additional preparation is required.

Add the Audio and Video Capture Devices to Your Session

As with an audio call, you initialize the audio capture device as shown in Example 13-20. In addition, you initialize the video capture device and add it to your session, as shown below in Example 13-29.

Example 13-29 Adding the Audio and Video Capture Devices to Your Session

- (instancetype)initAudioVideo
{
  self = [super init];
  if (self) {
   self.captureSession = [[AVCaptureSession alloc] init];
   [self.captureSession setSessionPreset:AVCaptureSessionPresetLow];

   // Get the audio capture device and add to our session.
   self.audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
   NSError *error = nil;
   AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput
                                 deviceInputWithDevice:self.audioCaptureDevice error:&error];
   if (audioInput) {
    [self.captureSession addInput:audioInput];
   }
   else {
    NSLog(@"Unable to find audio capture device : %@", error.description);
   }

   // Get the video capture devices and add to our session.
   for (AVCaptureDevice* videoCaptureDevice in [AVCaptureDevice
                                                devicesWithMediaType:AVMediaTypeVideo]) {
    if (videoCaptureDevice.position == AVCaptureDevicePositionFront) {
      self.frontVideoCaptureDevice = videoCaptureDevice;
      AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput
                                          deviceInputWithDevice:videoCaptureDevice error:&error];
      if (videoInput) {
        [self.captureSession addInput:videoInput];
      } else {
        NSLog(@"Unable to get front camera input : %@", error.description);
      }
    } else if (videoCaptureDevice.position == AVCaptureDevicePositionBack) {
      self.backVideoCaptureDevice = videoCaptureDevice;
    }
   }
  }
  return self;
}

Configure a View Controller to Display Incoming Video

You add a view object to a view controller to display the incoming video. In Example 13-30, when the MyWebRTCApplicationViewController view controller is created, its view property is nil, which triggers the loadView method.

Example 13-30 Creating a View to Display the Video Stream

@implementation MyWebRTCApplicationViewController

- (void)loadView {

  // Create the view, videoView...
  CGRect frame = [UIScreen mainScreen].bounds;
  MyWebRTCApplicationView *videoView = [[MyWebRTCApplicationView alloc] initWithFrame:frame];

  // Set videoView as the main view of the view controller...
  self.view = videoView;
}

@end

Next you set the view controller as the rootViewController, which adds videoView as a subview of the window, and automatically resizes videoView to be the same size as the window, as shown in Example 13-31.

Example 13-31 Setting the Root View Controller

#import "MyWebRTCApplicationAppDelegate.h"
#import "MyWebRTCApplicationViewController.h"

@implementation MyWebRTCApplicationAppDelegate

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:
                                                 (NSDictionary *)launchOptions {
  self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];

  MyWebRTCApplicationViewController *myvc = [[MyWebRTCApplicationViewController alloc] init];
  self.window.rootViewController = myvc;

  self.window.backgroundColor = [UIColor grayColor];
  [self.window makeKeyAndVisible];
  return YES;
}

@end

Placing a WebRTC Video Call from Your iOS Application

To place a video call from your iOS application, complete the coding tasks contained in the following sections:

In addition, complete the coding tasks for an audio call contained in the following sections:

Note:

Audio and video call work flows are identical with the exception of media directions, local media stream configuration, and the additional considerations described earlier in this section.

Create a WSCCallConfig Object

You create a WSCCallConfig object as described in "Create a WSCCallConfig Object", in the audio call section. Set both arguments to WSCMediaDirectionSendRecv, as shown in Example 13-32.

Example 13-32 Creating an Audio/Video WSCCallConfig Object

WSCCallConfig *callConfig = [[WSCCallConfig alloc] initWithAudioVideoDirection:WSCMediaDirectionSendRecv video:WSCMediaDirectionSendRecv];

Configure the Local WSCMediaStream for Audio and Video

With the CallConfig object created, you then configure the local video and audio MediaStream objects using the WebRTC PeerConnectionFactory. In Example 13-33, the PeerConnectionFactory is used to first configure a video stream using optional constraints and mandatory constraints (as defined in the getMandatoryConstraints method), and is then added to the localMediaStream using its addVideoTrack method. Two boolean arguments, hasAudio and hasVideo, enable the calling function to specify whether audio or video streams are supported in the current call. The audioTrack is added as well and the localMediaStream is returned to the calling function.

For information about the WebRTC PeerConnectionFactory and mandatory and optional constraints, see https://webrtc.org/native-code/native-apis/.

Example 13-33 Configuring the Local MediaStream for Audio and Video

-(RTCMediaStream *)getLocalMediaStreams:(RTCPeerConnectionFactory *)pcf 
                                       enableAudio:(BOOL)hasAudio enableVideo:(BOOL)hasVideo {
  NSLog(@"Getting local media streams");
  if (!localMediaStream) {
      NSLog(@"PeerConnectionFactory: createLocalMediaStream() with pcf : %@", pcf);
      localMediaStream = [pcf mediaStreamWithLabel:@"ALICE"];
      NSLog(@"MediaStream1 = %@", localMediaStream);
  }
  
  if(hasVideo && (localMediaStream.videoTracks.count <= 0)){
    if (hasVideo) {
      RTCVideoCapturer* capturer = [RTCVideoCapturer
                          capturerWithDeviceName:[avManager.frontVideoCaptureDevice localizedName]];
      RTCPair *dtlsSrtpKeyAgreement = [[RTCPair alloc] initWithKey:@"DtlsSrtpKeyAgreement"
                                          value:@"true"];
      NSArray *optionalConstraints = @[dtlsSrtpKeyAgreement];
      NSArray *mandatoryConstraints = [self getMandatoryConstraints];
      RTCMediaConstraints *videoConstraints = [[RTCMediaConstraints alloc]
                                                initWithMandatoryConstraints:mandatoryConstraints
                                                optionalConstraints:optionalConstraints];
      RTCVideoSource *videoSource = [pcf videoSourceWithCapturer:capturer
                                                 constraints:videoConstraints];
      RTCVideoTrack *videoTrack = [pcf videoTrackWithID:@"ALICEv0" source:videoSource];
      if (videoTrack) {
        [localMediaStream addVideoTrack:videoTrack];
      }
    }
  }
    
  if (localMediaStream.audioTracks.count <= 0 && hasAudio) {
    [localMediaStream addAudioTrack:[pcf audioTrackWithID:@"ALICEa0"]];
  }
    
  if (!hasVideo && localMediaStream.videoTracks.count > 0) {
    for (RTCVideoTrack *videoTrack in localMediaStream.videoTracks) {
      [localMediaStream removeVideoTrack:videoTrack];
    }
  }
    
  if (!hasAudio && localMediaStream.audioTracks.count > 0) {
    for (RTCAudioTrack *audioTrack in localMediaStream.audioTracks) {
      [localMediaStream removeAudioTrack:audioTrack];
    }
  }
 
  NSLog(@"MediaStream = %@", localMediaStream);
  return localMediaStream;
}

-(NSArray *)getMandatoryConstraints {

  RTCPair *localVideoMaxWidth = [[RTCPair alloc] initWithKey:@"maxWidth" value:@"640"];
  RTCPair *localVideoMinWidth = [[RTCPair alloc] initWithKey:@"minWidth" value:@"192"];
  RTCPair *localVideoMaxHeight = [[RTCPair alloc] initWithKey:@"maxHeight" value:@"480"];
  RTCPair *localVideoMinHeight = [[RTCPair alloc] initWithKey:@"minHeight" value:@"144"];
  RTCPair *localVideoMaxFrameRate = [[RTCPair alloc] initWithKey:@"maxFrameRate" value:@"30"];
  RTCPair *localVideoMinFrameRate = [[RTCPair alloc] initWithKey:@"minFrameRate" value:@"5"];
  RTCPair *localVideoGoogLeakyBucket = [[RTCPair alloc] 
                                               initWithKey:@"googLeakyBucket" value:@"true"];

  return @[localVideoMaxHeight,
        localVideoMaxWidth,
        localVideoMinHeight,
        localVideoMinWidth,
        localVideoMinFrameRate,
        localVideoMaxFrameRate,
        localVideoGoogLeakyBucket];
}

Bind the Video Track to the View Controller

As shown in Example 13-34, bind the video track to the view controller you created in "Configure a View Controller to Display Incoming Video".

Example 13-34 Binding the Video Track to the View Controller

if (localMediaStream.videoTracks.count > 0) {
  [MyWebRTCApplicationViewController localVideoConnected:localMediaStream.videoTracks[0]];
}

Start the Video Call

Finally, start the audio/video call using the Call object's start method and passing it the WSCCallConfig object and the MediaStream stream array.

Example 13-35 Starting the Video Call

[call start:callConfig streams:streamArray];

Terminate the Video Call

To terminate the video call, use the WSCCall object's end method:

Example 13-36 Terminating the Video Call

[self.call end];

Receiving a WebRTC Video Call in Your iOS Application

Receiving a video call is identical to receiving an audio call, as described in "Receiving a WebRTC Voice Call in Your iOS Application". The only difference is the configuration of the WSCMediaStream object, as described in "Configure the Local WSCMediaStream for Audio and Video".

Supporting SIP-based Messaging in Your iOS Application

You can design your iOS application to send and receive SIP-based messages using the messaging package in WebRTC Session Controller iOS SDK.

To support messaging, define the logic for the following in your application:

  • Setup and management of the various activities associated with the states of the various objects, such as the session and the message transfer.

  • Enabling users to send or receive messages

  • Handling the incoming and outgoing message data

  • Managing the required user interface elements to display the message content throughout the call session.

About the Major Classes Used to Support SIP-based Messaging

The following major classes and protocols of the WebRTC Session Controller iOS SDK enable you to provide data channel support in your iOS application:

  • WSCMessagingPackage

    This package handler enables messaging applications. You can send SIP-based messages to any logged-in user with an object of the WSCMessagingPackage class. This object also dispatches received messages to the registered delegate.

  • WSCMessagingDelegate

    This protocol acts as a listener for incoming messages and their acknowledgements. It defines the following event handlers:

    • onNewMessage

      This event handler is called when your application receives a new SIP-based message.

    • onSuccessResponse

      This event handler is called when your application receives an accept/positive acknowledgment for a sent message.

    • onErrorResponse

      This event handler is called when your application receives a reject/negative acknowledgment for a sent message.

  • WSCMessagingMessage

    This class is used to hold the payload for SIP-based messaging.

  • withPackage

    This method belongs to the WSCSessionBuilder class. It is used to build a session that supports a package, such as the messaging package.

For more on these and other WebRTC Session Controller iOS API classes, see AllClasses at Oracle Communications WebRTC Session Controller iOS API Reference.

Setting up the SIP-based Messaging Support in Your iOS Application

Complete the following tasks to set up SIP-based messaging support in your iOS application:

  1. Enabling SIP-based Messaging

  2. Sending SIP-based Messages

  3. Handling Incoming SIP-based Messages

Enabling SIP-based Messaging

To enable SIP-based messaging in your iOS application, create and assign an instance of a messaging package.

When you set up the WSCSession class, pass this messaging package in the withPackage parameter of the WSCSession builder API, as shown in Example 13-37.

Example 13-37 Building a Session with a Messaging Package

#import "WSCMessagingPackage.h"
 ...
WSCSession *wscSession = [[[[[[[[WSCSessionBuilder create: wsUrl]
                            withConnectionDelegate: self]
                                      withUserName: userName]
                              withObserverDelegate: self]
                                       withPackage: [[WSCCallPackage alloc] init]]
                                       withPackage: [[WSCMessagingPackage alloc] init]]
                                   withHttpContext: httpContext]
                                                    build];

Ensure that your application implements the onSuccess and onFailure event handlers in the WSCSessionConnectionDelegate object. The WebRTC Session Controller iOS SDK sends asynchronous messages to these event handlers, based on its success or failure in building the session.
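A skeletal implementation of those handlers might look like the following sketch. The method signatures, including the error parameter type shown for onFailure, are assumptions based on the handler names; verify them against the iOS API reference.

```objc
#pragma mark WSCSessionConnectionDelegate

// Called when the session is built successfully; packages are ready for use.
// (Signature assumed from the handler name.)
- (void)onSuccess {
  NSLog(@"Session established; messaging is available.");
}

// Called when session setup fails. The parameter type is an assumption.
- (void)onFailure:(NSError *)error {
  NSLog(@"Session setup failed: %@", error);
}
```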

Sending SIP-based Messages

To send a SIP-based message, invoke the send method of the WSCMessagingPackage object. The signature of the send method is:

- (NSString *)send:(NSString *)textMessage target:(NSString *)target extHeaders:(NSDictionary *)extHeaders

In Example 13-38, the destination is "alice@example.com" and the message text is "Hi There". Alice sees the message from the sending party.

Example 13-38 Sending a SIP-based Message

...
WSCMessagingPackage *msgPackage = (WSCMessagingPackage *)[self.wscSession getPackage:PACKAGE_TYPE_MESSAGING];
 
NSString *peer = @"alice@example.com";
[msgPackage send:@"Hi There" target:peer extHeaders:nil];
...
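If your application needs to pass SIP extension headers with a message, supply them in the extHeaders dictionary declared in the send method's signature. A sketch follows; the header name X-App-Context is illustrative only and not part of the SDK.

```objc
// Sketch: sending a message with extension headers.
// X-App-Context is a hypothetical application-defined header.
NSDictionary *extHeaders = @{@"X-App-Context" : @"support-chat"};
[msgPackage send:@"Hi There" target:peer extHeaders:extHeaders];
```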
 

Handling Incoming SIP-based Messages

Set up your application to handle incoming messages and acknowledgements. Register a WSCMessagingDelegate to be notified when a new message is received by setting the observerDelegate property of the WSCMessagingPackage object, as shown in Example 13-39:

Example 13-39 Registering the Observer for the Message Package

...
// Register an observer for listening to incoming messaging events.
WSCMessagingPackage *msgPackage = (WSCMessagingPackage *)[self.wscSession getPackage:PACKAGE_TYPE_MESSAGING];
[msgPackage setObserverDelegate:self];
...

When a new message comes in, the onNewMessage event handler of the WSCMessagingDelegate object is called. In the callback function you implement for the onNewMessage event handler, accept or reject the message received using the appropriate APIs.

Set up the logic to handle the acknowledgements appropriately:

  • The accept method of the WSCMessagingPackage object. When the receiver accepts the message, the onSuccessResponse event handler is triggered on the side that originated the message.

  • The reject method of the WSCMessagingPackage object. When the receiver rejects the message, the onErrorResponse event handler is triggered on the side that originated the message.

Example 13-40 Example of a Delegate Set up for a Message Package

// Class that observes for incoming messages from Messaging.
#pragma mark WSCMessagingDelegate
 
- (void) onNewMessage:(WSCMessagingMessage *)message {
  NSLog(@"Got Messaging: onNewMessage with content:%@", message.content);
  
  // Show message content in some UITextView 
  
  // Accept message as received
  [self.wscMsging accept:message];
}
 
-(void) onSuccessResponse:(WSCMessagingMessage *)message{
  NSLog(@"Messaging: onSuccessResponse");
}
 
-(void) onErrorResponse:(WSCMessagingMessage *)message cause:(WSCCause *)cause reason:(NSString *)reason {
  NSLog(@"Messaging: onErrorResponse");
}

Adding WebRTC Data Channel Support to Your iOS Application

This section describes how you can add WebRTC data channel support to the calls you enable in your iOS application. For information about adding voice call support to an iOS application, see "Adding WebRTC Voice Support to your iOS Application".

To support calls with data channels, define the logic for the following in your application:

  • Setup and management of the activities associated with the states of the various objects, such as the session and the data transfer.

  • Enabling users to make or receive calls with data channels, with or without audio and video streams.

  • Handling the incoming and outgoing data.

  • Managing the user interface elements required to display the data content throughout the call session.

About the Major Classes and Protocols Used to Support Data Channels

The following major classes and protocols enable you to provide data channel support in your iOS application:

  • WSCCall

    This object represents a call with any combination of audio, video, and data channel capabilities. When a call configured with a data channel starts or is accepted, the WSCCall object creates the data channel and initializes its WSCDataTransfer object.

  • WSCCallConfig

    The WSCCallConfig object represents a call configuration. It describes the audio, video, or data channel capabilities of a call.

  • WSCDataChannelOption

    The WSCDataChannelOption object describes the configuration items in the data channel of a call, such as whether ordered delivery is required, the stream ID, the maximum number of retransmissions, and so on.

  • WSCDataChannelConfig

    The WSCDataChannelConfig object describes the data channel of a call, including its label and WSCDataChannelOption.

  • WSCDataTransfer

    The WSCDataTransfer object manages the data channel. If the WSCCallConfig object includes the data channel, the WSCCall object creates an instance of the WSCDataTransfer object.

  • WSCDataSender

    The WSCDataSender object exposes the capability of a WSCDataTransfer to send raw data over a data channel. The instance is created by WSCDataTransfer.

  • WSCDataReceiver

    The WSCDataReceiver object exposes the capability of a WSCDataTransfer to receive raw data over the established data channel. The instance is created by WSCDataTransfer.

  • onDataTransfer

    The onDataTransfer method associated with WSCCallObserverDelegate indicates that a WSCDataTransfer has been created.

  • WSCDataTransferObserverDelegate

    The WSCDataTransferObserverDelegate acts as an observer protocol for WSCDataTransfer.

    Your application can implement this protocol to be informed of changes in WSCDataTransfer.

  • WSCDataReceiverObserverDelegate

    The WSCDataReceiverObserverDelegate acts as an observer protocol for WSCDataReceiver, the receiver of the data transfer.

For more information on these and other WebRTC Session Controller iOS API classes, see AllClasses at Oracle Communications WebRTC Session Controller iOS API Reference.

About the Sample Code Excerpts in This Section

The sample code excerpts shown in this section are taken from a sample iOS application that supports data channels. The sample application implements the WSCDataTransferObserverDelegate and WSCDataReceiverObserverDelegate protocols in addition to the existing WSCCallObserverDelegate protocol. Each excerpt is kept simple and illustrates only the functionality under discussion.

About the Data Transfers and Data Channels

If the data channel is enabled, then, for both incoming and outgoing calls, a WSCDataTransfer object is created and passed back to your iOS application in the callback for the onDataTransfer method of the WSCCallObserverDelegate protocol.

Setting Up DataTransferObserverDelegate Protocol to Handle Data Transfers

When the onDataTransfer method of the WSCCallObserverDelegate protocol object is called, set the data transfer observer delegate to be informed of changes in the data transfer.

-(void)onDataTransfer:(WSCDataTransfer *)dataTransfer {
 
  // register delegate to listen to the state change
  dataTransfer.observerDelegate = self;
 
  // keep this data transfer object for later use
  _dataTransfer = dataTransfer;
}

In order for your application to respond to the various states of the data channel, implement the following methods of the WSCDataTransferObserverDelegate protocol:

  • onOpen

    - (void)onOpen:(WSCDataTransfer *)dataTransfer
    

    This method is called when the data channel of the WSCDataTransfer object is open. The state of the WSCDataTransfer object is denoted by the enum value WSCDataTransferOpen.

    Your application can send and receive messages, as appropriate.

  • onClose

    - (void)onClose:(WSCDataTransfer *)dataTransfer
    

    This method is called when the data channel of the WSCDataTransfer object is closed. The state of the WSCDataTransfer object is denoted by the enum value WSCDataTransferClosed.

  • onError

    - (void)onError:(WSCDataTransfer *)dataTransfer
    

    This method is called when the data channel of the WSCDataTransfer object encounters an error. The state of the WSCDataTransfer object is denoted by the enum value WSCDataTransferError.

Initialize the CallPackage Object

When you created your Session, you registered a new WSCCallPackage object using the Session object's withPackage method. You now instantiate that WSCCallPackage, as shown in Example 13-41.

Example 13-41 Initializing the CallPackage

WSCCallPackage *callPackage = (WSCCallPackage*)[wscSession getPackage:PACKAGE_TYPE_CALL];

Use the default PACKAGE_TYPE_CALL call type unless you have defined a custom call type. For more information, see "Initialize the CallPackage Object" under the audio call section of this chapter.

Sending Data from Your iOS Application

To send data from your iOS application, complete the coding tasks contained in the following sections.

Configure the Data Channel for the Data Transfers

Configure the data channel with WSCDataChannelConfig before you set up the WSCCallConfig object. The WebRTC Session Controller client iOS SDK supports multiple data channels in a call.

If you create one WSCDataChannelConfig object for a call, one instance of the WSCDataTransfer object is created to support the call. Assign a label to the data channel configuration object so that your application can access the corresponding WSCDataTransfer with this label.

Example 13-42 shows one data channel that is assigned the label sample.

Example 13-42 Configuring a Single Data Channel for the Call

...
// Set up WSCDataChannelOption
WSCDataChannelOption *option = [[WSCDataChannelOption alloc] init];
// Set various options on WSCDataChannelOption
// For example, option.maxRetransmits = 5;

// Create WSCDataChannelConfig
WSCDataChannelConfig *dcConfig = [[WSCDataChannelConfig alloc] initWithLabel:@"sample" withOption:option];
NSArray *dcConfigs = [[NSArray alloc] initWithObjects:dcConfig, nil];
...

You can create multiple WSCDataChannelConfig objects. When you do so, the WebRTC Session Controller iOS SDK creates a corresponding WSCDataTransfer object for each configuration.

Example 13-43 shows sample code that defines two data channels, each with its own label, and places them in myDataChannelConfigs, an array of data channel configurations.

Example 13-43 Configuring Multiple Data Channels for the Call

  // This code sample sets up 2 different data channels both using default values.
  WSCDataChannelOption *firstDcOption = [[WSCDataChannelOption alloc] init];
  WSCDataChannelConfig *firstDcConfig = [[WSCDataChannelConfig alloc] initWithLabel:@"firstDataChannel" withOption:firstDcOption];
 
  WSCDataChannelOption *secondDcOption = [[WSCDataChannelOption alloc] init];
  WSCDataChannelConfig *secondDcConfig = [[WSCDataChannelConfig alloc] initWithLabel:@"secondDataChannel" withOption:secondDcOption];
 
  // Create an array containing both data channel definitions.
  NSArray *myDataChannelConfigs = [[NSArray alloc] initWithObjects:firstDcConfig, secondDcConfig, nil];
  
  // Finally initialise the call configuration by including both data channels.
  WSCCallConfig* myCallConfig = [[WSCCallConfig alloc] initWithAudioDirection:audioMediaDirection
                                               withVideoDirection:videoMediaDirection
                                                  withDataChannel:myDataChannelConfigs];

Handling the Data Channel States

Implement the onOpen, onClose, and onError methods of the DataTransferObserverDelegate protocol so you can respond to changes in the states of the data channel.

For a description of the methods, see "Setting Up DataTransferObserverDelegate Protocol to Handle Data Transfers".
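A skeletal implementation of the three state handlers is sketched below. The onOpen and onError signatures follow those shown earlier; the onClose signature is assumed to mirror them.

```objc
#pragma mark WSCDataTransferObserverDelegate

// The channel is open; sending and receiving may begin.
- (void)onOpen:(WSCDataTransfer *)dataTransfer {
  NSLog(@"Data channel %@ is open", dataTransfer.label);
}

// The channel has closed; stop using its sender and receiver.
// (Signature assumed to mirror onOpen and onError.)
- (void)onClose:(WSCDataTransfer *)dataTransfer {
  NSLog(@"Data channel %@ is closed", dataTransfer.label);
}

// The channel encountered an error.
- (void)onError:(WSCDataTransfer *)dataTransfer {
  NSLog(@"Data channel %@ encountered an error", dataTransfer.label);
}
```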

Create a WSCCallConfig Object with Data Channel Option

Having defined the data channel setup for the call, you can now create a WSCCallConfig object to determine the type of call you wish to make.

In Example 13-44, the sample application declares the method with an IBAction return type so that it can serve as the action of a UI control.

Example 13-44 Configuring WSCCallConfig with a DataChannel

...
    NSArray *dataChannelConfigs = nil;
    WSCDataChannelOption *dcOption = [[WSCDataChannelOption alloc] init];
    WSCDataChannelConfig *dcConfig = [[WSCDataChannelConfig alloc] initWithLabel:@"sample" withOption:dcOption];
    dataChannelConfigs = [[NSArray alloc] initWithObjects:dcConfig, nil];
    
    WSCCallConfig* newConfig = [[WSCCallConfig alloc] initWithAudioDirection:self.callConfig.audioConfig
                                                          withVideoDirection:self.callConfig.videoConfig
                                                             withDataChannel:dataChannelConfigs];
    

Configure the Local MediaStream for Audio and Video

If the calls in your application also support audio or video streams, configure the local video and audio MediaStream objects using the WebRTC PeerConnectionFactory.

For more information about configuring the audio and video streams, see "Configure the Local WSCMediaStream for Audio and Video".

Set Up Your Application to Receive Incoming Data

Set up a function to process the incoming data as shown in Example 13-45.

The onMessage method of DataReceiverObserverDelegate protocol is invoked when a message is received.

Tip:

Process the message inside the function handling onMessage, or copy it for later use.

Example 13-45 Receiving Data from WSCDataReceiver

/**
 * Invoked when a message is received.
 *
 * @param data RTCDataBuffer received by this data channel.
 */
-(void)onMessage:(RTCDataBuffer *)data {
  DLog(@"DataTransfer %@ received message %@", self.dataTransfer.label, data);
  if (data.isBinary) {
    ALog(@"DataTransfer %@ received binary message", self.dataTransfer.label);
  } else {
    NSString *received = [[NSString alloc] initWithData:data.data encoding:NSUTF8StringEncoding];
    DLog(@"DataTransfer %@ received text message: %@", self.dataTransfer.label, received);
    DLog(@"Content of the data chat view: %@", self.dataChatView.text);
    [self appendIncomingMessageToHistory:received];
  }
}

The WSCDataReceiver object, created by the WSCDataTransfer object, receives raw data over the underlying data channel.

Start the Data Channel Call

Start the call using the WSCCall object's start method. When you call this method, provide the instance of the WSCCallConfig object that contains the call's capabilities and the array of local media streams to be attached to the call.

Invoke the appropriate method:

  • Without extension headers, as shown in Example 13-46:

    -(void)start:(WSCCallConfig *)config streams:(NSArray *)localStreams;
    

    Example 13-46 Start a Call

    [call start:callConfig streams:streamArray];
    
  • With extension headers:

    -(void)start:(WSCCallConfig *)config headers:(NSDictionary *)headers streams:(NSArray *)localStreams;
    

    See "About Extension Headers and JSON Messages".

Send the Data Content

Send data using the send method of the sender in the WSCDataTransfer object.

Use the label for the data channel to retrieve the WSCDataTransfer object from the WSCCall object, and obtain its WSCDataSender object. Verify that the state of the WSCDataTransfer object is WSCDataTransferOpen, and then invoke the send method of the WSCDataSender object to send data. Example 13-47 shows a text message sent by the sample code.

Note:

Invoke the send: method only when the data channel of the data transfer object is in an open state.

Example 13-47 Sending Text Strings as Data

-(void)sendData:(NSString *)message {
 
  RTCDataBuffer *buffer = [[RTCDataBuffer alloc] initWithData:[message dataUsingEncoding:NSUTF8StringEncoding] isBinary:NO];
 
// Send Data after verifying that the data channel is in WSCDataTransferOpen state.
... 
  // send the message through WSCDataSender
  [dataTransfer.sender send:buffer];  
}
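The open-state verification elided in Example 13-47 can be sketched as a guard before the send, using the data transfer saved in the onDataTransfer callback. The state property name used here is an assumption; verify it in the API reference.

```objc
// Sketch: send only when the saved data transfer's channel is open.
// _dataTransfer is the WSCDataTransfer kept in the onDataTransfer callback;
// the state property is an assumed accessor for the transfer state.
- (void)sendDataIfOpen:(NSString *)message {
  if (_dataTransfer == nil || _dataTransfer.state != WSCDataTransferOpen) {
    NSLog(@"Data channel is not open; message not sent");
    return;
  }
  RTCDataBuffer *buffer =
      [[RTCDataBuffer alloc] initWithData:[message dataUsingEncoding:NSUTF8StringEncoding]
                                 isBinary:NO];
  [_dataTransfer.sender send:buffer];
}
```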

Terminate the Data Channel in the Call

To terminate a call with a data channel, use the WSCCall object's end method.

Receiving Data Content in Your iOS Application

This section describes the steps specific to configuring your iOS application to receive WebRTC data transfers.

Implement WSCCallPackageObserverDelegate Protocol to Verify Data Channel Capability

Use the callArrived:callConfig:extHeaders: method of the WSCCallPackageObserverDelegate protocol to assign the WSCCallObserverDelegate that acts as the observer delegate for the call.

As Example 13-48 shows, you can also verify from the CallConfig object that a data channel is enabled for this call.

Example 13-48 Initializing the ObserverDelegate for a Data Channel Call

-(void)callArrived:(WSCCall *)call callConfig:(WSCCallConfig *)callConfig extHeaders:(NSDictionary *)extHeaders {
  if (callConfig.dataChannelConfigs) {
    // This is data channel enabled call
  }
  call.observerDelegate = self;
}

Handling the Data Channel States

Implement the onOpen, onClose, and onError methods of the DataTransferObserverDelegate protocol so you can respond to changes in the states of the data channel.

For a description of the methods, see "Setting Up DataTransferObserverDelegate Protocol to Handle Data Transfers".

Implement the DataReceiverObserverDelegate Protocol to Listen for Messages

When the data channel is in an open state, set the observer delegate of the receiver, the receiving object of the data transfer, as shown in Example 13-49. This object listens for incoming messages.

Example 13-49 The ObserverDelegate for the Receiver of the Data Transfer

-(void)onOpen:(WSCDataTransfer *)dataTransfer {
 
  // after the callback completes, this data transfer is ready to send/receive message.
 
  // register delegate on receiver to listen to the incoming message
  dataTransfer.receiver.observerDelegate = self;
 
}

Accept the Call

To accept an incoming call, invoke the appropriate method:

  • accept:streams: method, invoked as

    - (void)accept:(WSCCallConfig *)config streams:(RTCMediaStream *)localStreams

  • accept:extHeaders:streams: method when you support extension headers. This is invoked as

    - (void)accept:(WSCCallConfig *)config extHeaders:(NSDictionary *)extHeaders streams:(RTCMediaStream *)localStreams

    See "About Extension Headers and JSON Messages".

Receiving Data

Obtain the incoming message from the data channel by setting up the logic for the onMessage method of WSCDataReceiverObserverDelegate. This method is invoked when there is an incoming message, as shown in Example 13-50.

Example 13-50 Receiving Data

-(void)onMessage:(RTCDataBuffer *)data
{
  if (data.isBinary) {
    // raw data
  } else {
    NSString *received = [[NSString alloc] initWithData:data.data encoding:NSUTF8StringEncoding];
  }
}

Upgrading and Downgrading Calls

This section describes how you can handle upgrading an audio call to an audio-video call and downgrading a video call to an audio-only call in your iOS application.

Handle Upgrade and Downgrade Requests from Your Application

To upgrade from a voice call to a video call at the request of your application, you can bind user interface elements such as buttons to the WSCCall update logic using the addTarget:action:forControlEvents: method:

[requestUpgradeButton addTarget:self action:@selector(videoUpgrade) 
                              forControlEvents:UIControlEventTouchUpInside];
[requestDowngradeButton addTarget:self action:@selector(videoDowngrade) 
                              forControlEvents:UIControlEventTouchUpInside];

You handle the upgrade or downgrade workflow in the videoUpgrade and videoDowngrade event handlers for each button instance, as shown in Example 13-51.

Example 13-51 Sending Upgrade/Downgrade Requests from Your Application

- (void)videoUpgrade {
  // Set the criteria for the upgraded call...
  self.hasVideo = YES;
  self.hasAudio = YES;

  // Fetch local streams using the getLocalMediaStreams function from Example 13-33
  [self getLocalMediaStreams:[self.call getPeerConnectionFactory] enableVideo:hasVideo
                                                                  enableAudio:hasAudio];

  // Bind the video stream to the view controller as in Example 13-34
  if (localMediaStream.videoTracks.count > 0) {
    [MyWebRTCApplicationViewController localVideoConnected:localMediaStream.videoTracks[0]];
  }

  // Audio -> Video upgrade
  WSCCallConfig *newConfig = [[WSCCallConfig alloc]
                               initWithAudioVideoDirection:WSCMediaDirectionSendRecv
                                                     video:WSCMediaDirectionSendRecv];
  [call update:newConfig headers:nil streams:@[localMediaStream]];
}

- (void)videoDowngrade {
  // Set the criteria for the downgraded call...
  self.hasVideo = NO;
  self.hasAudio = YES;

  // Fetch local streams using the getLocalMediaStreams function from Example 13-33
  [self getLocalMediaStreams:[self.call getPeerConnectionFactory] enableVideo:hasVideo
                                                                  enableAudio:hasAudio];

  // Bind the video stream to the view controller as in Example 13-34
  if (localMediaStream.videoTracks.count > 0) {
    [MyWebRTCApplicationViewController localVideoConnected:localMediaStream.videoTracks[0]];
  }

  // Video -> Audio downgrade
  WSCCallConfig *newConfig = [[WSCCallConfig alloc]
                               initWithAudioVideoDirection:WSCMediaDirectionSendRecv
                                                     video:WSCMediaDirectionNone];
  [call update:newConfig headers:nil streams:@[localMediaStream]];
}

Handle Incoming Upgrade Requests

You configure the callUpdated method of your WSCCallObserverDelegate class to handle incoming upgrade requests based on the WSCCallUpdateEventReceived state change. See Example 13-52.

Note:

The declineUpgrade boolean must be set by some other part of your application's user interface.

Example 13-52 Handling an Incoming Upgrade Request

- (void)callUpdated:(WSCCallUpdateEvent)event
         callConfig:(WSCCallConfig *)callConfig
              cause:(WSCCause *)cause
{
  NSLog(@"callUpdate request with config: %@", callConfig.description);
  switch (event) {
    case WSCCallUpdateEventSent:
      break;
    case WSCCallUpdateEventReceived:
      if (declineUpgrade) {
        NSLog(@"Declining upgrade.");
        [self.call decline:WSCStatusCodeDeclined];
      } else {
        NSLog(@"Accepting upgrade.");
        NSLog(@"Call config: %@", callConfig.description);
        BOOL hasAudio = YES;
        BOOL hasVideo = YES;
        if (callConfig.audioConfig == WSCMediaDirectionNone) {
          hasAudio = NO;
        }
        if (callConfig.videoConfig == WSCMediaDirectionNone) {
          hasVideo = NO;
        }
        self.callConfig = callConfig;
        [self getLocalMediaStreams:[self.call getPeerConnectionFactory] enableAudio:hasAudio
                                                                        enableVideo:hasVideo];
        [self.call accept:callConfig streams:localMediaStream];
        [callViewController updateView:self.callConfig];
      }
      break;
    case WSCCallUpdateEventAccepted:
      break;
    case WSCCallUpdateEventRejected:
      break;
    default:
      break;
  }
}

Handling Session Rehydration When the User Moves to Another Device

A customer who is using your application on one device (for example, a cellphone) may move to another device (for example, a laptop softphone that uses the same account and is authenticated by WebRTC Session Controller). In such a handover, the session (along with its subsessions) that is active in your application on the first device becomes active in your application on the second device.

For example, your customer Alice accesses a web browser from a cellphone to talk about a purchase selection with Bob, a customer support representative active in that browser session. While Alice is on the call, she switches over to her laptop to look at the purchase selection in greater detail.

You can use WebRTC Session Controller to configure applications that support such handovers of session information between devices. Your application then manages the rehydration of the session and all its data on the target device (in this example, the laptop) such that Alice's call to Bob continues uninterrupted.

This section describes how your application can present the customer with the session state recreated on another device.

Note:

In a device-handover scenario, WebRTC Session Controller manages the data associated with the subsessions of your application session. It keeps their states intact through the handovers that may occur during the life of an application session.

The focus of the handover logic in your application is the Session within which a call, a message, or a video session lives.

About the Supported Operating Systems

You can design your applications using WebRTC Session Controller such that they support handover between your applications programmed for iOS, Web, and Android systems.

Note:

For such a handover to be successful, your application must be active on the various devices belonging to the user, the associated user name and account must be authenticated by WebRTC Session Controller, and the application must be supported on the various operating systems.

This chapter deals with setting up your iOS application to support handovers using the WebRTC Session Controller iOS SDK. For information about supporting handovers in applications built with the Web or Android SDK, see the corresponding chapters in this guide.

Configuring WebRTC Session Controller to Support Transfer of Session Data

In a device handover, the same WebSocket session ID is used to transfer an application session state that is active on the current client device (for example, Alice's cellphone) and present that state on the subsequent device (Alice's laptop).

When one client uses another client's WebSocket session ID to connect with WebRTC Session Controller, the server checks the value of the system property allowSessionTransfer. The default value of allowSessionTransfer is false, which causes WebRTC Session Controller to treat the request as a hacking attack and reject it.

In order to allow the same user or tenant to connect with the WebRTC Session Controller server using the same WebSocket session ID, set the startup command option allowSessionTransfer to true. For more information on how to set this startup option, see "Supporting Session Rehydration for Device Handover Scenarios" in Oracle Communications WebRTC Session Controller System Administrator's Guide.

About the WebSocket Disconnection

When the device handover occurs, the WebSocket connection immediately closes.

The WebRTC Session Controller signaling engine keeps the session alive for a time period specified as WebSocket Disconnect Time Limit in the WebRTC Session Controller Administration Console.

Note:

If the target device fails to pick up the session within the WebSocket Disconnect Time Limit period, the device handover fails.

About the Normalized Session Data Used to Support Handovers

A user may move from a device where your application uses one type of SDK to a device where your application uses a different SDK. The supported client SDKs are:

  • Web

  • Android

  • iOS

WebRTC Session Controller supports a normalized uniform session data format to transfer the session state information between these systems. The session state information is sent as a binary large object (BLOB).

About the Handover Scenario on the Original Device

When the original device (for example, Alice's cellphone Device A-1) triggers a handover, the following events occur:

  1. On the Original Device:

    1. Your application on Device A-1 suspends the active session on the WebRTC Session Controller server. The WebRTC Session Controller iOS SDK returns the session data to your application, and the WebSocket connection closes.

      See "Suspending the Session on the Original Device".

    2. Your application transfers the session data (stateInfo) to be received and processed by the application on the subsequent device Device A-2.

      See "Sending the Session Data to the Application Service".

    The WebRTC Session Controller Signaling engine keeps this session alive for a time period specified as WebSocket Disconnect Time Limit in the WebRTC Session Controller Administration Console.

  2. On the device receiving the handover

    The subsequent device that receives the handover is Alice's laptop, Device A-2, in this discussion. The following events occur:

    1. Your application on the subsequent device retrieves the stateInfo string from the RESTful server. See "Requesting for the Session Data from the Application Service".

    2. Your application uses the session state information to recreate the session. See "Recreating the Application Session with the StateInfo Object".

    3. Your application rehydrates the call with WebRTC Session Controller iOS SDK.

      See "Rehydrating a WebRTC Call After a Device Handover".

    4. The active call resumes on the subsequent device.

About the WebRTC Session Controller iOS APIs for Device Handover

The WebRTC Session Controller iOS APIs that enable your applications to handle notifications related to session rehydration on another device are:

  • withStateInfo parameter of WSCSessionBuilder

    When you provide the withStateInfo parameter, the WebRTC Session Controller iOS SDK rehydrates the session using the StateInfo object. If there is a call subsession in the StateInfo object, WebRTC Session Controller iOS SDK rehydrates the call.

  • suspend

    The suspend method of the WSCSession object suspends the active session. This method returns a JSON string containing the session data to use in session rehydration.

  • WSCSessionStateSuspended

    The WSCSessionStateSuspended constant represents the state of WSCSession after the session is handed over.

  • callResurrected

    The callResurrected method of the WSCCallPackageObserverDelegate is called when the call is resurrected after a session is rehydrated. Its parameters are WSCCall (the call that was resurrected) and the WSCCallConfig of the resurrected call.

For more on these and other WebRTC Session Controller iOS API classes, see AllClasses at Oracle Communications WebRTC Session Controller iOS API Reference.

Completing the Tasks to Support Session Rehydration in Another Supported Device

This section describes the tasks your application must complete to hand over a session to another device and to receive a session handed over from another device. Complete the following tasks to support session transfers and rehydration with the transferred session state information:

Suspending the Session on the Original Device

To implement a handover, your application on the original device (Device A-1) suspends the active session on the WebRTC Session Controller server. One approach is to set up a handover function and suspend the session within its logic.

Note:

The logic surrounding the detection of the actual device handover is beyond the scope of this document.

The WebRTC Session Controller iOS API method to suspend a session is suspend. The WebRTC Session Controller iOS SDK API returns the session data to your application in JSON string format. The WebSocket connection closes.

In your application logic that handles the user interface related to the handover, call the WebRTC Session Controller iOS API method, WSCSession.suspend().

Example 13-53 Suspending a Session

...
// Handover detected
...
/**
 * Shut down the currently open socket and return the StateInfo JSON string
 *
 * @return NSString The session data in JSON string format
 */
-(NSString *)suspend;
...
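The suspend call itself is a single line. The following sketch, which assumes that self.wscSession holds the currently connected WSCSession object, captures the returned session data for the handover:

```
// Handover detected: suspend the active session.
// Assumption: self.wscSession holds the connected WSCSession object.
NSString *dhSessionStateInfo = [self.wscSession suspend];
// The WebSocket connection is now closed. dhSessionStateInfo contains
// the JSON session data to transfer to the application service.
```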

Sending the Session Data to the Application Service

Your application on the original device (for example, Alice's cellphone called Device A-1) sends the session state information in a handover request to the Application Service (your application, a Web application, or a RESTful service).

You can configure how your application performs this task in the way that suits your environment. For example, your application can push the stateInfo to the other device or allow the other device to pull the stateInfo.

When the suspend method completes, your application has the session data, in JSON format, with which to rehydrate the session. In your implementation of the logic for the WSCSessionStateSuspended state of WSCSession, set up the information to transfer to the application service.

Include this JSON string object and any other relevant information in the data you send with the handover request to the application service.
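As a sketch, pushing the suspended-session data to the application service might look as follows. The endpoint URL is illustrative, dhSessionStateInfo is assumed to hold the string returned by suspend, and your own service may expect a different request shape:

```
// Sketch: push the session data to a hypothetical REST endpoint on the
// application service. The URL is illustrative; adapt it to your service.
NSURL *url = [NSURL URLWithString:@"https://appservice.example.com/handover"];
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
request.HTTPMethod = @"POST";
[request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
request.HTTPBody = [dhSessionStateInfo dataUsingEncoding:NSUTF8StringEncoding];
[[[NSURLSession sharedSession] dataTaskWithRequest:request
        completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
    // Handle the application service's acknowledgment of the handover here.
}] resume];
```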

Requesting for the Session Data from the Application Service

In the application logic on the subsequent device that handles the handover trigger, send a request to the application service asking for the session state information. The application service returns the stateInfo, the session data for rehydration, in JSON string format.
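A sketch of the retrieval, mirroring the push in the previous section, follows. The endpoint URL is illustrative:

```
// Sketch: pull the suspended-session data from a hypothetical REST
// endpoint on the application service. The URL is illustrative.
NSURL *url = [NSURL URLWithString:@"https://appservice.example.com/handover"];
[[[NSURLSession sharedSession] dataTaskWithURL:url
        completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
    if (data != nil && error == nil) {
        NSString *dhSessionStateInfo =
            [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
        // Pass dhSessionStateInfo to withStateInfo when building the session,
        // as described in "Recreating the Application Session with the
        // StateInfo Object".
    }
}] resume];
```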

Recreating the Application Session with the StateInfo Object

Your application on the subsequent device sends the session state information to the WebRTC Session Controller iOS SDK.

In your application logic that handles session rehydration following a device handover, set up a session object with the session state data you received from the application service.

Set up the Builder for the WSCSession by providing the session state data in the withStateInfo method of the WSCSessionBuilder object. This method returns a WSCSessionBuilder object with a stateInfo configuration:

- (WSCSessionBuilder *)withStateInfo:(NSString *)stateInfo

In the following sample code excerpt, the application provides the session data in dhSessionStateInfo and builds the session.

Example 13-54 Building the Session with StateInfo

...
self.wscSession = [[[WSCSessionBuilder create:url]
...
                          withStateInfo:dhSessionStateInfo]
...
                           build];

For information about WSCSessionBuilder, see Oracle Communications WebRTC Session Controller iOS API Reference.

Ensure that you implement the logic for the onSuccess and onFailure event handlers in the WSCSessionConnectionDelegate object. The WebRTC Session Controller iOS SDK sends asynchronous messages to these event handlers, based on whether it succeeds or fails to build the session.

Any subsession, such as a call session, that was part of the application session on the original device is also suspended when that device suspends the application session. See "Rehydrating a WebRTC Call After a Device Handover".

Rehydrating a WebRTC Call After a Device Handover

A call session that was part of the application session on the original device is also suspended when that device suspends the application session. In such a scenario, the WebRTC Session Controller iOS SDK creates the subsession objects for your application. For example, if there is a WSCCall object, it passes the call configuration object to your application's WSCCallObserverDelegate.

Note:

After the new device connects to the session, WebRTC Session Controller does not send out a new incoming call request to connect a call that is part of the handover. To recreate the call connection, the WebRTC Session Controller iOS SDK uses the information in the stateInfo object.

If there is no "re-invite" flow, the ongoing call cannot be reestablished, because the client IP address, port, or codec may have changed.

In your application, complete the ICE candidate renegotiation with the peer side, as required by the iOS system.

If the rehydrated session contains a call session, the WebRTC Session Controller iOS SDK attempts to create the WSCCall object from the stored stateInfo object.

On successful creation of the WSCCall object, WebRTC Session Controller iOS SDK triggers the callResurrected delegate of the WSCCallPackageObserverDelegate protocol object and provides the WSCCall object and the WSCCallConfig objects for the resurrected call.

Implement the following logic in your iOS application:

  • In your application logic that handles the callResurrected event of the WSCCallPackageObserverDelegate protocol object, rehydrate the call using this WSCCall object and its WSCCallConfig object.

  • To be informed of the updated call, implement the callUpdated method in the WSCCallObserverDelegate protocol object.
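The steps above can be sketched as follows. The selector and parameter names are assumptions based on the documented parameters (WSCCall and WSCCallConfig), and resumeCallUserInterface:withConfig: is a hypothetical application helper; check the iOS API Reference for the exact signatures:

```
// Sketch: handling the callResurrected event in the
// WSCCallPackageObserverDelegate. Selector and parameter names are
// assumptions; resumeCallUserInterface:withConfig: is a hypothetical
// helper in the application.
- (void)callResurrected:(WSCCall *)call callConfig:(WSCCallConfig *)callConfig {
    // Keep a reference to the rehydrated call.
    self.currentCall = call;
    // Resume the application's call user interface using the
    // configuration (audio/video capabilities) of the resurrected call.
    [self resumeCallUserInterface:call withConfig:callConfig];
}
```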

Following a device handover, the general workflow for rehydrating a call with video streams or data channels is identical to the workflow for rehydrating a call with an audio stream. The difference lies in how your application logic handles the WSCCallConfig object to maintain the video streams and data transfers associated with the call.

Extending Your Applications with WebRTC Session Controller iOS SDK

This section describes how you can extend the Oracle Communications WebRTC Session Controller iOS application programming interface (API) library.

Note:

Before you proceed, review the discussion about how the WebRTC Session Controller JavaScript APIs assist in extending WebRTC applications. See "Extending Your Applications Using WebRTC Session Controller JavaScript API" for more information.

You can extend your iOS applications by doing the following:

Extending an Existing Package

To extend existing packages in your iOS application, extend the associated iOS classes. The supported classes are:

  • WSCSession

    Your application can create only one session with the WebRTC Session Controller server for each user. However, you can extend that single session by building it with extension headers, creating and sending WSC message frames, and adding subsessions, as described in "Building an Extended Session" and the sections that follow.

  • WSCMessagingPackage

    To extend the SIP-based messaging, your application can create a WSCMessagingPackage class and register it when you create a session. See "Supporting SIP-based Messaging in Your iOS Application".

  • WSCFrame

    WSCFrame is the master data transfer object class for all JSON messages.

  • WSCFrameFactory

    Helper class to create JSON WSCFrame instances.

  • WSCCall

    • Accept a call with specific configuration.

      - (void)accept:(WSCCallConfig *)config extHeaders:(NSDictionary *)extHeaders streams:(RTCMediaStream *)localStreams
      
    • Decline a call with specific configuration.

      - (void)decline:(NSInteger)code headers:(NSDictionary *)headers
      

      The available reason codes are 486 (busy here), 603 (decline), and 600 (busy everywhere).

    • Start a call with specific configuration.

      - (void)start:(WSCCallConfig *)config headers:(NSDictionary *)headers streams:(NSArray *)localStreams
      
    • Update the audio and video capabilities of a call.

      - (void)update:(WSCCallConfig *)config headers:(NSDictionary *)headers streams:(NSArray *)localStreams
      
    • End a call with specific configuration.

      - (void)end:(NSDictionary *)headers
      

For more information on these and other WebRTC Session Controller iOS API classes, see AllClasses at Oracle Communications WebRTC Session Controller iOS API Reference.

Building an Extended Session

Use extension headers to extend the session when you build it with the WSCSessionBuilder. The following method creates a new WSCSessionBuilder with extension headers, which are sent as part of the session connection:

- (WSCSessionBuilder *)withExtHeaders:(NSDictionary *)value

where value contains the headers.

See "About Extension Headers and JSON Messages".
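As a sketch, the builder chain with extension headers might look as follows. The header keys and values are illustrative, and the other builder calls that your session requires are elided:

```
// Sketch: building a session with custom extension headers.
// The header keys and values are illustrative; other required
// builder calls are elided.
NSDictionary *extHeaders = @{@"customerKey1": @"value1",
                             @"customerKey2": @"value2"};
self.wscSession = [[[WSCSessionBuilder create:url]
                      withExtHeaders:extHeaders]
                      build];
```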

Creating a WSC Frame

You can create a WSCFrame object with one of the following:

  • initWithJson

    Use the initWithJson method when you have a JSON string that contains the required data. In Example 13-55, the code excerpt reads the file contents as a JSON string and passes the string to initWithJson.

    Example 13-55 Creating a WSCFrame from File Name

    + (WSCFrame*)frameFromFileName:(NSString*)fileName{
      NSString *path = [[NSBundle bundleForClass:[self class]] pathForResource:fileName ofType:@"json"];
      NSString *content = [NSString stringWithContentsOfFile:path encoding:NSUTF8StringEncoding error:nil];
      WSCFrame* frame = [[WSCFrame alloc] initWithJson:content];
      return frame;
    }
    
  • createFrame

    Use the createFrame method of the WSCFrameFactory class, as shown in Example 13-56.

    Example 13-56 Creating a WSCFrame from Factory

    + (WSCFrame*)frameFromFactory:(MySubSession *)mySubSession {
       WSCFrame *frame = [WSCFrameFactory createFrame: WSC_MT_MESSAGE
                                                 verb:@"myMadeUpVerb"
                                                 pack:MY_PACKAGE_TYPE
                                            sessionId:[self.session getSessionId]
                                         subSessionId:[mySubSession getSubSessionId]
                                        correlationId:[mySubSession getCorrelationId]]; 
      return frame;
    }
    

Sending a WSC Message Frame to the Session

To send a WSC message frame to a session, use the following syntax:

- (void)sendMessage:(WSCFrame *)frame

where frame is the WSC message frame sent to the session. See Example 13-57.

Adding a SubSession to the Session

The putSubSession method adds a subsession to the WSCSession object. Example 13-57 shows how to add a subsession (MySubSession) to a session and send a WSC message frame to the session.

Example 13-57 Adding a WSCSubSession and Sending a WSC Message Frame

self.wscSession = [self createWSCSession];
 
MySubSession *mySubSession = [self createSubSession];
[self.wscSession putSubSession:mySubSession];
 
WSCFrame *myFrame = [self frameFromFactory:mySubSession];
[self.wscSession sendMessage:myFrame];
 

Adding a New Package

As in the WebRTC Session Controller JavaScript SDK, you can extend your application by adding new packages. For the general approach to creating a custom package, see "Extending Your Applications Using WebRTC Session Controller JavaScript API".

About Extension Headers and JSON Messages

When you provide an extension header to a method that supports the extHeaders parameter, its contents are inserted into the JSON message. For example, suppose that you provide an extension header formatted as:

{'customerKey1':'value1','customerKey2':'value2'}

It is formatted into the JSON message as:

{ "control" : {}, "header" :
 {…,'customerKey1':'value1','customerKey2':'value2'}, "payload" : {} }