2 General Description
This document defines the procedures needed to upgrade an in-service DSR from the source release to the target release. A major upgrade advances the DSR from an earlier major release to the target release. An incremental upgrade advances the DSR from an earlier DSR 8.6.0.x source release to a later version of the same target release.
Note:
With any incremental upgrade, the source and target releases must have the same value of x. For example, advancing a DSR from 8.4.0.0.0_84.x.y to 8.4.0.0.0_84.z.k is an incremental upgrade. However, advancing a DSR running an 8.0 release to an 8.6.0.7.0 target release constitutes a major upgrade.

2.1 Supported Upgrade Paths to 8.6.0.7.0
The following table provides information about the supported upgrade paths:
Source Release | Target Release |
---|---|
8.6.0.0.0 | 8.6.0.7.0 |
8.6.0.1.0 | 8.6.0.7.0 |
8.6.0.2.0 | 8.6.0.7.0 |
8.6.0.3.0 | 8.6.0.7.0 |
8.6.0.4.0 | 8.6.0.7.0 |
8.6.0.5.0 | 8.6.0.7.0 |
8.6.0.6.0 | 8.6.0.7.0 |
2.2 Supported Hardware
Due to the enhanced processing capabilities and requirements of DSR release 8.6.0.7.0, HP Gen6 and Gen7 hardware are NOT supported. All Gen6 and Gen7 blades must be replaced with supported hardware before upgrading to release 8.6.0.7.0. This applies whether or not the hardware is provided by Oracle.
WARNING:
HP GEN6 and GEN7 hardware are not supported in DSR 8.6.0.7.0. All GEN6 and GEN7 blades must be replaced with supported hardware before upgrading to 8.6.0.7.0.

2.3 Geo-Diverse Site Configuration
With a geo-diverse site, the upgrade of the SOAM active/standby servers also includes an upgrade of the spare SOAM at the geo-redundant site in the same maintenance window.
2.4 Firmware Updates
This section is not applicable to Software Centric upgrades.
Firmware upgrades are not in the scope of this document but may be required before upgrading DSR. It is assumed that these are completed when needed by the hardware, and there is typically not a dependency between a firmware version and the DSR release. See the DSR Release Notes for any dependencies.
2.5 TVOE Upgrade
TVOE (Virtual Operating Environment) is a hypervisor that hosts multiple virtual servers on the same hardware. It is typically used to make more efficient use of a hardware server (rack mount or blade), while maintaining application independence, for DSR applications that do not require the full resources of a modern hardware server.
In DSR architecture, TVOE hosts are typically used to host several functions, including:
- PMAC
- DSR NOAM and SOAM Applications
- SDS SOAM Applications
- IDIH
TVOE host servers may also be used to host other DSR functions, including DA-MPs and IPFEs in a small deployment.
TVOE host servers (that is, servers running TVOE + one or more DSR applications) must be upgraded before upgrading the guest applications, to assure compatibility. However, TVOE is backward compatible with older application versions, so the TVOE host and the applications do not have to be upgraded in the same maintenance window.
The TVOE server hosting PMAC, as well as the PMAC application, must be upgraded before other TVOE host upgrades, since PMAC is used to perform the TVOE upgrades.
There are three supported strategies for site TVOE upgrades (Options A, B and C):
- Option A: Upgrade TVOE environments as a separate activity that is planned and executed days or weeks before the application upgrades (perhaps site-at-a-time)
- Options to Upgrade TVOE and applications in the same maintenance window:
- Option B: Upgrade a TVOE and application, followed by another TVOE and application. For example: for standby SOAM upgrade – stop the application, upgrade TVOE, upgrade the application, start the application; then repeat for the active SOAM. (preferred)
- Option C: Upgrade multiple TVOE hosts at a site, and then start upgrading the applications (same maintenance window)
Note:
- TVOE upgrades require a brief shutdown of the guest application(s) on the server.
- The TVOE virtual hosts may be hosting NOAM or SOAM applications. These applications are also affected, including a forced switchover if the active NOAM/SOAM is shut down.
- Database (DB) replication failure alarms may display during an Automated or Manual Site Upgrade, or during an event that resets multiple servers in parallel. The DB on the child servers is not updated until the alarms are resolved.
2.6 PMAC Upgrades
Each site may have a PMAC (Management Server) that provides support for maintenance activities at the site. The upgrade of the PMAC (and the associated TVOE) is documented in PMAC Incremental Upgrade Guide. PMAC must be upgraded before the other servers at the site are upgraded.
2.7 SDS Upgrade
It is recommended to upgrade the SDS topology (NOAMs, SOAMs, DPs) before the DSR topology. See SDS Software Upgrade Guide for SDS upgrade documentation.
Caution:
If the customer deployment has both the FABR and PCA features enabled, then upgrade the DSR nodes first before upgrading the SDS nodes.

2.8 Traffic Management During Upgrade
The upgrade of the NOAM and SOAM servers is not expected to affect traffic processing at the DA-MPs and other traffic-handling servers.
WARNING:
SCTP Datagram Transport Layer Security change. Oracle introduced SCTP Datagram Transport Layer Security (DTLS) in DSR by enabling SCTP AUTH extensions by default. SCTP AUTH extensions are required for SCTP DTLS. However, there are known impacts with SCTP AUTH extensions, as covered by the CVEs referenced in the DTLS Feature Activation Guide. These known impacts are managed by enabling SCTP AUTH extensions. It is highly recommended that customers upgrading to Release 8.6.0.7.0 prepare clients before upgrading DSR. This ensures the DSR-to-client SCTP connection establishes with DTLS with SCTP AUTH extensions enabled.
If customers do not prepare clients to accommodate the DTLS changes, then the SCTP connections to client devices do not restore after the DSR is upgraded to DSR 8.6.0.7.0. In this event, follow the procedure to enable or disable DTLS in DSR Cloud Installation Guide.
2.9 RMS Deployments
All RMS deployments are 3-Tier. In these smaller deployments, the Message Processing (DA-MP and IPFE) servers are also virtualized (deployed on a Hypervisor Host) to reduce the number of servers required.
When an RMS-based DSR has no geographic redundancy, there is just a single RMS geographic site, functioning as a single RMS Diameter site. The upgrade of this DSR deployment should be done in two maintenance windows: one for the NOAMs, and the second for all remaining servers.
When an RMS-based DSR includes geographic redundancy, there are two RMS geographic sites (but still functioning as a single RMS Diameter site). The primary RMS site contains the NOAM active/standby pair that manages the network element, while the geo-redundant RMS site contains a disaster recovery NOAM pair. Each RMS geographic site includes its own SOAM pair, but only the SOAMs at the primary RMS site are used to manage the signaling network element. The SOAMs at the geo-redundant site are for backup purposes only.
The upgrade of an RMS DSR deployment should be done in three maintenance windows: one for the NOAMs; a second for the SOAMs and MPs (DA-MP and IPFE) at the geo-redundant backup RMS site; and a third for the SOAMs and MPs (DA-MP and IPFE) at the primary RMS site.
2.10 Automated Site Upgrade
In DSR, there are multiple methods available for upgrading a site. The newest and most efficient way to upgrade a site is the Automated Site Upgrade feature. As the name implies, this feature upgrades an entire site (SOAMs and all C-level servers) with a minimum of user interaction. Once the upgrade is initiated, the upgrade automatically prepares the server(s), performs the upgrade, and sequences to the next server or group of servers until all servers in the site are upgraded. The server upgrades are sequenced in a manner that preserves data integrity and processing capacity.
Automated Site Upgrade can be used to upgrade the DSR servers. However, Automated Site Upgrade cannot be used to upgrade PMAC, TVOE, or IDIH servers at a site.
An important concept with regard to site upgrade is the definition of a site. For the purposes of DSR site upgrade, a site is defined as a SOAM server group plus all subtending servers of that server group, regardless of physical location. To illustrate this definition, the following figure shows three physical locations, labeled TSite 1, TSite 2, and TSite 3. Each site contains a SOAM server group and an MP server group. Each SOAM server group has a spare SOAM that, although physically located at another site, is a member of the site that “owns” the server group. With site upgrade, SOA-sp is upgraded with the Site 1 SOA server group, and SOB-sp is upgraded with the Site 2 SOB server group. The MP server groups are upgraded in the same maintenance window as their respective site SOAMs. These sites conform to the definition of a Topological Site.
With this feature, a site upgrade can be initiated on SO-A SG and all of its children (in this example, MP1 SG) using a minimum of GUI selections. The upgrade performs the following actions:
- Upgrades SOA-1, SOA-2, and SOA-sp
- Upgrades the servers in MP1 SG based on an availability setting and HA roles
- Immediately begins the upgrade of any other server groups which are also children of SO-A SG (not shown). These upgrades begin in parallel with step 2.
Server groups that span sites (for example, SOAMs and SBRs) are upgraded with the server group to which the server belongs. This results in upgrading spare servers that physically reside at another site, but belong to a server group in the SOAM that is targeted for site upgrade.
Note:
Automated Site Upgrade does not automatically initiate the upgrade of TSite 2 in parallel with TSite 1. However, the feature does allow the user to manually initiate Automated Site Upgrade of multiple sites in parallel.

Figure 2-1 Upgrade Perspective of DSR Site Topology

Caution:
Limitations described in Limitations of Automated Server Group and Automated Site Upgrade can be addressed by rearranging or adding upgrade cycles. If the user does not want to create a custom upgrade plan by rearranging or adding cycles, then the manual upgrade method described in section 5.3 should be used.

2.10.1 Pre-check
Before continuing with upgrade, check the HA state of the servers.
$ ha.mystate
Figure 2-2 Pre-check

Note:
If more than one server is in the same HA state (active), manually switch over the server HA state using the HA management screen before continuing the upgrade procedure.

Check the total memory and processor count on the server:

cat /proc/meminfo | grep MemTotal
cat /proc/cpuinfo | grep processor
2.10.2 Site Upgrade Execution
Figure 2-3 Site Upgrade – NOAM View

After selecting a SOAM site tab on the Upgrade Administration screen, the site summary screen appears. The first link on the site summary screen displays the Entire Site view. In this view, all of the server groups for the site appear in table form, with each server group populating one row. You can view the upgrade summary of the server groups in the table columns:
- The first column shows how the server group is upgraded. The upgrade method is derived from the server group function and the bulk availability option. For more information on bulk availability, see Site Upgrade Options.
- The second column groups the servers by state, indicating the number of servers in the server group in each state.
- The third column indicates the current application version, indicating the number of servers in the server group at each version.
Figure 2-4 Site Upgrade - Entire Site View

A server is considered upgrade-ready when:
- The server has not been upgraded yet
- The FullDBParts and FullRunEnv backup files exist in the filemgmt area
A site is eligible for Automated Site Upgrade when at least one server in the site is upgrade-ready.
Click Site Upgrade from the Upgrade Administration screen to display the Site Initiate screen. The Site Initiate screen presents the site upgrade plan as a series of upgrade cycles. For the upgrade shown in the following figure, Cycle 1 upgrades the spare and standby SOAMs in parallel.
Note:
This scenario assumes default settings for the site upgrade options, as described in Site Upgrade Options. The specific servers to be upgraded in each cycle are identified on the Site Initiate screen. Cycle 1 is an atomic operation, meaning that Cycle 2 cannot begin until Cycle 1 is complete. Once the spare and standby SOAMs are in Accept or Reject state, the upgrade sequences to Cycle 2 to upgrade the active SOAM. As Cycle 2 is also atomic, Cycle 3 does not begin until Cycle 2 completes.
Note:
IPFE servers require special handling during upgrade because IPFE servers are clustered into Target Sets and assigned an IP address. This process is known as Target Set Assignment (TSA). While upgrading IPFE servers, Automated Site Upgrade ensures that there is no IPFE service outage while the upgrade is in progress; that is, IPFE servers in the same TSA are not upgraded in the same cycle. If an IPFE server address is not configured on the IPFE configuration screen on the active SOAM GUI, that IPFE server is not included in the upgrade cycle and, therefore, is not considered for upgrade using Automated Site Upgrade.

Figure 2-5 Site Upgrade - Site Initiate Screen

Cycles 3 through 5 upgrade all of the C-level servers for the site. These cycles are not atomic: because some servers can take longer to upgrade than others, there may be some overlap between Cycle 3 and Cycle 4. In this example, Cycle 3 consists of IPFE1, IPFE3, MP1, MP4, and SBR3. If IPFEs 1 and 3 complete the upgrade before SBR3 is finished (all are in Cycle 3), the upgrade allows IPFEs 2 and 4 to begin, even though they are part of Cycle 4. This maximizes maintenance window efficiency. The primary factor in sequencing the C-level server upgrades is the upgrade method for the server group function (for example, bulk by HA, serial). The site upgrade is complete when every server in the site is in the Accept or Reject state.
In selecting the servers that are included in each upgrade cycle, particularly C-level, consideration is given to the server group function, the upgrade availability option, and the HA designation. The following table describes the server selection considerations for each server group function.
Note:
The minimum availability option is a central component of the server selections for site upgrade. The effect of this option on server availability is described in detail in Minimum Server Availability.

Table 2-1 Server Selection vs. Server Group Function
SG Function | Selection Considerations |
---|---|
DSR (multi-active cluster) (for example, DA-MP) | The selection of servers is based primarily on the minimum server availability option. Servers are divided equally (to the extent possible) among the number of cycles required to enforce minimum availability. For DA-MPs, an additional consideration is given to the MP Leader. The MP with the Leader designation is the last DA-MP to be upgraded to minimize leader changes. |
DSR (active/standby pair) (for example, SOAM) | The SOAM upgrade method is dependent on the Site SOAM Upgrade option on the General Options page. See Site Upgrade Options. |
SBR | SBRs are always upgraded serially, thus the primary consideration for selection is the HA designation. The upgrade order is spare – spare – standby – active. |
IP Front End | IPFEs require special treatment during upgrade. One consideration for selection is the minimum server availability, but the primary consideration is traffic continuity. Regardless of minimum availability, IPFE A1 is never upgraded at the same time as IPFE A2. It is always upgraded serially. The same restriction applies to IPFE B1 and B2. If minimum availability permits, IPFE A1 can be upgraded with IPFE B1, and IPFE A2 can be upgraded with B2. |
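The DA-MP selection rule in the table above — divide the servers as evenly as possible across cycles while enforcing minimum availability, with the MP Leader upgraded last — can be sketched as follows. This is an illustrative sketch only, not the product's actual scheduling algorithm; the function name and server labels are hypothetical.

```python
import math

def plan_damp_cycles(servers, leader, min_availability=0.5):
    """Partition DA-MP servers into upgrade cycles so that at least
    min_availability of the group stays in service during any cycle,
    with the MP Leader in the final cycle (illustrative sketch)."""
    total = len(servers)
    # Servers that must remain in service during any one cycle
    # (minimum availability rounds up on fractional remainders).
    must_stay = math.ceil(total * min_availability)
    batch = max(total - must_stay, 1)  # maximum servers per cycle
    # The leader is moved to the end to minimize leader changes.
    ordered = [s for s in servers if s != leader] + [leader]
    return [ordered[i:i + batch] for i in range(0, len(ordered), batch)]

cycles = plan_damp_cycles(["MP1", "MP2", "MP3", "MP4"], leader="MP2")
# With 50% availability, four DA-MPs upgrade two at a time, leader last:
# [["MP1", "MP3"], ["MP4", "MP2"]]
```

With nine servers at 50% availability, five must stay in service, so the sketch yields cycles of four, four, and one server, with the leader alone in the last cycle.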
To initiate the site upgrade, you need to select a target ISO from the ISO picklist in the Upgrade Settings section of the Site Initiate screen. Once you click OK, the upgrade starts and control returns to the Upgrade Administration screen. Once you select the Entire Site link, a summary of the upgrade status for the selected site is displayed. This summary identifies the server group(s) currently upgrading, the number of servers within each server group that are upgrading, and the number of servers that are pending upgrade. This view can be used to monitor the upgrade status of the overall site. Select the individual server group links to obtain the detailed status. The server group view shows the status of each individual server within the selected server group.
Figure 2-6 Site Upgrade Monitoring

When a server group link is selected on the upgrade administration screen, the table rows are populated with the upgrade details of the individual servers within that server group as displayed in the following figure.
Figure 2-7 Server Group Upgrade Monitoring

Upon completion of a successful upgrade, every server in the site is in the Accept or Reject state. See Site Upgrade Options for a description of canceling and restarting the Automated Site Upgrade.
2.10.3 Minimum Server Availability
The concept of Minimum Server Availability plays a key role during an upgrade using Automated Site Upgrade. The goal of server availability is to ensure that at least a specified percentage of servers (of any given type) remains in service to process traffic and handle administrative functions while other servers are upgrading.
For example, if the specified minimum availability is 50% and there are eight servers of type X, then four remain in service while the other four upgrade. However, if there are nine servers of type X, then the minimum availability requires that five remain in service while four upgrade. The minimum availability calculation automatically rounds up in the event of a non-zero fractional remainder.
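The round-up behavior described above can be expressed directly (a minimal sketch; the function name is illustrative):

```python
import math

def in_service_minimum(total_servers, min_availability):
    # Minimum availability rounds up on any non-zero fractional remainder.
    return math.ceil(total_servers * min_availability)

in_service_minimum(8, 0.50)  # -> 4, so four may upgrade in parallel
in_service_minimum(9, 0.50)  # -> 5 (4.5 rounds up), so only four may upgrade
```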
To meet the needs of a wide-ranging customer base, the minimum availability percentage is a user-configurable option, which allows for settings of 50%, 66%, and 75% minimum availability. There is also a setting of 0% for lab upgrade support. This option is described in detail in Site Upgrade Options.
Table 2-2 Site Upgrade Availability vs. Server Group Function
Server Group Function | Server Availability |
---|---|
DSR (Multi-active cluster) | In a multi-active cluster, the availability percentage applies to all of the servers in the server group. The number of servers required to achieve minimum availability is calculated from the pool of in-service servers. |
SBR | Availability percentage does not apply to SBR server groups. SBRs are upgraded in a very specific order: spare – spare – standby – active. |
IP Front End | IPFEs require special treatment during upgrade. The primary consideration is traffic continuity. Regardless of minimum availability, IPFE A1 is never upgraded at the same time as IPFE A2. They are always upgraded serially. The same restriction applies to IPFE B1 and B2. |
When calculating the number of servers required to satisfy the minimum server availability, all servers in the server group (or server group cluster) are considered. Servers that are OOS or otherwise unable to perform their intended function are included, as are servers that have already been upgraded. For example, consider a DA-MP server group with 10 servers: four have already been upgraded, one is OOS, and five are ready for upgrade. With 50% minimum availability, only four of the upgrade-ready servers can be upgraded in parallel. The four servers that have already been upgraded count toward the five that are needed to satisfy minimum availability. The OOS server cannot be used to satisfy minimum availability, so one of the upgrade-ready servers must remain in service, leaving four servers to be upgraded together. Upgrading the last server requires an additional upgrade cycle.
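The worked example above can be checked with a short calculation (a hypothetical helper; it assumes all counts refer to the same server group):

```python
import math

def parallel_upgrade_count(total, already_upgraded, oos, min_availability=0.5):
    """Number of upgrade-ready servers that may upgrade in one cycle.
    All servers count toward the group total, but OOS servers cannot
    count as available, and already-upgraded servers are back in service."""
    required_available = math.ceil(total * min_availability)
    # Already-upgraded servers count toward the availability requirement.
    still_needed = max(required_available - already_upgraded, 0)
    upgrade_ready = total - already_upgraded - oos
    return upgrade_ready - still_needed

# 10 DA-MPs: 4 upgraded, 1 OOS, 5 ready; 50% availability -> 4 in parallel.
parallel_upgrade_count(total=10, already_upgraded=4, oos=1)  # -> 4
```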
2.10.4 Site Upgrade Options
Figure 2-8 Auto Site Upgrade General Options

The first option that affects the upgrade sequence is the Site Upgrade SOAM Method. This option determines the sequence in which the SOAMs are upgraded. The default value of 1 considers the OAM HA role of the SOAMs to determine the upgrade order. In this mode, all non-active SOAM servers are upgraded first (in parallel), followed by the active SOAM. This upgrade method requires at most two upgrade cycles to upgrade all of the SOAMs, regardless of how many are present. If there are no spare SOAMs, then this setting has no effect on the SOAM upgrade. Regardless of the SOAM upgrade method, the active SOAM is always upgraded after the standby and spare SOAMs.
The second option that affects the upgrade sequence is the Site Upgrade Bulk Availability setting. This setting determines the number of C-level servers that remain in service during the upgrade. The default setting of 1 equates to 50% availability, meaning that a minimum of one-half of the servers stay in service during the upgrade. The default setting is the most aggressive setting for upgrading the site, requiring the minimum number of cycles, thus the least amount of time. The settings of 66% and 75% increase the number of servers that remain in service during the upgrade.
Note:
Increasing the availability percentage may increase the overall length of the upgrade.

The application of minimum server availability varies for the different types of C-level servers. For example, for a multi-active DA-MP server group, the minimum availability applies to all of the DA-MPs within the server group. The same applies to IPFEs. Table 2-2 defines how the Site Upgrade Bulk Availability setting on the General Options page affects the various server group function types.
The Site Upgrade General Options cannot be changed while a site upgrade is in progress. Attempting to change either option while a site upgrade is in progress results in:
[Error Code xxx] - Option cannot be changed because one or more automated site upgrades are in progress
2.10.5 Cancel and Restart Auto Site Upgrade
When an Auto Site Upgrade is initiated, several tasks are created to manage the upgrade of the individual server groups as well as the servers within the server groups. You can monitor and manage these tasks via the Active Tasks screen. Click Status & Manage, then Tasks, and then Active Tasks.
Figure 2-9 Site Upgrade Active Tasks

To cancel the site upgrade, select the site upgrade task and click Cancel. A screen requests confirmation of the cancel operation. The status changes from running to completed. The Result Details column updates to display site upgrade task cancelled by user. All server group upgrade tasks that are under the control of the main site upgrade task immediately transition to completed state. However, the site upgrade cancellation has no effect on the individual server upgrade tasks that are in progress. These tasks continue until completion. The following figure shows the Active Tasks screen after a site upgrade has been canceled.
Figure 2-10 Canceled Site Upgrade Tasks

Figure 2-11 Partially Upgraded Site

Figure 2-12 Restart Site Upgrade

2.11 Automated Server Group Upgrade
The Automated Server Group (ASG) upgrade feature allows the user to upgrade all of the servers in a server group automatically by specifying a set of controlling parameters.
The purpose of ASG is to simplify and automate segments of the DSR upgrade. The DSR has long supported the ability to select multiple servers for upgrade. However, in doing so, it was incumbent on the user to determine ahead of time which servers could be upgraded in parallel, considering traffic impact. If the servers were not carefully chosen, the upgrade could adversely impact system operations.
When a server group is selected for upgrade, ASG upgrades each of the servers serially, or in parallel, or a combination of both, while enforcing minimum service availability. The number of servers in the server group that are upgraded in parallel is user selectable. The procedures in this document provide the detailed steps specifying when to use ASG, as well as the appropriate parameters that should be selected for each server group type. ASG is the default upgrade method for most server group types associated with the DSR. However, there are some instances in which the manual upgrade method is utilized. In all cases where ASG is used, procedures for a manual upgrade are also provided.
Note:
To use ASG on a server group, no servers in that server group can already be upgraded, either by ASG or manually.

DSR continues to support the parallel upgrade of server groups, including any combination of automated and manual upgrade methods.
2.11.1 Cancel and Restart Automated Server Group Upgrade
When a server group is upgraded using ASG, each server within that server group is automatically prepared for upgrade, upgraded to the target release, and returned to service on the target release. Once an ASG upgrade is initiated, the task responsible for controlling the sequencing of servers entering upgrade can be manually canceled, if necessary, by navigating to the Status & Manage, then Tasks, and then Active Tasks screen, as shown in the following figure. Once the task is canceled, it cannot be restarted. However, a new ASG task can be started via the Upgrade Administration screen.
Figure 2-13 Active Tasks Screen

In the event that a server fails to upgrade, it automatically rolls back to the previous release in preparation for backout_restore and fault isolation. Any other servers in that server group that are in the process of upgrading continue to completion. However, the ASG task itself is automatically canceled and no other servers in that server group are upgraded. The automatic cancellation gives the user an opportunity to troubleshoot and correct the problem. Once the problem is resolved, the user can initiate a new server group upgrade from the upgrade screen.
2.11.2 Site Accept
You can accept the upgrade of some or all servers for a given site by clicking Site Accept on the upgrade GUI. When you click Site Accept, a subsequent screen, as shown in the following figure, displays the servers that are ready for the Accept action. However, normal procedure calls for the Accept Upgrade to be applied to all the servers at a site only after the upgrade to the new release is stable and the back out option is no longer needed. After verifying that the information presented is accurate, click OK to confirm the intended action. The Accept command is issued to the site servers at a rate of approximately one server every second. The command takes approximately 10 seconds per server to complete. As the commands are completed, the server status on the Upgrade Administration screen transitions to Backup Needed.