|Oracle® Communications Service Broker Netra 6000 High Availability Manager Administrator's Guide
Part Number E20234-01
This chapter provides an overview of Oracle Communications Service Broker Netra 6000 High Availability Manager (HA Manager).
Before you read this chapter, you should be familiar with Oracle Communications Service Broker concepts and architecture. See Oracle Communications Service Broker Concepts Guide.
Service Broker offers service interaction and mediation capabilities, enabling you to control and orchestrate multiple services in real time, across diverse network types, covering legacy SS7 networks, SIP networks, and Diameter networks.
Service Broker is a software product, normally deployed on multiple hardware machines.
HA Manager is a software module that manages a complete Service Broker deployment, including the hardware, the operating system, and the Service Broker software. It combines the Service Broker software with integrated management software that operates the hardware and software processes of the deployment. HA Manager:
Simplifies and automates the setting up of a Service Broker deployment (hardware, operating system, Service Broker software, and so on)
Simplifies and automates upgrading of the Service Broker software components
Automates the provisioning of new hardware, enabling you to dynamically extend traffic capacity and dynamically replace failed units
Provides redundancy of the integrated software and hardware components at all levels
Provides a redundant interconnection for all Service Broker network interfaces
Provides integrated Operations and Management for all software and hardware components
HA Manager supports one or more Sun Netra 6000 chassis running the HA Manager software. Within each chassis, there are up to ten Sun Netra X6270 blades.
Depending on the software that it runs, each blade supports one of the following roles:
Bootstrap Blades run the following system-level services that Worker Blades depend on:
System facilities, such as Dynamic Host Configuration Protocol (DHCP) server and Network Time Protocol (NTP) server
Bootstrap Blades do not process communications traffic and they carry a relatively low load.
Bootstrap Blades are not required to be online and functional for Worker Blades to operate normally. Bootstrap Blades need to be active only when a Worker Blade boots or a Worker Blade process restarts. However, services provided by Bootstrap Blades are critical for recovering from failures.
Worker Blades run the Service Broker Signaling Server and Processing Server processes. Worker Blades do not have disk storage or any kind of persistent storage. They rely on Bootstrap Blades for startup, after which they run independently. A Worker Blade receives its identity and instance-specific profile based on the chassis slot in which it is running. See "Worker Blade Profiles" for more information.
Figure 1-1 shows the key components of an HA Manager deployment. It shows one chassis with Bootstrap Blades and Worker Blades and the system-level functions running on the Bootstrap Blades.
For high availability, an HA Manager deployment includes a pair of Bootstrap Blades, which is sufficient to provide services to Worker Blades on one or more chassis. A single, full chassis consists of two Bootstrap Blades and eight Worker Blades. A minimal deployment consists of two Bootstrap Blades and two Worker Blades - you can add more Worker Blades as required.
Bootstrap Blades run in a primary and secondary configuration; that is, only one Bootstrap Blade, the primary blade, actively provides services at one time. The other blade, the secondary blade, is synchronized and is ready to take over if the primary blade fails or needs to be replaced. The primary and secondary blades share a virtual IP address, which makes the primary-to-secondary transition transparent to processes that use the services running on Bootstrap Blades.
You manage service availability and failover between primary and secondary blades using the Red Hat Cluster Suite, which is part of Oracle Enterprise Linux.
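As an illustration, the primary/secondary arrangement with a shared virtual IP address could be described to the Red Hat Cluster Suite with a cluster.conf fragment along the following lines. The node names, service name, and virtual IP address here are hypothetical, not taken from an actual HA Manager installation:

```xml
<!-- Illustrative cluster.conf sketch; node names, the service name,
     and the virtual IP address are hypothetical. -->
<cluster name="bootstrap" config_version="1">
  <clusternodes>
    <clusternode name="bootstrap1" nodeid="1"/>
    <clusternode name="bootstrap2" nodeid="2"/>
  </clusternodes>
  <rm>
    <service name="bootstrap-services" autostart="1" recovery="relocate">
      <!-- The virtual IP address relocates with the service on failover,
           so clients of the Bootstrap Blade services see no change. -->
      <ip address="10.0.0.10" monitor_link="1"/>
    </service>
  </rm>
</cluster>
```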
Worker Blades run Service Broker Signaling Servers and Processing Servers. At minimum, an HA Manager deployment includes two Signaling Server instances and as many Processing Server instances as your requirements dictate. A deployment requires fewer Signaling Server instances than Processing Server instances. Each Signaling Server instance runs on a different Worker Blade, while multiple Processing Server instances can run on the same Worker Blade. A typical deployment includes a maximum of three Processing Servers per Worker Blade. Figure 1-2 shows an example of a minimum HA Manager deployment.
Worker Blades are installed in slots according to the specific processes configured for each slot. Figure 1-3 shows an example of an HA Manager medium deployment.
When adding Worker Blades, use a general ratio of one Signaling Server instance to four Processing Server instances. Figure 1-4 shows an HA Manager large deployment that complies with this rule.
A Worker Blade has one of two profiles:
Processing only
The processing-only profile means that the Worker Blade runs only Processing Servers. When this profile is assigned, a Worker Blade runs three Processing Server instances.
Signaling and processing
The signaling and processing profile means that the Worker Blade runs both Signaling Servers and Processing Servers. When this profile is assigned, a Worker Blade runs one instance of a Signaling Server and two instances of Processing Servers.
Profile assignment is static and depends on the chassis slot into which a Worker Blade is inserted. The signaling and processing profile is assigned to Worker Blades in slots 2 through 5; the processing-only profile is assigned to Worker Blades in slots 6 through 9. If you replace a Worker Blade, the new Worker Blade inherits the same profile, based on the chassis slot.
Each of the two profiles is captured in a Pre-Execution Environment (PXE) image. The images are stored on the Bootstrap Blades. See "Boot Images" for more information.
Each process running on a blade has a logical identifier within the blade. This identifier is called a Process Instance Identity (PII). A PII is derived from the blade's IP address and the process's fixed order relative to other processes running on the blade.
See Chapter 4, "Connecting to the Network" for more information on blades' IP addresses.
A PII remains consistent even when the blade or process is restarted, unlike operating system process identifiers (PIDs), which change between process and blade restarts.
Each Signaling Server instance and Processing Server instance has a PII. PIIs are used internally to reference Signaling Server and Processing Server instances. For example, logs include the PII of the server that generated them.
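To make the derivation concrete, the following sketch builds a PII-like identifier from a blade's IP address and a process's fixed index on that blade. The actual internal format is not documented here, so the scheme, address, and index below are purely illustrative:

```shell
# Purely illustrative: combine the host part of a (hypothetical) blade
# IP address with the process's fixed index on that blade.
blade_ip="192.168.10.4"   # hypothetical Worker Blade address
process_index=2           # fixed order of this process on the blade
pii="${blade_ip##*.}-${process_index}"   # keep the last octet only
echo "$pii"               # prints 4-2
```

Because both inputs are fixed (the slot determines the IP address, and the process order on the blade is fixed), such an identifier survives process and blade restarts, unlike a PID.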
From a management perspective, an HA Manager deployment is a standard Service Broker deployment that includes two domains:
Signaling Domain: Manages all Signaling Server instances
Processing Domain: Manages all Processing Server instances
You manage each domain using a different instance of the Administration Console. See "Administration Console" for more information.
Domain software and configuration are bundled together into domain images. A domain image is a group of JAR files and deployment packages containing the software binaries and associated configuration.
See Chapter 11, "Upgrading Service Broker Netra 6000 High Availability Manager" for information about deployment packages.
There are two domain images: one for the Signaling Domain and another for the Processing Domain. Domain images are stored on Bootstrap Blades. When a Signaling Server or a Processing Server starts up, it pulls the binaries and related configuration from the corresponding domain image.
Service Broker upgrades are upgrades of domain images. You can upgrade domain images using the Administration Console. See Chapter 11, "Upgrading Service Broker Netra 6000 High Availability Manager" for more information.
Bootstrap Blades provide the following system-level services:
HA Manager extends Service Broker Administration Console capabilities to provide integrated management of a deployment's hardware components.
An HA Manager deployment includes three instances of the Administration Console, each enabling different administration tasks as follows:
System Administration Console
The System Administration Console provides an overall system view of software and hardware components, including blades, processes, alarms, logs and system-level configuration. See "Configuring Network Connectivity with the System Administration Console" for more information.
Signaling Servers Administration Console
Use the Signaling Servers Administration Console to manage the Service Broker Signaling Domain. You can configure and upgrade the SIP SSU, Diameter SSU, and SS7 SSU components. See "Configuring Signaling Traffic with the Signaling Servers Administration Console" for more information.
Processing Servers Administration Console
Use the Processing Servers Administration Console to manage the Service Broker Processing Domain. You can configure and upgrade IM and SM components. See "Configuring Processing Traffic with the Processing Servers Administration Console" for more information.
You access each Administration Console instance through a web browser, using different port numbers. The default ports are 9000 (Processing Servers), 9001 (Signaling Servers), and 9002 (System). You can navigate between the three Administration Console instances from within the Administration Console GUI.
See Chapter 3, "About System Administration" for more information about using the Administration Console.
In an HA Manager deployment, each Bootstrap Blade includes two onboard disks. In total, a deployment requires four disks with 300 GB of space on each disk. However, the effective storage capacity of a system is 300 GB because the four disks are used for mirroring and redundancy.
Within each Bootstrap Blade, the pair of disks is arranged in a software Redundant Array of Independent Disks (RAID), provided by Oracle Enterprise Linux. In addition, disk data is replicated across the primary and secondary Bootstrap Blades. The HA Manager is configured to work with DRBD to accomplish this.
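For illustration, replication of the service partition between the two Bootstrap Blades could be described with a DRBD resource definition along these lines. The resource name, host names, devices, and addresses are hypothetical, not taken from an actual installation:

```
# Illustrative DRBD resource sketch; the resource name, host names,
# devices, and addresses are hypothetical.
resource service {
  device    /dev/drbd0;
  disk      /dev/md1;        # software RAID-1 pair within the blade
  meta-disk internal;
  on bootstrap1 {
    address 10.0.0.1:7789;   # primary Bootstrap Blade
  }
  on bootstrap2 {
    address 10.0.0.2:7789;   # secondary Bootstrap Blade
  }
}
```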
Each disk consists of two partitions:
A local boot/swap/var partition for the Bootstrap Blade itself
A service partition containing:
Pre-Execution Environment server
Logging Server and logs
To boot, Worker Blades use Pre-Execution Environment (PXE) images stored on the Bootstrap Blades.
A PXE image contains the operating system, the external Management Agent, and configuration scripts. There are two PXE images, one for each Worker Blade profile. See "Worker Blade Profiles" for more information.
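The PXE boot chain depends on the DHCP service on the Bootstrap Blades directing each Worker Blade to a boot loader. A minimal, illustrative dhcpd.conf fragment is shown below; the subnet, addresses, and file name are hypothetical:

```
# Illustrative dhcpd.conf fragment for PXE boot; the subnet,
# addresses, and boot file name are hypothetical.
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.50 10.0.0.100;
  next-server 10.0.0.1;     # TFTP server on the primary Bootstrap Blade
  filename "pxelinux.0";    # boot loader that loads the profile image
}
```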
The Logging Server runs on the Bootstrap Blades and collects logs generated by the Signaling Servers and Processing Servers. Logs are stored on the bootstrap disk storage and can be viewed in the Administration Console's Log tab.
Each log contains the PII of the server that generated the log. See "Process Instance Identity" for more information. In the file system, logs for each server are stored in a different directory.
Logging is based on the Apache log4j logging framework. Therefore, the log layout is configured using standard log4j configuration files.
See Chapter 8, "Logging" for more information about the Logging Server.
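As an illustration of the kind of standard log4j configuration involved, the fragment below defines a rolling file appender. The appender name, log file path, and conversion pattern are hypothetical, not the product defaults:

```properties
# Illustrative log4j configuration; the appender name, log file
# path, and pattern are hypothetical, not the product defaults.
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=/var/log/servicebroker/server.log
log4j.appender.file.MaxFileSize=10MB
log4j.appender.file.MaxBackupIndex=5
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %-5p [%c] %m%n
```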
The following are additional standard facilities that must be available on the Bootstrap Blades:
Network File System (NFS)
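For illustration, a Bootstrap Blade could export a shared directory to the Worker Blades with an /etc/exports entry like the following; the path and network are hypothetical:

```
# Illustrative /etc/exports entry; the path and network are hypothetical.
/export/service  10.0.0.0/255.255.255.0(rw,sync,no_root_squash)
```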
State persistency protects the HA Manager from loss of data if a Processing Server fails or is restarted, if a blade fails or is restarted, or if you replace a blade.
The persisted state is stored on the Bootstrap Blades.
See Appendix A, "Component List" for information about the hardware and software components included in an HA Manager deployment.
Blades within a chassis communicate using two rack-mounted Sun Blade 6000 Ethernet Switched NEM 24p 10 GbE switches, which are included in every chassis.
Each switch has a single port connection to every blade in the chassis. Accordingly, each blade has one Network Interface Card (NIC) connected to every switch, providing full connection redundancy. However, an HA Manager deployment can function fully with only one operational switch.
Traffic between blades within a chassis, and between blades and network elements outside the chassis, is of the following types:
SIP and Diameter: SIP and Diameter traffic running between Signaling Servers and network elements outside the chassis
SIGTRAN: IP-based SS7 traffic running between Signaling Servers and SS7 network elements outside the chassis
OSS OAM: Management traffic associated with Operational Support Systems (OSS) and Operations, Administration and Maintenance (OA&M) activities, such as JMX and log aggregation, running between Bootstrap Blades and Worker Blades.
SYS ADMIN: Operating System root-level administration connection required for certain system activities such as booting and sending DHCP traffic.
Internal: Internal communication between Signaling Servers and Processing Servers.
Inside a chassis, HA Manager uses different Virtual Local Area Networks (VLANs) for each type of traffic. The use of VLANs lets you enforce a different bandwidth for each type of traffic.
Table 1-1 shows the VLANs used by each type of blade inside a chassis.
|Blade Type||SIP & Diameter||SIGTRAN||OSS OAM||SYS ADMIN||Internal|
Worker Blades running Processing Servers
Worker Blades running Signaling Servers and Processing Servers
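At the operating system level, each VLAN is typically realized as a tagged subinterface on the blade's NIC. An illustrative interface definition in the Oracle Enterprise Linux style follows; the device name, VLAN ID, and addresses are hypothetical:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0.100 - illustrative only;
# the device name, VLAN ID, and addresses are hypothetical.
DEVICE=eth0.100
VLAN=yes
BOOTPROTO=static
IPADDR=10.1.100.4
NETMASK=255.255.255.0
ONBOOT=yes
```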
To connect a chassis to an external network or to another chassis, Oracle recommends that you use an external switch, such as the Sun Network 10 GbE Switch 72p Top of Rack switch.
See Chapter 4, "Connecting to the Network" for more information about how to configure network connectivity.
You can manage blades and the following processes that run on them:
SS7 stack processes
Administration Console instances
Process and hardware management is handled internally by two system components:
external Management Agent (eMA)
An eMA runs on Worker Blades and Bootstrap Blades. An eMA manages the blade and the processes running on that blade. The eMA can start, stop, and terminate individual processes. It also controls the life cycle of Signaling Servers and Processing Servers. The eMA process is automatically started and stopped as part of the operating system boot and shutdown process.
external Management Controller (eMC)
An eMC runs within the same process as the Web Administration Console, on Bootstrap Blades, and controls eMA instances. An eMC manages individual processes running on Worker Blades and Bootstrap Blades.
The eMA and eMC communicate using JMX, over the OSS OAM VLAN. See "Network Connectivity" for more information.
You can manage processes and hardware using the Administration Console GUI. The Administration Console displays the state of the blades and the processes running on each blade. See Chapter 5, "Managing and Monitoring Hardware and Processes" for information about managing processes and hardware using the Administration Console.
HA Manager imposes the security model described in "Configuring Security" in Oracle Communications Service Broker Administrator's Guide.
You can modify the security of the following external system interfaces, if required:
Operating system users
Integrated Lights Out Manager (ILOM)
Web Administration Console
See Chapter 2, "Getting Started" for information about changing passwords for these interfaces.