Oracle® Communications Service Broker Netra 6000 High Availability Manager Administrator's Guide
Release 5.0

Part Number E20234-01

1 Overview of Service Broker Netra 6000 High Availability Manager

This chapter provides an overview of Oracle Communications Service Broker Netra 6000 High Availability Manager (HA Manager).

Before you read this chapter, you should be familiar with Oracle Communications Service Broker concepts and architecture. See Oracle Communications Service Broker Concepts Guide.

Introduction to Service Broker Netra 6000 High Availability Manager

Service Broker offers service interaction and mediation capabilities, enabling you to control and orchestrate multiple services in real time across diverse network types, including legacy SS7 networks, SIP networks, and Diameter networks.

Service Broker is a software product that is normally deployed across multiple hardware machines.

HA Manager is a software module that manages a complete Service Broker deployment, including the hardware, the operating system, and the Service Broker software itself. An HA Manager deployment consists of the Service Broker software together with integrated management software that operates the hardware and software processes of the deployment.

System Architecture

An HA Manager deployment consists of one or more Sun Netra 6000 chassis running the HA Manager software. Each chassis holds up to ten Sun Netra X6270 blades.

Bootstrap Blades and Worker Blades

Depending on the software that it runs, each blade supports one of the following roles:

  • Bootstrap Blade

    Bootstrap Blades run the following system-level services that Worker Blades depend on:

    • Administration Console

    • Disk storage

    • Boot images

    • Logging Server

    • System facilities, such as Dynamic Host Configuration Protocol (DHCP) server and Network Time Protocol (NTP) server

    • Persistent state

    Bootstrap Blades do not process communications traffic, so they carry a relatively low load.

    Bootstrap Blades are not required to be online and functional for Worker Blades to operate normally. Bootstrap Blades need to be active only when a Worker Blade boots or a Worker Blade process restarts. However, services provided by Bootstrap Blades are critical for recovering from failures.

  • Worker Blade

    Worker Blades run the Service Broker Signaling Server and Processing Server processes. Worker Blades do not have disk storage or any kind of persistent storage. They rely on Bootstrap Blades for startup, after which they run independently. A Worker Blade receives its identity and instance-specific profile based on the chassis slot in which it is running. See "Worker Blade Profiles" for more information.

Figure 1-1 shows the key components of an HA Manager deployment. It shows one chassis with Bootstrap Blades and Worker Blades and the system-level functions running on the Bootstrap Blades.

Figure 1-1 Key Components of an HA Manager Deployment


For high availability, an HA Manager deployment includes a pair of Bootstrap Blades, which is sufficient to provide services to Worker Blades on one or more chassis. A single, full chassis consists of two Bootstrap Blades and eight Worker Blades. A minimal deployment consists of two Bootstrap Blades and two Worker Blades; you can add more Worker Blades as required.

Primary and Secondary Bootstrap Blades

Bootstrap Blades run in a primary and secondary configuration; that is, only one Bootstrap Blade, the primary blade, actively provides services at a time. The other blade, the secondary blade, is kept synchronized and is ready to take over if the primary blade fails or needs to be replaced. The primary and secondary blades share a virtual IP address, which makes the primary-to-secondary transition transparent to processes that use the services running on Bootstrap Blades.

You manage service availability and failover between primary and secondary blades using the Red Hat Cluster Suite, which is part of Oracle Enterprise Linux.
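
Because the blades share a virtual IP address, a client of a bootstrap service always connects to the same address, regardless of which blade is currently primary. The following minimal Java sketch illustrates this from the client side; the host name bootstrap-vip is a hypothetical placeholder, and the port shown is one of the default Administration Console ports:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class BootstrapServiceClient {

        // Hypothetical host name for the virtual IP address shared by the
        // primary and secondary Bootstrap Blades; the real address is
        // deployment-specific.
        private static final String VIRTUAL_IP_HOST = "bootstrap-vip";
        private static final int SERVICE_PORT = 9000; // a default Administration Console port

        public static Socket connect() throws IOException, InterruptedException {
            // Retry briefly: during a primary-to-secondary failover the
            // virtual IP moves to the secondary blade, after which
            // connections to the same address succeed again.
            IOException lastFailure = null;
            for (int attempt = 0; attempt < 5; attempt++) {
                Socket socket = new Socket();
                try {
                    socket.connect(new InetSocketAddress(VIRTUAL_IP_HOST, SERVICE_PORT), 2000);
                    return socket;
                } catch (IOException e) {
                    socket.close();
                    lastFailure = e;
                    Thread.sleep(1000);
                }
            }
            throw lastFailure;
        }
    }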

Signaling Servers and Processing Servers on Worker Blades

Worker Blades run Service Broker Signaling Servers and Processing Servers. At a minimum, an HA Manager deployment includes two Signaling Server instances and as many Processing Server instances as your requirements dictate. A deployment requires fewer Signaling Server instances than Processing Server instances. Each Signaling Server instance runs on a different Worker Blade, while multiple Processing Server instances can run on the same Worker Blade. A typical deployment includes a maximum of three Processing Servers per Worker Blade. Figure 1-2 shows an example of a minimum HA Manager deployment.

Figure 1-2 Example of a Minimum HA Manager Deployment


Worker Blades are installed in slots according to the specific processes configured for each slot. Figure 1-3 shows an example of an HA Manager medium deployment.

Figure 1-3 Example of a Medium HA Manager Deployment


When adding Worker Blades, use a general ratio of one Signaling Server instance to four Processing Server instances. Figure 1-4 shows a large HA Manager deployment that complies with this rule, with four Signaling Server instances.

Figure 1-4 Example of a Large HA Manager Deployment

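As an illustration of this sizing rule only (not a tool provided with HA Manager), the following Java sketch computes the number of Signaling Server instances suggested for a given number of Processing Server instances, keeping the minimum of two Signaling Servers described earlier:

    public class DeploymentSizing {

        // Suggests the number of Signaling Server instances for a given
        // number of Processing Server instances, using the general rule of
        // one Signaling Server per four Processing Servers and the minimum
        // of two Signaling Server instances per deployment.
        static int suggestedSignalingServers(int processingServers) {
            int byRatio = (processingServers + 3) / 4; // ceiling of processingServers / 4
            return Math.max(2, byRatio);
        }

        public static void main(String[] args) {
            // For example, 16 Processing Server instances suggest 4 Signaling
            // Server instances, matching the four shown in Figure 1-4.
            System.out.println(suggestedSignalingServers(16)); // prints 4
        }
    }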

Worker Blade Profiles

A Worker Blade has one of two profiles:

  • Processing only

    The processing-only profile means that the Worker Blade runs only Processing Servers. When this profile is assigned, a Worker Blade runs three Processing Server instances.

  • Signaling and processing

    The signaling and processing profile means that the Worker Blade runs both Signaling Servers and Processing Servers. When this profile is assigned, a Worker Blade runs one instance of a Signaling Server and two instances of Processing Servers.

Profile assignment is static and depends on the chassis slot into which a Worker Blade is inserted. The signaling and processing profile is assigned to Worker Blades inserted into slots 2 through 5. The processing-only profile is assigned to Worker Blades inserted into slots 6 through 9. If you replace a Worker Blade, the new Worker Blade inherits the same profile, based on the chassis slot.
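
The slot-based rule can be summarized as a simple mapping. The following Java sketch mirrors the assignment described above; the type and method names are illustrative and are not part of the HA Manager software:

    public class WorkerBladeProfiles {

        enum Profile { SIGNALING_AND_PROCESSING, PROCESSING_ONLY }

        // Returns the profile statically assigned to a chassis slot:
        // slots 2 through 5 run signaling and processing, slots 6 through 9
        // run processing only.
        static Profile profileForSlot(int slot) {
            if (slot >= 2 && slot <= 5) {
                return Profile.SIGNALING_AND_PROCESSING;
            }
            if (slot >= 6 && slot <= 9) {
                return Profile.PROCESSING_ONLY;
            }
            throw new IllegalArgumentException("Slot " + slot + " is not a Worker Blade slot");
        }
    }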

Each of the two profiles is captured in a Pre-Execution Environment image. The images are stored on the Bootstrap Blades. See "Boot Images" for more information.

Process Instance Identity

Each process running on a blade has a logical identifier within the blade. This identifier is called a Process Instance Identity (PII). A PII is derived from the blade's IP address and the process's fixed order relative to other processes running on the blade.

See Chapter 4, "Connecting to the Network" for more information on blades' IP addresses.

A PII remains consistent even when the blade or process is restarted, unlike operating system process identifiers (PIDs), which change across process and blade restarts.

Each Signaling Server instance and Processing Server instance has a PII. PIIs are used internally to reference Signaling Server and Processing Server instances. For example, PIIs appear in logs to identify the server that generated each log.
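
As an illustration only, a PII can be thought of as a stable composition of the blade's IP address and the process's fixed position on the blade. The following Java sketch shows one way such an identifier could be formed; the actual internal PII format is not documented here, and the address and order used in the example are hypothetical:

    public class ProcessInstanceIdentity {

        // Illustrative only: composes a stable identifier from a blade's IP
        // address and the fixed order of a process on that blade. Unlike an
        // operating system PID, the result survives process and blade
        // restarts, because both inputs are stable.
        static String pii(String bladeIpAddress, int processOrder) {
            return bladeIpAddress + ":" + processOrder;
        }

        public static void main(String[] args) {
            // A hypothetical Processing Server that is the second process on its blade:
            System.out.println(pii("192.0.2.11", 2)); // prints 192.0.2.11:2
        }
    }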

Signaling Domain and Processing Domain

From a management perspective, an HA Manager deployment is a standard Service Broker deployment that includes two domains:

  • Signaling Domain: Manages all Signaling Server instances

  • Processing Domain: Manages all Processing Server instances

You manage each domain using a different instance of the Administration Console. See "Administration Console" for more information.

Domain Images

Domain software and configuration are bundled together into domain images. A domain image is a group of JAR files and deployment packages containing the software binaries and associated configuration.

See Chapter 11, "Upgrading Service Broker Netra 6000 High Availability Manager" for information about deployment packages.

There are two domain images: one for the Signaling Domain and another for the Processing Domain. Domain images are stored on Bootstrap Blades. When a Signaling Server or a Processing Server starts up, it pulls the binaries and related configuration from the corresponding domain image.

Service Broker upgrades are upgrades of domain images. You can upgrade domain images using the Administration Console. See Chapter 11, "Upgrading Service Broker Netra 6000 High Availability Manager" for more information.

Bootstrap Services

Bootstrap Blades provide the following system-level services:

Administration Console

HA Manager extends Service Broker Administration Console capabilities to provide integrated management of a deployment's hardware components.

An HA Manager deployment includes three instances of the Administration Console, each enabling different administration tasks: one instance manages the Processing Domain, one manages the Signaling Domain, and one manages the system itself, including its hardware components.

You access each Administration Console instance through a web browser, using a different port number for each instance. The default ports are 9000 (Processing Servers), 9001 (Signaling Servers), and 9002 (System). For example, you reach the Processing Domain console at http://<administration-host>:9000, where <administration-host> is a placeholder for the console address in your deployment. You can navigate between the three Administration Console instances from within the Administration Console GUI.

See Chapter 3, "About System Administration" for more information about using the Administration Console.

Disk Storage

In an HA Manager deployment, each Bootstrap Blade includes two onboard disks, so a deployment has four disks in total, each with 300 GB of space. However, the effective storage capacity of the system is 300 GB, because the four disks are used for mirroring and redundancy.

Within each Bootstrap Blade, the pair of disks is arranged in a software Redundant Array of Independent Disks (RAID), provided by Oracle Enterprise Linux. In addition, disk data is replicated across the primary and secondary Bootstrap Blades; HA Manager is configured to work with DRBD (Distributed Replicated Block Device) to accomplish this.

Each disk consists of two partitions:

  • A local boot/swap/var partition for the Bootstrap Blade itself

  • A service partition containing:

    • DHCP server

    • Pre-Execution Environment server

    • NTP server

    • Domain Images

    • Logging Server and logs

Boot Images

To boot, Worker Blades use Pre-Execution Environment (PXE) images stored on the Bootstrap Blades.

A PXE image contains the operating system, the external Management Agent, and configuration scripts. There are two PXE images, one for each Worker Blade profile. See "Worker Blade Profiles" for more information.

Logging Server

The Logging Server runs on the Bootstrap Blades and collects logs generated by the Signaling Servers and Processing Servers. Logs are stored on the bootstrap disk storage and can be viewed in the Administration Console's Log tab.

Each log contains the PII of the server that generated the log. See "Process Instance Identity" for more information. In the file system, logs for each server are stored in a different directory.

Logging is based on the Apache log4j logging framework; therefore, the log layout is configured using standard log4j configuration files.
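
The exact layout used by the Logging Server is defined in its log4j configuration files. As a sketch only, the following Java example uses the log4j 1.x API to show how a pattern layout can embed a per-process identifier such as a PII; the MDC key name pii and the pattern itself are hypothetical, not the layout shipped with HA Manager:

    import org.apache.log4j.ConsoleAppender;
    import org.apache.log4j.Logger;
    import org.apache.log4j.MDC;
    import org.apache.log4j.PatternLayout;

    public class LoggingSketch {

        public static void main(String[] args) {
            // Equivalent settings would normally live in a log4j
            // configuration file; programmatic configuration is used here
            // only to keep the example self-contained.
            PatternLayout layout = new PatternLayout("%d{ISO8601} %-5p [%X{pii}] %c - %m%n");
            Logger.getRootLogger().addAppender(new ConsoleAppender(layout));

            // Hypothetical MDC key carrying a Process Instance Identity, so
            // that every log line identifies the server instance that wrote it.
            MDC.put("pii", "192.0.2.11:2");
            Logger.getLogger(LoggingSketch.class).info("Server started");
        }
    }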

See Chapter 8, "Logging" for more information about the Logging Server.

System Facilities

The following are additional standard facilities that must be available on the Bootstrap Blades:

  • NTP Server

  • DHCP Server

  • Network File System (NFS)

  • PXE Server

State Persistency

State persistency protects an HA Manager deployment from loss of data if a Processing Server fails or is restarted, if a blade fails or is restarted, or if you replace a blade.

The persistent state is stored on the Bootstrap Blades.

Hardware and Software Components

See Appendix A, "Component List" for information about the hardware and software components included in an HA Manager deployment.

Network Connectivity

Blades within a chassis communicate using two Sun Blade 6000 Ethernet Switched NEM 24p 10 GbE switches, which are included in every chassis.

Each switch has a single port connection to every blade in the chassis. Accordingly, each blade has one Network Interface Card (NIC) connected to every switch, providing full connection redundancy. However, an HA Manager deployment can function fully with only one operational switch.

Traffic between blades within a chassis, and between blades and network elements outside the chassis, is of several types, corresponding to the VLANs listed in Table 1-1: SIP and Diameter, SIGTRAN, OSS OAM, SYS ADMIN, and internal traffic.

Inside a chassis, HA Manager uses different Virtual Local Area Networks (VLANs) for each type of traffic. The use of VLANs lets you enforce a different bandwidth for each type of traffic.

Table 1-1 shows the VLANs used by each type of blade inside a chassis.

Table 1-1 VLANs Used by Each Blade

Blade Type                                                  SIP & Diameter  SIGTRAN  OSS OAM  SYS ADMIN  Internal
Bootstrap                                                   No              No       Yes      Yes        No
Worker, running Processing Servers                          No              No       Yes      Yes        Yes
Worker, running Signaling Servers and Processing Servers    Yes             Yes      Yes      Yes        Yes


To connect a chassis to an external network or to another chassis, Oracle recommends that you use an external switch, such as the Sun Network 10 GbE Switch 72p Top of Rack switch.

See Chapter 4, "Connecting to the Network" for more information about how to configure network connectivity.

Process and Hardware Management

You can manage the blades in a deployment and the processes that run on them, including Signaling Server and Processing Server instances.

Process and hardware management is handled internally by two system components: the external Management Agent (eMA), which is included in each Worker Blade's boot image, and the eMC. The eMA and eMC communicate using JMX, over the OSS OAM VLAN. See "Network Connectivity" for more information.
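
The JMX interaction between the eMA and eMC is internal to HA Manager, but the following Java sketch shows what a generic JMX client connection looks like; the service URL, address, and MBean object name are hypothetical placeholders, not interfaces published by HA Manager:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class JmxClientSketch {

        public static void main(String[] args) throws Exception {
            // Hypothetical RMI-based JMX service URL; the eMA's actual
            // endpoint and protocol details are internal.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://192.0.2.11:1099/jmxrmi");

            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection connection = connector.getMBeanServerConnection();
                // Hypothetical MBean name, used only to show a typical attribute read.
                ObjectName name = new ObjectName("example.management:type=BladeStatus");
                Object state = connection.getAttribute(name, "State");
                System.out.println("Blade state: " + state);
            }
        }
    }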

You can manage processes and hardware using the Administration Console GUI. The Administration Console displays the state of the blades and the processes running on each blade. See Chapter 5, "Managing and Monitoring Hardware and Processes" for information about managing processes and hardware using the Administration Console.

Security

HA Manager applies the security model described in "Configuring Security" in Oracle Communications Service Broker Administrator's Guide.

You can modify the security of external system interfaces, if required.

See Chapter 2, "Getting Started" for information about changing passwords for these interfaces.