CHAPTER 1

Programming Environment

This chapter provides an overview of the software environment that forms the basis for developing applications for the Netra CT server.


Netra CT Server

The Netra CT server system consists of a host CPU, an alarm card that serves as the nexus of system management, optionally one or more satellite CPUs, and one or more CompactPCI (cPCI) I/O cards. Different software combinations run on each of these elements, as shown in FIGURE 1-1.


Hardware Description

This section provides brief descriptions of the Netra CT server board components and hot-swapping capabilities. See the Netra CT Server Product Overview (816-2480) for more information.

Alarm Card

An alarm card is used in the Netra CT 810 and Netra CT 410 servers to control system functions. The board is plugged into slot 8 for the Netra CT 810 server, and into slot 1 for the Netra CT 410 server. ChorusOS 5.0 is the operating system running on the alarm card, and the boot environment is controlled by boot control firmware. A command-line interface (CLI) provides an administrative interface to the system. Drawer-level monitoring and control of the system are accomplished through the Managed Object Hierarchy (MOH) and Processor Management Service (PMS) software.

Host CPU Board

The host CPU board is the same for both the Netra CT 810 and Netra CT 410 servers. The board is plugged into slot 1 for the Netra CT 810 server, and into slot 3 for the Netra CT 410 server. The Solaris OS runs on these boards. MOH and PMS provide local and drawer-level monitor and control functions.

Satellite CPU Boards

Several satellite CPU cards can occupy the I/O slots and perform normal CPU functions independently. MOH and PMS provide local monitor and control functions.

I/O Boards

One or more cPCI boards can occupy I/O slots. The I/O boards are controlled by the host CPU and the Solaris OS running on the host CPU board.

Hot-Swapping Capabilities

Boards and other field-replaceable units (FRUs) can be swapped while the system is running, provided they conform to the Hot Swap Specification PICMG 2.1 R 2.0. Hot swap is controllable by software when the board itself is hot-swap compliant. For further information on hot swap issues, see the Netra CT Server Product Overview (816-2480), Netra CT Server System Administration Guide (816-2483), and Netra CT Server Service Manual (816-2482).


Software Description

  FIGURE 1-1 Netra CT Server Software

Diagram displaying the functional overview of the Netra CT software environment.

The abbreviations shown in FIGURE 1-1 are identified in TABLE 1-1.

TABLE 1-1 Netra CT Server Software Overview

Solaris (Solaris operating environment): Installed by the user. Runs on the host CPU card and on any satellite CPU cards.

ChorusOS (ChorusOS operating environment): Factory-installed on the alarm card. Manages all elements of the Netra CT server that are connected to the midplane.

CLI (command-line interface): The primary user interface to the alarm card.

MOH (Managed Object Hierarchy): Application that manages the hardware and software components of the system.

PMS (Processor Management Service): Manages processor elements used by client applications.

OBP (OpenBoot PROM firmware and diagnostics): Boot firmware and diagnostics on the CPU cards.

BCF (boot control firmware): Firmware on the alarm card that controls booting.

BMC (BMC firmware): Baseboard management controller firmware of the IPMI controller on the alarm card. Provides a command nexus between the satellite CPUs and the RMC client during hot-swap unconfiguration operations.

SMC (SMC firmware): System Management Controller firmware for the IPMI controller on the CPU cards. SMC APIs provide client access to local resources such as temperature sensors, watchdog subsystems, and local I2C bus devices, as well as access to IPMI bus devices.

IPMI (Intelligent Platform Management Interface): A communication channel over the cPCI backplane.

MCNet (MCNet): A PICMG 2.14 communication protocol over the cPCI backplane. It can be used to communicate among the alarm card, the host CPU card, and any satellite CPU cards that are MCNet capable.


Operating System Specifics

ChorusOS on the alarm card provides chassis management features that support real-time, multi-threaded applications, and POSIX interfaces to support easy porting of POSIX/UNIX (Solaris) applications. For details of ChorusOS 5.0, refer to the ChorusOS documentation.

Solaris 9 OS on the host and satellite CPU cards provides APIs such as the platform information and control library (PICL), the reconfiguration coordination manager (RCM), and cfgadm(1M), as explained in Chapter 8. The kernel layer interacts with device drivers to control hardware components of the system such as the CPU cards and the I/O boards. These device drivers bind to the kernel using the device driver interfaces (DDI) and driver kernel interfaces (DKI).

Managed Object Hierarchy

The Managed Object Hierarchy (MOH) is a distributed management application that runs on the alarm card, and host and satellite CPUs. MOH on the alarm card provides drawer-level monitoring of the system. MOH on the CPUs, both host and satellite, provides local views of the board on which it runs, and collaborates to provide the status of its components to the MOH on the alarm card. The various MOHs communicate with one another over MCNet. MOH is discussed further in Chapter 6.

Processor Management Services

Processor management services (PMS) software is an extension to the Netra CT platform services software that addresses the requirements of high-availability application frameworks. PMS software enables client applications to manage the operation of the processor CPU board elements within a single Netra CT system or within a cluster of multiple Netra CT systems.

PMS ensures high availability by monitoring a processor element's fault condition, such as OS hangs, deadlock, and panic. The alarm card provides a server-level view showing the state of each CPU card as a plug-in unit. PMS services are enabled separately on the alarm card and on the host CPU. PMS services are discussed further in Chapter 7.

Multicomputing Network

MCNet uses the cPCI backplane on the Netra CT platform to provide an Ethernet-like interface to the CPU cards and the alarm card.

The Solaris MCNet driver provides a standard DLPI v2 interface to higher-level protocols and applications. When plumbed, the MCNet interface appears like any other network interface in Solaris.

Platform Information Control Library

This Solaris library provides a method for publishing platform-specific information that clients can access in a way that is not specific to the platform. PICL is discussed further in Chapter 8.

Management Framework

The Java Dynamic Management™ Kit (JDMK) development package provides a framework of managed objects and their associated interfaces. SNMP uses a management information base (MIB), which defines managed objects for the elements within the Netra CT server platform. The managed objects are abstract representations of the resources and services within the system. The following interfaces can be used to manage the Netra CT system.

SNMP/MIB Support

The netract agent supports the following parts of the MIB:

SNMP Interface

The netract agent operates on the alarm card, the system host CPU card, and the satellite CPUs in a distributed manner. Each instance provides an SNMP version 2 interface and Netra CT-specific instrumentation monitoring.

RMI Interface

The netract agent uses JDMK services to support common client/server protocols. These include Remote Method Invocation (RMI), the mechanism used to support remote, or distributed, access to the Managed Object Hierarchy (MOH).

Developing Applications Using PMS

PMS can run on the alarm card and on both host and satellite CPUs. To develop applications that use PMS on the alarm card, you need the Solaris 9 OS, ChorusOS 5.0, a C compiler, the PMS API, and the libraries described in Chapter 7.

To develop applications that use PMS on host and satellite CPUs, you need the Solaris 9 OS, a C compiler, the PMS API, and the libraries described in Chapter 7.

For more information about ChorusOS refer to the ChorusOS 5.0 Features and Architecture Overview (806-6897).

Developing Applications to Interface with MOH or SNMP

To develop applications to interface with MOH or SNMP, you need the Solaris 9 OS, the Java Virtual Machine (JVM), the Java Dynamic Management Kit (JDMK), and the Netra CT agent library. For more information about JDMK, refer to the Java Dynamic Management Kit 4.2 Tutorial (806-6633).

Developing Applications to Run on Host or Satellite CPU Boards

To develop applications to run on host or satellite CPU cards, you need the Solaris 9 OS to access services such as the dynamic reconfiguration (DR) framework and the platform information and control library (PICL) API. Standard Solaris tools such as the cfgadm(1M) command enable service operations such as configuring and unconfiguring system FRUs.