CHAPTER 1
This chapter provides an overview of the software environment that forms the basis for developing applications for the Netra CT server.
The Netra CT server system consists of a host CPU, an alarm card that serves as the nexus of system management, optionally one or more satellite CPUs, and one or more CompactPCI (cPCI) I/O cards. Different software combinations run on each of these elements, as shown in FIGURE 1-1.
This section provides brief descriptions of the Netra CT server board components and hot-swapping capabilities. See the Netra CT Server Product Overview (816-2480) for more information.
An alarm card is used in the Netra CT 810 and Netra CT 410 servers to control system functions. The card is plugged into slot 8 of the Netra CT 810 server and into slot 1 of the Netra CT 410 server. ChorusOS 5.0 is the operating system running on the alarm card, and the boot environment is controlled by boot control firmware. A command-line interface (CLI) provides an administrative interface to the system. Drawer-level monitoring and control of the system are accomplished through the Managed Object Hierarchy (MOH) and Processor Management Service (PMS) software.
The host CPU board is the same for both the Netra CT 810 and Netra CT 410 servers. The board is plugged into slot 1 of the Netra CT 810 server and into slot 3 of the Netra CT 410 server. The Solaris OS runs on these boards. MOH and PMS provide local and drawer-level monitoring and control functions.
Several satellite CPU cards can occupy the I/O slots and perform normal CPU functions independently. MOH and PMS provide local monitoring and control functions.
One or more cPCI boards can occupy I/O slots. The I/O boards are controlled by the host CPU and the Solaris OS running on the host CPU board.
Boards and other field-replaceable units (FRUs) can be swapped while the system is running, provided they conform to the Hot Swap Specification PICMG 2.1 R 2.0. Hot swap is controllable by software if the board itself is hot-swap compliant. For further information on hot swap, see the Netra CT Server Product Overview (816-2480), the Netra CT Server System Administration Guide (816-2483), and the Netra CT Server Service Manual (816-2482).
The abbreviations shown in FIGURE 1-1 are identified in TABLE 1-1.
System Management Controller (SMC) firmware runs on the IPMI controller of the CPU cards. SMC APIs provide client access to local resources such as temperature sensors, watchdog subsystems, and local I2C bus devices, as well as access to IPMI bus devices.
ChorusOS on the alarm card provides chassis management features that support real-time, multi-threaded applications, and POSIX interfaces to support easy porting of POSIX/UNIX (Solaris) applications. For details of ChorusOS 5.0, refer to the ChorusOS documentation.
The Solaris 9 OS on the host and satellite CPU cards provides APIs such as the platform information and control library (PICL), the reconfiguration coordination manager (RCM), and cfgadm(1M), as explained in Chapter 8. The kernel layer interacts with device drivers to control hardware components of the system, such as the CPU cards and the I/O boards. These device drivers bind to the kernel through the device driver interface (DDI) and driver-kernel interface (DKI).
The Managed Object Hierarchy (MOH) is a distributed management application that runs on the alarm card and on the host and satellite CPUs. MOH on the alarm card provides drawer-level monitoring of the system. MOH on each CPU, host or satellite, provides a local view of the board on which it runs and reports the status of its components to the MOH on the alarm card. The MOH instances communicate with one another over MCNet. MOH is discussed further in Chapter 6.
Processor management services (PMS) software extends the Netra CT platform services software to address the requirements of high-availability application frameworks. PMS software enables client applications to manage the operation of CPU board elements within a single Netra CT system or within a cluster of multiple Netra CT systems.
PMS supports high availability by monitoring a processor element for fault conditions such as OS hangs, deadlocks, and panics. The alarm card provides a server-level view showing the state of each CPU card as a plug-in unit. PMS services are enabled separately on the alarm card and on the host CPU, and are discussed further in Chapter 7.
MCNet uses the cPCI backplane on the Netra CT platform to provide an Ethernet-like interface to the CPU cards and the alarm card.
The Solaris MCNet driver provides a standard DLPI v2 interface to higher-level protocols and applications. Once plumbed, it appears like any other network interface in Solaris.
This Solaris library provides a method for publishing platform-specific information that clients can access in a platform-independent way. PICL is discussed further in Chapter 8.
The Java Dynamic Management Kit (JDMK) development package provides a framework of managed objects and their associated interfaces. SNMP uses a management information base (MIB), which defines managed objects for the elements within the Netra CT server platform. The managed objects are abstract representations of the resources and services within the system. The following interfaces can be used to manage the Netra CT system.
The netract agent supports the following parts of the MIB:
The netract agent operates on the alarm card, the host CPU card, and the satellite CPU cards in a distributed manner. Each agent instance provides an SNMP version 2 interface and Netra CT-specific instrumentation monitoring.
The netract agent uses JDMK services to support common client/server protocols, including Remote Method Invocation (RMI), the mechanism used to support remote (distributed) access to the managed object hierarchy (MOH).
PMS can run on the alarm card and on the host and satellite CPUs. To develop applications that use PMS on the alarm card, you need the Solaris 9 OS, ChorusOS 5.0, a C compiler, the PMS API, and the libraries described in Chapter 7.
To develop applications that use PMS on the host and satellite CPUs, you need the Solaris 9 OS, a C compiler, the PMS API, and the libraries described in Chapter 7.
For more information about ChorusOS refer to the ChorusOS 5.0 Features and Architecture Overview (806-6897).
To develop applications that interface with MOH or SNMP, you need the Solaris 9 OS, the Java Virtual Machine (JVM), the Java Dynamic Management Kit (JDMK), and the Netra CT agent library. For more information about JDMK, refer to the Java Dynamic Management Kit 4.2 Tutorial (806-6633).
To develop applications to run on the host or satellite CPU cards, you need the Solaris 9 OS to access services such as the dynamic reconfiguration (DR) framework and the platform information and control library (PICL) API. Standard Solaris tools such as the cfgadm(1M) command enable service operations such as configuring and unconfiguring system FRUs.