This chapter provides overviews of N1™ Provisioning Server architecture and components, concepts, software, security, and the implementation and installation process.
The N1 Provisioning Server consists of hardware components, such as one or more blade system chassis, server blades, servers, and switches, together with the N1 Provisioning Server software. N1 Provisioning Server software combines your computing and networking resources into a contiguous automated fabric of infrastructure called an I-Fabric, and controls how I-Fabric components interoperate.
N1 Provisioning Server software enables you to manage and control I-Fabric components, and to partition, allocate, and assign server blades to specific accounts that are known as logical server farms. I-Fabric resources are dedicated to a server farm until returned to the common resource pool. With root access to devices, you can deploy any software or application onto the server blades within a farm. Secure partitions enforced by N1 Provisioning Server software and methodologies enable you to exercise independent administrative control over each farm.
The following sections provide descriptions of the physical and logical components of an N1 Provisioning Server Blades Edition system.
The following diagram is an example of the hardware that comprises a typical N1 Provisioning Server system.
The following sections describe the hardware components shown by the above diagram.
Each blade system chassis contains the following components:
One or two chassis switch and system controllers (SSCs). An SSC must be installed in SSC0 in each chassis.
One or more of the following server blades:
B100s: SPARC architecture, Solaris Operating System
B100x: Single processor x86 architecture, Solaris x86 or Linux operating system
B200x: Dual processor x86 architecture, Solaris x86 or Linux operating system
The B200x blade occupies two chassis slots and is treated as an unmanaged device.
B10n: Content Load balancing blade
B10p: SSL Proxy blade
The SSL proxy blade is treated as an unmanaged device.
Each blade system chassis can support up to 8 B200x server blades, or 16 single-slot server blades.
The control plane server hosts all N1 Provisioning Server software, which includes the control plane software, the control plane database (CPDB), the Control Center software and its database, and, in a standard installation, the N1 Provisioning Image Server.
The Control Center Management PC provides access to the Control Center software through a web browser-based user interface. The Control Center is used to design and deploy logical server farms, and to define characteristics such as network topology, storage requirements, monitors, and alerts. Monitoring definitions are saved in the Monitoring Mark-Up Language (MML).
The N1 Image Server (N1 IS) stores operating system disk images for each type of server blade in a chassis, and loads the disk images onto server blades using JumpStart™ or Flash archives, depending on the type of server blade and operating system. The image server is typically installed on the control plane server. If desired, the image server can be installed on a separate machine.
For best results, use a Gigabit copper Network Interface Card (NIC) for the image server.
The control plane switch connects all management and control interfaces on a designated control subnet and virtual local area network (VLAN). The control plane switch is optional only for a single blade system chassis installation in which the chassis contains a single switch and system controller (SSC). The control plane switch is required for an installation if any chassis contains two SSCs or if there is more than one chassis.
The data plane switch provides connectivity between the control plane server, the N1 image server, the blade system chassis SSCs and server blades, and your network.
The following diagram shows a representative example of the N1 Provisioning Server after N1 Provisioning Server software has been installed.
The following sections describe the logical components of the N1 Provisioning Server, Blades Edition.
The Resource Pool contains one or more blade system chassis. Each chassis contains server blades that you can provision to a server farm. The resource pool within an I-Fabric starts out as a blank physical infrastructure with no predefined logical structure. The infrastructure can be configured into many different logical structures under the control of the N1 Provisioning Server software. The different logical structures, called logical server farms, are dynamic and securely partitioned.
The following diagram shows an example of the Resource Pool (unallocated server blades) and two farms (allocated server blades).
Each server blade in a farm is allocated to the farm as an individual server, and securely networked to prevent access from other server farms. When the user is finished using a farm, the server blades that were assigned to the farm are returned to the Resource Pool.
The control plane provides intelligence, management, and control of an I-Fabric. The N1 Provisioning Server software, providing the intelligence that enables an I-Fabric, resides within the control plane. The control plane consists of all N1 Provisioning Server software and hardware, third-party software and hardware, and the N1 Provisioning Server databases. The control plane does not include the resource pool or the fabric layer. If desired, you can also connect an optional terminal server to the control plane to provide access to all devices' console ports.
The control plane resides on a private virtual local area network (VLAN) that ensures that the control plane is securely partitioned from access by unauthorized servers or any external network traffic. N1 Provisioning Server software manages devices within an I-Fabric through secure out-of-band connections over Ethernet or serial connections.
The control plane software automates the configuration of the Ethernet switch connections and assignment of VLANs to the I-Fabric components. The automated management of VLANs enables you to securely add or remove devices in the resource pool from any network topology designed through the Control Center. Additional security is provided by the assignment of one or more VLANs to a farm. A VLAN assigned to one farm cannot be used by a different farm.
The N1 Provisioning Server VLAN assignments are as follows:
VLAN 1 – reserved
VLAN 2 – reserved
VLAN 3 – reserved
VLAN 4 – assigned to I-Fabric devices that are not allocated to a farm
VLAN 5 – reserved
VLAN 6 – reserved
VLAN 7 – reserved
VLAN 8 – assigned to disk image transfers from the N1 image server to server blades
VLAN 9 – assigned to control plane command traffic
VLANs 10 through 255 – available for farm allocation
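The VLAN assignment policy above can be sketched as a small allocator. The following Python sketch is illustrative only; the class and method names are hypothetical and are not part of the N1 Provisioning Server software.

```python
# Hypothetical sketch of the VLAN assignment policy described above:
# VLANs 1-9 are reserved or pre-assigned, and VLANs 10-255 are
# available for farm allocation, with each VLAN owned by one farm only.

RESERVED_VLANS = {
    1: "reserved", 2: "reserved", 3: "reserved",
    4: "I-Fabric devices not allocated to a farm",
    5: "reserved", 6: "reserved", 7: "reserved",
    8: "disk image transfers (image server to server blades)",
    9: "control plane command traffic",
}
FARM_VLAN_RANGE = range(10, 256)  # VLANs 10 through 255

class VlanAllocator:
    def __init__(self):
        self._in_use = {}  # VLAN id -> farm name

    def allocate(self, farm):
        """Assign the lowest free farm VLAN; a VLAN belongs to one farm only."""
        for vlan in FARM_VLAN_RANGE:
            if vlan not in self._in_use:
                self._in_use[vlan] = farm
                return vlan
        raise RuntimeError("no farm VLANs available")

    def release(self, vlan):
        """Return a VLAN to the pool when its farm is deleted."""
        self._in_use.pop(vlan, None)
```

A freed VLAN becomes available to other farms again, mirroring how resources return to the common pool.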
The fabric layer contains the networking infrastructure that ties the resource pool together. The switched fabric consists of industry-standard Ethernet switching components that provide connectivity to devices within the resource pool and connectivity to internal networks, and optionally, the Internet.
The Ethernet switches provide connectivity to devices within the resource pool as well as network connectivity to the Internet or internal networks. Through the automated management of VLANs on an Ethernet switch, you can add or remove devices in the resource pool from any network topology designed using the Control Center.
This section provides summaries of the major N1 Provisioning Server logical components.
Administrative functionality for N1 Provisioning Server software and an I-Fabric is available in two forms: through the Administration screen within the Control Center, or through a set of command-line tools that interface directly with the Control Center.
The Administration screen is the central point of administration within the Control Center. Using the Control Center from the Control Center Management PC, you can define classes of users that have access to the administration screen and its associated functionality. From the Control Center Administration screen, you have a comprehensive view of all users and logical server farms within an I-Fabric. You can do the following tasks from the Control Center Administration screen:
Create and delete logical server farms
Create and delete accounts
Set usage limits
Set user and administrator access privileges
Add and remove logical server farms
Add images to the image repository
Remove images from the image repository
Create and remove contracts
Publish pertinent news items to accounts
You also can manage security rights and administration privileges from the Administration screen. The Control Center has three levels of access privileges:
User Level – A standard user level that permits access to logical server farms within an account
Account Manager Level – A manager within an account that permits the ability to add and delete users within an account
Administrator – The highest level of access that permits access to the entire I-Fabric (including the control plane) as well as all accounts
For more information about the Control Center, see N1 Provisioning Server 3.1, Blades Edition, Control Center Management Guide.
For more information about access privileges, see Applying Role-Based Access Control in N1 Provisioning Server 3.1, Blades Edition, System Administration Guide.
The command-line tools provide an interface to the Control Center that is used for managing an I-Fabric in conjunction with the Control Center administration functionality. The tools offer a more granular level of control, and also provide an interface for accessing devices and configuration data.
The tools are commonly used to view and track resources within an I-Fabric. Using the command line tools, you can:
Check the state of any devices within an I-Fabric
Trace details, such as physical Ethernet connectivity, from the network interface port in the device back to the physical port on the Ethernet switch within an I-Fabric
Track and manage the logical assignment of physical devices and ports to logical server farms
Manage VLANs and subnets within logical server farms
Update the physical resource pool of an I-Fabric
When a device, such as a server blade, is added to an I-Fabric, the command-line tools facilitate the wiring and configuration auditing required for integrating the new device into the available resource pool. Command-line tools also assist in the management of software images, the reconfiguration of devices, and the activation and updating of logical server farms.
For a list of the available command-line tools and a brief description of each tool, see Appendix B, Command-Line Tools in N1 Provisioning Server 3.1, Blades Edition, System Administration Guide.
An important aspect of the design of N1 Provisioning Server software is the virtualization provided for all the hardware devices within the resource pool of an I-Fabric. This virtualization enables the rapid and dynamic association of devices to network connectivity and provides the capability to create a logical server farm from a pool of physical devices within an I-Fabric. Virtualization of network connectivity provides the foundation for deploying drag-and-drop connectivity between devices that can then be logically wired together.
Virtualization of the network provides security, and enables the transparent management, configuration, and allocation of network devices. N1 Provisioning Server software utilizes VLANs and automates all aspects of VLAN configuration to enable network virtualization.
Network virtualization provides two distinct benefits:
Customized virtual wiring is created for each logical server farm. N1 Provisioning Server Network virtualization enables you to create arbitrary network topologies, associate subnet addresses, and assign IP addresses to servers and network devices placed on the subnets. You can add and remove resources from the logical server farm while automatically configuring newly added and existing devices in the logical server farm as necessary.
For provisionable devices, the N1 Provisioning Server software performs secure partitioning at the Layer 2 network layer by taking sets of network ports on a large-scale switched fabric and placing them on a protected Layer 2 virtual network. Each virtual network uses physical port-based virtual local area network (VLAN) technology built into current generation Layer 2 switches.
The control plane, switched fabric, and resource pool work together to dynamically create logical server farms within an I-Fabric. Logical server farms are securely allocated from the Resource Pool and managed by N1 Provisioning Server software. N1 Provisioning Server software creates server farms from the resources available within the Resource Pool. Logical server farms are built using the same physical resources as traditional server farms but they are established and managed under the flexible control of N1 Provisioning Server software. Logical server farms are analogous to traditional, manually built, dedicated server farms except that you can create, grow, shrink, and delete them as data structures that reside within N1 Provisioning Server software.
Logical server farms have the same performance and control characteristics as traditional server farms. N1 Provisioning Server software is not in the data path and does nothing to limit the performance of the devices or prevent the logical server farm from running at wire speed.
Secure partitions enforced by N1 Provisioning Server software and methodologies enable you to exercise independent administrative control over each logical server farm. Even though the user of a specific logical server farm has full administrative access on all devices within that farm, the user cannot view, access, or modify the devices or data associated with a different logical server farm.
The following graphic illustrates the life cycle of a logical server farm in the Control Center.
D – Design State
A – Active State
S – Standby State
I – Inactive State
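The life cycle above can be sketched as a small state machine. The allowed transitions in this Python sketch are an assumption for illustration only; consult the Control Center Management Guide for the authoritative life cycle.

```python
# Illustrative state machine for the farm life cycle states shown above.
# The permitted transitions are assumptions for illustration: a designed
# farm is activated; an active farm can be placed in standby or made
# inactive; a standby farm can be reactivated or made inactive.

TRANSITIONS = {
    "design": {"active"},
    "active": {"standby", "inactive"},
    "standby": {"active", "inactive"},
    "inactive": set(),
}

class Farm:
    def __init__(self, name):
        self.name = name
        self.state = "design"  # every farm starts in the Design state

    def transition(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(
                f"cannot move {self.name} from {self.state} to {new_state}")
        self.state = new_state
```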
For more details on how to manage logical server farms, see Chapter 4, Building, Updating, and Monitoring Server Farms in N1 Provisioning Server 3.1, Blades Edition, Control Center Management Guide.
A logical server farm within an I-Fabric is constructed from a number of basic building blocks. Capturing a logical description of these building blocks and their interrelationships enables the creation of a digital blueprint that specifies a farm's logical structure. This logical blueprint facilitates the automation of many manual tasks involved in constructing logical server farms.
N1 Provisioning Server software uses the following three description languages to capture logical descriptions of server farms:
Farm Mark-Up Language (FML)
FML is an XML dialect used to represent the logical blueprint of a logical server farm. FML is scalable and capable of describing, with a high degree of abstraction, network and configuration data for servers within a logical server farm.
The general structure of FML is to describe an I-Fabric as a structure composed of sets of devices that have both connectivity as well as configuration-related information. The connectivity information describes how these various devices are interconnected, for example, how device Ethernet ports are connected to specific subnets and VLANs. In addition to devices and their interconnectivity, FML provides the ability to describe roles that servers may occupy within a logical server farm, for example, a web server, database server, and application server. This ability enables the Control Center to deploy multiple instances of a given server within a logical server farm.
FML also enables the replication of entire logical server farms. Such replication might be required for creating site mirrors at different geographic locations, implementing business continuance solutions, or for creating a testing and staging area for a future version of a logical server farm.
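As a rough illustration of the kind of blueprint FML captures, the following Python sketch builds a small XML document describing devices, their roles, and their subnet connectivity. The element and attribute names here are invented for illustration; they are not the actual FML vocabulary.

```python
import xml.etree.ElementTree as ET

# Build a toy farm blueprint in the spirit of FML. The element names
# (farm, subnet, server, interface) and attributes are hypothetical,
# not the real FML schema: they show roles plus connectivity only.
farm = ET.Element("farm", name="example-farm")
ET.SubElement(farm, "subnet", cidr="10.0.1.0/24", vlan="10")

web = ET.SubElement(farm, "server", role="webserver")
ET.SubElement(web, "interface", subnet="10.0.1.0/24")

db = ET.SubElement(farm, "server", role="database")
ET.SubElement(db, "interface", subnet="10.0.1.0/24")

blueprint = ET.tostring(farm, encoding="unicode")
```

Because each server carries a role rather than a fixed identity, a tool consuming such a blueprint could deploy multiple instances of the same role, which is the property the text describes.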
Monitoring Mark-Up Language (MML)
MML is an XML dialect that describes monitor deployments and configurations as defined using the Control Center. MML describes monitoring configurations as they pass from the Control Center to the provisioning server.
Wiring Mark-Up Language (WML)
WML is an XML dialect that describes the physical wiring characteristics of an I-Fabric. The difference between FML and WML is that FML describes the logical wiring and layout of a logical server farm, whereas WML describes the physical wiring of all the devices present within an I-Fabric.
N1 Provisioning Server software runs with the following network packages:
Packet filtering – TSPRipf
The TSPRipf tool filters IP packets based on configurable packet characteristics, such as protocol, port number, source address, or destination address. Each service processor has one packet filtering tool installed to prevent malformed or malicious packets from one account's network from entering another account's network or the Control Center network. The tool is statically configured by the Control Center at installation time. The default configuration denies any packets not specifically used by the Control Center.
Network API – TSPRnetcf
This API defines the Java™ interfaces for networking configuration on the Control Center server. The network API supports the DHCP and DNS protocols.
The DHCP protocol implementation is based on the public domain package from the Internet Software Consortium (http://www.isc.org). The service processor uses the DHCP facility to configure the servers in a logical server farm with their hostname and IP addresses. The DHCP configuration information for a logical server farm is stored in the control plane database (CPDB) for persistence and ease of migrating a logical server farm from one service processor to another. The information in the CPDB is used to create the DHCP configuration file /etc/dhcpd.conf at logical server farm activation time.
In the service processor, the TSPRdhcp utility assigns IP addresses and parameters to hosts, thus enabling the setup of IP addresses and parameters without having to modify or reboot the host. The utility does not allocate IP addresses. IP addresses are allocated by the Farm Manager.
Do not edit the dhcpd.conf file. dhcpd.conf is maintained by the N1 Provisioning Server software.
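The generation step described above can be sketched as follows. The record layout and function name in this Python sketch are assumptions for illustration; the real configuration file is produced and maintained by the N1 Provisioning Server software.

```python
# Sketch of generating DHCP host declarations from farm records at
# activation time. The (hostname, mac, ip) record layout stands in for
# the CPDB data; the output uses standard ISC dhcpd host-entry syntax.

def dhcp_host_entries(farm_hosts):
    """farm_hosts: list of (hostname, mac, ip) tuples for one farm."""
    lines = []
    for hostname, mac, ip in farm_hosts:
        lines.append(
            f"host {hostname} {{\n"
            f"  hardware ethernet {mac};\n"   # blade's MAC address
            f"  fixed-address {ip};\n"        # IP allocated by the Farm Manager
            f"}}"
        )
    return "\n".join(lines)
```

Because the file is regenerated from the CPDB at each activation, any manual edit would be lost, which is why the warning above applies.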
The DNS protocol implementation is based on the public domain package from the Internet Software Consortium (http://www.isc.org). The service processor uses the DNS facility for hostname resolution for servers and network devices in a logical server farm. The service processor that owns the logical server farm also serves as the DNS server for the logical server farm. The DNS information is stored in the CPDB for persistence and ease of migrating a logical server farm from one service processor to another. The information in the CPDB is used to create the DNS configuration file /etc/named.conf at logical server farm activation time.
Do not manually edit the named.conf file. The named.conf file is dynamically updated by the service processor.
Hardware Abstraction Layers (HALs) are sets of application programming interfaces (APIs) that provide device independence for the Control Center software. HALs are used to automate the interaction with physical devices within an I-Fabric. The HAL module translates abstract Control Center actions into device-specific commands. HALs might provide interfaces to specific manufacturer's Ethernet switches.
Because the Control Center software deals with only the abstract behavior of the device, HALs enable the Control Center software to manage different devices that exhibit the same overall behavior but might differ in how they are configured and managed. This difference could exist because the equipment is from different manufacturers or because of differences between current and next-generation products.
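The HAL pattern described above can be sketched as an abstract interface with vendor-specific implementations. The class and method names in this Python sketch are hypothetical; they only illustrate the translation of abstract actions into device-specific commands.

```python
# Illustrative HAL sketch: the Control Center works against an abstract
# switch interface, and a per-vendor module translates each abstract
# action into that device's own configuration commands.

from abc import ABC, abstractmethod

class SwitchHAL(ABC):
    """Abstract switch behavior the Control Center relies on."""

    @abstractmethod
    def assign_port_to_vlan(self, port, vlan):
        """Place a physical switch port on a VLAN."""

class ExampleVendorSwitch(SwitchHAL):
    """Hypothetical vendor module. A real HAL would send commands over
    the device's management interface; here they are recorded as strings."""

    def __init__(self):
        self.sent = []

    def assign_port_to_vlan(self, port, vlan):
        self.sent.append(f"set vlan {vlan} port {port}")
```

A switch from a different manufacturer would get its own subclass emitting that device's command syntax, while the Control Center code calling `assign_port_to_vlan` stays unchanged.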
The Provisioning Server software resides on the control plane server and provides the infrastructure automation services required to manage and deploy logical server farms within an I-Fabric. At a high level, the Control Center manages the logical-to-physical mappings between a logical server farm and the physical resources assigned to it. The Control Center also provides an extensive command-line interface (CLI) for I-Fabric and farm management.
The N1 Provisioning Server software provides the following services.
Management of all of the blade system chassis and server blades
DNS resolution for the subnetwork on which it is installed
Management of both internal and external IP addresses and subnets for the N1 Provisioning Server network
N1 Provisioning Server software controls the contents of the /etc/dhcpd.conf and /etc/named.conf files. Any manual edits to these files are overwritten by the N1 Provisioning Server software.
Management of the N1 Provisioning Server virtual local area networks (VLANs)
Distribution and installation of operating system master images to blades in a farm
The installation program includes a base Solaris Operating System image for booting and running server blades. You can make changes to the base image and then take a snapshot of the new image. This snapshot becomes a new image.
The N1 Provisioning Server software does not offer the following functions:
Server blade and SSC firmware maintenance and upgrade
Control plane switch and data plane switch management
The N1 Provisioning Server software manages only the blade system chassis server blades and SSCs. You must connect and configure the control plane and data plane switches before you install the N1 Provisioning Server software.
The Provisioning Server contains the following software components:
Service processor (SP)
Control plane database (CPDB)
Image server
The Service Processor (SP) provides a variety of infrastructure management services such as provisioning, network virtualization, and monitoring. It contains the following subcomponents:
The Segment Manager controls and coordinates activities for an I-Fabric, and is the only entry point to state transitions in the Control Center. The Segment Manager selects and sets the logical server farm ownership at logical server farm activation time, monitors the Farm Manager process, and sends requests to Farm Managers in the I-Fabric. Each time a request for the logical server farm arrives, a Farm Manager is started. There is one Farm Manager process per logical server farm. The Segment Manager starts the Farm Manager process as needed. See the command-line tools man pages for details.
Farm Managers instantiate, monitor, and control activities related to logical server farms. A single service processor instance can contain many different Farm Manager processes. Each Farm Manager is assigned to one logical server farm. Farm Managers are only present when a change in a logical server farm occurs. Farm Managers communicate through the Segment Manager and through information stored and retrieved from the CPDB.
Farm Managers use logical descriptions of logical server farms stored in the CPDB in the form of an FML document to identify all resources required for the logical server farm. Farm Managers request resources from the idle pool of resources such as servers.
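The dispatch pattern described above can be sketched as follows. The class structure and method names in this Python sketch are illustrative only; the actual Segment Manager starts Farm Managers as separate processes.

```python
# Sketch of the dispatch pattern: the Segment Manager is the single
# entry point for requests, and it starts one Farm Manager per logical
# server farm on demand.

class FarmManager:
    """Handles activities for exactly one logical server farm."""
    def __init__(self, farm):
        self.farm = farm
        self.handled = []

    def handle(self, request):
        self.handled.append(request)

class SegmentManager:
    def __init__(self):
        self._managers = {}  # one Farm Manager per farm

    def dispatch(self, farm, request):
        # Start a Farm Manager for this farm if one is not already running.
        fm = self._managers.setdefault(farm, FarmManager(farm))
        fm.handle(request)
        return fm
```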
Dynamic Host Configuration Protocol (DHCP) and Domain Name System (DNS) services
The service processor uses the DHCP facility to configure the servers in a logical server farm with their hostname and Internet Protocol (IP) addresses. The service processor uses the DNS facility for hostname resolution for servers and network devices in a logical server farm.
Storage Manager Client (STMC)
The STMC loads global images onto server blades and administers snapshots. The STMC also provides the interfaces required by the Farm Manager to access the storage functionality. The STMC also contains tools that perform the individual storage functions. These tools are available to any control plane server on which the STMC package is installed.
The control plane database (CPDB) is a persistent, central repository of data that guarantees consistent access and updates of data by using database locks and transactions. The CPDB uses an Oracle database featuring remote access and control. This database contains the following information pertaining to logical server farms, physical devices, and software associated with an I-Fabric:
Properties and connections of devices, such as servers
Logical server farm configurations
Resources, such as VLANs and IP addresses
State of network-specific applications
State of requests
Software images and their state
WML
FML
MML
DNS and DHCP configurations
The request table in the CPDB keeps growing as the Control Center processes requests. By keeping the requests, you can obtain a history of activities in the control plane. You can also manually delete requests that are no longer needed. For more information, see Managing the Request Queue in N1 Provisioning Server 3.1, Blades Edition, System Administration Guide.
The image server manages operating system images. The image server is installed on the control plane server, but can optionally be installed on any standalone server that supports Network File System (NFS) file access.
The Control Center software provides the infrastructure automation services required to manage and deploy logical server farms within an I-Fabric. At a high level, the Control Center manages the logical-to-physical mappings between a logical server farm and the physical resources assigned to it. The Control Center understands the physical topology of the resources deployed within the I-Fabric and provides the capability to deploy and configure these devices to unique topologies and configurations to match account-specific designs created in the Control Center.
The Control Center provides five key areas of infrastructure automation services:
Provisioning and configuration services
Flexing services
Software image management services
Monitoring services
Physical infrastructure management services
Each of these five capabilities is built on a foundation of I-Fabric and security technologies that are leveraged by each service area.
The ability to automatically provision and configure resources within the resource pool of an I-Fabric is a core capability of the Control Center. The following summary of the steps required to activate a logical server farm should help you understand the provisioning and configuration process.
Allocate – The Control Center dispatches a request to the provisioning server to provision and configure resources. When this request is received, resources are randomly allocated from the resource pool and tracked within the CPDB. IP subnets can be allocated from both public and private IP address spaces.
Wiring – Following the physical allocation of resources, the network fabrics for Ethernet connections are configured. This process includes configuring network resources such as IP subnets and VLANs. Images are copied to the servers at this time.
Dispatch – Following the virtual wiring of the logical server farm, DHCP and DNS services are initiated. The Control Center automates the configuration and management of these services. When these services are available, the devices within the logical server farm are powered up through addressable power devices.
Activate – On activation, the logical server farm is monitored to enable automated failover services.
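The four activation steps above can be sketched as a pipeline. The function name, record layout, and event strings in this Python sketch are illustrative only.

```python
# Sketch of the farm activation pipeline: Allocate, Wiring, Dispatch,
# Activate. Each step records an event for visibility.

def activate_farm(farm, resource_pool):
    events = []
    # Allocate: draw resources from the pool; real allocations are
    # tracked in the CPDB.
    blades = [resource_pool.pop() for _ in range(farm["servers"])]
    events.append(f"allocated {len(blades)} blades")
    # Wiring: configure subnets and VLANs, and copy images to servers.
    events.append(f"wired vlan {farm['vlan']}")
    # Dispatch: start DHCP/DNS services, then power up the devices.
    events.append("dhcp/dns started; devices powered up")
    # Activate: begin monitoring to enable automated failover.
    events.append("monitoring enabled")
    return blades, events
```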
The Control Center manages and automates the ongoing evolution of logical server farms as well as their initial activation. As resources are added to or deleted from logical server farms, the Control Center continues to manage and automatically configure all wiring as well as DHCP and DNS services.
Flexing is the ability to add or delete capacity on a logical server farm. N1 Provisioning Server software rapidly and automatically provisions and configures resources. You can apply flexing to address temporary surges in demand or to adjust capacity on a long-term basis. In either case, flexing enables you to employ infrastructure resources more efficiently. The N1 Provisioning Server software provides two types of flexing services:
Adding and deleting individual servers within a logical server farm
Adding and deleting server groups through a server group mechanism
You can add or delete servers from an active logical server farm at any time. Servers are added from the Control Center by dragging the server icon into the existing logical server farm design and attaching it to the appropriate subnet. All DNS and DHCP services are automatically configured. Adding an additional server does not require you to reinitiate the farm activation process. You also can delete servers by using the Control Center.
The server group is a unique logical structure supported within N1 Provisioning Server software. Server groups enable rapid flexing of servers by associating a predefined role or image for all servers within the group. All servers in a server group are considered identical and start off with the same software image. This software image is a global image that is replicated for every server within the server group.
When a server group is flexed up, the global image associated with the server group is automatically stored onto each server added to the group. Although you can make changes to individual servers within a server group, those changes will not be reflected in a flex operation unless you have updated the designated global image. When a server group is flexed down, the servers and their associated storage are returned to the resource pool. Server group flexing is done through the Control Center server configuration dialog box.
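Server-group flexing as described above can be sketched as follows. The class and method names in this Python sketch are hypothetical; the real operations are performed through the Control Center.

```python
# Sketch of server-group flexing: every server added to a group boots
# from a replica of the group's designated global image, and flexing
# down returns servers (and their storage) to the resource pool.

class ServerGroup:
    def __init__(self, global_image):
        self.global_image = global_image
        self.servers = []  # (blade, image) pairs

    def flex_up(self, pool, count):
        for _ in range(count):
            blade = pool.pop()
            # Each new server receives a replica of the global image.
            self.servers.append((blade, self.global_image))

    def flex_down(self, pool, count):
        for _ in range(count):
            blade, _image = self.servers.pop()
            pool.append(blade)  # blade returns to the resource pool
```

Note that per-server changes are not carried into a later flex-up unless the designated global image itself is updated, matching the caveat above.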
The Control Center manages software images, and the configurations of servers and switches. The Control Center supports creation and management of two categories of images: global and account images.
Global images typically contain baseline operating systems and monitoring software that have been configured to work with an I-Fabric. The purpose of deploying global images is to make available a set of baseline boot images that are accessible across different accounts and different farms in an I-Fabric. Based on the global images, you can then create new server images for subsequent modification and configuration. You must have administrator-level access to the N1 Provisioning Server to create global images. You can create global images only through the Control Center CLI. In addition to an operating system and monitoring software, you might choose to include other software components in an image.
An I-Fabric supports images based on the Solaris 8, Solaris 9, and Red Hat™ Linux 2.1 operating environments.
Account images are for a particular account and consist of account-specific customizations of one of the following items:
Global images
Blank disks
Application and data images
These images are the result of a snapshot of a disk in use within a logical server farm. The resulting images are available for use by farms within an account. Images that have account-specific software can be either global or account images. Their classification depends on their manner of creation, that is, whether they are created by modifying an existing global image or by taking a snapshot of a disk in use within a logical server farm. Thus, you can create identical images by either method, and the images are considered distinct solely on the basis of how they were created.
The N1 Provisioning Server software package comes with baseline Solaris 8 and Solaris 9 operating system images that you can copy using the snapshot tool and customize.
Using the snapshot tool available from the Control Center, you can capture software images to be stored in an image library and use them to subsequently configure similar devices. You can use these images for global or account images. A disk snapshot is the logical equivalent of making a master copy of a local disk image. The original image is stored in an image library and a reference to the image is entered in the CPDB. Depending on the I-Fabric configuration, images reside on the local disk or on a remote NFS file server. Snapshot images are named and catalogued in the Control Center image library. The image library is listed in the Control Center server configuration dialog box. You can choose from prebuilt images to be associated with a server or server group.
You can take a snapshot of any software image associated with any server (individual servers as well as a specific server within a server group). The snapshot function automatically shuts the server down to ensure that the resulting image is a stable, production-ready replication of the original image. After the snapshot is completed, the Control Center reboots the server automatically.
The snapshot function enables functionality such as server flexing and server failover. If a server fails, the system can automatically replace the failed server with a substitute by using the last snapshot of the failed server to create the image for the new server.
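The failover behavior described above can be modeled as: restore the failed server's most recent snapshot onto a spare taken from the resource pool. The following is a minimal sketch only; names such as `take_snapshot` and `replace_failed_server`, and the dictionary data shapes, are illustrative assumptions, not N1 Provisioning Server APIs.

```python
# Hypothetical model of snapshot-based failover. The "library" maps a
# server name to its last snapshot; the "pool" is a list of idle spares.

def take_snapshot(library, server):
    """Record the server's current image as its latest snapshot."""
    library[server["name"]] = dict(server["image"])  # master copy of the disk image

def replace_failed_server(library, pool, failed):
    """Allocate a spare from the pool and restore the last snapshot onto it."""
    if failed["name"] not in library:
        raise LookupError("no snapshot available for " + failed["name"])
    spare = pool.pop()                               # take an idle server from the pool
    spare["image"] = dict(library[failed["name"]])   # restore the last snapshot
    spare["role"] = failed["name"]                   # spare assumes the failed server's role
    return spare
```

The point of the sketch is the ordering: the snapshot must already exist in the library before a failure occurs, which is why the snapshot function shuts the server down first to guarantee a stable image.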
The Control Center actively monitors the state and health of devices in an I-Fabric. Monitoring provides visibility into an I-Fabric and supports failover, recovery, and the restarting of failed processes.
The Control Center enables the following farm monitoring capabilities within an I-Fabric:
Availability of resource pool devices (server farms), which enables automatic failover and the display of these devices in the Control Center monitoring screen.
Basic performance monitors (disk, CPU, memory) for servers. You can display the monitoring information in the Control Center.
Changes in farm or device status, state, and configuration, which are recorded in event logs and can optionally be made available externally through Simple Network Management Protocol (SNMP) traps.
Farm state change events include activating, deactivating, and placing a farm in standby mode.
Farm configuration change events include adding, removing, and reconfiguring farm servers.
Device state change and failure events include server UP and server DOWN messages.
Messages for events comprise three categories:
Informational messages, such as device availability or failure
Farm messages related to devices of a specific farm
Billing messages
Monitoring messages are forwarded to the service processor. The service processor then sends the messages to a central message repository in the CPDB. You can view monitoring data using the Control Center monitoring screen. You can also configure monitoring events for farm server utilization, such as disk and CPU, by using the Control Center monitoring screen.
Optionally, you can configure the CPDB to forward messages to an external network management system (NMS). An SNMP connection and a management information base (MIB) extension are required for forwarding messages to an external NMS.
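The flow described above (categorized messages collected into a central repository, with optional forwarding to an external NMS) can be sketched as follows. This is an illustrative model only; the category names are taken from the list above, but `record_event`, `nms_hook`, and the message structure are assumptions, not product interfaces.

```python
# Illustrative model of the monitoring message flow: each event is tagged
# with one of the three message categories and stored in the central
# repository (the CPDB in the real system), with an optional hook that
# stands in for forwarding the message to an external NMS as an SNMP trap.

CATEGORIES = ("informational", "farm", "billing")

def record_event(repository, category, text, nms_hook=None):
    """Store a monitoring message and optionally forward it externally."""
    if category not in CATEGORIES:
        raise ValueError("unknown message category: " + category)
    message = {"category": category, "text": text}
    repository.append(message)        # central message repository
    if nms_hook is not None:          # optional external forwarding
        nms_hook(message)
    return message
```

In the real system the external path additionally requires an SNMP connection and a MIB extension on the NMS side; the hook here only marks where that forwarding would occur.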
Additional tools for monitoring system health include operating system and Control Center commands. For details regarding system health monitoring, see Chapter 4, Monitoring and Messaging in N1 Provisioning Server 3.1, Blades Edition, System Administration Guide.
As a part of the N1 Provisioning Server initialization process, the Control Center performs resource and wiring validation. This validation enables the Control Center to have a complete physical topology map of all resources within an I-Fabric. The wiring validation provides an automated way of confirming the physical wiring map of equipment in a given data center. The Control Center's ability to successfully manage the virtual wiring of a logical server farm relies on the integrity of the physical wiring of the resources within an I-Fabric. Automating this physical wiring validation removes a common source of errors in an I-Fabric, namely the potential for human error caused by incorrectly cabling the physical infrastructure.
The Control Center uses this wiring data to make resource allocation decisions. Physical infrastructure data is stored in a database that you can access using the Control Center CLI.
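Conceptually, wiring validation is a comparison of an expected wiring map against the topology actually discovered during initialization. The sketch below assumes both are expressed as mappings from a device port to its connected peer port; the function name and data shapes are illustrative, not part of the product.

```python
# Minimal sketch of wiring validation: report every port whose discovered
# cabling differs from the expected wiring map, including cables that the
# map does not know about.

def validate_wiring(expected, discovered):
    """Return the set of ports whose cabling differs from the wiring map."""
    mismatches = set()
    for port, peer in expected.items():
        if discovered.get(port) != peer:   # missing or mis-cabled connection
            mismatches.add(port)
    for port in discovered:
        if port not in expected:           # unexpected extra cable
            mismatches.add(port)
    return mismatches
```

An empty result means the physical cabling matches the map, which is the precondition the Control Center relies on before making resource allocation decisions.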
By default, an I-Fabric is configured to apply a high level of security at all levels. You can configure I-Fabric security according to your company's needs by using any suitable combination of security levels as described in the following sections.
An I-Fabric provides several levels of security throughout the infrastructure to ensure that each logical server farm is secure from intrusion or attack from within or outside the I-Fabric. Security solutions have been implemented at the following levels within the I-Fabric:
Password encryption
Control plane
Control center
Resource pool
Ethernet
VLAN
Physical network
Network virtualization
Logical server farms
External Ethernet port connections
Password encryption is provided at all levels within the I-Fabric for security purposes. You can configure the system to use clear-text passwords. However, clear-text passwords can be intercepted in transit and are not recommended.
The server responsible for running the N1 Provisioning Server software resides within the control plane. The security of this server depends significantly on the deployment architecture of the servers and network responsible for running the N1 Provisioning Server application. The I-Fabric design provides a secure methodology for deploying the N1 Provisioning Server software.
Depending on the management requirements of an I-Fabric, you can deploy the Control Center without connectivity to external networks or to the Internet. Control Center security is implemented at several levels. For further information, see Provisioning Server Security. The Control Center communicates with the devices it manages through a privileged VLAN that is not available from outside of the I-Fabric.
Control Center security prevents tampering from within the I-Fabric. Security for Control Center software is implemented by using dedicated VLANs. For further information, see Ethernet Security.
The following list describes the three types of connections to the Control Center, each of which has security measures in place:
Web access secured by Secure Socket Layer (SSL)
The Control Center uses SSL security (high-strength, 128-bit encryption) with login and password validation. The Control Center can be deployed with or without connectivity to external networks or to the Internet.
A private, separately secured connection to the monitoring tool.
The Control Center performs database, monitoring, and management operations through a monitoring agent.
A private, separately secured connection to each I-Fabric managed by the Control Center.
The Control Center communicates with an I-Fabric using FML, an XML-based dialect, through a dedicated, port-based VLAN that is not available from outside of an I-Fabric.
Secure access to the Control Center is based on login accounts. These login accounts provide security from accounts outside a company as well as inside a company. An account may have one of the following login roles assigned to it, depending on the user's job functions:
User is a technical user who can create farms and make changes to the state of any farm in the account.
Account Manager is a user who has the same access privileges as User and the added ability to add and remove Users from their accounts.
Administrator is an administrative user who has full access to the entire application, including the configuration of the application and operational access to every account and farm within the Control Center. Administrators do not belong to any account.
For more details about accounts, see the Control Center Management Guide.
The Control Center processes login name and password changes. You are responsible for issuing the initial name and password to the users of an account. The Control Center network system automatically verifies passwords.
By default, users are locked out of the Control Center if their login attempts fail a configurable number of times within a configurable number of minutes. The lock is automatically released after another configurable number of minutes. However, you can use the Control Center Login Status screen to unlock users before the automatic unlock process takes place. This screen also enables you to force-lock existing users if a security issue involving a user becomes apparent. You can also unlock or force-lock another administrator by using the same method. See the Control Center Management Guide.
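The lockout policy above (a configurable number of failures within a configurable window, with automatic release after a further interval and an administrator override) can be sketched as a small state machine. This is a hedged illustration of the described behavior, not the product's implementation; the class name, parameter names, and defaults are assumptions.

```python
import time

# Illustrative model of the login lockout policy: lock after max_failures
# failed attempts within window_s seconds, release automatically after
# lock_s seconds, and allow an administrator to force an unlock.

class LoginLock:
    def __init__(self, max_failures=3, window_s=300, lock_s=900, clock=time.monotonic):
        self.max_failures, self.window_s, self.lock_s = max_failures, window_s, lock_s
        self.clock = clock
        self.failures = []        # timestamps of recent failed attempts
        self.locked_until = None

    def record_failure(self):
        now = self.clock()
        # keep only failures inside the configurable window
        self.failures = [t for t in self.failures if now - t < self.window_s]
        self.failures.append(now)
        if len(self.failures) >= self.max_failures:
            self.locked_until = now + self.lock_s   # automatic lock

    def is_locked(self):
        if self.locked_until is None:
            return False
        if self.clock() >= self.locked_until:
            self.locked_until = None                # automatic release
            self.failures = []
            return False
        return True

    def force_unlock(self):                         # administrator override
        self.locked_until = None
        self.failures = []
```

Injecting the clock makes the policy testable without waiting out the real lock interval, which is also why the three intervals are constructor parameters rather than constants.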
When a software or hardware failure occurs during a session on the Control Center, users must log in again when reaccessing the Control Center.
Transactions performed using the Control Center are securely encrypted using Hypertext Transfer Protocol Secure (HTTPS). External access to the Control Center is filtered at all points using IP filtering to ensure secure web access.
The ability to repurpose servers over time as they come in and out of the resource pool presents security challenges. Server integrity is protected by power cycling and scrubbing the storage and memory of all servers before they are added to a resource pool.
Within the Ethernet portion of the switched fabric, logical server farms are implemented using port-based virtual local area networks (VLANs). From a security perspective, port-based addressing provides a superior implementation when compared to VLAN implementations that are defined by Media Access Control (MAC) or IP addresses. This enhanced security is due to devices being connected physically through the switch rather than through logical addresses. The implementation of a network virtualization layer eliminates the possibility of VLAN hopping, IP spoofing, or control of VLAN membership from outside the Control Center.
To prevent IP spoofing attempts, an incoming IP packet on a VLAN must have the same VLAN tag and MAC address as the logical interface on which it is arriving. The Control Center sets VLAN tags for the appropriate ports and networks.
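The anti-spoofing rule just described amounts to a per-packet check against the logical interface's configured identity. The sketch below assumes each logical interface records its VLAN tag and MAC address; the function and record shapes are illustrative only.

```python
# Sketch of the anti-spoofing rule: an incoming packet is accepted only
# when both its VLAN tag and its MAC address match the logical interface
# on which it arrives.

def accept_packet(interface, packet):
    """Accept only packets whose VLAN tag and MAC match the logical interface."""
    return (packet["vlan_tag"] == interface["vlan_tag"]
            and packet["mac"] == interface["mac"])
```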
To ensure that the Control Center is protected from unauthorized access from within the I-Fabric, the control plane server on which the Control Center software runs resides within its own dedicated port-based VLAN. This architecture physically eliminates the possibility of unauthorized access to the Control Center from within the I-Fabric. Logical server farm users cannot manipulate their own or any other logical server farm's VLAN configuration.
Server blades within an I-Fabric are dedicated to only one unique logical server farm at any time. While servers may be added or subtracted from a particular logical server farm over its life cycle, no single physical server blade will ever be used by more than one logical server farm simultaneously. Thus, servers are protected from intrusion by the VLAN and the Control Center security measures previously described.
Farms are implemented in an I-Fabric using VLANs, which are based on physical switch ports and configured through the Control Center. The switch configuration is protected both by the VLAN and by an administrative password: VLAN configurations are password protected on the applicable switch.
Access to services on the Control Center from the farms is restricted by IP filtering. IP routing through a control plane server is not possible. Access to the Farm Manager and the Segment Manager from a farm is not possible.
Only the Control Center is authorized to make modifications to virtual wiring and virtual farm security perimeters.
Implement security policies that protect the physical network from internal unauthorized access based on your site's setup and facilities.
By using port-based VLAN technology, network virtualization provides a network security perimeter for all the computing and network devices associated with a given farm. When a device is logically assigned to a farm, the device is transitioned to the appropriate logical network associated with that logical element of the farm.
Network virtualization uses physical port-based VLAN technology built into current generation Layer-2 switches. The VLAN enables you to create a secure virtual network between a set of network nodes that appears as a transparent Layer-2 interconnect to these sets of network nodes. These virtual Layer-2 interconnects are then used as virtual wires to connect the devices on the switched fabric into the desired Layer-2 network topology.
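The virtual-wire model above can be reduced to a simple invariant: a port's VLAN assignment, not any MAC or IP address, determines which farm's Layer-2 segment a device belongs to. The following sketch illustrates that invariant with assumed names and data structures; it is not a product interface.

```python
# Minimal model of port-based virtual wiring: assigning a device's switch
# port to a farm's VLAN, and checking that two devices share a Layer-2
# segment only when their ports carry the same VLAN.

def assign_to_farm(vlan_by_port, port, farm_vlan):
    """Place a switch port on a farm's VLAN; the port defines membership."""
    vlan_by_port[port] = farm_vlan

def same_segment(vlan_by_port, port_a, port_b):
    """True only when both ports are assigned and share one VLAN."""
    return (port_a in vlan_by_port and
            vlan_by_port.get(port_a) == vlan_by_port.get(port_b))
```

Because only the Control Center writes to this mapping in the real system, a device cannot move itself or any other device between segments, which is the security property the surrounding text relies on.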
Ethernet switching equipment must be capable of supporting VLAN tagging for use in network virtualization to protect against VLAN hopping and other kinds of VLAN penetration attempts. In addition, standard password encryption protects the management of these switches from unauthorized modifications by any server or device in the resource pool. Any switching equipment must comply with the IEEE 802.1Q standard.
The management of these switches is protected from unauthorized modifications from any server or device in an I-Fabric. Only the Control Center administrator is authorized to make modifications to the virtual wiring and virtual logical server farm security perimeters.
Logical server farms on an I-Fabric are implemented using port-based VLANs. These VLANs are configured through the Control Center. The Control Center restricts access from the farms. Farm users cannot change their own or any other farm's VLAN configuration.
Server blades within an I-Fabric are dedicated to one unique farm at a time. While you can add or subtract server blades from a particular farm over its lifecycle, no single physical server blade is ever used by more than one farm simultaneously.
When you deactivate a server blade, the N1 Provisioning Server software cycles its power sufficiently to clear volatile memory. You should also reset server blades to their factory values before returning them to the idle pool so that any account-specific, nonvolatile memory components are erased. Follow best practices to configure and check your server blades for security. If you want to perform a recommended audit, an I-Fabric supports industry-standard third-party auditing tools.
Set up administrator server accounts and passwords by following conventions and best practices. See also security web sites such as http://www.cert.org, http://www.sun.com, and http://www.cisco.com for recommendations on keeping network servers protected from unauthorized access.
Ethernet port connections are optional with an I-Fabric. The connections can be either virtual private network (VPN) or leased-line connections. You can configure your I-Fabric for Ethernet port connections based on your site's needs and by using industry-standard security mechanisms.
This section provides a summary of the N1 Provisioning Server, Blades Edition implementation and installation process.
This guide does not discuss the following prerequisite knowledge and tasks:
Physical design
Cabling design
Rack design
Power requirements
You should have related designs and plans in place before implementing an I-Fabric.
The following diagram describes the major steps required to implement and install N1 Provisioning Server, Blades Edition version 3.1.
The following checklist describes the major steps required to implement and install N1 Provisioning Server, Blades Edition:
Determine the hardware requirements for your N1 Provisioning Server environment. See Chapter 2, Hardware and Software Requirements.
Choose a system configuration based on your hardware selections. See N1 Provisioning Server Supported Configurations.
Purchase the software and hardware for the selected configuration.
Install the gigabit Ethernet network interface card in the control plane server. See Installing Gigabit Ethernet Network Interface Cards.
Install the Solaris™ 8, release 2/02, operating system on the control plane server. See Installing the Solaris Operating System, version 8 2/02.
Disable remote logins from root accounts. See Disabling Remote Logins From Root Accounts.
Install required patches. See Installing Required Patches.
Install and connect the chassis to the control plane server and external switches according to your selected configuration. See Connecting the Chassis to the Control Plane Server and Switches.
If you plan to use Oracle as your control plane database (CPDB), install the Oracle 8i database on the control plane server. See Installing the N1 Provisioning Server Database.
Decide on and implement an IP address scheme for the configuration. See Assigning IP Addresses in the Control Plane.
Install the N1 Provisioning Server software.
If you are installing the N1 Provisioning software for the first time, see Chapter 4, Installing Provisioning Server Software.
If you are upgrading from N1 Provisioning Server version 3.0 to version 3.1, see Chapter 5, Upgrading to N1 Provisioning Server 3.1.
If you plan to use PostgreSQL as your control plane database, migrate your data from the Oracle CPDB to PostgreSQL. See Migrating From Oracle to PostgreSQL.
Validate the N1 Provisioning Server installation. See Validating the N1 Provisioning Server Installation.
Apply role-based access control. See Applying Role-Based Access Control in N1 Provisioning Server 3.1, Blades Edition, System Administration Guide.