N1 Grid Service Provisioning System User's Guide and Release Notes for the OS Provisioning Plug-In 1.0

Chapter 3 OS Provisioning Deployment Environment

This chapter provides guidelines for setting up an environment that supports OS provisioning.

Prerequisites

Provisioning an operating system with the OS provisioning plug-in requires a basic understanding of system administration and networking. In addition, provisioning an operating system requires basic IP connectivity between the machines involved.

Basic OS Provisioning Environment

The basic OS provisioning environment consists of the N1 Grid SPS Master Server, the OS provisioning server, one or more boot and install servers, the target hosts to be provisioned, and the network that connects them. The sections that follow describe these components.


Note –

The N1 Grid SPS Master Server, OS provisioning server, and Solaris boot and install server can be one physical system. However, running all three servers on one system increases the load on the server and increases the network traffic that the server has to handle. Keeping them separate enables you to scale better in the future.


Target Hosts

You need to set up provisionable target systems for OS provisioning. The OS provisioning server needs to know information about these targets, such as MAC address, GUID, remote management connections, and access information. For information about defining targets, see Chapter 8, Target Hosts for OS Provisioning.
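
For example, you can usually collect a target's MAC address directly from the system itself. The following is a minimal sketch; the commands are standard, but the output varies by platform:

    # On a running Solaris target, list the MAC address of each interface
    # (run as root so that the ether address is displayed):
    ifconfig -a

    # On a SPARC target at the OpenBoot PROM, print the Ethernet address:
    ok .enet-addr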

Network

The OS provisioning plug-in is designed to work with a wide range of network configurations and topologies. The plug-in does not dictate any network topology, nor does it manipulate network elements such as switches or routers for its needs. However, the plug-in does rely on certain network communication paths being in place.

These networking requirements are imposed by the needs of the two network types central to the function of the OS provisioning server: the control network and the provisioning network.


Note –

An access network is the network used to access the OS provisioning and boot and install servers. An example of an access network is the corporate intranet. This network is not needed for OS provisioning functionality. From a security standpoint, you should keep the access network separate from the control and provisioning networks.


The following diagram illustrates the network environment.

Figure 3–1 Network Environment Diagram for OS Provisioning

Diagram that shows relationship between access network, provisioning
network and control network. See subsequent sections for text description.

Provisioning Network

A provisioning network comprises the provisioning interface of the OS provisioning server, the provisioning interfaces of the target platforms, and the provisioning interfaces of one or more boot and install servers. A provisioning network can span one or more subnets, and an OS provisioning plug-in installation supports the use of multiple provisioning networks. The requirements on these networks are dictated by the protocols and technologies used for network-based provisioning: in particular, the DHCP and network-boot traffic from the targets must be able to reach the OS provisioning server and the boot and install servers, either because those servers share a subnet with the targets or because the traffic is routed to them.
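
As an illustration, the provisioning interface on a Solaris-based OS provisioning server might be configured as follows. The interface name bge1 and the addresses shown are assumptions for this sketch, not required values:

    # Plumb and configure the provisioning interface (illustrative values):
    ifconfig bge1 plumb
    ifconfig bge1 192.168.100.10 netmask 255.255.255.0 up

    # Confirm that a target's provisioning interface is reachable:
    ping 192.168.100.21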

Control Network

The control network is the network that the OS provisioning server uses for two primary functions: communicating with the boot and install servers, and controlling the target hosts through their network management ports.

The control network can be a pure IP network or can include serial or terminal server elements. The OS provisioning server communicates with the boot and install servers over an IP network, while communication with the network management port of a target host can occur over an IP network or a serial network. The control network can span many subnets. The only requirement on the control network is that the OS provisioning server can reach all boot and install servers and all target network management ports.
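
A quick way to confirm this requirement is to test reachability from the OS provisioning server. The hostnames below are hypothetical placeholders:

    # From the OS provisioning server, verify each boot and install server:
    ping sol10bis
    ping linuxbis

    # Verify each target's network management port:
    ping target1-mgmt
    traceroute target1-mgmt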

Switched Networks

These requirements have particular implications in a switched environment. In a switched network, the switched connections can be in either trunk or access (non-trunk) mode. For the control network, switched connections can remain in access mode, because IP reachability from the OS provisioning server is all that is required. The provisioning network can have switched ports in either trunk or access mode, depending on the provisioning network design.

Security

The OS provisioning plug-in software leverages the N1 Grid SPS security model. Most communication between the different servers occurs through the N1 Grid SPS Remote Agents (RAs). Configure the RAs for secure communication. See the N1 Grid Service Provisioning System 5.0 documentation for information about how to enable secure communication between the Master Server and the RAs.

For remote management of the targets, the encrypted passwords are stored on the OS provisioning server. For information about encrypting the passwords, see Password Encryption.

For communication with the Windows boot and install server, you need to activate either RSH or SSH services. Use SSH services to secure communications between the OS provisioning server and Windows boot and install server. For information, see How to Install Windows SSH Server on the Windows RIS Server.
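
Before running any plans, you can confirm from the OS provisioning server that the Windows boot and install server accepts SSH connections. The hostname and account below are hypothetical:

    # Verify SSH connectivity to the Windows RIS server; a successful login
    # that prints the remote hostname confirms the service is reachable:
    ssh Administrator@windows-ris hostname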

Configuring New Environments

The Sun Data Center Reference Architecture captures and applies best practices to define a generic data center configuration. This architecture can then be reliably and quickly assembled, tested, and deployed with lower risk and lower total cost of ownership (TCO). Data Center Reference Architecture Implementations are instantiations of the Sun Data Center Reference Architecture that provide complete details with actual hardware and software products and technologies, along with services, to meet customer requirements. Data Center Reference Architecture Implementations are pre-designed, pretested groups of components for small, medium, and large data centers, and provide a production-ready target environment for enterprise consolidation and migration projects.

The Sun Data Center Reference Architecture Implementation framework is a flexible combination of Sun Fire servers, Sun StorEdge storage arrays, Sun Java Enterprise System and Solaris software, as well as LAN and SAN infrastructure. For more information, see the Sun Data Center Reference Architecture web site.

Process Overview

  1. Prepare the hardware for the N1 Grid SPS Master Server, OS provisioning server, and boot and install servers.

  2. Obtain the N1 Grid SPS software.

  3. Install the N1 Grid SPS Master Server, as explained in Installing the N1 Grid Service Provisioning System 5.0 in N1 Grid Service Provisioning System 5.0 Installation Guide.

  4. Install the N1 Grid SPS RA and N1 Grid SPS command-line interface (CLI) on the OS provisioning server.

  5. Install the N1 Grid SPS RA and N1 Grid SPS CLI on the Solaris boot and install server.

  6. Install the N1 Grid SPS RA on the Linux boot and install server.

  7. Prepare the RAs on the OS provisioning server, Solaris boot and install server, and Linux boot and install server. For information, see How to Prepare a Physical Host in N1 Grid Service Provisioning System 5.0 System Administration Guide.


Note –

For safety, back up the N1 Grid SPS database. For information, see Chapter 9, Backing Up and Restoring, in N1 Grid Service Provisioning System 5.0 System Administration Guide.


Procedure: How to Enable the Master Server to Use Session IDs

Steps
  1. Edit the Master Server configuration file.

    By default, this file is located at the following location:


    /opt/SUNWn1sps/N1_Grid_Service_Provisioning_System_5.0/server/config/config.properties
  2. If this is an existing N1 Grid SPS installation, follow these steps:

    1. Find the session ID entry that looks similar to the following: config.allowSessionIDOnHosts=masterserver,biss1

    2. Change the value after the equals sign to the names of the OS provisioning server and the Solaris boot and install server.

      For example: config.allowSessionIDOnHosts=myspsserver,sol10bis

  3. If this is a new N1 Grid SPS installation, add a line similar to the following: config.allowSessionIDOnHosts=masterserver,biss1

    The value after the equals sign must include the names of the OS provisioning server and the Solaris boot and install server.

  4. Adjust global plan execution timeouts for your environment.

    Change the following entries in the config.properties file:

    pe.defaultPlanTimeout=12000
    pe.nonPlanExecNativeTimeout=12000

    The timeout values are in seconds and should be greater than the longest plan run operation that you expect at your site. The default plan timeout is 30 minutes (1800 seconds), and the default native timeout is 10 minutes (600 seconds). The example above uses an arbitrarily higher value of 200 minutes (12000 seconds).

  5. To enable these changes, stop and restart the Master Server.

    Log in to the Master Server as n1sps and type the following commands:


    # cr_server stop
    # cr_server start
    

    By default, these commands are located in the following directory:


    /opt/SUNWn1sps/N1_Grid_Service_Provisioning_System_5.0/server/bin

Configuring Existing Environments

You can use the OS provisioning plug-in to provision the OS in an existing server and network environment. The following sections describe how to use the plug-in in an existing environment.

Hardware and Software Configuration

Ensure that you have hardware to support the N1 Grid SPS Master Server, OS provisioning server, Solaris boot and install server, Linux boot and install server, and Windows boot and install server. See Supported Systems for information about appropriate systems.

Network Environment

Verify that the Master Server, OS provisioning server, and boot and install servers are able to connect with each other through an IP network.

Ensure that you have enough bandwidth to provision the servers simultaneously. The bandwidth requirements vary depending on how many simultaneous provisioning operations you intend to perform.
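
As a rough, illustrative estimate only: provisioning ten targets simultaneously with a 3-Gbyte OS image, where each installation must finish within 30 minutes, requires on the order of (10 x 3 Gbytes x 8 bits) / 1800 seconds, or roughly 133 Mbit/s of sustained throughput on the provisioning network. Substitute your own image sizes and time windows.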


Note –

Simultaneous OS installations require a lot of bandwidth and might fail or time out if the bandwidth is not available. To avoid problems, either physically separate the traffic or deploy more boot and install servers.


DHCP Services

The OS provisioning server uses its own DHCP service. During a provisioning operation, this DHCP service provides install-time parameters and install-time IP addresses to targets. The DHCP service does not respond to clients that are not being provisioned. Therefore, if other DHCP services serve this subnet, ensure that those services do not respond to the targets during the provisioning operation. After the OS has been provisioned, you can reactivate those DHCP services. Ensure that the targets' DHCP packets can reach the OS provisioning server, either by locating the OS provisioning server on the same subnet as the targets or through routing.
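
For example, if the competing service is the Solaris 10 DHCP server, you can temporarily disable it through SMF. The service name below applies to Solaris 10; other DHCP servers have their own administrative controls:

    # Temporarily disable a competing Solaris DHCP service during provisioning:
    svcadm disable svc:/network/dhcp-server:default

    # Re-enable the service after the OS has been provisioned:
    svcadm enable svc:/network/dhcp-server:default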

Target Hosts

The OS provisioning plug-in can automate the power-off and power-on cycles during provisioning. Enable the remote management interfaces (if any) of the targets. If a target does not support remote management, use the generic target. For more information about target hosts, see Chapter 8, Target Hosts for OS Provisioning.

N1 Grid Service Provisioning System Software

Ensure that the N1 Grid SPS software is version 5.0 or later.

Existing Solaris JET Environments

If you are running the JumpStart Enterprise Toolkit (JET) technology, you must uninstall that product before you can use the OS provisioning plug-in. For more information, see Setting up the Solaris JET Server.

The default base directory of the SUNWjet package that ships with the OS provisioning plug-in is /opt/SUNWjet. Earlier versions of SUNWjet used the /opt/jet default base directory.
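
Before uninstalling, you can confirm which base directory the existing SUNWjet package uses by querying its package parameters. A minimal sketch:

    # Show the base directory of the currently installed SUNWjet package:
    pkgparam SUNWjet BASEDIR

    # Remove the existing package before creating the new JET server:
    pkgrm SUNWjet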

If you are using an existing JET package, uninstall the existing package, then create the JET server, as explained in Setting up the Solaris JET Server. This process performs the following tasks:

  1. Installs the version of SUNWjet included with the OS provisioning plug-in at /opt/SUNWjet.

  2. Creates symbolic links from any pre-existing JET product modules in /opt/jet/Products to the /opt/SUNWjet/Products location.

After the process completes, you can include by name any JET product modules that were previously installed on the server when you use the OS provisioning plug-in to create new Solaris profiles.

The previous /opt/jet/Templates and /opt/jet/Clients areas are left untouched. You can then refer to those areas as needed, in case some of their values are helpful for creating new Solaris profiles with the OS provisioning plug-in.