N1 Provisioning Server 3.1, Blades Edition, Installation Guide

Chapter 3 N1 Provisioning Server System and Network Preparation

This chapter provides the guidelines and procedures for connecting and configuring your N1 Provisioning Server components. The tasks provided in this chapter must be performed before you can install the N1 Provisioning Server software.

This chapter discusses the following topics:

  * N1 Provisioning Server Configuration and Connections Overview
  * Connecting a Separate Image Server
  * Installing Gigabit Ethernet Network Interface Cards
  * Installing the Solaris Operating System, version 8 2/02
  * Disabling Remote Logins From Root Accounts
  * Installing Required Patches
  * Connecting the Chassis to the Control Plane Server and Switches
  * Connecting Terminal Servers
  * Configuring the Control Plane Switch
  * Configuring Data Plane Switch Connections
  * Assigning IP Addresses in the Control Plane
  * Installing the N1 Provisioning Server Database

N1 Provisioning Server Configuration and Connections Overview

This section provides a list of the supported N1 Provisioning Server configurations, the requirements for each type of connection in an I-Fabric, a summary of the chassis SSC connections, and the naming conventions used for connections.

N1 Provisioning Server Supported Configurations

The following N1 Provisioning Server configurations are supported:

  * A single chassis with a single SSC and no external switch
  * A single chassis with two SSCs and separate control plane and data plane switches
  * Two or more chassis with separate control plane and data plane switches

Required Connection Information

The following information is required for each connection.

Table 3–1 Connection Information

Information         Description
Starting Device ID  The device ID of the starting device
Starting Port       Identifies the port in the starting device
Ending Device ID    The device ID of the ending device
Ending Device Port  Identifies the port of the ending device

Chassis Switch and System Controller (SSC) Connections

The following diagram illustrates the physical connections of a single SSC.

Figure 3–1 B1600 Switch and System Controller (SSC) Connections: Physical View


The SSC connections consist of the NETMGT management port and the data ports NETP0 through NETP7. The role of each port is described in Table 3–2.

The following diagram shows the representative connections of a chassis with two SSCs. The diagram shows only those connections used by the N1 Provisioning Server software, and is used in the following sections to illustrate the connections required for the three supported configurations.

Figure 3–2 B1600 Switch and System Controller (SSC) Connections: Logical View


Connection Port Naming

The following table shows the port names used throughout this chapter for the servers, switches, and chassis SSC devices.

Table 3–2 Device Port Naming

Logical Port Name    Role

eri0
    Control plane server connection to the control plane switch for Service Processor (SP), control plane database (CPDB), and Control Center (CC) provisioning command transfer.

    Note – In a single-chassis, single-SSC installation with no external switch, eri0 connects to the NETMGT port of the SSC.

eri1
    Control plane server connection to the local intranet. The Control Center Management PC is usually connected to the local intranet.

ce0/skge0
    Control plane server gigabit connection to the data plane switch for operating system image flash and JumpStart installations for server blades.

NETMGT
    Chassis switch and system controller connection to the control plane switch for provisioning command transfer.

NETP0
    Chassis switch and system controller gigabit connection to the data plane switch for operating system image flash and JumpStart installations for server blades.

NETP1
    Chassis switch and system controller gigabit connection to a separate image server if an external data plane switch is not used.

NETP2 through NETP6
    Unused chassis switch and system controller connections.

NETP7
    Uplink when the installation does not have a data plane switch. See Figure 3–3.

Connecting a Separate Image Server

The image server can be any machine that supports Network File System (NFS) access. The image server must have at least one 10/100 Base T Ethernet network interface card (NIC) and one 10/100/1000 VLAN-capable gigabit NIC. The Provisioning Server control plane software is set to use the default image server user account root, with the password root, which must have read and write access to the NFS share. The image server user name and password are configurable during installation. You must also set up the image server so that telnetd allows login as user root with the password root.

The N1 image server 10/100 Base T port must be connected to the control plane switch, and the 10/100/1000 gigabit NIC port must be connected to the data plane switch.
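
For example, on a Solaris image server you can grant NFS read and write access through an entry in /etc/dfs/dfstab. The share path /export/images and the host name cpserver below are placeholders, not values from this guide; substitute your own image directory and control plane server name.


share -F nfs -o rw=cpserver,root=cpserver /export/images

After adding the entry, type shareall as root (or start NFS service with /etc/init.d/nfs.server start) to put the share into effect.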

Installing Gigabit Ethernet Network Interface Cards

You can install either the Sun GigaSwift Ethernet network interface card (NIC) or the SysKonnect Gigabit NIC in the control plane server to support the data trunks.


Note –

If you are installing the image server software on a separate machine, you should install either the Sun GigaSwift NIC or the SysKonnect Gigabit NIC on the image server machine as well.


Sun GigaSwift Gigabit Ethernet Network Interface Card

Install the GigaSwift VLAN-capable gigabit Ethernet network interface card on the server.

The driver configuration file for the GigaSwift gigabit Ethernet card is configured automatically by the N1 Provisioning Server installation program at the beginning of the installation process. The GigaSwift driver is enabled to support VLANs and configured for VLANs from vlan-id 0 to vlan-id 999. You do not need to configure the driver for network connectivity or add VLANs to the configuration file.
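
For reference, the Solaris ce driver names a tagged VLAN interface by multiplying the VLAN ID by 1000 and adding the physical instance number, so VLAN 9 on ce0 appears as ce9000. You normally do not plumb these interfaces yourself, but if you ever need to, the form is as follows (the IP address shown is a placeholder):


# ifconfig ce9000 plumb
# ifconfig ce9000 10.5.141.18 netmask 255.255.255.0 up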


Note –

The Sun GigaSwift Ethernet card is designated ce0. If you select a card made by another vendor this designation might change.


At the beginning of the N1 Provisioning Server installation, you are prompted to reboot the server to enable the GigaSwift card configuration:


The GigaSwift configuration file has been updated to support vlans.
A reboot is required to enable the new configuration to take effect.

As root, type the following command at the command-line prompt:


shutdown -i6 -g0 -y

When the server finishes rebooting, install the system and GigaSwift patches as directed in Installing Required Patches.

SysKonnect Gigabit Ethernet Network Interface Card

Install the SysKonnect VLAN-capable gigabit Ethernet network interface card (SK98xx) on the control plane server. You must install the Solaris 64-bit driver, version 6.02. The drivers for SysKonnect cards are available at http://www.syskonnect.com/syskonnect/support/driver/d0102_driver.html.


Note –

The SysKonnect Gigabit Ethernet card is designated skge0. If you select a card made by another vendor this designation might change.


When the driver installation prompts you to configure the interfaces, type N. After you have installed the card, reboot the server. The N1 Provisioning Server software installation automatically creates the interfaces.

The driver configuration file for the SysKonnect gigabit Ethernet card is configured automatically by the N1 Provisioning Server installation program at the beginning of the installation process. The SysKonnect driver is enabled to support VLANs and configured for VLANs from vlan-id 0 to vlan-id 999. You do not need to configure the driver for network connectivity or add VLANs to the configuration file.
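
After the reboot, you can confirm that the SysKonnect driver is loaded by listing the kernel modules. A minimal check:


# modinfo | grep skge

The output should show the skge module and its version.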

At the beginning of the N1 Provisioning Server installation, you are prompted to reboot the server to enable the SysKonnect card configuration:


The SysKonnect configuration file has been updated to support vlans.
A reboot is required to enable the new configuration to take effect.

As root, type the following command at the command-line prompt:


shutdown -i6 -g0 -y

Installing the Solaris Operating System, version 8 2/02

The Solaris Operating System, version 8 2/02, must be installed on the control plane server before you can install N1 Provisioning Server software. If you are installing the image server software on a separate machine, you must also install Solaris version 8 2/02 on the image server machine before installing the N1 Provisioning Server software.

To install the Solaris software, follow the installation instructions provided with that software. To satisfy the N1 Provisioning Server requirements for the Sun Fire B1600 Blade System, you must make the following selections during the installation of the Solaris Operating System:

Disabling Remote Logins From Root Accounts

Remote root login must be disabled on all N1 Provisioning Server machines, as described in the following procedure. When remote root login is disabled, the root account can log in only on the system console.


Note –

If the N1 image server is installed on a separate machine, the following procedure must also be performed on the image server machine.


Procedure: To Disable Remote Root Account Logins

Steps
  1. Log in as root (su - root) on the control plane server.

  2. Edit the file /etc/default/login.

  3. Locate the line that contains the text string CONSOLE=/dev/console.

    If the line is commented out, remove the comment symbol #.

  4. Make certain the /etc/default/login file contains only one CONSOLE=/dev/console line, as shown in the example that follows this procedure.

  5. Save and close the file.
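
When you are done, you can verify the setting from the command line; the output should show the single uncommented line:


# grep CONSOLE= /etc/default/login
CONSOLE=/dev/console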

See Also

For more information, see the login(1) man page.

Installing Required Patches

This section provides the procedures for installing the Solaris Operating System, version 8 2/02 patches, and the GigaSwift copper gigabit Ethernet interface card patches.


Solaris Operating System, version 8 2/02 Patches

For the Solaris Operating System, version 8 2/02, you must install the latest recommended patch cluster. You can find the patch cluster at http://sunsolve.sun.com.

Procedure: To Install the Solaris Version 8 2/02 Patch Cluster

Steps
  1. Log onto the control plane server as root.

  2. Open a web browser and go to the SunSolve Web site http://sunsolve.sun.com.

  3. Download the latest recommended Solaris Version 8 2/02 patch cluster.

    1. Click Patch Portal.

      The Patch Portal screen appears.

    2. Click Recommended Patch Cluster in the Downloads section.

      The Patch Finder screen appears.

    3. Choose 8 from the list of available patch clusters.

      Scroll down the page and select either HTTP or FTP to download the patch, then click Go. The Save dialog box appears.

    4. Select a directory into which to download the patch cluster zip file.

      Make note of the directory to which you downloaded the patch cluster zip file.

  4. Change to the directory where you downloaded the patch cluster zip file and unzip the file.

    For example:


    # cd /var/tmp
    # unzip /var/tmp/8_Recommended.zip
  5. Install the patch cluster by using the install script that is uncompressed from the zip file.

    For example:


    # /var/tmp/8_Recommended/install_cluster

    Note –

    This step might take up to two hours to complete.


GigaSwift Gigabit Ethernet Patch

You can now use the GigaSwift copper gigabit Ethernet interface card, Part No. X1150A, within the N1 Provisioning Server environment. You can use this network interface card (NIC) as a VLAN-capable gigabit Ethernet adapter on the control plane server and, if you have a separate image server, on the image server for farm management.

During the initial installation of the N1 Provisioning Server software, the interface card is detected and configured by the N1 Provisioning Server installer utility. However, prior to the installation, you need to download and install patch 112119-04 for Solaris 8.

Procedure: To Download and Install the GigaSwift Ethernet Patch

Steps
  1. Log onto the control plane server as root (su - root).

  2. Open a web browser and go to the SunSolve Web site http://sunsolve.sun.com.

  3. Download patch 112119-04.

    1. Click Patch Portal.

      The Patch Portal screen appears.

    2. Type the patch ID 112119-04 in the Patch Finder box, and then click Find Patch.

      The download patch screen appears.

      The patch is available for downloading in ZIP format using either HTTP or FTP.

    3. Select either HTTP or FTP to download the patch and then click Go. The Save dialog box appears.

    4. Select a directory into which to download the patch zip file.

      Make note of the directory to which you downloaded the patch zip file.

  4. Change to the directory where you downloaded the patch zip file and unzip the file.

    For example:


    # cd /var/tmp/
    # unzip /var/tmp/112119-04.zip
  5. Install the patch.

    For each patch you downloaded, type patchadd patch-id, where patch-id is the ID of the downloaded patch.

    For example:


    # patchadd 112119-04
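
To confirm that the patch is applied, list the installed patches and filter for the patch ID:


# showrev -p | grep 112119

The output should include a line that begins with Patch: 112119-04.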

Connecting the Chassis to the Control Plane Server and Switches

This section provides the procedures for connecting the control plane server, blade system chassis, Control Center Management PC, and switches for each of the supported configurations.

Connecting a Single Chassis with a Single SSC

This section illustrates the topology of a single-chassis single-SSC I-Fabric, and provides the procedure for connecting the I-Fabric components.


Caution –

The SSC must be installed in chassis slot SSC0 for a single chassis, single SSC configuration. The Provisioning Server software cannot configure and provision the server blades if the SSC is installed in SSC1.


Figure 3–3 Single Chassis With a Single SSC and no External Switch


Procedure: To Connect a Single Chassis With a Single SSC to the Control Plane Server

Steps
  1. Connect the NETMGT port of SSC0 to the eri0 port of the control plane server with 100 base T copper Ethernet cable.

  2. Connect the NETP0 port of SSC0 to the ce0/skge0 port of the control plane server with 1000 base T copper Ethernet cable.

  3. Connect the NETP7 port of SSC0 to the external network with 1000 base T copper Ethernet cable. Farms are accessed through the NETP7 connection.

  4. Connect the eri1 port of the control plane server to your internal network switch with 100 base T copper Ethernet cable.

  5. Connect the Control Center Management PC NIC port to the internal network switch.

    The type of cable depends on the capacity of the PC NIC and the network switch ports.

  6. If you have chosen to install the image server as a separate machine, connect the image server ports as follows.

    1. Connect the NETP1 port of SSC0 to the NIC port of the image server machine.

      Use a cable appropriate for the type of interface card installed in the image server: 100 base T copper for a 10/100 base T NIC, and 1000 base T copper for a gigabit-capable NIC.


      Note –

      If you have chosen to install the N1 image server on a separate machine, install a gigabit-capable card such as the Sun GigaSwift NIC or the SysKonnect NIC in the image server machine.


    2. If you have a separate image server machine, connect the eri0 port of the image server to your internal network switch with 100 base T copper Ethernet cable.

Connecting a Single Chassis with Two SSCs

This section illustrates the topology of a single-chassis dual-SSC I-Fabric, and provides the procedure for connecting the I-Fabric components.


Note –

For security, install a separate control plane switch and data plane switch. Use of a single switch for the control plane and data plane is not supported for an installation where any chassis contains two SSCs.


Figure 3–4 Single Chassis with Two SSCs and Separate Control Plane and Data Plane Switches


Procedure: To Connect a Single Chassis With Dual SSCs to the Control Plane Server and Switches

Steps
  1. Connect the NETMGT ports of SSC0 and SSC1 to the control plane switch with 100 base T copper Ethernet cable.

  2. Connect the NETP0 ports of SSC0 and SSC1 to a data plane switch gigabit port with 1000 base T copper Ethernet cable.

  3. Connect the ce0/skge0 port of the control plane server to a data plane switch gigabit port with 1000 base T copper Ethernet cable.

  4. Connect the eri0 port of the control plane server to a 100 base T port on the control plane switch with 100 base T copper Ethernet cable.

  5. Connect the eri1 port of the control plane server to the internal network switch with 100 base T copper Ethernet cable.

  6. Connect the Control Center Management PC NIC port to the internal network switch.

    The type of cable depends on the capacity of the PC NIC and the network switch ports.

  7. Connect the data plane switch to the external network with 1000 base T copper Ethernet cable. Farms are accessed through this connection.

  8. If you have chosen to install the image server as a separate machine, connect the image server ports as follows.

    1. Connect the eri0 port of the image server to a 100 base T port on the control plane switch with 100 base T copper Ethernet cable.

    2. Connect the eri1 port of the image server to the internal network switch with 100 base T copper Ethernet cable.

    3. Connect the NIC port of the image server to a port with the same bit-rate capacity on the data plane switch.

      Use a cable appropriate for the type of interface card installed in the image server: 100 base T copper for a 10/100 base T NIC, and 1000 base T copper for a gigabit-capable NIC.


      Note –

      If you have chosen to install the N1 image server on a separate machine, install a gigabit-capable card such as the Sun GigaSwift NIC or the SysKonnect NIC in the image server machine.


Connecting Two or More Chassis

This section illustrates the topology of a two or more chassis I-Fabric, and provides the procedure for connecting the I-Fabric components.

Figure 3–5 Two or More Chassis with Separate Control Plane and Data Plane Switches


Procedure: To Connect Two or More Chassis to the Control Plane Server and Switches

Steps
  1. For each chassis, perform the following steps:

    1. Connect the NETMGT port of SSC0 to the control plane switch with 100 base T copper Ethernet cable.

    2. Connect the NETP0 port of SSC0 to a data plane switch gigabit port with 1000 base T copper Ethernet cable.

    3. If SSC1 is present, connect the NETMGT port of SSC1 to the control plane switch with 100 base T copper Ethernet cable.

    4. If SSC1 is present, connect the NETP0 port of SSC1 to a data plane switch gigabit port with 1000 base T copper Ethernet cable.

  2. Connect the ce0/skge0 port of the control plane server to a data plane switch gigabit port with 1000 base T copper Ethernet cable.

  3. Connect the eri0 port of the control plane server to a 100 base T port on the control plane switch with 100 base T copper Ethernet cable.

  4. Connect the eri1 port of the control plane server to the internal network switch with 100 base T copper Ethernet cable.

  5. Connect the Control Center Management PC NIC port to the internal network switch.

    The type of cable depends on the capacity of the PC NIC and the network switch ports.

  6. Connect the data plane switch to the external network with 1000 base T copper Ethernet cable. Farms are accessed through this connection.

  7. If you have chosen to install the image server as a separate machine, connect the image server ports as follows.

    1. Connect the eri0 port of the image server to a 100 base T port on the control plane switch with 100 base T copper Ethernet cable.

    2. Connect the eri1 port of the image server to the internal network switch with 100 base T copper Ethernet cable.

    3. Connect the NIC port of the image server to a port with the same bit-rate capacity on the data plane switch.

      Use a cable appropriate for the type of interface card installed in the image server: 100 base T copper for a 10/100 base T NIC, and 1000 base T copper for a gigabit-capable NIC.


      Note –

      If you have chosen to install the N1 image server on a separate machine, install a gigabit-capable card such as the Sun GigaSwift NIC or the SysKonnect NIC in the image server machine.


Connecting Terminal Servers

Connecting a terminal server to the control plane is optional. A terminal server typically has a power inlet, one Ethernet port, and 32 serial ports, as shown in the following figure.

Figure 3–6 Terminal Server Connections


Serial connections allow out-of-band access to certain devices. Devices with serial connections have their ports labeled s0, s1, and so forth. Terminal servers have their ports labeled with a number, for example, 1, 2, 3, and so forth.


Note –

Only the control plane server, the image server, and the chassis SSC switch and system controllers are automatically discovered during the installation process. Other devices, such as terminal servers, are not added to the database.


The connection guidelines for a terminal server are as follows:

Configuring the Control Plane Switch

You can use the Cisco 2950, 3550, or 4503 switch to provide connectivity for the control plane of the N1 Provisioning Server software. In a typical scenario, the management port of the control plane server and the management port of the chassis switches are connected to this switch.


Note –

Refer to the Cisco documentation for login procedures and commands.


The management VLAN configured on this switch should be VLAN 9. Log onto the control plane switch and type the following commands to create VLAN 9:


enable
vlan database
 vlan 9 name ManageMentVlan state active media ethernet
 exit
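
To confirm that the VLAN was created, you can list the VLANs on the switch:


show vlan brief

VLAN 9 should appear with the name ManageMentVlan and a status of active.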

Although the control plane server does not require the control plane switch to have IP connectivity on the management VLAN, you can optionally configure a management IP address on this switch. Type the following commands to create a management IP address on this switch:


enable
configure terminal
interface Vlan1
 no ip address
 shutdown
 exit
interface Vlan9
 ip address <IP_address> <IP_subnet_mask>
 no shutdown
 end

Because the VLAN 9 interface is configured with an IP address, you also need to move the uplink to the external router to VLAN 9. To move the uplink, type the following commands:


configure terminal
interface FastEthernet 0/<port>
switchport access vlan 9
speed 100
duplex full
end

To set the default gateway on the device, type the following commands:


configure terminal
ip default-gateway <IP_of_default_gateway>
end

To enable a telnet connection to the switch, type the following commands:


configure terminal
line vty 0 4
 password <PASSWORD>
 login
line vty 5 15
 password <PASSWORD>
 login
 exit

To set the enable password for the switch, type the following commands:


configure terminal
enable password 0 <password>
end

To move a port to a particular VLAN, perform the following steps:

  1. Use telnet or console to connect to the switch.

  2. Enter enable mode and then configuration mode, and type the following commands:


    interface <IF_NAME>
     switchport access vlan <VLAN_ID>
     speed 100
     duplex full
     end
    

You must move all chassis NETMGT connections, as well as the control connections from the control plane server and image server, to VLAN 9.

The following example shows the management port of a chassis switch, connected to FastEthernet0/24, being moved to VLAN 9.


configure terminal
interface FastEthernet0/24
 switchport access vlan 9
 end

When you are done, type write mem to save the configuration permanently.

To view the switch configuration, type show configuration. The following is an example of the output of the show configuration command.


sw-2950#show configuration
Using 1647 out of 32768 bytes
!
version 12.1
no service single-slot-reload-enable
no service pad
service timestamps debug uptime
service timestamps log uptime
no service password-encryption
!
hostname sw-2950
!
enable secret 5 $1$byj9$P2S4zO48RKZBG3Sz0F4J/.
enable password root
!
ip subnet-zero
!
spanning-tree extend system-id
!
!
interface FastEthernet0/1
 no ip address
!
interface FastEthernet0/2
 no ip address
!
.
.
.
interface FastEthernet0/23
!
interface FastEthernet0/24
 switchport access vlan 9
 no ip address
!
interface Vlan1
 no ip address
 no ip route-cache
 shutdown
!
interface Vlan9
 ip address 10.5.131.210 255.255.255.0
 no ip route-cache
!
ip http server
!
line con 0
line vty 0 4
 password root
 login
line vty 5 15
 password root
 login
!
end

Configuring Data Plane Switch Connections

You can use the Cisco 3750, 4503, or 6500 switch to provide connectivity for the data plane of the N1 Provisioning Server software. In a typical scenario, the data plane switch is connected to the gigabit VLAN-capable network interface cards (NIC) of the provisioning servers, and to the switch ports of each chassis. The data plane switch can also be optionally attached to an external router or switch.

The presence or absence of these connections and the number of ports used depend on the network topology implemented. Ensure that the duplex and speed on both ends of each connection are properly auto-negotiated; otherwise, network performance might be adversely affected. Also, if multiple ports are used to improve bandwidth between switches or between a switch and a router, enable link aggregation on these ports, as shown in the sketch that follows.
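
One approach to link aggregation on the Cisco switches discussed here is an EtherChannel. The following is a minimal sketch; the port range and channel-group number are examples only and must match your actual cabling:


configure terminal
interface range GigabitEthernet 0/1 - 2
 channel-group 1 mode on
 end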


Note –

Refer to the Cisco documentation for login procedures and commands.


Before setting VLAN rules to ports, VLANs need to exist in the switch's VLAN database. Log onto the data plane switch, and type the following commands:


c3750-eng1>enable
Password:
c3750-eng1# vlan database
c3750-eng1(vlan)# vlan 1 name DefaultVlan media Ethernet state active
c3750-eng1(vlan)# vlan 4 name IdleVlan media Ethernet state active
c3750-eng1(vlan)# vlan 8 name ImageVlan media Ethernet state active
c3750-eng1(vlan)# vlan 10 name VLAN10 media Ethernet state active
c3750-eng1(vlan)# vlan 11 name VLAN11 media Ethernet state active
.
.
.

Ensure that the data plane trunk connections to the server gigabit NICs and chassis NETP0 ports allow traffic on VLANs 4, 8, and 10 through 255.

When you are done creating all the VLANs, press Control-Z or type end to leave configuration mode.
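
If the switch does not pass these VLANs on its trunk ports by default, you can allow them explicitly, as in the following sketch for a single trunk port (the port number is an example only):


configure terminal
interface GigabitEthernet 0/6
 switchport trunk allowed vlan 4,8,10-255
 end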

The following procedure describes the configuration steps for the ports involved in these connections.

Procedure: To Configure Data Plane Switch Ports

Steps
  1. Connect ports to the external switch or router.

    Configure these ports as trunk ports that allow tagged packets using dot1q encapsulation. By default, most Cisco switches allow all created VLANs to pass through a port in trunk mode. However, if this is not the default behavior of the switch being used, explicitly set the ports to allow all VLANs to pass through.

    For example, on the Cisco 3750 and 4503 switches, the set of commands to achieve this for port GigabitEthernet 0/6 is as follows:


    c3750-eng1>enable
    Password:
    c3750-eng1#config term
    Enter configuration commands, one per line.  End with CNTL/Z.
    c3750-eng1(config)#interface Gigabitethernet 0/6
    c3750-eng1(config-if)#switchport trunk encapsulation dot1q
    c3750-eng1(config-if)#switchport mode trunk
    c3750-eng1(config-if)#^Z
    c3750-eng1#
  2. Connect ports to the NETP0 switch port of the chassis.

    Configure the remaining ports in the same manner and execute the same commands as described in the previous step.

  3. Connect the port to a VLAN-aware NIC of the provisioning server.

    Configure the port in the same manner and execute the same commands as described in step 1.

Assigning IP Addresses in the Control Plane

Each N1 Provisioning Server control plane component must be assigned an IP address within a single subnet. Each chassis SSC controller must be IP-accessible from the control plane server and the subnet. For added security, the subnet to which you assign the components should be an internal subnet, not an external subnet.

If you choose to use a different IP addressing structure, make note of the IP address assignments for each component. You are prompted for the IP addresses of the components during installation.

The following tables provide suggested IP address assignments for the N1 Provisioning Server, the optional separate N1 image server, and the chassis components. The subnet addressing scheme 10.5.141.xx is used only as an example in the following tables.

Table 3–3 Control Plane Switch and N1 Provisioning and Image Server IP Address Assignments

Machine                                                     IP Assignment
Cisco control plane switch                                  10.5.141.10
Combined N1 Provisioning Server and Image Server
(control plane server)                                      Port eri0: 10.5.141.18
Stand-alone N1 Provisioning Server (control plane server)   Port eri0: 10.5.141.18
Stand-alone N1 Image Server                                 Port eri0: 10.5.141.20
                                                            Port eri1: 10.5.141.22

The SSC login and password for each chassis switch and system controller must be identical for all SSCs in all chassis. For procedures describing how to set the SSC switch and controller logins and passwords, see the Sun Fire B1600 Blade System Chassis Administration Guide.

Table 3–4 Blade System Chassis Component IP Address Assignments

Component                    Chassis 1     Chassis 2     Chassis 3     Chassis N
Virtual IP (VIP)             10.5.141.50   10.5.141.55   10.5.141.60   Prior chassis VIP address +5
System Controller 0 (SSC0)   10.5.141.51   10.5.141.56   10.5.141.61   Prior chassis SSC0 address +5
System Controller 1 (SSC1)   10.5.141.52   10.5.141.57   10.5.141.62   Prior chassis SSC1 address +5
Switch 0 (SW0)               10.5.141.53   10.5.141.58   10.5.141.63   Prior chassis SW0 address +5
Switch 1 (SW1)               10.5.141.54   10.5.141.59   10.5.141.64   Prior chassis SW1 address +5
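
For example, continuing this scheme, a fourth chassis would use 10.5.141.65 for the VIP, 10.5.141.66 for SSC0, 10.5.141.67 for SSC1, 10.5.141.68 for SW0, and 10.5.141.69 for SW1.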

Set the IP address, netmask, and gateway for each chassis SSC switch according to your control subnet as described in the following procedure.

Procedure: To Manually Set an SSC IP Address, Netmask, and Gateway

Steps
  1. Use telnet to access the chassis SSC.

  2. To set up the SSC, type setupsc.

    The following messages appear.


    Entering Interactive setup mode.
    Use Ctrl-z to exit & save. Use Ctrl-c to abort
    Do you want to configure the enabled interfaces [y]?
  3. Type y to configure the SSC.

    You are prompted in succession for each SSC configuration value. The default values are shown in brackets ([]).

    The default values for the SSC IP address, netmask, and gateway are as follows.


    Enter the SC IP address [10.5.132.65]:
    Enter the SC IP netmask [255.255.255.0]: 
    Enter the SC IP gateway [10.5.132.1]: 

    Type the address if your chosen address is different, or press Enter to accept the default.
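
After the SSC addresses are set, you can confirm from the control plane server that each SSC is IP-accessible, as required above. Using the chassis 1 example addresses from Table 3–4:


# ping 10.5.141.51
10.5.141.51 is alive
# ping 10.5.141.52
10.5.141.52 is alive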

Installing the N1 Provisioning Server Database

The N1 Provisioning Server 3.1, Blades Edition software enables you to use either the Oracle 8i database, version 8.1.7, or the PostgreSQL database, version 7.4. During installation, you are given the choice to use either the PostgreSQL or the Oracle database.

You must obtain the Oracle software and a license that covers at least the number of connections that you will use. The N1 Provisioning Server software requires a number of concurrent connections to the Oracle database instance running on the N1 Provisioning Server, and the number of connections that a particular organization requires is generally difficult to determine in advance. Consequently, you should obtain a CPU license from Oracle, either perpetual or yearly.

You need to install the 32-bit version of the Oracle 8i database. If you do not have this version, you can download Oracle 8i from the Oracle Web site. Before you start the Oracle installation, you need to create the following user and group names, which are required during the Oracle installation:
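
The specific names to create are listed in the Oracle installation instructions. As an illustration only, a typical Oracle group and user on Solaris can be created as follows; the names dba and oracle are common Oracle conventions, not values taken from this guide:


# groupadd dba
# useradd -g dba -m -d /export/home/oracle -s /bin/sh oracle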

Install Oracle 8i according to the Oracle 8i installation instructions. Follow the steps for a typical installation. The N1 Provisioning Server installation process creates separate control plane and control center databases. You can remove the Oracle database after a successful Provisioning Server installation if you require more disk space.

Be sure to note the full path of the Oracle installation directory, ORACLE_HOME. The ORACLE_HOME environment variable is required for the N1 Provisioning Server software installation, and must be set on the control plane server.
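
For example, in the Bourne shell on the control plane server (the directory shown is a placeholder for your actual Oracle installation path):


ORACLE_HOME=/export/home/oracle/product/8.1.7
export ORACLE_HOME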