This chapter provides the guidelines and procedures for connecting and configuring your N1 Provisioning Server components. The tasks provided in this chapter must be performed before you can install the N1 Provisioning Server software.
This chapter discusses the following topics:
N1 Provisioning Server Configuration and Connections Overview
Connecting the Chassis to the Control Plane Server and Switches
This section provides a list of the supported N1 Provisioning Server configurations, the requirements for each type of connection in an I-Fabric, a summary of the chassis SSC connections, and the naming conventions used for connections.
The following N1 Provisioning Server configurations are supported:
One Sun Fire B1600 Blade System Chassis with a single Switch and System Controller (SSC). An external switch is not required, but is recommended.
One blade system chassis with two SSCs. An external control plane switch and an external data plane switch are required.
Two or more chassis. An external control plane switch and an external data plane switch are required.
The following information is required for each connection.
Table 3–1 Connection Information
Information | Description
---|---
Starting Device ID | The device ID of the starting device
Starting Port | Identifies the port in the starting device
Ending Device ID | The device ID of the ending device
Ending Device Port | Identifies the port of the ending device
The following diagram illustrates the physical connections of a single SSC.
The SSC connections are as follows:
1 – RS232 serial console port
2 – 10/100 Base-T network management port, referred to as the NETMGT port
3 – 10/100/1000 Base-T data network ports, referred to as NETP0 to NETP3 from left to right on the bottom row, and as NETP4 through NETP7 from left to right on the top row
The following diagram shows the representative connections of a chassis with two SSCs. The diagram shows only those connections used by the N1 Provisioning Server software, and is used in the following sections to illustrate the connections required for the three supported configurations.
The following table shows the port naming used in the following sections for the servers, switches, and chassis SSC devices.
Table 3–2 Device Port Naming
Logical Port Name | Role
---|---
eri0 | Control plane server connection to the control plane switch for the Service Processor (SP), control plane database (CPDB), and Control Center (CC) provisioning command transfer. Note – In a single-chassis, single-SSC installation with no external switch, eri0 connects to the NETMGT port of the SSC.
eri1 | Control plane server connection to the local intranet. The Control Center Management PC is usually connected to the local intranet.
ce0/skge0 | Control plane server gigabit connection to the data plane switch for operating system image flash and JumpStart installations for server blades.
NETMGT | Chassis switch and system controller connection to the control plane switch for provisioning command transfer.
NETP0 | Chassis switch and system controller gigabit connection to the data plane switch for operating system image flash and JumpStart installations for server blades.
NETP1 | Chassis switch and system controller gigabit connection to a separate image server if an external data plane switch is not used.
NETP2 through NETP6 | Unused chassis switch and system controller connections.
NETP7 | Uplink when the installation does not have a data plane switch. See Figure 3–3.
The Wiring Mark-up Language (WML) file contains the logical port name. The table shows only the physical name for each type of connection. Gigabit Ethernet connections are recorded in WML as eth0, eth1, or ethN.
The gigabit Ethernet card is designated ce0 if it is a Sun GigaSwift card, or skge0 if it is a SysKonnect card. If you select a card made by another vendor, this designation might change. Control plane servers can also have a gigabit Ethernet card.
The image server can be any machine that supports network file system (NFS) access. The image server must have at least one 10/100 Base-T Ethernet network interface card (NIC) and one 10/100/1000 VLAN-capable gigabit NIC. The Provisioning Server control plane software is set to use the default image server user account root with the password root, which has read and write access for NFS. The image server user name and password are configurable during installation. You must set up the image server with telnetd allowing access as user root with password root.
The N1 image server 10/100 Base-T port must be connected to the control plane switch, and the 10/100/1000 NIC port must be connected to the data plane switch.
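For example, on a Solaris image server you might share the directory that holds the operating system images over NFS as follows. This is a sketch only; the /export/images path is illustrative, and the actual directory depends on the choices you make during installation.

# echo "share -F nfs -o rw /export/images" >> /etc/dfs/dfstab
# /etc/init.d/nfs.server start
# shareall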
You can install either the Sun GigaSwift Ethernet network interface card (NIC) or the SysKonnect Gigabit NIC in the control plane server to support the data trunks.
If you are installing the image server software on a separate machine, you should install either the Sun GigaSwift NIC or the SysKonnect Gigabit NIC on the image server machine as well.
Install the GigaSwift VLAN-capable gigabit Ethernet network interface card on the server.
The driver configuration file for the GigaSwift gigabit Ethernet card is configured automatically by the N1 Provisioning Server installation program at the beginning of the installation process. The GigaSwift driver is enabled to support VLANs and configured for VLANs from vlan-id 0 to vlan-id 999. You do not need to configure the driver for network connectivity or add VLANs to the configuration file.
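For background, Solaris encodes the VLAN ID in the interface instance number: the instance is the VLAN ID multiplied by 1000 plus the physical device instance, so VLAN 5 on ce0 appears as ce5000. The installer plumbs these interfaces for you; the following command is shown only to illustrate the naming convention and is not a required step.

# ifconfig ce5000 plumb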
The Sun GigaSwift Ethernet card is designated ce0. If you select a card made by another vendor, this designation might change.
At the beginning of the N1 Provisioning Server installation, you are prompted to reboot the server to enable the GigaSwift card configuration.
The GigaSwift configuration file has been updated to support vlans. A reboot is required to enable the new configuration to take effect.
As root, type the following command at the command-line prompt:
shutdown -i6 -g0 -y
When the server finishes rebooting, install the system and GigaSwift patches as directed in Installing Required Patches.
Install the SysKonnect VLAN-capable gigabit Ethernet network interface card (SK98xx) on the control plane server. You must install the Solaris 64-bit driver version 6.02. The drivers for SysKonnect cards are available at http://www.syskonnect.com/syskonnect/support/driver/d0102_driver.html.
The SysKonnect Gigabit Ethernet card is designated skge0. If you select a card made by another vendor, this designation might change.
Type N when prompted whether you want to configure the interfaces. After you have installed the card, reboot the server. The N1 Provisioning software installation automatically creates the interfaces.
The driver configuration file for the SysKonnect gigabit Ethernet card is configured automatically by the N1 Provisioning Server installation program at the beginning of the installation process. The SysKonnect driver is enabled to support VLANs and configured for VLANs from vlan-id 0 to vlan-id 999. You do not need to configure the driver for network connectivity or add VLANs to the configuration file.
At the beginning of the N1 Provisioning Server installation, you are prompted to reboot the server to enable the SysKonnect card configuration.
The SysKonnect configuration file has been updated to support vlans. A reboot is required to enable the new configuration to take effect.
As root, type the following command at the command-line prompt:
shutdown -i6 -g0 -y
The Solaris Operating System, version 8 2/02, must be installed on the control plane server before you can install N1 Provisioning Server software. If you are installing the image server software on a separate machine, you must also install Solaris version 8 2/02 on the image server machine before installing the N1 Provisioning Server software.
To install the Solaris software, follow the installation instructions provided with that software. To satisfy the N1 Provisioning Server requirements for the Sun Fire B1600 Blade System, you must make the following selections during the installation of the Solaris Operating System:
Install the 64-bit version.
Select the en_US locale.
Set up disk partitioning to allocate a minimum of 8 Gbytes to the root (/) file system.
When you are prompted to set up DNS, select NO.
When you are prompted to set up DHCP, select NO.
When you are prompted to select the type of configuration, select End User.
If you are using a non-interactive JumpStart installation, include the SUNWbash and SUNWgzip packages in the profile. If you are using an interactive JumpStart or CD installation, customize your configuration selection to include these packages.
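For example, a non-interactive JumpStart profile might contain entries such as the following. This is only an excerpt; the rest of the profile, including keywords such as install_type and the other required packages, is omitted.

package SUNWbash add
package SUNWgzip add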
Make sure that you have installed the following packages, which are part of the end-user install:
SUNWbzip
SUNWbzipx
SUNWzip
SUNWtcsh
SUNWscpux
SUNWbtool
SUNWtoo
SUNWsprot
Remote root login must be disabled on all N1 Provisioning servers as described by the following procedure. When remote root login is disabled, root accounts can log in only on the system console.
If the N1 image server is installed on a separate machine, you must also perform the following procedure on the image server machine.
Log in as root (su - root) on the control plane server.
Edit the file /etc/default/login.
Locate the line that contains the text string CONSOLE=/dev/console.
If the line is commented out, remove the comment symbol #.
Make certain the /etc/default/login file contains only one CONSOLE=/dev/console line.
Save and close the file.
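After the edit, the /etc/default/login file should contain exactly one active CONSOLE entry, similar to the following excerpt:

CONSOLE=/dev/console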
For more information, see the login(1) man page.
This section provides the procedures for installing the Solaris Operating System, version 8 2/02, patches and the GigaSwift copper gigabit Ethernet interface card patch.
For the Solaris Operating System, version 8 2/02, you must install the latest recommended patch cluster. You can find the patch cluster at http://sunsolve.sun.com.
Log onto the control plane server as root.
Open a web browser and go to the SunSolve Web site http://sunsolve.sun.com.
Download the latest recommended Solaris Version 8 2/02 patch cluster.
Click Patch Portal.
The Patch Portal screen appears.
Click Recommended Patch Cluster in the Downloads section.
The Patch Finder screen appears.
Choose 8 from the list of available patch clusters.
Scroll down the page and select either HTTP or FTP to download the patch, then click Go. The Save dialog box appears.
Select a directory into which to download the patch cluster zip file.
Make note of the directory to which you downloaded the patch cluster zip file.
Change to the directory where you downloaded the patch cluster zip file and unzip the file.
For example:
# cd /var/tmp/
# unzip /var/tmp/8_Recommended.zip
Install the patch cluster by using the install script that is uncompressed from the zip file.
For example:
# /var/tmp/8_Recommended/install_cluster
This step might take up to two hours to complete.
You can now use the GigaSwift copper gigabit Ethernet interface card, Part No. X1150A, within the N1 Provisioning Server environment. You can use this network interface card (NIC) as a VLAN-capable gigabit Ethernet adapter on the control plane server and, if you have a separate image server, on the image server for farm management.
During the initial installation of the N1 Provisioning Server software, the interface card is detected and configured by the N1 Provisioning Server installer utility. However, prior to the installation, you need to download and install the patch ID 112119-04 for Solaris 8.
Log onto the control plane server as root (su - root).
Open a web browser and go to the SunSolve Web site http://sunsolve.sun.com.
Download patch 112119-04.
Click Patch Portal.
The Patch Portal screen appears.
Type the patch ID 112119-04 in the Patch Finder box, and then click Find Patch.
The download patch screen appears.
The patch is available for downloading in ZIP format using either HTTP or FTP.
Select either HTTP or FTP to download the patch and then click Go. The Save Dialog appears.
Select a directory into which to download the patch zip file.
Make note of the directory to which you downloaded the patch zip file.
Change to the directory where you downloaded the patch zip file and unzip the file.
For example:
# cd /var/tmp/
# unzip /var/tmp/112119-04.zip
Install the patch.
For each patch you downloaded, type patchadd patch-id where patch-id is the ID of the downloaded patch.
For example:
# patchadd 112119-04
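To confirm that a patch is applied, you can optionally list the installed patches and search for the patch ID. For example:

# showrev -p | grep 112119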
This section provides the procedures for connecting the control plane server, blade system chassis, Control Center Management PC, and switches for each of the supported configurations.
This section illustrates the topology of a single-chassis single-SSC I-Fabric, and provides the procedure for connecting the I-Fabric components.
The SSC must be installed in chassis slot SSC0 for a single chassis, single SSC configuration. The Provisioning Server software cannot configure and provision the server blades if the SSC is installed in SSC1.
Connect the NETMGT port of SSC0 to the eri0 port of the control plane server with 100 base T copper Ethernet cable.
Connect the NETP0 port of SSC0 to the ce0/skge0 port of the control plane server with 1000 base T copper Ethernet cable.
Connect the NETP7 port of SSC0 to the external network with 1000 base T copper Ethernet cable. Farms are accessed through the NETP7 connection.
Connect the eri1 port of the control plane server to your internal network switch with 100 base T copper Ethernet cable.
Connect the Control Center Management PC NIC port to the internal network switch.
The type of cable depends on the capacity of the PC NIC and the network switch ports.
If you have chosen to install the image server as a separate machine, connect the image server ports as follows.
Connect the NETP1 port of SSC0 to the NIC port of the image server machine.
Use a cable appropriate for the type of interface card installed in the image server: 100 base T copper for a 10/100 base T NIC, and 1000 base T copper for a gigabit-capable NIC.
If you have chosen to install the N1 image server on a separate machine, install a gigabit-capable card such as the Sun GigaSwift NIC or the SysKonnect NIC in the image server machine.
If you have a separate image server machine, connect the eri0 port of the image server to your internal network switch with 100 base T copper Ethernet cable.
This section illustrates the topology of a single-chassis dual-SSC I-Fabric, and provides the procedure for connecting the I-Fabric components.
For security, install a separate control plane switch and data plane switch. Use of a single switch for the control plane and data plane is not supported for an installation where any chassis contains two SSCs.
Connect the NETMGT ports of SSC0 and SSC1 to the control plane switch with 100 base T copper Ethernet cable.
Connect the NETP0 ports of SSC0 and SSC1 to gigabit ports on the data plane switch with 1000 base T copper Ethernet cable.
Connect the ce0/skge0 port of the control plane server to a data plane switch gigabit port with 1000 base T copper Ethernet cable.
Connect the eri0 port of the control plane server to a 100 base T port on the control plane switch with 100 base T copper Ethernet cable.
Connect the eri1 port of the control plane server to the internal network switch with 100 base T copper Ethernet cable.
Connect the Control Center Management PC NIC port to the internal network switch.
The type of cable depends on the capacity of the PC NIC and the network switch ports.
Connect the data plane switch to the external network with 1000 base T copper Ethernet cable. Farms are accessed through this connection.
If you have chosen to install the image server as a separate machine, connect the image server ports as follows.
Connect the eri0 port of the image server to a 100 base T port on the control plane switch with 100 base T copper Ethernet cable.
Connect the eri1 port of the image server to the internal network switch with 100 base T copper Ethernet cable.
Connect the NIC port of the image server to a port with the same bit-rate capacity on the data plane switch.
Use a cable appropriate for the type of interface card installed in the image server and control plane switch: 100 base T copper for a 10/100 base T NIC, and 1000 base T copper for a gigabit-capable NIC.
If you have chosen to install the N1 image server on a separate machine, install a gigabit-capable card such as the Sun GigaSwift NIC or the SysKonnect NIC in the image server machine.
This section illustrates the topology of a two or more chassis I-Fabric, and provides the procedure for connecting the I-Fabric components.
For each chassis, perform the following steps:
Connect the NETMGT port of SSC0 to the control plane switch with 100 base T copper Ethernet cable.
Connect the NETP0 port of SSC0 to a data plane switch gigabit port with 1000 base T copper Ethernet cable.
If SSC1 is present, connect the NETMGT port of SSC1 to the control plane switch with 100 base T copper Ethernet cable.
If SSC1 is present, connect the NETP0 port of SSC1 to a data plane switch gigabit port with 1000 base T copper Ethernet cable.
Connect the ce0/skge0 port of the control plane server to a data plane switch gigabit port with 1000 base T copper Ethernet cable.
Connect the eri0 port of the control plane server to a 100 base T port on the control plane switch with 100 base T copper Ethernet cable.
Connect the eri1 port of the control plane server to the internal network switch with 100 base T copper Ethernet cable.
Connect the Control Center Management PC NIC port to the internal network switch.
The type of cable depends on the capacity of the PC NIC and the network switch ports.
Connect the data plane switch to the external network with 1000 base T copper Ethernet cable. Farms are accessed through this connection.
If you have chosen to install the image server as a separate machine, connect the image server ports as follows.
Connect the eri0 port of the image server to a 100 base T port on the control plane switch with 100 base T copper Ethernet cable.
Connect the eri1 port of the image server to the internal network switch with 100 base T copper Ethernet cable.
Connect the NIC port of the image server to a port with the same bit-rate capacity on the data plane switch.
Use a cable appropriate for the type of interface card installed in the image server and control plane switch: 100 base T copper for a 10/100 base T NIC, and 1000 base T copper for a gigabit-capable NIC.
If you have chosen to install the N1 image server on a separate machine, install a gigabit-capable card such as the Sun GigaSwift NIC or the SysKonnect NIC in the image server machine.
Connecting a terminal server to the control plane is optional. The terminal server typically has power in, one Ethernet port, and 32 serial ports out, as shown in the following figure.
Serial connections allow out-of-band access to certain devices. Devices with serial connections have their ports labeled s0, s1, and so forth. Terminal servers have their ports labeled with a number, for example, 1, 2, 3, and so forth.
Only the control plane server, image server, and the chassis SSC switch controller and system controller are automatically discovered during the installation process. Other devices, such as terminal servers, are not added to the database.
The connection guidelines for a terminal server are as follows:
Connect the terminal server to a power controller or constant power.
Connect the terminal server serial port to the control plane switch.
Connect the terminal server Fast Ethernet port to a control plane switch port.
You can use the Cisco 2950, 3550, or 4503 switch to provide connectivity for the control plane of the N1 Provisioning Server software. In a typical scenario, the management port of the control plane server and the management port of the chassis switches are connected to this switch.
Refer to the Cisco documentation for login procedures and commands.
The management VLAN configured on this switch should be VLAN 9. Log onto the control plane switch, and type the following commands to create VLAN 9.
enable
vlan database
vlan 9 name ManageMentVlan state active media ethernet
exit
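To confirm that the VLAN was created, you can optionally display the VLAN table. For example:

show vlan brief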
Although the control plane server does not require the control plane switch to have IP connectivity on the management VLAN, you can optionally configure a management IP address on this switch. Type the following commands to create a management IP on this switch:
enable configure terminal interface Vlan1 no ip address shutdown exit interface Vlan9 ip address <IP_address> <IP_subnet_mask> no shutdown end |
If the VLAN 9 interface is configured with an IP address, you also need to move the uplink to the external router to VLAN 9. To move the uplink, type the following commands:
configure terminal
interface FastEthernet 0/<port>
switchport access vlan 9
speed 100
duplex full
end
To set the default gateway on the device, type the following command:
configure terminal
ip default-gateway <IP_of_default_gateway>
end
To enable a telnet connection to the switch, type the following commands:
configure terminal
line vty 0 4
password <PASSWORD>
login
line vty 5 15
password <PASSWORD>
login
exit
To set the enable password for the switch, type the following commands:
configure terminal
enable password 0 <password>
end
To move a port to a particular VLAN, do the following steps:
Use telnet or console to connect to the switch.
Enter enable mode, then enter configuration mode, and type the following commands.
interface <IF_NAME>
switchport access vlan <VLAN_ID>
speed 100
duplex full
end
You must move all chassis NETMGT connections, as well as the control plane connections from the control plane server and the image server, to VLAN 9.
The following example shows a management port of a chassis switch that is connected to FastEthernet0/24 being moved to VLAN 9.
configure terminal
interface FastEthernet0/24
switchport access vlan 9
end
When you are done, type write mem to save all of the configuration changes permanently.
To view the switch configuration, type show configuration. The following example shows typical output of the show configuration command.
sw-2950#show configuration
Using 1647 out of 32768 bytes
!
version 12.1
no service single-slot-reload-enable
no service pad
service timestamps debug uptime
service timestamps log uptime
no service password-encryption
!
hostname sw-2950
!
enable secret 5 $1$byj9$P2S4zO48RKZBG3Sz0F4J/.
enable password root
!
ip subnet-zero
!
spanning-tree extend system-id
!
!
interface FastEthernet0/1
 no ip address
!
interface FastEthernet0/2
 no ip address
!
.
.
.
!
interface FastEthernet0/23
!
interface FastEthernet0/24
 switchport access vlan 9
 no ip address
!
interface Vlan1
 no ip address
 no ip route-cache
 shutdown
!
interface Vlan9
 ip address 10.5.131.210 255.255.255.0
 no ip route-cache
!
ip http server
!
line con 0
line vty 0 4
 password root
 login
line vty 5 15
 password root
 login
!
end
You can use the Cisco 3750, 4503, or 6500 switch to provide connectivity for the data plane of the N1 Provisioning Server software. In a typical scenario, the data plane switch is connected to the gigabit VLAN-capable network interface cards (NIC) of the provisioning servers, and to the switch ports of each chassis. The data plane switch can also be optionally attached to an external router or switch.
The presence or absence of these connections and the number of ports used depend on the network topology implemented. Ensure that the duplex and speed on both ends of each connection are properly auto-negotiated. Otherwise, network performance might be adversely affected. Also, if multiple ports are used to improve bandwidth between switches or between a switch and a router, enable link aggregation on these ports.
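As a sketch only, the following commands aggregate two inter-switch gigabit links into a single channel group on a Cisco switch. The port numbers and the channel-group number are illustrative, and the exact syntax can vary by switch model and IOS version.

configure terminal
interface GigabitEthernet 0/7
channel-group 1 mode on
exit
interface GigabitEthernet 0/8
channel-group 1 mode on
end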
Refer to the Cisco documentation for login procedures and commands.
Before you assign VLAN rules to ports, the VLANs must exist in the switch's VLAN database. Log onto the data plane switch, and type the following commands:
c3750-eng1>enable
Password:
c3750-eng1# vlan database
c3750-eng1(vlan)# vlan 1 name DefaultVlan media Ethernet state active
c3750-eng1(vlan)# vlan 4 name IdleVlan media Ethernet state active
c3750-eng1(vlan)# vlan 8 name ImageVlan media Ethernet state active
c3750-eng1(vlan)# vlan 10 name VLAN10 media Ethernet state active
c3750-eng1(vlan)# vlan 11 name VLAN11 media Ethernet state active
. . .
Ensure that the data plane trunk connections to the server gigabit NICs and chassis NETP0 ports allow traffic on VLANs 4, 8, and 10 through 255.
When you are done creating all of the VLANs, press Control-Z or type end to leave configuration mode.
The following describes the configuration steps for ports involved in these connections.
Connect ports to the external switch or router.
Configure these ports as trunk ports that allow tagged packets using dot1q notation. By default, most Cisco switches allow all created VLANs to pass through if a port is in trunk mode. However, if this behavior is not implicit to the external switch being used, explicitly set the ports to allow all VLANs to pass through.
For example, on the Cisco 3750 and 4503 switches, the set of commands to achieve this for port GigabitEthernet 0/6 is as follows:
c3750-eng1>enable
Password:
c3750-eng1#config term
Enter configuration commands, one per line. End with CNTL/Z.
c3750-eng1(config)#interface GigabitEthernet 0/6
c3750-eng1(config-if)#switchport trunk encapsulation dot1q
c3750-eng1(config-if)#switchport mode trunk
c3750-eng1(config-if)#^Z
c3750-eng1#
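If the connected device does not pass all VLANs by default, you can explicitly limit the trunk to the VLANs that the I-Fabric uses, as noted earlier in this section. The following commands are an illustrative restriction for the same port:

c3750-eng1#config term
c3750-eng1(config)#interface GigabitEthernet 0/6
c3750-eng1(config-if)#switchport trunk allowed vlan 4,8,10-255
c3750-eng1(config-if)#^Z
c3750-eng1#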
Connect ports to the NETP0 switch port of each chassis.
Configure the remaining ports in the same manner and execute the same commands as described in the previous step.
Connect the port to a VLAN-aware NIC of the provisioning server.
Configure the port in the same manner and execute the same commands as described in step 1.
Each N1 Provisioning Server control plane component must be assigned an IP address within a single subnet. Each chassis SSC controller must be IP-accessible from the control plane server and the subnet. For added security, the subnet to which you assign the components should be an internal subnet, not an external subnet:
Internal subnets are used to create IP address namespaces for I-Fabric CPU devices that do not require network connectivity outside the corporate network (for example, to the Internet) via an external router. Internal subnets are defined based on an internal corporate IT convention to prevent namespace collisions between various internal subnets. Check your internal subnet namespace scheme for availability of internal subnet addresses for use within the I-Fabric. Enter these subnets as internal subnets during installation.
External subnets are assigned by an outside entity to a corporate network when I-Fabric CPU devices require connectivity outside the corporate network. If your I-Fabric is connected to an external router and you want the CPU devices in a farm to access the Internet, check your external subnet namespace scheme for availability of external subnet addresses for use within an I-Fabric. Enter these subnets as external subnets during installation.
If you choose to use a different IP addressing structure, make note of the IP address assignments for each component. You are prompted for the IP addresses of the components during installation.
The following tables provide suggested IP address assignments for the N1 Provisioning Server and optional separate N1 image server, and the chassis components. The subnet addressing scheme 10.5.141.xx is only used as an example in the following tables.
Refer to the Solaris 8 Administration Guide for the procedure for setting server IP addresses.
Refer to the Sun Fire B1600 Blade System Chassis Switch Administration Guides for the procedure for setting chassis component IP addresses.
Table 3–3 N1 Provisioning Server and Image Server IP Address Assignments

Machine | IP Assignment
---|---
Cisco control plane switch | 10.5.141.10
Combined N1 Provisioning Server and Image Server (control plane server) | Port eri0: 10.5.141.18
Stand-alone N1 Provisioning Server (control plane server) | Port eri0: 10.5.141.18
Stand-alone N1 Image Server | Port eri0: 10.5.141.20, Port eri1: 10.5.141.22
The SSC login and password for each chassis switch and system controller must be identical for all SSCs in all chassis. For procedures that describe how to set the SSC switch and controller logins and passwords, see the Sun Fire B1600 Blade System Chassis Administration Guide.
Table 3–4 Blade System Chassis Component IP Address Assignments
Component | Chassis 1 | Chassis 2 | Chassis 3 | Chassis N
---|---|---|---|---
Virtual IP (VIP) | 10.5.141.50 | 10.5.141.55 | 10.5.141.60 | Prior chassis VIP address +5
System Controller 0 (SSC0) | 10.5.141.51 | 10.5.141.56 | 10.5.141.61 | Prior chassis SSC0 address +5
System Controller 1 (SSC1) | 10.5.141.52 | 10.5.141.57 | 10.5.141.62 | Prior chassis SSC1 address +5
Switch 0 (SW0) | 10.5.141.53 | 10.5.141.58 | 10.5.141.63 | Prior chassis SW0 address +5
Switch 1 (SW1) | 10.5.141.54 | 10.5.141.59 | 10.5.141.64 | Prior chassis SW1 address +5
Set the IP address, netmask, and gateway for each chassis SSC switch according to your control subnet as described in the following procedure.
Use telnet to access the chassis SSC.
To set up the SSC, type setupsc.
The following messages appear.
Entering Interactive setup mode. Use Ctrl-z to exit & save. Use Ctrl-c to abort
Do you want to configure the enabled interfaces [y]?
Type y to configure the SSC.
You are prompted in succession for each SSC configuration value. The default values are shown in brackets ([]).
The default values for the SSC IP address, netmask, and gateway are as follows.
Enter the SC IP address [10.5.132.65]:
Enter the SC IP netmask [255.255.255.0]:
Enter the SC IP gateway [10.5.132.1]:
Type the address if your chosen address is different, or press Enter to accept the default.
The N1 Provisioning Server 3.1, Blades Edition software enables you to use either the Oracle 8i database, version 8.1.7, or the PostgreSQL database, version 7.4. During installation, you are given the choice to use either the PostgreSQL or the Oracle database.
The PostgreSQL database is included on the installation DVD-ROM. If you choose to use PostgreSQL, skip this section. The PostgreSQL database is installed during N1 Provisioning Software installation.
If you choose to use Oracle, you must purchase and install the 32-bit version of the Oracle 8.1.7 database before you can install the N1 Provisioning Server software, as described in this section.
You must obtain the Oracle software and a license that covers at least the number of connections that you will use. The N1 Provisioning Server software requires a number of concurrent connections to the Oracle database instance running on the N1 Provisioning Server. However, the number of connections that any particular organization might require is generally difficult to determine. Consequently, you should obtain a CPU license from Oracle, either perpetual or on a yearly basis.
You need to install the 32-bit version of the Oracle 8i database. If you do not have this version, you can download Oracle 8i from the Oracle Web site. Before you start the Oracle installation, you need to create the following user and group names, which are required during the Oracle installation:
User name – oracle
Group name – dba
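For example, on Solaris you might create the group and user as follows. The home directory and shell shown are illustrative; use values appropriate for your site.

# groupadd dba
# useradd -g dba -m -d /export/home/oracle -s /bin/sh oracle
# passwd oracle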
Install Oracle 8i according to the Oracle 8i installation instructions. Follow the steps for a typical installation. The N1 Provisioning Server installation process creates separate control plane and control center databases. You can remove the Oracle database after a successful Provisioning Server installation if you require more disk space.
Be sure to note the full path of the Oracle installation directory, ORACLE_HOME. The ORACLE_HOME environment variable is required for the N1 Provisioning Server software installation, and must be set on the control plane server.
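For example, if Oracle is installed under /export/home/oracle/app/oracle/product/8.1.7 (an illustrative path), you would set the variable in the Bourne shell as follows:

# ORACLE_HOME=/export/home/oracle/app/oracle/product/8.1.7
# export ORACLE_HOME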