CHAPTER 6
Adding Blade Management and VLAN Tagging for SPARC Solaris Blades |
This chapter tells you how to configure the system chassis to permit secure management of server blades from the management network.
This chapter contains the following sections:
Note - For information about setting up redundant virtual connections for Linux and/or Solaris x86 blades, refer to the Sun Fire B100x and B200x Server Blade Installation and Setup Guide.
This chapter tells you how to refine the configuration in Chapter 5 so that network administrators can perform management tasks on the server blades from the management network (that is, by Telnet connections made directly to the server blades) without compromising the security of the management network.
In FIGURE 6-1 there are dotted lines from the server blade ports in the chassis's switches to the management port (NETMGT). There are also dotted lines from the server blades themselves to the management port in each switch. These dotted lines represent links between components or devices that are members of the management VLAN (VLAN 2). By default VLAN 2, which contains the management port (NETMGT) on the switch, does not include any server blade ports. So to configure the chassis to support a network environment like the one in FIGURE 6-1 you must reconfigure these ports manually. For information on how to do this, see Section 6.3, Configuring the System Controller and Switches.
Also, by default, no network traffic is allowed to pass from the server blade ports through the switch's packet filter to the management port. This is a security feature, and you must exercise caution when configuring the switch to permit traffic to pass through its packet filter. The instructions in Section A.12, Enabling Secure Management of Blades tell you how to permit only specific protocols to pass through the packet filter.
Finally, because the instructions in this chapter tell you how to include the server blades in the management network (VLAN 2), they also tell you how to modify the IPMP setup on the server blades so that, not only does each blade have a redundant connection to the data network (as described in Chapter 5), but each one also has a redundant connection to the management network (VLAN 2).
This section contains an illustration of the configuration from the previous chapter but with the enhancements described in the introduction above plus examples of the IPMP information required to create the redundant connections from each blade to the management network. The section also contains a sample /etc/hosts file for the Name Server on the management network. The administration files on the data network remain the same as in Chapter 5. However, the /etc/hosts file on the management network's Name Server needs to contain IP addresses (on the management subnet) for each server blade as well as for both SSCs and switches in the chassis (see CODE EXAMPLE 6-1).
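CODE EXAMPLE 6-1 itself is not reproduced in this extract. The sketch below shows the kind of entries the management Name Server's /etc/hosts file needs; all IP addresses and the 192.168.2.0 management subnet are hypothetical placeholders, and only the medusa-* host names come from the examples later in this chapter:

```
# /etc/hosts on the management network's Name Server (illustrative addresses)
192.168.2.199  medusa-ssc0         # System Controller in SSC0
192.168.2.200  medusa-ssc1         # System Controller in SSC1
192.168.2.201  medusa-swt0         # switch in SSC0
192.168.2.202  medusa-swt1         # switch in SSC1
192.168.2.150  medusa-s0-mgt       # blade in slot 0, primary management address
192.168.2.151  medusa-s0-mgt-sec   # blade in slot 0, secondary management address
```

In practice the file would contain a corresponding pair of management addresses for every blade in the chassis.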
If you have already set up the System Controller and switches in the system chassis by following the instructions in the previous chapters, then go straight to Section 6.3.1, Adding the Server Blades to the Management VLAN on the Switches in SSC0 and SSC1.
Otherwise, follow the instructions in Chapter 5 but do not configure the switch in SSC1, because the instructions below (Section 6.3.1, Adding the Server Blades to the Management VLAN on the Switches in SSC0 and SSC1) involve copying the entire configuration of the switch in SSC0 onto the switch in SSC1.
The instructions in this section tell you how to add the server blades to the management VLAN, which is VLAN 2 by default (in other words, by default VLAN 2 contains the management port, NETMGT). VLAN 1 is also set up by default on the switch. This VLAN contains all of the switch's server blade and uplink ports. However, to demonstrate the use of the switch's VLAN configuration facilities, the instructions in this section use VLAN 3 instead of VLAN 1 for the data network.
In these instructions the management VLAN (VLAN 2) and the data VLAN (VLAN 3) are tagged. However, the instructions also tell you to create an additional VLAN for blade booting (VLAN 4). This handles untagged traffic generated by the blades during the Solaris Operating Environment Network Install process.
This traffic on the boot VLAN (VLAN 4) can be tagged or untagged when it leaves the system chassis. In the sample commands in this section it is tagged. (The instructions assume that the devices outside the chassis are VLAN-aware, and VLAN 4 is assumed to contain the Network Install Server used by the server blades.)
Note - If you need to reset the switch while you are performing the instructions in this section, save the configuration first. If you do not, you will lose all of your changes. To save the configuration, follow the instructions in Section A.9, Saving Your Switch Settings.
1. From the sc> prompt, log into the console to configure the switch in SSC0.
To log into the switch in SSC0, type:
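The command itself is missing from this extract. On the Sun Fire B1600 System Controller it takes the following form (verify the exact syntax against your chassis documentation):

```
sc> console ssc0/swt
```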
2. When prompted, type your user name and password.
3. At the Console# prompt on the switch's command line, type:
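The command is not shown in this extract; this step enters the switch's configuration mode, which on this CLI is:

```
Console#configure
```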
4. Enter the switch's VLAN database by typing:
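The command is not shown in this extract; from configuration mode, the VLAN database is entered as follows:

```
Console(config)#vlan database
```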
5. Set up the VLAN for the data network and for the boot network by typing:
Console(config-vlan)#vlan 3 name Data media ethernet
Console(config-vlan)#vlan 4 name Boot media ethernet
6. Exit the vlan database by typing:
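The command is not shown in this extract; on this CLI, the following returns you to the Console# prompt:

```
Console(config-vlan)#end
```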
7. Add the server blade port SNP0 to the management VLAN (VLAN 2), the data VLAN (VLAN 3), and to the VLAN that you are using for booting (VLAN 4).
To do this, type the following commands:
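The command sequence is missing from this extract. The following is a sketch of the expected sequence for port SNP0, based on the B1600 switch CLI: VLANs 2 and 3 are added tagged, VLAN 4 is added untagged and made the native VLAN (to carry the blade's untagged boot traffic), and the port is removed from the default VLAN 1. Verify against your switch documentation:

```
Console#configure
Console(config)#interface ethernet SNP0
Console(config-if)#switchport allowed vlan add 2 tagged
Console(config-if)#switchport allowed vlan add 3 tagged
Console(config-if)#switchport allowed vlan add 4 untagged
Console(config-if)#switchport native vlan 4
Console(config-if)#switchport allowed vlan remove 1
Console(config-if)#end
```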
The meaning of this sequence is as follows:
Repeat Step 7 for all the remaining server blade ports (SNP1 through SNP15). All of these ports need to be included in the management VLAN, the data VLAN, and the boot VLAN.
To inspect the port you have configured, type:
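The command is not shown in this extract; on this CLI the per-port VLAN configuration is displayed with:

```
Console#show interfaces switchport ethernet SNP0
```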
8. If you intend to combine any of the data uplink ports into aggregated links, do this now.
Follow the instructions in Section A.11, Using Aggregated Links for Resilience and Performance.
9. Add any data uplink ports (that are not combined into aggregated links) to the data VLAN (that is, VLAN 3) and to the boot VLAN (VLAN 4) by typing the following commands:
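The command sequence is missing from this extract. The sketch below shows the expected form for uplink port NETP0; both VLANs are added tagged, since in this scenario boot-VLAN traffic leaves the chassis tagged (see Section 6.2). Verify against your switch documentation:

```
Console#configure
Console(config)#interface ethernet NETP0
Console(config-if)#switchport allowed vlan add 3 tagged
Console(config-if)#switchport allowed vlan add 4 tagged
Console(config-if)#switchport allowed vlan remove 1
Console(config-if)#end
```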
To inspect a port that you have configured, type:
10. Add any aggregated link to the data VLAN (VLAN 3) by typing the commands below.
For more information about using aggregated links, see Appendix A.
In the example below, the aggregated link is called port-channel 1. The interface port-channel 1 command specifies the aggregated link you are about to configure.
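The command sequence is missing from this extract. A sketch of the expected sequence, configuring the aggregated link in the same way as an individual uplink port (verify against your switch documentation):

```
Console#configure
Console(config)#interface port-channel 1
Console(config-if)#switchport allowed vlan add 3 tagged
Console(config-if)#switchport allowed vlan remove 1
Console(config-if)#end
```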
11. Add all uplink ports to VLAN 3 either individually or as aggregated links (see Step 9 and Step 10).
For example, if ports NETP1, NETP2, and NETP3 are combined into port-channel 1, and NETP4, and NETP5 are combined into port-channel 2, you will need to add ports NETP0, NETP6, and NETP7 plus port-channel 1 and port-channel 2 to VLAN 3.
12. Follow the instructions in Section A.12, Enabling Secure Management of Blades.
13. Save the changes you have made to the configuration of the switch in SSC0.
To do this, follow the instructions in Section A.9, Saving Your Switch Settings.
14. Copy the configuration of the switch in SSC0 on to the switch in SSC1.
Follow the instructions in Section A.10, Copying the Configuration of the First Switch to the Second.
15. Type #. to exit the switch's command-line interface and return to the System Controller.
16. From the sc> prompt, log into the switch in SSC1 by typing:
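The command is not shown in this extract; by analogy with logging into the switch in SSC0, it takes the following form:

```
sc> console ssc1/swt
```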
17. Type your user name and password.
18. Set the IP address, netmask, and default gateway for the switch in SSC1.
To do this, follow the instructions in Section A.7, Setting the IP Address, Netmask, and Default Gateway.
19. Save the changes you have made to the configuration of the switch in SSC1.
To do this, follow the instructions in Section A.9, Saving Your Switch Settings.
20. Type #. to exit the switch command-line interface and return to the sc> prompt.
21. Follow the instructions in Section 6.4, Setting up the SPARC Solaris Blades Using IPMP for Network Resiliency (VLAN Tagging).
The switch configuration you performed in the previous section uses tagged VLANs to separate the data and management networks. For IPMP to work with this switch configuration, you need four IP addresses for each VLAN that the server blade is a member of. (In other words, you need eight IP addresses: four for the management VLAN and four for the data VLAN.)
This is because the IPMP driver supports tagged VLANs by using a separate pair of logical Ethernet interfaces for each VLAN. Each of these logical interfaces has to be named manually according to a simple formula (this is the standard Solaris VLAN interface-naming convention):

ce(VLAN id x 1000 + instance)

where VLAN id is the number of the VLAN (as configured on the switch ports that the server blade is connected to inside the chassis), and instance is either 0 or 1 depending on whether the logical interface is associated with the physical interface ce0 or ce1. For example, the logical interface for VLAN 2 over ce0 is named ce2000, and the one for VLAN 3 over ce1 is named ce3001.
The effect of creating these pairs of logical Ethernet interfaces is to ensure that frames for one network go to that network and not to any other. Whenever the IPMP driver has a frame to send to the switch, it tags it for whichever VLAN is destined to receive it and then transmits it using either of the two logical interfaces available for that VLAN. One of the switches then receives the frame (on the port that is dedicated to the particular server blade that sent it). And, assuming that the switch has been configured to accept frames for the VLAN indicated by the tag, it forwards the frame onto that VLAN.
The important point is that the server blade's IPMP driver has transmitted the frame onto a particular VLAN, and has used a redundant virtual connection to that VLAN to do so. Any other VLANs that the server blade is a member of have been prevented from receiving the frame.
This section tells you how to configure IPMP on a server blade so that the two Ethernet interfaces both provide two active logical interfaces (one each to the data VLAN and the management VLAN).
For purposes of illustration the instructions below use sample configuration input from the network scenario described in Section 6.2, Preparing the Network Environment. They assume that the server blade configuration for IPMP described in Chapter 5 has already been performed.
TABLE 6-1 summarizes the information you would need to give the IPMP driver on the server blade in Slot 0 of the system chassis illustrated in FIGURE 6-1.
Note - You need to perform the instructions in this section on each server blade that requires a redundant connection to the data network and the management network.
1. Perform a preliminary setup of Solaris by following the instructions in Chapter 3.
When you have done this, type #. to return from a server blade console to the sc> prompt.
2. Log into the console of the server blade whose interfaces you want to configure.
Type the following at the sc> prompt:
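The command is not shown in this extract; on the Sun Fire B1600 System Controller it takes the following form:

```
sc> console sn
```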
where n is the number of the slot containing the server blade you want to log into.
3. Edit the /etc/hosts file on the server blade itself to add the IP addresses for the management interfaces.
For a blade using the sample addresses in TABLE 6-1, you would need to add the last two lines of the following file:
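The sample file is missing from this extract. The sketch below is illustrative only: all IP addresses are hypothetical (the actual values come from TABLE 6-1), and only the host names are taken from the examples later in this chapter. The last two lines are the new management-network entries; note that the IPMP test-address names used in Step 10 (medusa-s0-mgt-0 and so on) must also be resolvable, either here or through the Name Server:

```
127.0.0.1      localhost
192.168.1.150  medusa-s0      loghost
192.168.1.151  medusa-s0-sec
192.168.2.150  medusa-s0-mgt
192.168.2.151  medusa-s0-mgt-sec
```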
4. Set the netmask in the server blade's /etc/netmasks file.
For a blade using the sample addresses in TABLE 6-1, you would need to add the following line:
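The line itself is missing from this extract. Assuming the hypothetical 192.168.2.0 management subnet with a 24-bit netmask used in the illustrations above, the entry in /etc/netmasks would look like this:

```
192.168.2.0    255.255.255.0
```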
5. Disable routing, because the server blade is not being used to perform routing.
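The command is not shown in this extract. On Solaris 8 and 9, the conventional way to prevent a host from acting as a router is:

```
# touch /etc/notrouter
# ndd -set /dev/ip ip_forwarding 0
```

The first command makes the setting persistent across reboots; the second applies it immediately.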
6. Unplumb the existing network interfaces by typing:
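The commands are not shown in this extract; they presumably take the usual Solaris form for the two physical interfaces:

```
# ifconfig ce0 down unplumb
# ifconfig ce1 down unplumb
```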
If either or both these interfaces have not been previously configured, you may receive the following error message:
7. Create the new interfaces by typing:
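The commands are not shown in this extract. Following the VLAN naming formula above (VLAN 2 and VLAN 3 over ce0 and ce1), the four logical interfaces would be plumbed as:

```
# ifconfig ce2000 plumb
# ifconfig ce2001 plumb
# ifconfig ce3000 plumb
# ifconfig ce3001 plumb
```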
8. Create IPMP failover groups containing the new interfaces:
# ifconfig ce2000 group medusa_grp0-mgt
# ifconfig ce2001 group medusa_grp0-mgt
# ifconfig ce3000 group medusa_grp0
# ifconfig ce3001 group medusa_grp0
When you execute these commands, you might see the following type of syslog message:
Sep 3 00:49:58 medusa-s0 in.mpathd[298]: Failures cannot be detected on ce0 as no IFF_NOFAILOVER address is available
This simply warns you that failures cannot be detected until test addresses have been established on the interfaces.
9. Create an address on each new interface for data transmission and mark it to failover if an interface failure is detected.
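The commands are not shown in this extract. The sketch below is inferred from the sample hostname.ce* files in Step 11, which assign the primary and secondary host names to the four logical interfaces with failover enabled:

```
# ifconfig ce2000 medusa-s0-mgt netmask + broadcast + failover up
# ifconfig ce2001 medusa-s0-mgt-sec netmask + broadcast + failover up
# ifconfig ce3000 medusa-s0 netmask + broadcast + failover up
# ifconfig ce3001 medusa-s0-sec netmask + broadcast + failover up
```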
10. Configure a test address on each network interface.
These addresses will be used by in.mpathd to detect interface failures. To prevent them from being used by host applications for data communication, specify the word deprecated on the command line (see below).
Also, you need to use the -failover flag. This causes in.mpathd to use the address as a test address (in other words, an address that cannot pass to the other interface and therefore does not fail over):
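The commands are not shown in this extract. The sketch below is inferred from the sample hostname.ce* files in Step 11, which add one deprecated, non-failover test address to each logical interface:

```
# ifconfig ce2000 addif medusa-s0-mgt-0 netmask + broadcast + deprecated -failover up
# ifconfig ce2001 addif medusa-s0-mgt-1 netmask + broadcast + deprecated -failover up
# ifconfig ce3000 addif medusa-s0-0 netmask + broadcast + deprecated -failover up
# ifconfig ce3001 addif medusa-s0-1 netmask + broadcast + deprecated -failover up
```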
11. To enable the new interface configuration to survive a reboot, create files called hostname.ce2000, hostname.ce2001, hostname.ce3000, and hostname.ce3001 in the /etc directory.
A sample file for hostname.ce2000 is as follows:
medusa-s0-mgt netmask + broadcast + \
group medusa_grp0-mgt failover up \
addif medusa-s0-mgt-0 netmask + broadcast + \
deprecated -failover up
A sample file for hostname.ce2001 is as follows:
medusa-s0-mgt-sec netmask + broadcast + \
group medusa_grp0-mgt failover up \
addif medusa-s0-mgt-1 netmask + broadcast + \
deprecated -failover up
A sample file for hostname.ce3000 is as follows:
medusa-s0 netmask + broadcast + \
group medusa_grp0 failover up \
addif medusa-s0-0 netmask + broadcast + \
deprecated -failover up
A sample file for hostname.ce3001 is as follows:
medusa-s0-sec netmask + broadcast + \
group medusa_grp0 failover up \
addif medusa-s0-1 netmask + broadcast + \
deprecated -failover up
12. Inspect the configuration of the two network adapters by typing:
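The command is not shown in this extract; it is the standard Solaris command for listing all plumbed interfaces and their addresses (the output itself is omitted here):

```
# ifconfig -a
```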
The output above shows that eight addresses have been defined (the sample addresses from TABLE 6-1). The four IPMP test addresses are marked NOFAILOVER. This means that they will not be transferred to the surviving interface in the event of a failure.
13. Test IPMP by temporarily removing one SSC from the chassis.
This will cause the following error messages to be displayed on the console:
Sep 4 20:12:16 medusa-s0 in.mpathd[31]: NIC failure detected on ce3001 of group medusa_grp0
Sep 4 20:12:16 medusa-s0 in.mpathd[31]: Successfully failed over from NIC ce3001 to NIC ce3000
Copyright © 2004, Sun Microsystems, Inc. All rights reserved.