APPENDIX I


Configuring an HP Server Running the HP-UX Operating System

This appendix provides platform-specific host installation and configuration information to use when you connect a Sun StorEdge 3510 FC array or Sun StorEdge 3511 SATA array to an HP server running the HP-UX operating system.

For a list of supported host bus adapters, refer to the Sun StorEdge 3000 Family Release Notes for your array.

The Sun StorEdge 3510 FC array and Sun StorEdge 3511 SATA array support the HP-UX operating system, Level 11.0 and Level 11i, in dual-path configurations using Sun StorEdge Traffic Manager 3.0 failover drivers for the HP-UX operating system.

Refer to the Sun StorEdge Traffic Manager 3.0 Installation and User's Guide for the Hewlett Packard HP-UX Operating System for detailed instructions about setting up the device driver on the server and for additional information about configuring your HP server.

Customers interested in Sun StorEdge Traffic Manager 3.0 for multiplatform support should contact Sun Sales or visit:


For more information about multiplatform support, refer to:


The information in this appendix covers the following steps:

- Section I.1, Setting Up a Serial Port Connection
- Section I.2, Accessing the Firmware Application From an HP Server Running HP-UX
- Section I.3, Attaching the Disk Array
- Section I.4, Logical Volume Manager
- Section I.5, Definitions of Common Terms
- Section I.6, Creating a Physical Volume
- Section I.7, Creating a Volume Group
- Section I.8, Creating a Logical Volume
- Section I.9, Creating an HP-UX File System
- Section I.10, Mounting the File System Manually
- Section I.11, Mounting the File System Automatically
- Section I.12, Determining the Worldwide Name for HP-UX Hosts

I.1 Setting Up a Serial Port Connection

The RAID controller can be configured from a host system running a VT100 terminal emulation program, or from a Microsoft Windows terminal emulation program such as HyperTerminal.

If you are planning to access your array over an IP network or through a terminal server and only want to connect through a serial port for the initial configuration of the array, it is not necessary to configure a serial port connection from your HP host. For convenience, installers frequently perform the initial array configuration using a serial port on a portable computer.

If you want to use a Microsoft Windows portable computer for this initial array configuration, see Section F.1, Setting Up the Serial Port Connection for Windows 2000 systems.

If you prefer to connect through a serial port on your HP server, consult the hardware information for your HP host system to locate a serial port you can use for configuring the Sun StorEdge disk array. The system documentation also tells you what device file to use to access that port. Then set the serial port parameters on the server. See Section 4.9.2, Configuring the RS-232 Serial Port Connection for the parameters to use.

Note - The next section also shows how to use the Kermit utility to set these parameters.

Once you have configured your serial port, follow the instructions in the next section.

I.2 Accessing the Firmware Application From an HP Server Running HP-UX

The RAID controller can be configured from the host system by means of terminal emulators such as cu or Kermit. These instructions show the use of Kermit. For information about cu, refer to cu(1).

Note - You can also monitor and configure a RAID array over an IP network with Sun StorEdge Configuration Service after you assign an IP address to the array. For details, see Section 4.10, Setting Up Out-of-Band Management Over Ethernet and refer to the Sun StorEdge 3000 Family Configuration Service User's Guide.

To access the controller firmware through the serial port, perform the following steps:

1. Use a null modem serial cable to connect the COM port of the RAID array to an unused serial port on your host system.

In a null modem cable, the transmit and receive signals are swapped so that two standard serial interfaces can be connected directly.

Note - A DB9-to-DB25 serial cable adapter is included in your package contents for connecting the serial cable to a DB25 serial port on your host if you do not have a DB9 serial port.

  FIGURE I-1 RAID Array COM Port Connected Locally to the Serial Port of a Host System

Figure showing RAID array COM port connected locally to the COM port of a workstation or computer terminal.

2. Power on the array.

3. After the array is powered up, power on the HP server and log in as root, or become superuser if you are logged in as a user.

4. Start the Kermit program and set the parameters as shown.

Use the device-specific name for the serial port you are using. In the example, the serial port being configured is /dev/tty0p1.

# kermit
Executing /usr/share/lib/kermit/ckermit.ini for UNIX...
Good Morning!
C-Kermit 7.0.197, 8 Feb 2000, for HP-UX 11.00
Copyright (C) 1985, 2000,
Trustees of Columbia University in the City of New York.
Type ? or HELP for help.
(/) C-Kermit>set line /dev/tty0p1
(/) C-Kermit>set baud 38400
/dev/tty0p1, 38400 bps
(/) C-Kermit>set term byte 8
(/) C-Kermit>set carrier-watch off
(/) C-Kermit>C
Connecting to /dev/tty0p1, speed 38400.
The escape character is Ctrl-\ (ASCII 28, FS)
Type the escape character followed by C to get back,
or followed by ? to see other options.

Note - To return to the Kermit prompt, press Ctrl-\ and then type C. To exit Kermit, first return to the Kermit prompt and then type exit.

I.3 Attaching the Disk Array

The simplest way to configure a disk array is to use System Administration Manager (SAM), HP-UX's system administration tool. If SAM is not installed on your system, or if you prefer to use the command-line interface, the following procedures guide you through the task. For more information please consult the HP document, Configuring HP-UX for Peripherals.

Note - HP-UX requires a unique addressing scheme, called Volume Set Addressing, to be implemented in the target/LUN device semantics before an HP-UX system can access more than eight LUNs per target. Currently, HP original equipment manufacturer (OEM), EMC, and HDS arrays are recognized by their vendor ID (VID) and are designed to support these semantics with a Host Mode configuration specific to HP-UX. You can avoid this limitation by using a different target ID on the host channel to map each group of eight LUNs. For more information about the addressing limitations of HP-UX, refer to your HP-UX documentation.

1. Use the ioscan command to determine what addresses are available on the HBA to which you will be attaching the array.

2. Access the firmware application on the array and set the SCSI IDs of the host channels you will be using.

3. Map the partitions containing storage that you want to use to the appropriate host channels.

Partitions must be assigned to LUNs in sequential order, beginning at LUN 0.

4. Halt the operating system using the shutdown command.

5. Turn off all power to peripheral devices and then to the server.

6. Attach one or more host channels of the Sun StorEdge 3510 FC array or Sun StorEdge 3511 SATA array to the host bus adapters in the server, using the fiber-optic cables that are provided.

7. Turn on the power to the Sun StorEdge 3510 FC array or Sun StorEdge 3511 SATA array and all other peripheral devices. After they are initialized, power on the server and boot HP-UX. During the boot process, the operating system recognizes the new disk devices and builds device files for them.

8. Verify that you can see the new storage resources by running the ioscan command. You are now ready to use the storage.

Note - If you create and map new partitions to the array, you can have them recognized by the operating system without rebooting. Run the ioscan and the insf commands to discover the resources and to create their device files.
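The discovery step can be scripted. The sketch below assumes the two-line ioscan -fnC disk output format shown in Section I.6 (a class line followed by an indented device-file line) and simply prints each block device file it finds; on a live host you would pipe the real ioscan output into it instead of the sample text.

```shell
#!/bin/sh
# Sketch: list the block device files reported by `ioscan -fnC disk`.
# The sample text mirrors the ioscan output format shown in Section I.6;
# replace the printf with `ioscan -fnC disk` on a live HP-UX host.
list_disks() {
    awk '/\/dev\/dsk\// { print $1 }'
}

sample='disk  1 0/12/0/0.6.0 sdisk  CLAIMED   DEVICE   Sun StorEdge 3510
            /dev/dsk/c12t6d2 /dev/rdsk/c12t6d2'
printf '%s\n' "$sample" | list_disks
```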

I.4 Logical Volume Manager

The Logical Volume Manager (LVM) is a disk management system provided by HP in all versions of HP-UX 11. The LVM allows you to manage storage as logical volumes. This section describes some concepts used by the LVM and explains how to create logical volumes on your Sun StorEdge Fibre Channel Array. For more detailed information about the LVM, please consult lvm(7) and the HP publication Managing Systems and Workgroups: Guide for HP-UX System Administration (HP part number B2355-90742).

As with many system administration tasks, you can use SAM to create and maintain logical volumes. However, some functions can only be performed with HP-UX commands. The procedures in this appendix are performed using the command-line interface rather than SAM.

I.5 Definitions of Common Terms

Volume groups are HP-UX's method for dividing and allocating disk storage capacity. Volume groups can be used to subdivide a large partition of storage into smaller units of usable space called logical volumes.

Each volume group is divided into logical volumes, which are seen by the applications as individual disks. They can be accessed as either character or block devices and can contain their own file systems.

The underlying physical storage in a volume group consists of one or more physical volumes. A physical volume can be a single physical disk or a partition of a disk array.

Each physical volume is divided into units called physical extents. The default size of these units is 4 Mbyte, but they can range from 1 Mbyte to 256 Mbyte. The maximum number of physical extents that a volume group can contain is 65,535. With the default size of 4 Mbyte, this limits the size of the volume group to 255 Gbyte.

To create a volume group larger than 255 Gbyte, you must increase the size of the physical extents when creating the volume group. Refer to vgcreate(1m) for further information.
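The trade-off between extent size and maximum volume group capacity follows directly from the 65,535-extent ceiling described above; this quick arithmetic sketch tabulates a few cases.

```shell
#!/bin/sh
# Maximum VG capacity = 65,535 physical extents x PE size.
# With the 4 MB default this gives the 255 GB limit noted above.
for pe_mb in 4 8 16 32; do
    echo "PE size ${pe_mb} MB -> max VG size $(( 65535 * pe_mb / 1024 )) GB"
done
```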

I.6 Creating a Physical Volume

To use a storage resource in the LVM, it must first be initialized into a physical volume (also called an LVM disk).

1. Log in as root, or become superuser if you are not logged in with root user privileges.

2. Select one or more partitions on the array that you want to use. The output of ioscan(1M) shows the disks attached to the system and their device names:

# ioscan -fnC disk
Class  I  H/W Path      Driver  S/W State  H/W Type  Description
disk   1  0/12/0/0.6.0  sdisk   CLAIMED    DEVICE    Sun StorEdge 3510
              /dev/dsk/c12t6d2   /dev/rdsk/c12t6d2

3. Initialize each partition as an LVM disk with the pvcreate command. For example, type:

# pvcreate /dev/rdsk/c12t6d2


Caution - This process results in the loss of any data that resides on the partition.
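When several partitions need to be initialized, the pvcreate step can be looped. This is a dry-run sketch: the echo only prints the commands, and the second device name is illustrative. Substitute the raw device files from your own ioscan output, and remove the echo only when you are certain the partitions hold no data you need.

```shell
#!/bin/sh
# Dry-run sketch: initialize each raw device as an LVM physical volume.
# Device names are illustrative; take them from your own ioscan output.
for rdev in /dev/rdsk/c12t6d2 /dev/rdsk/c12t6d3; do
    echo pvcreate "$rdev"    # remove `echo` to actually run pvcreate
done
```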

I.7 Creating a Volume Group

The volume group contains the physical resources that you can use to create usable storage resources for your applications.

1. Create a directory for the volume group and a device file for the group in that directory:

# mkdir /dev/vgmynewvg
# mknod /dev/vgmynewvg/group c 64 0x060000

The name of the directory is the name of the volume group. By default, HP-UX uses names of the format vgNN, but you can choose any name that is unique within the list of volume groups.

In the preceding example, the mknod command has the following arguments:

- /dev/vgmynewvg/group - the device file to create for the volume group
- c - create a character (raw) device file
- 64 - the major number used for all LVM group files
- 0x060000 - the minor number, in the form 0xNN0000, where NN is a two-digit hexadecimal number that must be unique for each volume group on the system
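The minor number encodes the volume group number in its high byte; this one-liner sketch shows how a volume group number of 6 yields the 0x060000 value used above.

```shell
#!/bin/sh
# Build the LVM group-file minor number (0xNN0000) for a given
# volume group number NN.
vg_minor() {
    printf '0x%02x0000\n' "$1"
}

vg_minor 6    # the value used in the mknod example above
```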

To associate the physical volume with a volume group, use the vgcreate command:

# vgcreate /dev/vgmynewvg /dev/dsk/c12t6d2

To verify the creation and view the volume group properties, use the vgdisplay command:

# vgdisplay vgmynewvg
--- Volume groups ---
VG Name                     /dev/vgmynewvg
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      0
Open LV                     0
Max PV                      16
Cur PV                      1
Act PV                      1
Max PE per PV               2167
PE Size (Mbytes)            4
Total PE                    2167
Alloc PE                    0
Free PE                     2167
Total PVG                   0

In the output of vgdisplay, the Total PE field displays the number of physical extents in the volume group.

The size of each physical extent is displayed in the PE Size field (the default is 4 Mbyte), so the total capacity of this volume group is 2167 x 4 Mbyte = 8668 Mbyte.

The Alloc PE field shows the number of physical extents allocated to logical volumes. At this point, the Alloc PE field is zero because we have not assigned any of this volume group's capacity to logical volumes.
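The capacity arithmetic above can be double-checked in the shell, using the extent values quoted from the vgdisplay output (2167 extents of 4 Mbyte each).

```shell
#!/bin/sh
# Capacity check: Total PE x PE Size, using the values quoted above.
total_pe=2167
pe_size_mb=4
echo "VG capacity: $(( total_pe * pe_size_mb )) MB"
```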

I.8 Creating a Logical Volume

To create a logical volume within the volume group, use the lvcreate command with the -L option to specify the size of the logical volume in megabytes. The logical volume size should be a multiple of the physical extent size. In this example, a logical volume of 4092 Mbyte is created:

# lvcreate -L 4092 /dev/vgmynewvg

Both character and block device files for the new logical volume are created in the volume group directory:

# ls /dev/vgmynewvg
group  lvol1  rlvol1

Applications should use these names to access the logical volumes. Unless you specify otherwise, HP-UX creates names in the form shown in the example. To specify custom names for logical volumes, refer to lvcreate(1M).

I.9 Creating an HP-UX File System

The following command creates a file system on the logical volume created in the previous steps.

# /sbin/newfs -F vxfs /dev/vgmynewvg/rlvol1

I.10 Mounting the File System Manually

The process of incorporating a file system into the existing directory structure is known as "mounting the file system." The files, although present on the disk, are not accessible to users until they are mounted.

1. Create a directory to be the mount point for your new file system:

# mkdir /usr/local/myfs

2. To mount your file system, type the following:

# mount /dev/vgmynewvg/lvol1 /usr/local/myfs

I.11 Mounting the File System Automatically

By placing information about your file system in the fstab file, you can have HP-UX mount the file system automatically during bootup. You can also use the name of the mount point in mount commands that you issue from the console.

1. Make a copy of the existing fstab file:

# cp /etc/fstab /etc/fstab.orig

2. To include the file system created in the example, add the following line to the file /etc/fstab:

/dev/vgmynewvg/lvol1 /usr/local/myfs vxfs delaylog 0 2

Refer to the entry for fstab(4) for details about creating /etc/fstab entries.
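An fstab entry has six whitespace-separated fields (device, mount point, file system type, options, backup frequency, and fsck pass number); a quick awk check such as this sketch catches field-count mistakes before the next boot.

```shell
#!/bin/sh
# Sanity-check an fstab entry: it should have exactly six fields.
line='/dev/vgmynewvg/lvol1 /usr/local/myfs vxfs delaylog 0 2'
echo "$line" | awk 'NF == 6 { print "ok: " $1 " on " $2 }
                    NF != 6 { print "bad field count: " NF }'
```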

3. To check to see if fstab was set up correctly, type:

# mount -a

If the mount point and the fstab file are correctly set up, no errors are displayed.

4. To verify that the file system is mounted and list all mounted file systems, type:

# bdf

5. To unmount the file system, type:

# umount /usr/local/myfs

I.12 Determining the Worldwide Name for HP-UX Hosts

Before you can create host filters, you need to know the worldwide name (WWN) for the FC HBA that connects your host to your FC array.

For supported HP-UX host HBAs, follow these steps:

1. Determine the device name by typing the command:

# ioscan -fnC fc

2. Type the following command, where device-name is the device file name reported by ioscan:

# fcmsutil device-name

Output similar to the following is displayed:

 Screen capture showing the output of the fcmsutil command, including the HBA's worldwide name.

The Node worldwide name shown is the WWN you use when configuring the RAID controller.
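Extracting the Node WWN can be scripted as well. The sketch below assumes fcmsutil reports the value in a line of the form "N_Port Node World Wide Name = 0x...", and the WWN shown in the sample is illustrative; on a live host, pipe the real fcmsutil output in instead.

```shell
#!/bin/sh
# Sketch: pull the Node WWN out of fcmsutil-style output.
# The sample line and WWN value are illustrative; replace the printf
# with `fcmsutil device-name` on a live HP-UX host.
sample='N_Port Node World Wide Name = 0x50060b000023e701'
printf '%s\n' "$sample" | awk '/Node World Wide Name/ { print $NF }'
```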