Oracle® Hierarchical Storage Manager and StorageTek QFS Software Installation and Configuration Guide, Release 6.0, E78137-01
Carry out the storage configuration tasks outlined in this chapter before proceeding further with Oracle HSM installation and configuration. The chapter outlines the following topics:
In an Oracle HSM file system, primary disk or solid-state disk devices store files that are being actively used and modified. Follow the guidelines below when configuring disk or solid-state disk devices for the cache.
To estimate a starting capacity for the primary cache, decide how much data each file system will hold when full.
Increase this starting capacity by 10% to allow for file-system metadata.
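As a sketch of this estimate (the file-system sizes used here are hypothetical):

```shell
# Estimate primary-cache capacity: sum the expected full capacity of
# each file system, then add 10% for file-system metadata.
# Sizes are in gigabytes.
estimate_cache_gb() {
    total=0
    for size in "$@"; do
        total=$((total + size))
    done
    # Add 10% for metadata, rounding up to the next gigabyte.
    echo $(( (total * 110 + 99) / 100 ))
}

# Two hypothetical file systems holding 4000 GB and 6000 GB when full:
estimate_cache_gb 4000 6000    # prints 11000
```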
If you are preparing for a high-performance ma-type file system, configure hardware for the mm metadata devices. One hardware-controlled, four-disk RAID 10 (1+0) volume group per mm metadata device is ideal. Consider using solid-state disk devices for maximum performance.
The characteristics of striped-mirror, RAID 10 arrays are ideal for storing Oracle HSM metadata. RAID 10 storage hardware is highly redundant, so critical metadata is protected. Throughput is higher and latency is lower than in most other RAID configurations.
An array that is controlled by dedicated controller hardware generally offers higher performance than an array controlled by software running on a shared, general-purpose processor.
Solid-state devices are particularly useful for storing metadata that is, by its nature, frequently updated and frequently read.
If you are using an external disk array for primary cache storage, configure 3+1 or 4+1 RAID 5 volume groups for each md or mr device in the file-system configuration. Configure one logical volume (LUN) on each volume group.
For a given number of disks, smaller, 3+1 and 4+1 RAID 5 volume groups provide greater parallelism and thus higher input/output (I/O) performance than larger volume groups. The individual disk devices in RAID 5 volume groups do not operate independently—from an I/O perspective, each volume group acts much like a single device. So dividing a given number of disks into 3+1 and 4+1 volume groups creates more independent devices, better parallelism, and less I/O contention than otherwise equivalent, larger configurations.
Smaller RAID groups offer less capacity, due to the higher ratio of parity to storage. But, for most users, this is more than offset by the performance gains. In an archiving file system, the small reduction in disk cache capacity is often completely offset by the comparatively unlimited capacity available in the archive.
Configuring multiple logical volumes (LUNs) on a volume group makes I/O to the logically separate volumes contend for a set of resources that can service only one I/O at a time. This increases I/O-related overhead and reduces throughput.
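As a back-of-the-envelope illustration of this trade-off (the disk counts below are illustrative, not a recommendation):

```shell
# Compare how a fixed pool of disks divides into RAID 5 volume groups.
# Each group acts as one independent device from an I/O perspective;
# a 3+1 group uses 4 disks and yields 3 disks of usable capacity.
raid5_groups() { echo $(( $1 / $2 )); }             # independent devices
raid5_usable() { echo $(( ($1 / $2) * ($2 - 1) )); } # usable disk-equivalents

# 16 disks as 3+1 groups: 4 independent devices, 12 disks of capacity.
raid5_groups 16 4    # prints 4
raid5_usable 16 4    # prints 12
# The same 16 disks as a single 15+1 group: one device, 15 disks of
# capacity, but every I/O contends for the one volume group.
raid5_groups 16 16   # prints 1
raid5_usable 16 16   # prints 15
```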
Next, start Configuring Archival Storage.
Carry out the following tasks:
Zone the storage area network (SAN) to allow communication between the drive and the host bus adapter.
Make sure that the host can see the devices on the SAN. Enter the Solaris configuration administration command cfgadm with the -al (attachment-points list) and -o show_SCSI_LUN options. Examine the output for the World Wide Name (WWN) of the drive port.
The first column of the output displays the attachment-point ID (Ap_Id), which consists of the controller number of the host bus adapter and the WWN, separated by colons. The -o show_SCSI_LUN option displays all LUNs on the node if the node is the bridged drive controlling a media changer via an ADI interface.
root@solaris:~# cfgadm -al -o show_SCSI_LUN
Ap_Id                   Type  Receptacle  Occupant      Condition
c2::500104f000937528    tape  connected   configured    unknown
c3::50060160082006e2,0  tape  connected   unconfigured  unknown
If the drive's WWN is not listed in the output of cfgadm -al -o show_SCSI_LUN, the drive is not visible, and something is wrong with the SAN configuration. Recheck the SAN connections and the zoning configuration. Then repeat the preceding step.
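This visibility check can also be scripted; the sketch below searches sample cfgadm output (taken from the example above) for a drive's WWN:

```shell
# Return success if a drive's WWN appears in cfgadm -al -o show_SCSI_LUN
# output; otherwise the SAN zoning needs rechecking.
# In production the text would come from: cfgadm -al -o show_SCSI_LUN
wwn_visible() {
    echo "$1" | grep -q "$2"
}

sample_output='c2::500104f000937528   tape  connected  configured    unknown
c3::50060160082006e2,0 tape  connected  unconfigured  unknown'

if wwn_visible "$sample_output" "50060160082006e2"; then
    echo "drive visible"
else
    echo "recheck SAN zoning"
fi
```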
If the output of the cfgadm -al command shows that a drive is unconfigured, run the command again, this time using the -c (configure) switch. The command builds the necessary device files in /dev/rmt:
root@solaris:~# cfgadm -al
Ap_Id                   Type  Receptacle  Occupant      Condition
c2::500104f000937528    tape  connected   configured    unknown
c3::50060160082006e2,0  tape  connected   unconfigured  unknown
root@solaris:~# cfgadm -c configure 50060160082006e2,0
Verify the association between the device name and the World Wide Name. Use the command ls -al /dev/rmt | grep WWN, where WWN is the World Wide Name.
root@solaris:~# ls -al /dev/rmt | grep 50060160082006e2,0
lrwxrwxrwx 1 root root 94 May 20 05:05 3un -> \
../../devices/pci@1f,700000/SUNW,qlc@2/fp@0,0/st@w50060160082006e2,0:
If you have the recommended minimum Solaris patch level, stop here and go to Configuring Archival Disk Storage.
Otherwise, get the target ID for your device.
Edit /kernel/drv/st.conf. Add the vendor-specified entry to the tape-config-list, specifying the target ID determined above.
Force reload the st module. Use the command update_drv -f st.
root@solaris:~# update_drv -f st
root@solaris:~#
Next, go to Configuring Archival Disk Storage.
You can use ZFS, UFS, QFS, or NFS file systems for the volumes in a disk archive. For best archiving and staging performance, configure file systems and underlying storage to maximize the bandwidth available for archiving and staging, while minimizing contention between archiving and staging jobs and between Oracle HSM and other applications. Observe the following guidelines:
Use dedicated file systems, so that Oracle HSM does not contend with other applications and users for access to the file system.
Configure one Oracle HSM archival disk volume per file system or ZFS data set and set a quota for the amount of storage space that the archival disk volume can occupy.
When the storage space for an archive volume is dynamically allocated from a pool of shared disk devices, make sure that the underlying physical storage is not oversubscribed. Quotas help to keep Oracle HSM archiving processes from trying to use more of the aggregate storage than it has available.
Size each file system at between 10 and 20 terabytes, if possible.
When the available disk resources allow, configure multiple file systems, so that individual Oracle HSM archiving and staging jobs do not contend with each other for access to the file system. Between fifteen and thirty archival file systems are optimum.
Configure each file system on dedicated devices, so that individual archiving and staging jobs do not contend with each other for access to the same underlying hardware.
Do not use the subdirectories of a single file system as separate archival volumes.
Do not configure two or more file systems on LUNs that reside on the same physical drive or RAID group.
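As a rough sizing sketch under the guidelines above (the 300 TB pool size is hypothetical):

```shell
# Divide an archive disk pool into file systems of 10-20 TB each,
# aiming for the recommended optimum of 15 to 30 file systems.
plan_archive_fs() {
    pool_tb=$1   # total pool capacity, in terabytes
    fs_tb=$2     # target size per file system, in terabytes
    echo $(( pool_tb / fs_tb ))
}

# A hypothetical 300 TB pool at 15 TB per file system yields 20
# archival file systems, within the 15-30 optimum range:
plan_archive_fs 300 15    # prints 20
```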
Now go to Configuring Archival Tape Storage.
Carry out the following tasks:
Determine the Order in Which Drives are Installed in the Library
Configure Direct-Attached Libraries (if any).
If your automated library contains more than one drive, the order of the drives in the Oracle HSM master configuration file (mcf) must be the same as the order in which the drives are seen by the library controller. This order can be different from the order in which devices are seen on the host and reported in the host /var/adm/messages file.
For each Oracle HSM metadata server and datamover host, determine the drive order by carrying out the tasks listed below:
Gather Drive Information for the Library and the Solaris Host
Either Map the Drives in a Direct-Attached Library to Solaris Device Names or Map the Drives in an ACSLS-Attached Library to Solaris Device Names, depending on the equipment you are using.
Consult the library documentation. Note how drives and targets are identified. If there is a local operator panel, see how it can be used to determine drive order.
If the library has a local operator panel, use it to determine the order in which drives attach to the controller. Determine the SCSI target identifier or World Wide Name of each drive.
Log in to the Solaris host as root.
root@solaris:~#
List the Solaris logical device names in /dev/rmt/, redirecting the output to a text file.
In the example, we redirect the listings for /dev/rmt/ to the file device-mappings.txt in the root user's home directory:
root@solaris:~# ls -l /dev/rmt/ > /root/device-mappings.txt
Now, Map the Drives in a Direct-Attached Library to Solaris Device Names or Map the Drives in an ACSLS-Attached Library to Solaris Device Names.
For each Solaris logical drive name listed in /dev/rmt/ and each drive that the library assigns to the Oracle HSM server host, carry out the following procedure:
If you are not already logged in to the Oracle HSM Solaris host, log in as root.
root@solaris:~#
In a text editor, open the device mappings file that you created in the procedure "Gather Drive Information for the Library and the Solaris Host", and organize it into a simple table.
You will need to refer to this information in subsequent steps. In the example, we are using the vi editor to delete the permissions, ownership, and date attributes from the /dev/rmt/ list, while adding headers and space for library device information:
root@solaris:~# vi /root/device-mappings.txt
LIBRARY    SOLARIS     SOLARIS
DEVICE     LOGICAL     PHYSICAL
NUMBER     DEVICE      DEVICE
-------    ----------  -------------------------------------------
           /dev/rmt/0  -> ../../devices/pci@1f,4000/scsi@2,1/st@2,0:
           /dev/rmt/1  -> ../../devices/pci@1f,4000/scsi@4,1/st@5,0:
           /dev/rmt/2  -> ../../devices/pci@1f,4000/scsi@4,1/st@6,0:
           /dev/rmt/3  -> ../../devices/pci@1f,4000/scsi@4/st@1,0:
lrwxrwxrwx 1 root root 40 Mar 18 2014 /dev/rmt/4 -> ../../devices/pci@1f,4000/scsi@4/st@2,0:
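The same attribute-stripping can be scripted rather than done by hand in vi; this sketch runs sample ls -l lines (taken from the example) through awk, leaving the first column blank for the library device numbers:

```shell
# Reduce 'ls -l /dev/rmt/' output to logical->physical device pairs,
# with leading blank space reserved for the library device number.
ls_sample='lrwxrwxrwx 1 root root 40 Mar 18 2014 /dev/rmt/0 -> ../../devices/pci@1f,4000/scsi@2,1/st@2,0:
lrwxrwxrwx 1 root root 40 Mar 18 2014 /dev/rmt/1 -> ../../devices/pci@1f,4000/scsi@4,1/st@5,0:'

# Fields 9-11 of each line are the link name, the arrow, and the target.
table=$(echo "$ls_sample" | awk '{ printf "        %-11s %s %s\n", $9, $10, $11 }')
echo "$table"
```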
On the library, make sure that all drives are empty.
Load a tape into the first drive in the library that you have not yet mapped to a Solaris logical device name.
For the purposes of the examples below, we load an LTO4 tape into an HP Ultrium LTO4 tape drive.
Identify the Solaris /dev/rmt/ entry that corresponds to the drive that mounts the tape. Until you identify the drive, run the command mt -f /dev/rmt/number status, where number identifies the drive in /dev/rmt/.
In the example, the drive at /dev/rmt/0 is empty, but the drive at /dev/rmt/1 holds the tape. So the drive that the library identifies as drive 1 corresponds to Solaris /dev/rmt/1:
root@solaris:~# mt -f /dev/rmt/0 status
/dev/rmt/0: no tape loaded or drive offline
root@solaris:~# mt -f /dev/rmt/1 status
HP Ultrium LTO 4 tape drive:
   sense key(0x0)= No Additional Sense   residual= 0   retries= 0
   file no= 0   block no= 3
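The tape-in-drive test that drives this loop can be expressed as a small helper; the status strings below reproduce the example output and may vary by drive model:

```shell
# Classify 'mt -f /dev/rmt/N status' output: an empty drive reports
# "no tape loaded or drive offline"; a loaded drive reports its product
# string and sense data instead.
drive_loaded() {
    case "$1" in
        *"no tape loaded"*|*"drive offline"*) return 1 ;;
        *) return 0 ;;
    esac
}

empty_status='/dev/rmt/0: no tape loaded or drive offline'
loaded_status='HP Ultrium LTO 4 tape drive:
   sense key(0x0)= No Additional Sense   residual= 0   retries= 0'

drive_loaded "$empty_status"  || echo "/dev/rmt/0 is empty"
drive_loaded "$loaded_status" && echo "/dev/rmt/1 holds the tape"
```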
In the device-mappings file, locate the entry for the Solaris device that holds the tape, and enter the library's device identifier in the space provided. Then save the file.
In the example, enter 1 in the LIBRARY DEVICE NUMBER field of the row for /dev/rmt/1:
root@solaris:~# vi /root/device-mappings.txt
LIBRARY SOLARIS SOLARIS
DEVICE LOGICAL PHYSICAL
NUMBER DEVICE DEVICE
------- ---------- -------------------------------------------
           /dev/rmt/0  -> ../../devices/pci@1f,4000/scsi@2,1/st@2,0:
   1       /dev/rmt/1  -> ../../devices/pci@1f,4000/scsi@4,1/st@5,0:
           /dev/rmt/2  -> ../../devices/pci@1f,4000/scsi@4,1/st@6,0:
           /dev/rmt/3  -> ../../devices/pci@1f,4000/scsi@4/st@1,0:
:w
Unload the tape.
Repeat this procedure until the device-mappings file holds Solaris logical device names for all devices that the library assigns to the Oracle HSM host. Then save the file and close the editor.
root@solaris:~# vi /root/device-mappings.txt
LIBRARY    SOLARIS     SOLARIS
DEVICE     LOGICAL     PHYSICAL
NUMBER     DEVICE      DEVICE
-------    ----------  -------------------------------------------
   2       /dev/rmt/0  -> ../../devices/pci@1f,4000/scsi@2,1/st@2,0:
   1       /dev/rmt/1  -> ../../devices/pci@1f,4000/scsi@4,1/st@5,0:
   3       /dev/rmt/2  -> ../../devices/pci@1f,4000/scsi@4,1/st@6,0:
   4       /dev/rmt/3  -> ../../devices/pci@1f,4000/scsi@4/st@1,0:
:wq
root@solaris:~#
Keep the mappings file.
You will need the information for Configuring the Basic File System (Chapter 6), and you may wish to include it when Backing Up the Oracle HSM Configuration (Chapter 13).
Next, go to "Configure Direct-Attached Libraries".
If you are not already logged in to the Oracle HSM Solaris host, log in as root.
root@solaris:~#
In a text editor, open the device mappings file that you created in the procedure "Gather Drive Information for the Library and the Solaris Host", and organize it into a simple table.
You will need to refer to this information in subsequent steps. In the example, we are using the vi editor to delete the permissions, ownership, and date attributes from the /dev/rmt/ list, while adding headers and space for library device information:
root@solaris:~# vi /root/device-mappings.txt
LOGICAL DEVICE DEVICE SERIAL NUMBER ACSLS DEVICE ADDRESS
-------------- -------------------- ----------------------------------
/dev/rmt/0
/dev/rmt/1
/dev/rmt/2
/dev/rmt/3
For each logical device name listed in /dev/rmt/, display the device serial number. Use the command luxadm display /dev/rmt/number, where number identifies the drive in /dev/rmt/.
In the example, we obtain the serial number HU92K00200 for device /dev/rmt/0:
root@solaris:~# luxadm display /dev/rmt/0
DEVICE PROPERTIES for tape: /dev/rmt/0
  Vendor:       HP
  Product ID:   Ultrium 4-SCSI
  Revision:     G25W
  Serial Num:   HU92K00200
  ...
  Path status:  Ready
root@solaris:~#
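Extracting the serial number can also be scripted; this sketch parses the luxadm output shown above (reproduced here as a sample string):

```shell
# Pull the 'Serial Num:' field out of 'luxadm display /dev/rmt/N' output.
# In production the text would come from: luxadm display /dev/rmt/0
luxadm_sample='DEVICE PROPERTIES for tape: /dev/rmt/0
  Vendor:       HP
  Product ID:   Ultrium 4-SCSI
  Revision:     G25W
  Serial Num:   HU92K00200
  Path status:  Ready'

# The serial number is the third whitespace-delimited field of the
# 'Serial Num:' line.
serial=$(echo "$luxadm_sample" | awk '/Serial Num:/ { print $3 }')
echo "$serial"    # prints HU92K00200
```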
Enter the serial number in the corresponding row of the device-mappings.txt file.
In the example, we record the serial number of device /dev/rmt/0, HU92K00200, in the row for logical device /dev/rmt/0:
root@solaris:~# vi /root/device-mappings.txt
LOGICAL DEVICE  DEVICE SERIAL NUMBER  ACSLS DEVICE ADDRESS
--------------  --------------------  ----------------------------------
/dev/rmt/0      HU92K00200
/dev/rmt/1
/dev/rmt/2
/dev/rmt/3
:wq
root@solaris:~#
Repeat the two preceding steps until you have identified the device serial numbers for all logical devices listed in /dev/rmt/ and recorded the results in the device-mappings.txt file.
In the example, there are four logical devices:
root@solaris:~# vi /root/device-mappings.txt
LOGICAL DEVICE  DEVICE SERIAL NUMBER  ACSLS DEVICE ADDRESS
--------------  --------------------  ----------------------------------
/dev/rmt/0      HU92K00200
/dev/rmt/1      HU92K00208
/dev/rmt/2      HU92K00339
/dev/rmt/3      HU92K00289
:w
root@solaris:~#
For each device serial number mapped to /dev/rmt/, obtain the corresponding ACSLS drive address. Use the ACSLS command display drive * -f serial_num.
In the example, we obtain the ACSLS addresses of devices HU92K00200 (/dev/rmt/0), HU92K00208 (/dev/rmt/1), HU92K00339 (/dev/rmt/2), and HU92K00289 (/dev/rmt/3):
ACSSA> display drive * -f serial_num
2014-03-29 10:49:12   Display Drive
Acs  Lsm  Panel  Drive  Serial_num
0    2    10     12     331000049255
0    2    10     16     331002031352
0    2    10     17     HU92K00200
0    2    10     18     HU92K00208
0    3    10     10     HU92K00339
0    3    10     11     HU92K00189
0    3    10     12     HU92K00289
Record each ACSLS drive address in the corresponding row of the device-mappings.txt file. Save the file, and close the text editor.
root@solaris:~# vi /root/device-mappings.txt
LOGICAL DEVICE  DEVICE SERIAL NUMBER  ACSLS DEVICE ADDRESS
--------------  --------------------  ----------------------------------
/dev/rmt/0      HU92K00200            (acs=0, lsm=2, panel=10, drive=17)
/dev/rmt/1      HU92K00208            (acs=0, lsm=2, panel=10, drive=18)
/dev/rmt/2      HU92K00339            (acs=0, lsm=3, panel=10, drive=10)
/dev/rmt/3      HU92K00289            (acs=0, lsm=3, panel=10, drive=12)
:wq
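Once the display drive output is in hand, the lookup can be scripted; this sketch joins a serial number to its ACSLS address using rows from the example output:

```shell
# Look up the ACSLS address (acs, lsm, panel, drive) for a device serial
# number in 'display drive * -f serial_num' output. The rows below are
# taken from the example; in production they would come from ACSLS.
acsls_output='0 2 10 17 HU92K00200
0 2 10 18 HU92K00208
0 3 10 10 HU92K00339
0 3 10 12 HU92K00289'

acsls_address() {
    echo "$acsls_output" | awk -v sn="$1" \
        '$5 == sn { printf "(acs=%s, lsm=%s, panel=%s, drive=%s)\n", $1, $2, $3, $4 }'
}

acsls_address HU92K00200    # prints (acs=0, lsm=2, panel=10, drive=17)
```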
Keep the mappings file.
You will need the information for Configuring the Basic File System (Chapter 6), and you may wish to include it when Backing Up the Oracle HSM Configuration (Chapter 13).
You configure Oracle StorageTek ACSLS network-attached libraries when you configure archiving file systems. So, if you are planning a high-availability file system, go to "Configuring Storage for High-Availability File Systems". Otherwise, go to "Installing Oracle HSM and QFS Software".
To configure a direct-attached tape library, you must physically connect the hardware and, in some cases, configure the SCSI driver (Oracle HSM controls library robotics via the generic sgen driver rather than the samst driver used by SAM-QFS prior to release 5.4). Proceed as follows:
Physically connect the library and drives to the Oracle HSM server host.
If you are installing Oracle HSM for the first time or upgrading an Oracle HSM or SAM-QFS 5.4 configuration on Solaris 11, stop once the hardware has been physically connected.
Under Solaris 11, sgen is the default SCSI driver, so the Oracle HSM installation software can automatically update driver aliases and configuration files.
If you are installing Oracle HSM on a Solaris 10 system, see if one of the driver aliases in the list below is assigned to the sgen driver. Use the command grep scs.*,08 /etc/driver_aliases.
The sgen driver may be assigned any of the following aliases:
scsa,08.bfcp and/or scsa,08.bvhci
scsiclass,08
In the example, Solaris is using the scsiclass,08 alias for the sgen driver:
root@solaris:~# grep scs.*,08 /etc/driver_aliases
sgen "scsiclass,08"
root@solaris:~#
If the grep command returns sgen "alias", where alias is an alias in the list above, the sgen driver is installed and correctly assigned to the alias. So, if you are configuring a high-availability file system, see Configuring Storage for High-Availability File Systems. Otherwise, go to "Installing Oracle HSM and QFS Software".
If the grep command returns some-driver "alias", where some-driver is some driver other than sgen and alias is one of the aliases listed above, then the alias is already assigned to the other driver. So Create a Path-Oriented Alias for the sgen Driver.
If the command grep scs.*,08 /etc/driver_aliases does not return any output, the sgen driver is not installed. So install it. Use the command add_drv -i scsiclass,08 sgen.
In the example, the grep command does not return anything. So we install the sgen driver:
root@solaris:~# grep scs.*,08 /etc/driver_aliases
root@solaris:~# add_drv -i scsiclass,08 sgen
If the command add_drv -i scsiclass,08 sgen returns the message Driver (sgen) is already installed, the driver is already installed but not attached. So attach it now. Use the command update_drv -a -i scsiclass,08 sgen.
In the example, the add_drv command indicates that the driver is already installed. So we attach the driver:
root@solaris:~# add_drv -i scsiclass,08 sgen
Driver (sgen) is already installed.
root@solaris:~# update_drv -a -i scsiclass,08 sgen
If the command grep scs.*,08 /etc/driver_aliases shows that the alias scsiclass,08 is assigned to the sgen driver, the driver is properly configured.
root@solaris:~# grep scs.*,08 /etc/driver_aliases
sgen "scsiclass,08"
root@solaris:~#
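The decision tree in the steps above can be summarized as a small helper; the driver_aliases lines below are sample inputs (sdrv stands in for a hypothetical conflicting driver):

```shell
# Decide what to do based on which driver owns the scsiclass,08 alias.
# In production the input would be: grep 'scs.*,08' /etc/driver_aliases
sgen_action() {
    case "$1" in
        "")    echo "install: add_drv -i scsiclass,08 sgen" ;;
        sgen*) echo "configured: nothing to do" ;;
        *)     echo "alias taken: create a path-oriented alias" ;;
    esac
}

sgen_action 'sgen "scsiclass,08"'   # prints configured: nothing to do
sgen_action ''                      # prints install: add_drv -i scsiclass,08 sgen
sgen_action 'sdrv "scsiclass,08"'   # prints alias taken: create a path-oriented alias
```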
If you are configuring a high-availability file system, see Configuring Storage for High-Availability File Systems.
Otherwise, go to "Installing Oracle HSM and QFS Software".
Create a Path-Oriented Alias for the sgen Driver
If the expected sgen alias is already assigned to another driver, you need to create a path-oriented alias that attaches the specified library using sgen, without interfering with existing driver assignments. Proceed as follows:
Log in to the Oracle HSM server host as root.
root@solaris:~#
Display the system configuration. Use the command cfgadm -vl.
Note that cfgadm output is formatted using a two-row header and two rows per record:
root@solaris:~# cfgadm -vl
Ap_Id                 Receptacle  Occupant    Condition  Information
When                  Type        Busy        Phys_Id
c3                    connected   configured  unknown    unavailable
scsi-sas              n           /devices/pci@0/pci@0/pci@2/scsi@0:scsi
c5::500104f0008e6d78  connected   configured  unknown    unavailable
med-changer           y           /devices/pci@0/.../SUNW,qlc@0,1/fp@0,0:fc::500104f0008e6d78
...
root@solaris:~#
In the output of cfgadm -vl, find the record for the library. Look for med-changer in the Type column of the second row of each record.
In the example, we find the library in the second record:
root@solaris:~# cfgadm -vl
Ap_Id Receptacle Occupant Condition Information When
Type Busy Phys_Id
c3 connected configured unknown unavailable
scsi-sas n /devices/pci@0/pci@0/pci@2/scsi@0:scsi
c5::500104f0008e6d78 connected configured unknown unavailable
med-changer
y /devices/pci@0/.../SUNW,qlc@0,1/fp@0,0:fc::500104f0008e6d78
...
root@solaris:~#
Get the physical path that will serve as the new path-oriented alias. Remove the substring /devices from the entry in the Phys_Id column in the output of cfgadm -vl.
In the example, the Phys_Id column of the media changer record contains the path /devices/pci@0/pci@0/pci@9/SUNW,qlc@0,1/fp@0,0:fc::500104f0008e6d78, so we select the portion of the string following /devices as the alias (note that this physical path has been abbreviated to fit the space available below):
root@solaris:~# grep scsiclass,08 /etc/driver_aliases
sdrv "scsiclass,08"
root@solaris:~# cfgadm -vl
Ap_Id Receptacle Occupant Condition Information When
Type Busy Phys_Id
c3 connected configured unknown unavailable
scsi-sas n /devices/pci@0/pci@0/pci@2/scsi@0:scsi
c5::500104f0008e6d78 connected configured unknown unavailable
med-changer y /devices/pci@0/.../SUNW,qlc@0,1/fp@0,0:fc::500104f0008e6d78
...
root@solaris:~#
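Deriving the alias from the Phys_Id amounts to a simple prefix strip; this sketch uses the physical path from the example:

```shell
# Derive the path-oriented alias by stripping the leading '/devices'
# from the Phys_Id reported by cfgadm -vl.
phys_id='/devices/pci@0/pci@0/pci@9/SUNW,qlc@0,1/fp@0,0:fc::500104f0008e6d78'
alias_path="${phys_id#/devices}"
echo "$alias_path"
# prints /pci@0/pci@0/pci@9/SUNW,qlc@0,1/fp@0,0:fc::500104f0008e6d78
```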
Create the path-oriented alias and assign it to the sgen driver. Use the command update_drv -d -i '"/path-to-library"' sgen, where path-to-library is the path that you identified in the preceding step.
In the example, we use the library path to create the path-oriented alias '"/pci@0/pci@0/pci@9/SUNW,qlc@0,1/fp@0,0:fc::500104f0008e6d78"' (note the single and double quotation marks). The command is a single line, but has been formatted as two to fit the page layout:
root@solaris:~# update_drv -d -i \
'"/pci@0/pci@0/pci@9/SUNW,qlc@0,1/fp@0,0:fc::500104f0008e6d78"' sgen
root@solaris:~#
The library has now been configured using the sgen driver.
If you are configuring a high-availability file system, go to Configuring Storage for High-Availability File Systems.
Otherwise, go to "Installing Oracle HSM and QFS Software".
To configure a high-availability shared file system, you must take care to follow the recommendations in the hardware administration manual for your version of the Solaris Cluster software. These include providing redundant paths and storage devices.
Make sure that Storage Area Network connections cannot suffer single-point failures. Provide multiple interconnects and redundant switches. Install multiple host bus adapters (HBAs) on each node and use Oracle Solaris I/O multipathing software (for additional details, see the Oracle Solaris SAN Configuration and Multipathing Guide in the Oracle Solaris customer documentation library and the stmsboot
man page).
Configure fully redundant primary storage devices. Place Oracle HSM file-system metadata and configuration files on mirrored devices, either hardware-controlled RAID-10 volume groups or RAID-1 Solaris Volume Manager volumes. Place file-system data on hardware-controlled RAID-10 or RAID-5 volume groups or on RAID-1 Solaris Volume Manager volumes.
If you plan to use Solaris Volume Manager (SVM) multi-owner disk groups to provide device redundancy, please note that the SVM software is no longer installed by default with current releases of the Solaris operating system. You must download and install the version of the software that was included with the Solaris 10 9/10 release. Then follow the configuration recommendations in the Solaris Cluster documentation.