StorageTek Storage Archive Manager and StorageTek QFS Software Installation and Configuration Guide, Release 5.4 (E42062-02)
SAM-QFS high-availability configurations are designed to maintain uninterrupted file-system and archiving services. SAM-QFS software is integrated with Oracle Solaris Cluster software, redundant hardware, and redundant communications. So, if a host system or component fails or is taken out of service by administrators, SAM-QFS services automatically fail over to an alternative host that users and applications can access. High-availability configurations thus minimize downtime due to equipment and system failure.
High-availability configurations are complex, however, and must be carefully designed and deployed to prevent unforeseen interactions and, possibly, data corruption. So this chapter starts with an explanation of the supported configurations, Understanding the Supported SAM-QFS High-Availability Configurations. Study this section and select the configuration that best addresses your availability requirements. Subsequent sections can then explain how you set up your selected configuration.
Note that you cannot mix hardware architectures in a shared Oracle Solaris Cluster configuration. All of the nodes must use either the SPARC architecture, the x86-64 architecture (Solaris 11.1 only), or the 32-bit x86 architecture (Solaris 10 and earlier).
In clustered, multihost solutions, interactions between the file systems, applications, operating systems, clustering software, and storage have to be carefully controlled to ensure the integrity of stored data. To minimize complexity and potential risk, supported high-availability SAM-QFS configurations are thus tailored to four specific sets of deployment requirements:
HA-QFS, a High-Availability QFS Unshared, Standalone File-System Configuration
HA-COTC, a QFS Shared File System with High-Availability Metadata Servers
HA-SAM, a High-Availability, Archiving, QFS Shared File-System Configuration
SC-RAC, a High-Availability QFS Shared File-System Configuration for Oracle RAC.
The High Availability QFS (HA-QFS) configuration ensures that a QFS unshared, standalone file system remains accessible in the event of a host failure. The file system is configured on both nodes in a two-node cluster that Solaris Cluster software manages as a resource of type SUNW.HAStoragePlus. But at any given time, only one node mounts the QFS file system. If the node that is mounting the file system fails, the clustering software automatically initiates failover and remounts the file system on the remaining node.
Clients access data via high-availability Network File System (HA-NFS), NFS, or SMB/CIFS shares, with the active cluster node acting as a file server.
For implementation instructions, see "High-Availability QFS Unshared File Systems".
The High Availability-Clients Outside the Cluster (HA-COTC) configuration maintains the availability of a QFS metadata server so that QFS file-system clients can continue to access their data even if a server fails. The file system is shared. QFS active and potential metadata servers are hosted on a two-node cluster managed by Solaris Cluster software. A SAM-QFS high-availability resource of type SUNW.qfs manages failover for the shared file-system servers within the cluster. All clients are hosted outside of the cluster. The clustered servers ensure the availability of metadata, issue I/O leases, and maintain the consistency of the file system.
If the node that hosts the active metadata server fails, Solaris Cluster software automatically activates the potential metadata server on the healthy node and initiates failover. The QFS file system is shared, so it is already mounted on the newly activated metadata server node and remains mounted on the clients. Clients continue to receive metadata updates and I/O leases, so file-system operations can continue without interruption.
HA-COTC configurations must use high-performance ma file systems with physically separate mm metadata devices and mr data devices. General-purpose ms file systems and md devices are not supported. You can share HA-COTC file systems with non-SAM-QFS network clients using the standard Network File System (NFS), but HA-NFS is not supported.
For implementation instructions, see "High-Availability QFS Shared File Systems, Clients Outside the Cluster".
The High-Availability Storage Archive Manager (HA-SAM) configuration maintains the availability of an archiving file system by ensuring that the QFS metadata server and the Storage Archive Manager application continue to operate even if a server host fails. The file system is shared between active and potential QFS metadata servers hosted on a two-node cluster that is managed by Solaris Cluster software. A SAM-QFS high-availability resource of type SUNW.qfs manages failover for the servers.
Clients access data via high-availability Network File System (HA-NFS), NFS, or SMB/CIFS shares, with the active cluster node acting as a file server.
If the active SAM-QFS metadata server node fails, the clustering software automatically activates the potential metadata server node and initiates failover. Since the QFS file system is shared and already mounted on all nodes, access to data and metadata remains uninterrupted.
For implementation instructions, see "High-Availability SAM-QFS Shared Archiving File Systems,".
The Solaris Cluster-Oracle Real Application Cluster (SC-RAC) configuration supports high-availability database solutions that use QFS file systems. RAC software coordinates I/O requests, distributes workload, and maintains a single, consistent set of database files for multiple Oracle Database instances running on the nodes of a cluster. In the SC-RAC configuration, Oracle Database, Oracle Real Application Cluster (RAC), and QFS software run on two or more of the nodes in the cluster. Solaris Cluster software manages the cluster as a resource of type SUNW.qfs. One node is configured as the metadata server (MDS) of a QFS shared file system. The remaining nodes are configured as potential metadata servers that share the file system as clients. If the active metadata server node fails, Solaris Cluster software automatically activates a potential metadata server on a healthy node and initiates failover. Since the QFS file system is shared and already mounted on all nodes, access to the data remains uninterrupted.
To configure a high-availability QFS (HA-QFS) file system, you set up two identical hosts as a two-node Solaris Cluster, managed as a resource of type SUNW.HAStoragePlus. You then configure a QFS unshared file system on both nodes. Only one node mounts the file system at any given time. But, if one node fails, the clustering software automatically initiates failover and remounts the file system on the surviving node.
To set up a high-availability QFS (HA-QFS) file system, proceed as follows:
If required, configure High-Availability Network File System (HA-NFS) sharing.
Detailed procedures for setting up HA-NFS are included in the Oracle Solaris Cluster Data Service for Network File System (NFS) Guide that is included in the Oracle Solaris Cluster online documentation library.
Log in to one of the cluster nodes as root.
In the example, the hosts are qfs1mds-node1 and qfs1mds-node2. We log in to the host qfs1mds-node1:
[qfs1mds-node1]root@solaris:~#
Configure the desired QFS file system on the host, but do not mount it.
Configure the file system using the instructions in "Configure a General-Purpose ms File System" or "Configure a High-Performance ma File System". The HA-QFS configuration does not support QFS shared file systems.
Log in to the remaining cluster node as root.
In the example, we log in to the host qfs1mds-node2 using ssh:
[qfs1mds-node1]root@solaris:~# ssh root@qfs1mds-node2
Password:
[qfs1mds-node2]root@solaris:~#
Configure an identical QFS file system on the second node.
Proceed as follows:
Log in to one of the cluster nodes as root.
In the example, the hosts are qfs1mds-node1 and qfs1mds-node2. We log in to the host qfs1mds-node1:
[qfs1mds-node1]root@solaris:~#
If you have not already done so, define the SUNW.HAStoragePlus resource type for the Solaris Cluster software. Use the command clresourcetype register SUNW.HAStoragePlus.
SUNW.HAStoragePlus is the Solaris Cluster resource type that defines and manages dependencies between disk device groups, cluster file systems, and local file systems. It coordinates the start-up of data services following failovers, so that all required components are ready when the service tries to restart. See the SUNW.HAStoragePlus man page for further details.
[qfs1mds-node1]root@solaris:~# clresourcetype register SUNW.HAStoragePlus
Create a new Solaris Cluster resource of type SUNW.HAStoragePlus and a new resource group to contain it. Use the command /usr/global/bin/clresource create -g resource-group -t SUNW.HAStoragePlus -x FilesystemMountPoints=/global/mount-point -x FilesystemCheckCommand=/bin/true QFS-resource, where:
resource-group is the name that you have chosen for the file-system resource group.
mount-point is the directory where the QFS file system is mounted.
QFS-resource is the name that you have chosen for the SUNW.HAStoragePlus resource.
In the example, we create the resource group qfsrg with the mount-point directory /global/qfs1 and the SUNW.HAStoragePlus resource haqfs (note that the command below is entered as a single line; the line break is escaped by the backslash):
[qfs1mds-node1]root@solaris:~# clresource create -g qfsrg -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/global/qfs1 -x FilesystemCheckCommand=/bin/true haqfs
Display the nodes in the cluster. Use the command clresourcegroup status.
In the example, the QFS file-system host nodes are qfs1mds-node1 and qfs1mds-node2. Node qfs1mds-node1 is Online, so it is the primary node that mounts the file system and hosts the qfsrg resource group:
[qfs1mds-node1]root@solaris:~# clresourcegroup status
=== Cluster Resource Groups ===

Group Name  Node Name      Suspended  Status
----------  ---------      ---------  ------
qfsrg       qfs1mds-node1  No         Online
            qfs1mds-node2  No         Offline
Make sure that the resource group fails over correctly by moving the resource group to the secondary node. Use the Solaris Cluster command clresourcegroup switch -n node2 group-name, where node2 is the name of the secondary node and group-name is the name that you have chosen for the resource group. Then use clresourcegroup status to check the result.
In the example, we move the qfsrg resource group to qfs1mds-node2 and confirm that the resource group comes online on the specified node:
[qfs1mds-node1]root@solaris:~# clresourcegroup switch -n qfs1mds-node2 qfsrg
[qfs1mds-node1]root@solaris:~# clresourcegroup status
=== Cluster Resource Groups ===

Group Name  Node Name      Suspended  Status
----------  ---------      ---------  ------
qfsrg       qfs1mds-node1  No         Offline
            qfs1mds-node2  No         Online
Move the resource group back to the primary node. Use the Solaris Cluster command clresourcegroup switch -n node1 group-name, where node1 is the name of the primary node and group-name is the name that you have chosen for the resource group. Then use clresourcegroup status to check the result.
In the example, we successfully move the qfsrg resource group back to qfs1mds-node1:
[qfs1mds-node1]root@solaris:~# clresourcegroup switch -n qfs1mds-node1 qfsrg
[qfs1mds-node1]root@solaris:~# clresourcegroup status
=== Cluster Resource Groups ===

Group Name  Node Name      Suspended  Status
----------  ---------      ---------  ------
qfsrg       qfs1mds-node1  No         Online
            qfs1mds-node2  No         Offline
If you need to configure High-Availability Network File System (HA-NFS) sharing, do so now. For instructions, see the Oracle Solaris Cluster Data Service for Network File System (NFS) Guide that is included in the Oracle Solaris Cluster online documentation library.
If you plan on using the sideband database feature, go to "Configuring the SAM-QFS Reporting Database".
Otherwise, go to "Configuring Notifications and Logging".
The High Availability-Clients Outside the Cluster (HA-COTC) configuration is a non-archiving QFS shared file system that hosts the crucial metadata server (MDS) on the nodes of a high-availability cluster managed by Solaris Cluster software. This arrangement provides failover protection for QFS metadata and file-access leases, so file-system clients do not lose access to their data if a server fails. But file-system clients and data devices remain outside the cluster, so that Solaris Cluster does not contend with QFS software for control of QFS shared data.
To configure an HA-COTC file system, carry out the tasks below:
Create a QFS Shared File System Hosts File on Both HA-COTC Cluster Nodes
Create Local Hosts Files on the QFS Servers and Clients Outside the HA-COTC Cluster
Configure an Active QFS Metadata Server on the Primary HA-COTC Cluster Node
Configure a Potential QFS Metadata Server on the Secondary HA-COTC Cluster Node
Configure Hosts Outside the HA-COTC Cluster as QFS Shared File System Clients
If required, configure Network File System (NFS) shares, as described in "Accessing File Systems from Multiple Hosts Using NFS and SMB/CIFS". High-Availability NFS (HA-NFS) is not supported.
In a QFS shared file system, you must configure a hosts file on the metadata servers, so that all hosts can access the metadata for the file system. The hosts file is stored alongside the mcf file in the /etc/opt/SUNWsamfs/ directory. During the initial creation of a shared file system, the sammkfs -S command configures sharing using the settings stored in this file. So create it now, using the procedure below.
Log in to the primary node of the HA-COTC cluster as root.
In the example, the hosts are qfs1mds-node1 and qfs1mds-node2. We log in to the host qfs1mds-node1:
[qfs1mds-node1]root@solaris:~#
Display the cluster configuration using the /usr/global/bin/cluster show command. Locate the record for each Node Name, and then note the privatehostname, the Transport Adapter name, and the ip_address property of each network adapter.
The outputs of the commands can be quite lengthy, so, in the examples below, long displays are abbreviated using ellipsis (...) marks.
In the examples, each node has two network interfaces, qfe3 and hme0:
The hme0 adapters have IP addresses on the private network that the cluster uses for internal communication between nodes. The Solaris Cluster software assigns a private hostname corresponding to each private address. By default, the private hostname of the primary node is clusternode1-priv, and the private hostname of the secondary node is clusternode2-priv.
The qfe3 adapters have public IP addresses and hostnames (qfs1mds-node1 and qfs1mds-node2) that the cluster uses for data transport.
[qfs1mds-node1]root@solaris:~# cluster show
...
=== Cluster Nodes ===
Node Name:                         qfs1mds-node1
  ...
  privatehostname:                 clusternode1-priv
  ...
  Transport Adapter List:          qfe3, hme0
  ...
  Transport Adapter:               qfe3
    ...
    Adapter Property(ip_address):  172.16.0.12
  ...
  Transport Adapter:               hme0
    ...
    Adapter Property(ip_address):  10.0.0.129
...
Node Name:                         qfs1mds-node2
  ...
  privatehostname:                 clusternode2-priv
  ...
  Transport Adapter List:          qfe3, hme0
  ...
  Transport Adapter:               qfe3
    ...
    Adapter Property(ip_address):  172.16.0.13
  ...
  Transport Adapter:               hme0
    ...
    Adapter Property(ip_address):  10.0.0.122
Using a text editor, create the file /etc/opt/SUNWsamfs/hosts.family-set-name on the metadata server, where family-set-name is the family-set name of the file system.
In the example, we create the file hosts.qfs1 using the vi text editor. We add some optional headings to show the columns in the hosts table, starting each line with a hash sign (#) to indicate a comment:
[qfs1mds-node1]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.qfs1
# /etc/opt/SUNWsamfs/hosts.qfs1
#                                          Server  On/ Additional
#Host Name     Network Interface           Ordinal Off Parameters
#------------  --------------------------  ------- --- ----------
In the first column of the table, enter the hostnames of the primary and secondary metadata server nodes followed by some spaces. Place each entry on a separate line.
In a hosts file, the lines are rows (records) and spaces are column (field) separators. In the example, the Host Name column of the first two rows contains the values qfs1mds-node1 and qfs1mds-node2, the hostnames of the cluster nodes that host the metadata servers for the file system:
#                                          Server  On/ Additional
#Host Name     Network Interface           Ordinal Off Parameters
#------------  --------------------------  ------- --- ----------
qfs1mds-node1
qfs1mds-node2
In the second column of each line, start supplying Network Interface information for the host listed in the Host Name column. Enter each HA-COTC cluster node's Solaris Cluster private hostname or private network address followed by a comma.
The HA-COTC server nodes use the private hostnames for server-to-server communications within the high-availability cluster. In the example, we use the private hostnames clusternode1-priv and clusternode2-priv, which are the default names assigned by the Solaris Cluster software:
#                                          Server  On/ Additional
#Host Name     Network Interface           Ordinal Off Parameters
#------------  --------------------------  ------- --- ----------
qfs1mds-node1  clusternode1-priv,
qfs1mds-node2  clusternode2-priv,
Following the comma in the second column of each line, enter a virtual public hostname for the active metadata server, followed by spaces.
The HA-COTC server nodes use the public data network to communicate with the clients, all of which reside outside the cluster. Since the IP address and hostname of the active metadata server change during failover (from qfs1mds-node1 to qfs1mds-node2 and vice versa), we use a virtual hostname, qfs1mds, for both. Later, we will configure the Solaris Cluster software to always route requests for qfs1mds to the active metadata server:
#                                          Server  On/ Additional
#Host Name     Network Interface           Ordinal Off Parameters
#------------  --------------------------  ------- --- ----------
qfs1mds-node1  clusternode1-priv,qfs1mds
qfs1mds-node2  clusternode2-priv,qfs1mds
In the third column of each line, enter the ordinal number of the server (1 for the active metadata server, and 2 for the potential metadata server), followed by spaces.
In this example, there is only one active metadata server at a time. The primary node, qfs1mds-node1, is the active metadata server, so it is ordinal 1, and the secondary node, qfs1mds-node2, is ordinal 2:
#                                          Server  On/ Additional
#Host Name     Network Interface           Ordinal Off Parameters
#------------  --------------------------  ------- --- ----------
qfs1mds-node1  clusternode1-priv,qfs1mds   1
qfs1mds-node2  clusternode2-priv,qfs1mds   2
In the fourth column of each line, enter 0 (zero), followed by spaces.
A 0 (zero), - (hyphen), or blank value in the fourth column indicates that the host is on, that is, configured with access to the shared file system. A 1 (numeral one) indicates that the host is off, configured but without access to the file system (for information on using these values when administering shared file systems, see the samsharefs man page).
#                                          Server  On/ Additional
#Host Name     Network Interface           Ordinal Off Parameters
#------------  --------------------------  ------- --- ----------
qfs1mds-node1  clusternode1-priv,qfs1mds   1       0
qfs1mds-node2  clusternode2-priv,qfs1mds   2       0
In the fifth column of the line for the primary node, enter the keyword server.
The server keyword identifies the default, active metadata server:
#                                          Server  On/ Additional
#Host Name     Network Interface           Ordinal Off Parameters
#------------  --------------------------  ------- --- ----------
qfs1mds-node1  clusternode1-priv,qfs1mds   1       0   server
qfs1mds-node2  clusternode2-priv,qfs1mds   2       0
Add a line for each client host, setting the Server Ordinal value to 0. Then save the file and close the editor.
A server ordinal of 0 identifies the host as a client rather than a server. HA-COTC clients are not members of the cluster and thus communicate only over the cluster's public, data network. They have only public IP addresses. In the example, we add two clients, qfs1client1 and qfs1client2, using their public IP addresses, 172.16.0.133 and 172.16.0.147, rather than hostnames:
#                                          Server  On/ Additional
#Host Name     Network Interface           Ordinal Off Parameters
#------------  --------------------------  ------- --- ----------
qfs1mds-node1  clusternode1-priv,qfs1mds   1       0   server
qfs1mds-node2  clusternode2-priv,qfs1mds   2       0
qfs1client1    172.16.0.133                0       0
qfs1client2    172.16.0.147                0       0
:wq
[qfs1mds-node1]root@solaris:~#
Place a copy of the global /etc/opt/SUNWsamfs/hosts.family-set-name file on the QFS potential metadata server (the second HA-COTC cluster node).
Now, Create Local Hosts Files on the QFS Servers and Clients Outside the HA-COTC Cluster.
In a high-availability configuration that shares a file system with clients outside the cluster, you need to ensure that the clients communicate with the file-system servers only over the public, data network defined by the Solaris Cluster software. You do this by using specially configured QFS local hosts files to selectively route network traffic between clients and multiple network interfaces on the server.
Each file-system host identifies the network interfaces for the other hosts by first checking the /etc/opt/SUNWsamfs/hosts.family-set-name file on the metadata server. Then it checks for its own, specific /etc/opt/SUNWsamfs/hosts.family-set-name.local file. If there is no local hosts file, the host uses the interface addresses specified in the global hosts file, in the order specified in the global file. But if there is a local hosts file, the host compares it with the global file and uses only those interfaces that are listed in both files, in the order specified in the local file. By using different addresses in different arrangements in each file, you can thus control the interfaces used by different hosts.
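As a purely illustrative sketch of this behavior (the procedures below create the actual files), suppose the global hosts.qfs1 file on the metadata server lists both the private and the public interface for the active server, while a client's local hosts.qfs1.local file lists only the public, virtual hostname:
# Global hosts.qfs1 (on the metadata server):
#   qfs1mds-node1  clusternode1-priv,qfs1mds  1  0  server
# Local hosts.qfs1.local (on the client):
#   qfs1mds        qfs1mds                    1  0  server
Because qfs1mds is the only interface listed in both files, that client reaches the active metadata server only over the public data network and never attempts the cluster-private address.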
To configure local hosts files, use the procedure outlined below:
Log in to the primary node of the HA-COTC cluster as root.
In the example, the hosts are qfs1mds-node1 and qfs1mds-node2. We log in to the host qfs1mds-node1:
[qfs1mds-node1]root@solaris:~#
Create a local hosts file on each of the active and potential metadata servers, using the path and file name /etc/opt/SUNWsamfs/hosts.family-set-name.local, where family-set-name is the equipment identifier for the shared file system. Only include interfaces for the networks that you want the active and potential servers to use.
In our example, we want the active and potential metadata servers to communicate with each other over the private network and with clients via the public network. So the local hosts file on the active and potential servers, hosts.qfs1.local, lists only cluster private addresses for the active and potential servers:
[qfs1mds-node1]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.qfs1.local
# /etc/opt/SUNWsamfs/hosts.qfs1.local
#                                          Server  On/ Additional
#Host Name     Network Interface           Ordinal Off Parameters
#------------  --------------------------  ------- --- ----------
qfs1mds-node1  clusternode1-priv           1       0   server
qfs1mds-node2  clusternode2-priv           2       0
qfs1client1    172.16.0.133                0       0
qfs1client2    172.16.0.147                0       0
:wq
[qfs1mds-node1]root@solaris:~# ssh root@qfs1mds-node2
Password:
[qfs1mds-node2]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.qfs1.local
# /etc/opt/SUNWsamfs/hosts.qfs1.local
#                                          Server  On/ Additional
#Host Name     Network Interface           Ordinal Off Parameters
#------------  --------------------------  ------- --- ----------
qfs1mds-node1  clusternode1-priv           1       0   server
qfs1mds-node2  clusternode2-priv           2       0
qfs1client1    172.16.0.133                0       0
qfs1client2    172.16.0.147                0       0
:wq
[qfs1mds-node2]root@solaris:~# exit
[qfs1mds-node1]root@solaris:~#
Using a text editor, create a local hosts file on each of the clients, using the path and file name /etc/opt/SUNWsamfs/hosts.family-set-name.local, where family-set-name is the equipment identifier for the shared file system. Only include interfaces for the networks that you want the clients to use. Then save the file and close the editor.
In our example, we use the vi editor. We want the clients to communicate only with the servers and only via the public, data network. So the file includes only the virtual hostname for the active metadata server, qfs1mds. The Solaris Cluster software will route requests for qfs1mds to whichever server node is active:
[qfs1mds-node1]root@solaris:~# ssh root@qfs1client1
Password:
[qfs1client1]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.qfs1.local
# /etc/opt/SUNWsamfs/hosts.qfs1.local
#                                          Server  On/ Additional
#Host Name     Network Interface           Ordinal Off Parameters
#------------  --------------------------  ------- --- ----------
qfs1mds        qfs1mds                     1       0   server
:wq
[qfs1client1]root@solaris:~# exit
[qfs1mds-node1]root@solaris:~# ssh root@qfs1client2
Password:
[qfs1client2]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.qfs1.local
# /etc/opt/SUNWsamfs/hosts.qfs1.local
#                                          Server  On/ Additional
#Host Name     Network Interface           Ordinal Off Parameters
#------------  --------------------------  ------- --- ----------
qfs1mds        qfs1mds                     1       0   server
:wq
[qfs1client2]root@solaris:~# exit
[qfs1mds-node1]root@solaris:~#
Next, Configure an Active QFS Metadata Server on the Primary HA-COTC Cluster Node.
To configure the active metadata server, carry out the following tasks:
Select the cluster node that will serve as both the primary node for the HA-COTC cluster and the active metadata server for the QFS shared file system. Log in as root.
In the example, qfs1mds-node1 is the primary node and active metadata server:
[qfs1mds-node1]root@solaris:~#
Select the global storage devices that will be used for the QFS file system using the Solaris Cluster command /usr/global/bin/cldevice list -v.
Solaris Cluster software assigns unique Device Identifiers (DIDs) to all devices that attach to the cluster nodes. Global devices are accessible from all nodes in the cluster, while local devices are accessible only from the hosts that mount them. Global devices remain accessible following failover. Local devices do not.
In the example, note that devices d1, d2, d6, and d7 are not accessible from both nodes. So we select from devices d3, d4, and d5 when configuring the high-availability QFS shared file system:
[qfs1mds-node1]root@solaris:~# cldevice list -v
DID Device   Full Device Path
----------   ----------------
d1           qfs1mds-node1:/dev/rdsk/c0t0d0
d2           qfs1mds-node1:/dev/rdsk/c0t6d0
d3           qfs1mds-node1:/dev/rdsk/c1t1d0
d3           qfs1mds-node2:/dev/rdsk/c1t1d0
d4           qfs1mds-node1:/dev/rdsk/c1t2d0
d4           qfs1mds-node2:/dev/rdsk/c1t2d0
d5           qfs1mds-node1:/dev/rdsk/c1t3d0
d5           qfs1mds-node2:/dev/rdsk/c1t3d0
d6           qfs1mds-node2:/dev/rdsk/c0t0d0
d7           qfs1mds-node2:/dev/rdsk/c0t1d0
On the selected primary node, create a high-performance ma file system that uses mr data devices. In a text editor, open the /etc/opt/SUNWsamfs/mcf file.
In the example, we configure the file system qfs1. We configure device d3 as the metadata device (equipment type mm), and use d4 and d5 as data devices (equipment type mr):
[qfs1mds-node1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment         Equipment Equipment Family Device Additional
# Identifier        Ordinal   Type      Set    State  Parameters
#------------------ --------- --------- ------ ------ ----------
qfs1                100       ma        qfs1   -
/dev/did/dsk/d3s0   101       mm        qfs1   -
/dev/did/dsk/d4s0   102       mr        qfs1   -
/dev/did/dsk/d5s1   103       mr        qfs1   -
In the /etc/opt/SUNWsamfs/mcf file, enter the shared parameter in the Additional Parameters column of the file-system entry. Save the file.
[qfs1mds-node1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment         Equipment Equipment Family Device Additional
# Identifier        Ordinal   Type      Set    State  Parameters
#------------------ --------- --------- ------ ------ ----------
qfs1                100       ma        qfs1   -      shared
/dev/did/dsk/d3s0   101       mm        qfs1   -
/dev/did/dsk/d4s0   102       mr        qfs1   -
/dev/did/dsk/d5s1   103       mr        qfs1   -
:wq
[qfs1mds-node1]root@solaris:~#
Check the mcf file for errors. Use the command /opt/SUNWsamfs/sbin/sam-fsd, and correct any errors found.
The sam-fsd command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error. In the example, we check the mcf file on host qfs1mds-node1:
[qfs1mds-node1]root@solaris:~# sam-fsd
Create the file system. Use the command /opt/SUNWsamfs/sbin/sammkfs -S family-set-name, where family-set-name is the equipment identifier for the file system.
The sammkfs command reads the hosts.family-set-name and mcf files on the primary node, qfs1mds-node1, and creates a shared file system with the specified properties.
[qfs1mds-node1]root@solaris:~# sammkfs -S qfs1
By default, the Solaris Cluster software fences off disk devices for the exclusive use of the cluster. In HA-COTC configurations, however, only the metadata (mm) devices are part of the cluster. Data (mr) devices are shared with file-system clients outside the cluster and directly attached to the client hosts. So you have to place the data (mr) devices outside the cluster software's control. This can be achieved in either of two ways:
Disable Fencing for QFS Data Devices in the HA-COTC Cluster or
Place Shared Data Devices in a Local-Only Device Group on the HA-COTC Cluster.
Log in to the primary node of the HA-COTC cluster, which hosts the active metadata server for the QFS shared file system. Log in as root.
In the example, qfs1mds-node1 is the primary node and active metadata server:
[qfs1mds-node1]root@solaris:~#
For each data (mr) device defined in the /etc/opt/SUNWsamfs/mcf file, disable fencing. Use the command cldevice set -p default_fencing=nofencing-noscrub device-identifier, where device-identifier is the cluster device identifier (DID) for the device listed in the first column of the mcf file.
Do not disable fencing for metadata (mm) devices! In HA-COTC configurations, the QFS metadata (mm) devices are part of the cluster, while the QFS shared data (mr) devices are not. Data devices are directly attached to clients outside the cluster. For this reason, HA-COTC data (mr) devices must be handled as local devices that the Solaris Cluster software does not manage. Otherwise, the Solaris Cluster software and QFS could work at cross purposes and corrupt data.
In the examples above, we configured devices d4 and d5 as the data devices for the file system qfs1. So we globally disable fencing for these devices:
[qfs1mds-node1]root@solaris:~# cldevice set -p default_fencing=nofencing-noscrub d4
[qfs1mds-node1]root@solaris:~# cldevice set -p default_fencing=nofencing-noscrub d5
Next, Mount the QFS File System on the Primary HA-COTC Node.
Log in to the primary node of the HA-COTC cluster, which hosts the active metadata server for the QFS shared file system. Log in as root.
In the example, qfs1mds-node1 is the primary node and active metadata server:
[qfs1mds-node1]root@solaris:~#
Place all data (mr) devices that are part of the file system in a localonly device group. Use the command cldevicegroup set -d device-identifier-list -p localonly=true -n active-mds-node device-group, where device-identifier-list is a comma-delimited list of device identifiers, active-mds-node is the primary node where the active metadata server normally resides, and device-group is the name you choose for your device group.
In the following example, we place data devices d4 and d5 (mcf equipment ordinals 102 and 103) in the local device group mdsdevgrp on the primary node:
[qfs1mds-node1]root@solaris:~# cldevicegroup set -d d4,d5 -p localonly=true \
-n qfs1mds-node1 mdsdevgrp
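To double-check the new device group, you can list its configuration, which includes its node list and localonly setting (an optional, illustrative check only):
[qfs1mds-node1]root@solaris:~# cldevicegroup show mdsdevgrp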
Next, Mount the QFS File System on the Primary HA-COTC Node.
Log in to the primary node of the HA-COTC cluster, which hosts the active metadata server for the QFS shared file system. Log in as root.
In the example, qfs1mds-node1 is the primary node and active metadata server:
[qfs1mds-node1]root@solaris:~#
Back up the operating system's /etc/vfstab file.
[qfs1mds-node1]root@solaris:~# cp /etc/vfstab /etc/vfstab.backup
Open the operating system's /etc/vfstab file in a text editor, and start a line for the new file system. Enter the file-system name in the first column, spaces, a hyphen in the second column, and more spaces.
In the example, we use the vi text editor. We start a line for the qfs1 file system. The hyphen keeps the operating system from attempting to check file-system integrity using UFS tools:
[qfs1mds-node1]root@solaris:~# vi /etc/vfstab
#File
#Device   Device  Mount                System fsck Mount   Mount
#to Mount to fsck Point                Type   Pass at Boot Options
#-------- ------- -------------------- ------ ---- ------- -------
/devices  -       /devices             devfs  -    no      -
/proc     -       /proc                proc   -    no      -
...
qfs1      -
In the third column of the /etc/vfstab file, enter the mount point of the file system relative to the cluster. Select a subdirectory that is not directly beneath the system root directory.
Mounting a shared QFS file system immediately under root can cause failover issues when using the SUNW.qfs resource type. In the example, we set the mount point on the cluster to /global/ha-cotc/qfs1:
#File
#Device   Device  Mount                System fsck Mount   Mount
#to Mount to fsck Point                Type   Pass at Boot Options
#-------- ------- -------------------- ------ ---- ------- -------
/devices  -       /devices             devfs  -    no      -
/proc     -       /proc                proc   -    no      -
...
qfs1      -       /global/ha-cotc/qfs1
Populate the remaining fields of the /etc/vfstab file record as you would for any shared QFS file system. Then save the file, and close the editor.
#File
#Device   Device  Mount                System fsck Mount   Mount
#to Mount to fsck Point                Type   Pass at Boot Options
#-------- ------- -------------------- ------ ---- ------- -------
/devices  -       /devices             devfs  -    no      -
/proc     -       /proc                proc   -    no      -
...
qfs1      -       /global/ha-cotc/qfs1 samfs  -    no      shared
:wq
[qfs1mds-node1]root@solaris:~#
Create a mount point for the high-availability shared file system.
The mkdir command with the -p (parents) option creates the /global directory if it does not already exist:
[qfs1mds-node1]root@solaris:~# mkdir -p /global/ha-cotc/qfs1
Mount the high-availability shared file system on the primary node.
[qfs1mds-node1]root@solaris:~# mount /global/ha-cotc/qfs1
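To confirm that the file system is mounted, you can optionally check it with df (an illustrative check only):
[qfs1mds-node1]root@solaris:~# df -h /global/ha-cotc/qfs1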
Next, Configure a Potential QFS Metadata Server on the Secondary HA-COTC Cluster Node.
The secondary node of the two-node cluster serves as the potential metadata server. A potential metadata server is a host that has access to the metadata devices and can, therefore, assume the duties of a metadata server. So, if the active metadata server on the primary node fails, the Solaris Cluster software can fail over to the secondary node and activate the potential metadata server. To configure the potential metadata server, carry out the following tasks:
Log in to the secondary node of the HA-COTC cluster as root.
In the example, qfs1mds-node2 is the secondary node and the potential metadata server:
[qfs1mds-node2]root@solaris:~#
Copy the /etc/opt/SUNWsamfs/mcf file from the primary node to the secondary node.
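For example, you could pull the file from the primary node with scp (a sketch only; any equivalent copy method works, and it assumes root can log in to the primary node over the network):
[qfs1mds-node2]root@solaris:~# scp root@qfs1mds-node1:/etc/opt/SUNWsamfs/mcf /etc/opt/SUNWsamfs/mcf
Password: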
Check the mcf file for errors. Use the command /opt/SUNWsamfs/sbin/sam-fsd, and correct any errors found.
The sam-fsd command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error. In the example, we check the mcf file on host qfs1mds-node2:
[qfs1mds-node2]root@solaris:~# sam-fsd
Next, Mount the QFS File System on the Secondary HA-COTC Node.
Log in to the secondary node of the HA-COTC cluster as root.
In the example, qfs1mds-node2 is the secondary node:
[qfs1mds-node2]root@solaris:~#
Back up the operating system's /etc/vfstab file.
[qfs1mds-node2]root@solaris:~# cp /etc/vfstab /etc/vfstab.backup
Open the operating system's /etc/vfstab file in a text editor, and add the line for the new file system. Then save the file, and close the editor.
In the example, we use the vi editor:
[qfs1mds-node2]root@solaris:~# vi /etc/vfstab
#File
#Device   Device  Mount                System fsck Mount   Mount
#to Mount to fsck Point                Type   Pass at Boot Options
#-------- ------- -------------------- ------ ---- ------- -------
/devices  -       /devices             devfs  -    no      -
/proc     -       /proc                proc   -    no      -
...
qfs1      -       /global/ha-cotc/qfs1 samfs  -    no      shared
:wq
[qfs1mds-node2]root@solaris:~#
Create the mount point for the high-availability shared file system on the secondary node.
[qfs1mds-node2]root@solaris:~# mkdir -p /global/ha-cotc/qfs1
Mount the high-availability shared file system on the secondary node.
[qfs1mds-node2]root@solaris:~# mount /global/ha-cotc/qfs1
When you host a SAM-QFS shared file system in a cluster managed by Solaris Cluster software, you configure failover of the metadata servers by creating a SUNW.qfs cluster resource, a resource type defined by the SAM-QFS software (see the SUNW.qfs man page for details). To create and configure the resource for an HA-COTC configuration, proceed as follows:
Log in to the primary node in the HA-COTC cluster as root.
In the example, qfs1mds-node1 is the primary node:
[qfs1mds-node1]root@solaris:~#
Define the QFS resource type, SUNW.qfs, for the Solaris Cluster software using the command clresourcetype register SUNW.qfs.
[qfs1mds-node1]root@solaris:~# clresourcetype register SUNW.qfs
If registration fails because the registration file cannot be found, place a symbolic link to the /opt/SUNWsamfs/sc/etc/ directory in the directory where Solaris Cluster keeps resource-type registration files, /opt/cluster/lib/rgm/rtreg/.
This situation arises when Oracle Solaris Cluster software was not installed before the SAM-QFS software. Normally, SAM-QFS automatically provides the location of the SUNW.qfs registration file when it detects Solaris Cluster during installation. If it did not, you need to create the link manually.
[qfs1mds-node1]root@solaris:~# cd /opt/cluster/lib/rgm/rtreg/
[qfs1mds-node1]root@solaris:~# ln -s /opt/SUNWsamfs/sc/etc/SUNW.qfs SUNW.qfs
Create a resource group for the QFS metadata server using the Solaris Cluster command clresourcegroup create -n node-list group-name, where node-list is a comma-delimited list of the two cluster node names and group-name is the name that you want to use for the resource group.
In the example, we create the resource group qfsrg with the HA-COTC server nodes as members:
[qfs1mds-node1]root@solaris:~# clresourcegroup create -n qfs1mds-node1,qfs1mds-node2 \
qfsrg
In the new resource group, set up a virtual hostname for the active metadata server. Use the Solaris Cluster command clreslogicalhostname create -g group-name virtualMDS, where group-name is the name of the QFS resource group and virtualMDS is the virtual hostname.
Use the same virtual hostname that you used in the hosts files for the shared file system. In the example, we create the virtual host qfs1mds in the qfsrg resource group:
[qfs1mds-node1]root@solaris:~# clreslogicalhostname create -g qfsrg qfs1mds
Add the QFS file-system resources to the resource group using the command clresource create -g group-name -t SUNW.qfs -x QFSFileSystem=mount-point -y Resource_dependencies=virtualMDS resource-name, where:
group-name is the name of the QFS resource group.
mount-point is the mount point for the file system in the cluster, a subdirectory that is not directly beneath the system root directory.
Mounting a shared QFS file system immediately under root can cause failover issues when using the SUNW.qfs resource type.
virtualMDS is the virtual hostname of the active metadata server.
resource-name is the name that you want to give to the resource.
In the example, we create a resource named hasqfs of type SUNW.qfs in the resource group qfsrg. We set the SUNW.qfs extension property QFSFileSystem to the /global/ha-cotc/qfs1 mount point, and set the standard property Resource_dependencies to the logical host for the active metadata server, qfs1mds:
[qfs1mds-node1]root@solaris:~# clresource create -g qfsrg -t SUNW.qfs \
-x QFSFileSystem=/global/ha-cotc/qfs1 -y Resource_dependencies=qfs1mds hasqfs
Bring the resource group online using the command clresourcegroup online -emM group-name, where group-name is the name of the QFS resource group.
In the example, we bring the qfsrg resource group online:
[qfs1mds-node1]root@solaris:~# clresourcegroup manage qfsrg
[qfs1mds-node1]root@solaris:~# clresourcegroup online -emM qfsrg
Make sure that the QFS resource group is online. Use the Solaris Cluster clresourcegroup status command.
In the example, the qfsrg resource group is online on the primary node, qfs1mds-node1:
[qfs1mds-node1]root@solaris:~# clresourcegroup status
=== Cluster Resource Groups ===

Group Name  Node Name      Suspended  Status
----------  ---------      ---------  ------
qfsrg       qfs1mds-node1  No         Online
            qfs1mds-node2  No         Offline
Make sure that the resource group fails over correctly by moving the resource group to the secondary node. Use the Solaris Cluster command clresourcegroup switch -n node2 group-name, where node2 is the name of the secondary node and group-name is the name that you have chosen for the resource group. Then use clresourcegroup status to check the result.
In the example, we move the qfsrg resource group to qfs1mds-node2 and confirm that the resource group comes online on the specified node:
[qfs1mds-node1]root@solaris:~# clresourcegroup switch -n qfs1mds-node2 qfsrg
[qfs1mds-node1]root@solaris:~# clresourcegroup status
=== Cluster Resource Groups ===

Group Name  Node Name      Suspended  Status
----------  ---------      ---------  ------
qfsrg       qfs1mds-node1  No         Offline
            qfs1mds-node2  No         Online
Move the resource group back to the primary node. Use the Solaris Cluster command clresourcegroup switch -n node1 group-name, where node1 is the name of the primary node and group-name is the name that you have chosen for the resource group. Then use clresourcegroup status to check the result.
In the example, we successfully move the qfsrg resource group back to qfs1mds-node1:
[qfs1mds-node1]root@solaris:~# clresourcegroup switch -n qfs1mds-node1 qfsrg
[qfs1mds-node1]root@solaris:~# clresourcegroup status
=== Cluster Resource Groups ===

Group Name  Node Name      Suspended  Status
----------  ---------      ---------  ------
qfsrg       qfs1mds-node1  No         Online
            qfs1mds-node2  No         Offline
Next, Configure Hosts Outside the HA-COTC Cluster as QFS Shared File System Clients.
Configure each host as a QFS client that does not have access to the file system's metadata devices, so that the clients do not interfere with the high-availability configuration of the metadata servers inside the cluster.
For each client of the HA-COTC shared file system, proceed as follows:
Log in to the primary node in the HA-COTC cluster as root, display the device configuration for the cluster using the /usr/global/bin/cldevice list -v command, and make a note of the /dev/rdsk/ path corresponding to the device identifier for each QFS data (mr) device.
In the example, the QFS data devices are d4 and d5:
[qfs1mds-node1]root@solaris:~# cldevice list -v
DID Device   Full Device Path
----------   ----------------
d1           qfs1mds-node1:/dev/rdsk/c0t0d0
d2           qfs1mds-node1:/dev/rdsk/c0t6d0
d3           qfs1mds-node1:/dev/rdsk/c1t1d0
d3           qfs1mds-node2:/dev/rdsk/c1t1d0
d4           qfs1mds-node1:/dev/rdsk/c1t2d0
d4           qfs1mds-node2:/dev/rdsk/c1t2d0
d5           qfs1mds-node1:/dev/rdsk/c1t3d0
d5           qfs1mds-node2:/dev/rdsk/c1t3d0
d6           qfs1mds-node2:/dev/rdsk/c0t0d0
d7           qfs1mds-node2:/dev/rdsk/c0t1d0
Log in to the client host as root.
In the example, qfs1client1 is the client host:
[qfs1mds-node1]root@solaris:~# ssh root@qfs1client1
[qfs1client1]root@solaris:~#
On the client host, retrieve the configuration information for the shared file system using the samfsconfig /dev/rdsk/* command.
The samfsconfig /dev/rdsk/* command searches the specified path for attached devices that belong to a QFS file system. In the example, the command finds the paths to the qfs1 data (mr) devices. As expected, it does not find the metadata (mm) devices, so it returns the Missing slices and Ordinal 0 messages before listing the shared data devices:
[qfs1client1]root@solaris:~# samfsconfig /dev/rdsk/*
# Family Set 'qfs1' Created Thu Dec 21 07:17:00 2013
# Missing slices
# Ordinal 0
# /dev/rdsk/c1t2d0s0   102   mr   qfs1   -
# /dev/rdsk/c1t3d0s1   103   mr   qfs1   -
Compare the /dev/rdsk/ paths reported for the data (mr) devices by samfsconfig with those on the server cluster node, as listed by the cldevice list command.
The samfsconfig and cldevice list commands should indicate the same devices, though the controller numbers (cN) may differ.
In the example, the samfsconfig and cldevice list commands do point to the same devices. On the metadata server node, the /etc/opt/SUNWsamfs/mcf file identifies shared mr data devices 102 and 103 using cluster device identifiers d4 and d5:
/dev/did/dsk/d4s0   102   mr   qfs1   -
/dev/did/dsk/d5s1   103   mr   qfs1   -
The cldevice list command on the server maps cluster device identifiers d4 and d5 to the paths /dev/rdsk/c1t2d0 and /dev/rdsk/c1t3d0:
d4   qfs1mds-node1:/dev/rdsk/c1t2d0
d5   qfs1mds-node1:/dev/rdsk/c1t3d0
On the client node, the samfsconfig command also identifies shared mr data devices 102 and 103 with the paths /dev/rdsk/c1t2d0 and /dev/rdsk/c1t3d0:
# /dev/rdsk/c1t2d0s0   102   mr   qfs1   -
# /dev/rdsk/c1t3d0s1   103   mr   qfs1   -
On the client, open the /etc/opt/SUNWsamfs/mcf file in a text editor, and enter a line for the HA-COTC shared file system that is identical to those in the mcf files on the metadata servers.
In the example, we use the vi editor to create an entry for the QFS shared file system qfs1 (equipment ordinal 100):
[qfs1client1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment         Equipment Equipment Family Device Additional
# Identifier        Ordinal   Type      Set    State  Parameters
#------------------ --------- --------- ------ ------ ----------
qfs1                100       ma        qfs1   -      shared
In the mcf file, under the entry for the HA-COTC shared file system, add a line for the file system's metadata (mm) devices. Use the same equipment ordinal numbers, family set, and device-state parameters as are used in the mcf files on the metadata servers. But use the keyword nodev in the Equipment Identifier column.
# Equipment         Equipment Equipment Family Device Additional
# Identifier        Ordinal   Type      Set    State  Parameters
#------------------ --------- --------- ------ ------ ----------
qfs1                100       ma        qfs1   -      shared
nodev               101       mm        qfs1   -
Copy the data (mr) device information from the samfsconfig output and paste it into the /etc/opt/SUNWsamfs/mcf file, under the entry for the HA-COTC metadata (mm) devices. Remove the leading comment (#) marks that samfsconfig inserts. Then save the file, and close the editor.
# Equipment         Equipment Equipment Family Device Additional
# Identifier        Ordinal   Type      Set    State  Parameters
#------------------ --------- --------- ------ ------ ----------
qfs1                100       ma        qfs1   -      shared
nodev               101       mm        qfs1   -
/dev/rdsk/c1t2d0s0  102       mr        qfs1   -
/dev/rdsk/c1t3d0s1  103       mr        qfs1   -
:wq
[qfs1client1]root@solaris:~#
Check the mcf file for errors. Use the command /opt/SUNWsamfs/sbin/sam-fsd, and correct any errors found.
The sam-fsd command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error. In the example, we check the mcf file on host qfs1client1:
[qfs1client1]root@solaris:~# sam-fsd
Open the operating system's /etc/vfstab file in a text editor, and add the line for the new file system. Then save the file, and close the editor.
In the example, we use the vi editor:
[qfs1client1]root@solaris:~# vi /etc/vfstab
#File
#Device   Device  Mount                System fsck Mount   Mount
#to Mount to fsck Point                Type   Pass at Boot Options
#-------- ------- -------------------- ------ ---- ------- -------
/devices  -       /devices             devfs  -    no      -
/proc     -       /proc                proc   -    no      -
...
qfs1      -       /global/ha-cotc/qfs1 samfs  -    no      shared
:wq
[qfs1client1]root@solaris:~#
Create the mount point for the high-availability shared file system on the client.
[qfs1client1]root@solaris:~# mkdir -p /global/ha-cotc/qfs1
Mount the high-availability shared file system on the client.
[qfs1client1]root@solaris:~# mount /global/ha-cotc/qfs1
[qfs1client1]root@solaris:~# exit
[qfs1mds-node1]root@solaris:~#
Repeat this procedure until all HA-COTC clients have been configured.
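Once all clients are configured and mounted, you can optionally confirm the shared file-system host configuration from the active metadata server with the samsharefs command, which displays the hosts table for the family set (shown only as an optional check; see the samsharefs man page for details):
[qfs1mds-node1]root@solaris:~# samsharefs qfs1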
If you plan on using the sideband database feature, go to "Configuring the SAM-QFS Reporting Database".
Otherwise, go to "Configuring Notifications and Logging".
The High-Availability Storage Archive Manager (HA-SAM) configuration maintains the availability of an archiving file system by ensuring that the QFS metadata server and the Storage Archive Manager application continue to operate even if a server host fails. The file system is shared between active and potential QFS metadata servers hosted on a two-node cluster that is managed by Solaris Cluster software. If the active cluster node fails, the clustering software automatically activates the potential SAM-QFS server on the surviving node and transfers control of running operations. Since the QFS file system and the SAM application's local storage directories are shared and already mounted, access to data and metadata remains uninterrupted.
The HA-SAM configuration ensures file-system consistency in a clustered environment by sending all I/O through the active metadata server. You share the HA-SAM file system purely for accessibility reasons. You cannot use the potential metadata server host as a file-system client, as you would in other SAM-QFS shared file-system configurations. The potential metadata server does not perform I/O unless it is activated during node failover. You can share an HA-SAM file system with clients using NFS, but you must ensure that the shares are exported exclusively from the active metadata server node.
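For example, an NFS share for the archiving file system could be defined on the active metadata server node only, along the following lines (a minimal sketch; the mount point /global/ha-sam/sam1 is a hypothetical example, and full HA-NFS setup is described in the Oracle Solaris Cluster documentation):
[sam1mds-node1]root@solaris:~# share -F nfs -o rw /global/ha-sam/sam1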
High-availability archiving file systems depend on two Solaris Cluster resource types:
SUNW.qfs
If the primary host fails, the SUNW.qfs resource manages failover for the QFS metadata server and the Storage Archive Manager application. The SUNW.qfs software is included with the SAM-QFS software distribution (for more information, see the SUNW.qfs man page).
SUNW.HAStoragePlus
If the primary host fails, the SUNW.HAStoragePlus resource manages failover of the Storage Archive Manager application's local storage. The SAM application maintains volatile archiving information (job queues and removable media catalogs) in the server host's local file system. SUNW.HAStoragePlus is included in the Solaris Cluster software as a standard resource type (for more information on resource types, see the Data Services Planning and Administration documentation in the Oracle Solaris Cluster Documentation Library).
To configure instances of the required components and integrate them into a working HA-SAM archiving configuration, carry out the following tasks:
Create a SAM-QFS Shared File System Hosts File on Both HA-SAM Cluster Nodes
Configure an Active QFS Metadata Server on the Primary HA-SAM Cluster Node
Configure a Potential QFS Metadata Server on the Secondary HA-SAM Cluster Node
If required, configure High-Availability Network File System (HA-NFS) sharing.
Detailed procedures for setting up HA-NFS are included in the Oracle Solaris Cluster Data Service for Network File System (NFS) Guide that is included in the Oracle Solaris Cluster online documentation library.
In an archiving SAM-QFS shared file system, you must configure a hosts file on the metadata servers, so that the hosts on both nodes can access the metadata for the file system. The hosts file is stored alongside the mcf file in the /etc/opt/SUNWsamfs/ directory. During the initial creation of a shared file system, the sammkfs -S command configures sharing using the settings stored in this file. So create it now, using the procedure below.
Log in to the primary node of the HA-SAM cluster as root.
In the example, sam1mds-node1 is the primary node:
[sam1mds-node1]root@solaris:~#
Display the cluster configuration. Use the /usr/global/bin/cluster show command. In the output, locate the record for each Node Name, and note the privatehostname, the Transport Adapter name, and the ip_address property of each network adapter.
In the example, each node has two network interfaces, hme0 and qfe3:
The hme0 adapters have IP addresses on the private network that the cluster uses for internal communication between nodes. The Solaris Cluster software assigns a privatehostname corresponding to each private address. By default, the private hostname of the primary node is clusternode1-priv, and the private hostname of the secondary node is clusternode2-priv.
The qfe3 adapters have public IP addresses and public hostnames (sam1mds-node1 and sam1mds-node2) that the cluster uses for data transport.
Note that the display has been abbreviated using ellipsis (...) marks:
[sam1mds-node1]root@solaris:~# cluster show
...
=== Cluster Nodes ===
Node Name:                         sam1mds-node1
  ...
  privatehostname:                 clusternode1-priv
  ...
  Transport Adapter List:          qfe3, hme0
  ...
  Transport Adapter:               qfe3
    ...
    Adapter Property(ip_address):  172.16.0.12
  ...
  Transport Adapter:               hme0
    ...
    Adapter Property(ip_address):  10.0.0.129
...
Node Name:                         sam1mds-node2
  ...
  privatehostname:                 clusternode2-priv
  ...
  Transport Adapter List:          qfe3, hme0
  ...
  Transport Adapter:               qfe3
    ...
    Adapter Property(ip_address):  172.16.0.13
  ...
  Transport Adapter:               hme0
    ...
    Adapter Property(ip_address):  10.0.0.122
Using a text editor, create the file /etc/opt/SUNWsamfs/hosts.family-set-name, where family-set-name is the family-set name that the /etc/opt/SUNWsamfs/mcf file assigns to the file-system equipment.
In the example, we create the file hosts.sam1 using the vi text editor. We add some optional headings to show the columns in the hosts table, starting each line with a hash sign (#) to indicate a comment:
[sam1mds-node1]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.sam1
# /etc/opt/SUNWsamfs/hosts.sam1
#                                          Server  On/ Additional
#Host Name     Network Interface           Ordinal Off Parameters
#------------  --------------------------  ------- --- ----------
In the first column of the table, enter the hostnames of the primary and secondary metadata server nodes followed by some spaces, with each entry on a separate line.
In a hosts file, the lines are rows (records) and spaces are column (field) separators. In the example, the Host Name column of the first two rows contains the values sam1mds-node1 and sam1mds-node2, the hostnames of the cluster nodes that host the metadata servers for the file system:
#                                          Server  On/ Additional
#Host Name     Network Interface           Ordinal Off Parameters
#------------  --------------------------  ------- --- ----------
sam1mds-node1
sam1mds-node2
In the second column of each line, start supplying Network Interface information for the hosts listed in the Host Name column. Enter each HA-SAM cluster node's Solaris Cluster private hostname or private network address followed by a comma.
The HA-SAM server nodes use the private hostnames for server-to-server communications within the high-availability cluster. In the example, we use the private hostnames clusternode1-priv and clusternode2-priv, which are the default names assigned by the Solaris Cluster software:
#                                          Server  On/ Additional
#Host Name     Network Interface           Ordinal Off Parameters
#------------  --------------------------  ------- --- ----------
sam1mds-node1  clusternode1-priv,
sam1mds-node2  clusternode2-priv,
Following the comma in the second column of each line, enter the public hostname for the active metadata server followed by spaces.
The HA-SAM server nodes use the public data network to communicate with hosts outside the cluster. Since the IP address and hostname of the active metadata server change during failover (from sam1mds-node1 to sam1mds-node2 and vice versa), we use a virtual hostname, sam1mds, for both. Later, we will configure the Solaris Cluster software to always route requests for sam1mds to the active metadata server:
#                                          Server  On/ Additional
#Host Name     Network Interface           Ordinal Off Parameters
#------------  --------------------------  ------- --- ----------
sam1mds-node1  clusternode1-priv,sam1mds
sam1mds-node2  clusternode2-priv,sam1mds
In the third column of each line, enter the ordinal number of the server (1 for the active metadata server, and 2 for the potential metadata server), followed by spaces.
In this example, there is only one active metadata server at a time. The primary node, sam1mds-node1, is the active metadata server, so it is ordinal 1, and the secondary node, sam1mds-node2, is ordinal 2:
#                                          Server  On/ Additional
#Host Name     Network Interface           Ordinal Off Parameters
#------------  --------------------------  ------- --- ----------
sam1mds-node1  clusternode1-priv,sam1mds   1
sam1mds-node2  clusternode2-priv,sam1mds   2
In the fourth column of each line, enter 0 (zero), followed by spaces.
A 0 (zero), - (hyphen), or blank value in the fourth column indicates that the host is on, that is, configured with access to the shared file system. A 1 (numeral one) indicates that the host is off, configured but without access to the file system (for information on using these values when administering shared file systems, see the samsharefs man page).
#                                          Server  On/ Additional
#Host Name     Network Interface           Ordinal Off Parameters
#------------  --------------------------  ------- --- ----------
sam1mds-node1  clusternode1-priv,sam1mds   1       0
sam1mds-node2  clusternode2-priv,sam1mds   2       0
In the fifth column of the line for the primary node, enter the keyword server
. Then save the file and close the editor.
The server keyword identifies the default, active metadata server:
#                                             Server   On/  Additional
#Host Name     Network Interface              Ordinal  Off  Parameters
#------------  -----------------------------  -------  ---  ----------
sam1mds-node1  clusternode1-priv,sam1mds      1        0    server
sam1mds-node2  clusternode2-priv,sam1mds      2        0
:wq
[sam1mds-node1]root@solaris:~#
Place a copy of the global /etc/opt/SUNWsamfs/hosts.
family-set-name
file on the potential metadata server.
In a high-availability archiving shared file system, you need to insure that the servers communicate with each other using the private network defined by the Solaris Cluster software. You do this by using specially configured local hosts files to selectively route network traffic between the network interfaces on the servers.
Each file-system host identifies the network interfaces for the other hosts by first checking the /etc/opt/SUNWsamfs/hosts.
family-set-name
file on the metadata server. Then it checks for its own, specific /etc/opt/SUNWsamfs/hosts.
family-set-name
.local
file. If there is no local hosts file, the host uses the interface addresses specified in the global hosts file in the order specified in the global file. But if there is a local hosts file, the host compares it with the global file and uses only those interfaces that are listed in both files in the order specified in the local file. By using different addresses in different arrangements in each file, you can thus control the interfaces used by different hosts.
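For example, using the hostnames from this chapter, the relationship between the two files might look like the following sketch (the file contents shown are simply the example values configured in the procedures below):
# Global /etc/opt/SUNWsamfs/hosts.sam1 on the metadata server lists
# both the private interconnect and the public virtual hostname:
sam1mds-node1  clusternode1-priv,sam1mds      1        0    server
sam1mds-node2  clusternode2-priv,sam1mds      2        0
# Local /etc/opt/SUNWsamfs/hosts.sam1.local on each server node lists
# only the private interfaces:
sam1mds-node1  clusternode1-priv              1        0    server
sam1mds-node2  clusternode2-priv              2        0
# Result: for server-to-server traffic, each server uses only the
# interfaces listed in both files, so the metadata servers communicate
# over the cluster's private interconnect.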
To configure local hosts files, use the procedure outlined below:
Log in to the primary node of the HA-SAM cluster as root
.
In the example, sam1mds-node1
is the primary node:
[sam1mds-node1]root@solaris:~#
Using a text editor, create a local hosts file on the active metadata server, using the path and file name /etc/opt/SUNWsamfs/hosts.
family-set-name
.local
, where family-set-name
is the family set name that the /etc/opt/SUNWsamfs/mcf
file assigns to the file system equipment. Only include interfaces for the networks that you want the active server to use when communicating with the potential server. Then save the file and close the editor.
In our example, we want the active and potential metadata servers to communicate with each other over the private network. So the local hosts file on the active metadata server, hosts.sam1.local
, lists only cluster private addresses for the active and potential servers:
[sam1mds-node1]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.sam1.local
#                                             Server   On/  Additional
#Host Name     Network Interface              Ordinal  Off  Parameters
#------------  -----------------------------  -------  ---  ----------
sam1mds-node1  clusternode1-priv              1        0    server
sam1mds-node2  clusternode2-priv              2        0
:wq
[sam1mds-node1]root@solaris:~#
Log in to the secondary cluster node as root
.
In the example, sam1mds-node2
is the secondary node:
[sam1mds-node1]root@solaris:~# ssh root@sam1mds-node2
Password:
[sam1mds-node2]root@solaris:~#
Using a text editor, create a local hosts file on the potential metadata server, using the path and file name /etc/opt/SUNWsamfs/hosts.
family-set-name
.local
, where family-set-name
is the family-set name that the /etc/opt/SUNWsamfs/mcf
file assigns to the file-system equipment. Only include interfaces for the networks that you want the potential server to use when communicating with the active server. Then save the file and close the editor.
In our example, we want the active and potential metadata servers to communicate with each other over the private network. So the local hosts file on the potential metadata server, hosts.sam1.local
, lists only cluster private addresses for the active and potential servers:
[sam1mds-node2]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.sam1.local
#                                             Server   On/  Additional
#Host Name     Network Interface              Ordinal  Off  Parameters
#------------  -----------------------------  -------  ---  ----------
sam1mds-node1  clusternode1-priv              1        0    server
sam1mds-node2  clusternode2-priv              2        0
:wq
[sam1mds-node2]root@solaris:~# exit
[sam1mds-node1]root@solaris:~#
Next, Configure an Active QFS Metadata Server on the Primary HA-SAM Cluster Node.
Select the cluster node that will serve as both the primary node for the HA-SAM cluster and the active metadata server for the QFS shared file system. Log in as root
.
In the example, sam1mds-node1
is the primary node:
[sam1mds-node1]root@solaris:~#
Select the global storage devices that will be used for the QFS file system. Use the command /usr/global/bin/cldevice
list
-v
.
Solaris Cluster software assigns unique Device Identifiers (DIDs) to all devices that attach to the cluster nodes. Global devices are accessible from all nodes in the cluster, while local devices are accessible only from the hosts that mount them. Global devices remain accessible following failover. Local devices do not.
In the example, note that devices d1, d2, d6, and d7 are not accessible from both nodes. So we select from devices d3, d4, and d5 when configuring the high-availability QFS shared file system:
[sam1mds-node1]root@solaris:~# cldevice list -v
DID Device          Full Device Path
----------          ----------------
d1                  sam1mds-node1:/dev/rdsk/c0t0d0
d2                  sam1mds-node1:/dev/rdsk/c0t6d0
d3                  sam1mds-node1:/dev/rdsk/c1t1d0
d3                  sam1mds-node2:/dev/rdsk/c1t1d0
d4                  sam1mds-node1:/dev/rdsk/c1t2d0
d4                  sam1mds-node2:/dev/rdsk/c1t2d0
d5                  sam1mds-node1:/dev/rdsk/c1t3d0
d5                  sam1mds-node2:/dev/rdsk/c1t3d0
d6                  sam1mds-node2:/dev/rdsk/c0t0d0
d7                  sam1mds-node2:/dev/rdsk/c0t1d0
On the selected primary node, create a high-performance ma
file system that uses mr
data devices. In a text editor, open the /etc/opt/SUNWsamfs/mcf
file.
In the example, we configure the file system sam1
. We configure device d3
as the metadata device (equipment type mm
), and use d4
and d5
as data devices (equipment type mr
):
[sam1mds-node1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment            Equipment  Equipment  Family   Device  Additional
# Identifier           Ordinal    Type       Set      State   Parameters
#------------------    ---------  ---------  -------  ------  -----------------
sam1                   100        ma         sam1     -
 /dev/did/dsk/d3s0     101        mm         sam1     -
 /dev/did/dsk/d4s0     102        mr         sam1     -
 /dev/did/dsk/d5s1     103        mr         sam1     -
In the /etc/opt/SUNWsamfs/mcf
file, enter the shared
parameter in the Additional Parameters
column of the file system entry. Save the file.
[sam1mds-node1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment            Equipment  Equipment  Family   Device  Additional
# Identifier           Ordinal    Type       Set      State   Parameters
#------------------    ---------  ---------  -------  ------  -----------------
sam1                   100        ma         sam1     -       shared
 /dev/did/dsk/d3s0     101        mm         sam1     -
 /dev/did/dsk/d4s0     102        mr         sam1     -
 /dev/did/dsk/d5s1     103        mr         sam1     -
:wq
[sam1mds-node1]root@solaris:~#
Check the mcf
file for errors. Use the command /opt/SUNWsamfs/sbin/sam-fsd
, and correct any errors found.
The sam-fsd
command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error. In the example, we check the mcf
file on host sam1mds-node1
:
[sam1mds-node1]root@solaris:~# sam-fsd
Create the file system. Use the command /opt/SUNWsamfs/sbin/sammkfs
-S
family-set-name
, where family-set-name
is the family-set name that the /etc/opt/SUNWsamfs/mcf
file assigns to the file-system equipment.
The sammkfs
command reads the hosts.
family-set-name
and mcf
files and creates a SAM-QFS file system with the specified properties.
[sam1mds-node1]root@solaris:~#sammkfs
-S
sam1
Open the operating system's /etc/vfstab
file in a text editor, and start a line for the new file system. Enter the file system name in the first column, spaces, a hyphen in the second column, and more spaces.
In the example, we use the vi
text editor. We start a line for the sam1
file system. The hyphen keeps the operating system from attempting to check file system integrity using UFS tools:
[sam1mds-node1]root@solaris:~# vi /etc/vfstab
#File
#Device    Device   Mount                 System  fsck  Mount    Mount
#to Mount  to fsck  Point                 Type    Pass  at Boot  Options
#--------  -------  --------------------  ------  ----  -------  ---------------
/devices   -        /devices              devfs   -     no       -
/proc      -        /proc                 proc    -     no       -
...
sam1       -
In the third column of the /etc/vfstab
file, enter the mount point of the file system relative to the cluster. Select a subdirectory that is not directly beneath the system root directory.
Mounting a shared QFS file system immediately under root can cause failover issues when using the SUNW.qfs
resource type. In the example, we set the mount point on the cluster to /global/ha-sam/sam1
:
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- ------------------- ------ ---- ------- ---------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
sam1 - /global/ha-sam/sam1
Populate the remaining fields of the /etc/vfstab
file record as you would for any SAM-QFS shared file system. Then save the file, and close the editor.
#File
#Device    Device   Mount                 System  fsck  Mount    Mount
#to Mount  to fsck  Point                 Type    Pass  at Boot  Options
#--------  -------  --------------------  ------  ----  -------  ---------------
/devices   -        /devices              devfs   -     no       -
/proc      -        /proc                 proc    -     no       -
...
sam1       -        /global/ha-sam/sam1   samfs   -     no       shared
:wq
[sam1mds-node1]root@solaris:~#
Create a mount point for the high-availability file system.
The mkdir
command with the -p
(parents) option creates the /global
directory if it does not already exist:
[sam1mds-node1]root@solaris:~#mkdir
-p
/global/ha-sam/sam1
Mount the high-availability shared file system on the primary node.
[sam1mds-node1]root@solaris:~#mount
/global/ha-sam/sam1
Next, Configure a Potential QFS Metadata Server on the Secondary HA-SAM Cluster Node.
The secondary node of the two-node cluster serves as the potential metadata server. A potential metadata server is a host that can access the metadata devices and can, therefore, assume the duties of a metadata server. So, if the active metadata server on the primary node fails, the Solaris Cluster software can fail over to the secondary node and activate the potential metadata server.
Log in to the secondary node of the HA-SAM cluster as root
.
In the example, sam1mds-node2
is the secondary node:
[sam1mds-node2]root@solaris:~#
Copy the /etc/opt/SUNWsamfs/mcf
file from the primary node to the secondary node.
Check the mcf
file for errors. Use the command /opt/SUNWsamfs/sbin/sam-fsd
, and correct any errors found.
The sam-fsd
command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error. In the example, we check the mcf
file on host sam1mds-node2
:
[sam1mds-node2]root@solaris:~# sam-fsd
Create the file system. Use the command /opt/SUNWsamfs/sbin/sammkfs
-S
family-set-name
, where family-set-name
is the family-set name that the /etc/opt/SUNWsamfs/mcf
file assigns to the file-system equipment.
The sammkfs
command reads the hosts.
family-set-name
and mcf
files and creates a SAM-QFS file system with the specified properties.
[sam1mds-node2]root@solaris:~# sammkfs -S sam1
Open the operating system's /etc/vfstab
file in a text editor, and add the line for the new file system. Then save the file, and close the editor.
In the example, we use the vi
editor:
[sam1mds-node2
]root@solaris:~#vi
/etc/vfstab
#File
#Device    Device   Mount                 System  fsck  Mount    Mount
#to Mount  to fsck  Point                 Type    Pass  at Boot  Options
#--------  -------  --------------------  ------  ----  -------  ---------------
/devices   -        /devices              devfs   -     no       -
/proc      -        /proc                 proc    -     no       -
...
sam1       -        /global/ha-sam/sam1   samfs   -     no       shared
:wq
[sam1mds-node2
]root@solaris:~#
Create the mount point for the high-availability shared file system on the secondary node.
[sam1mds-node2
]root@solaris:~#mkdir
-p
/global/ha-sam/sam1
Mount the high-availability shared file system on the secondary node.
[sam1mds-node2
]root@solaris:~#mount
/global/ha-sam/sam1
The Storage Archive Manager software maintains state information for archiving operations in the metadata server's local storage. By default, catalogs of archival media and queues of staging jobs reside in /var/opt/SUNWsamfs/catalog
and /var/opt/SUNWsamfs/stager
. Since local storage is no longer available to a surviving node following failover, we need to configure a high-availability storage resource to hold this data, relocate the subdirectories to the new resource, and provide symbolic links to the new file system in the default locations.
HAStoragePlus
is the Solaris Cluster resource type that defines the storage resource that we need. This resource type manages dependencies between disk device groups, cluster file systems, and local file systems, and it coordinates start-up of data services following failovers, so that all required components are ready when the service tries to restart (see the SUNW.HAStoragePlus
man page for further details).
To create and configure the HAStoragePlus
resource, proceed as follows:
Log in to the active metadata server (the primary cluster node of the HA-SAM cluster) as root
.
In the example, the primary node, sam1mds-node1
, is the active metadata server:
[sam1mds-node1]root@solaris:~#
Register the SUNW.HAStoragePlus
resource type as part of the cluster configuration. Use the Solaris Cluster command clresourcetype
register
SUNW.HAStoragePlus
.
[sam1mds-node1]root@solaris:~#clresourcetype
register
SUNW.HAStoragePlus
Create the resource and associate it with a Solaris Cluster resource group. Use the command clresource
create
-g
groupname
-t
SUNW.HAStoragePlus
-x
FilesystemMountPoints=
mountpoint
-x
AffinityOn=TRUE
resourcename
, where:
groupname
is the name that you have chosen for the resource group. This resource group will hold all cluster resources required by the HA-SAM configuration.
SUNW.HAStoragePlus
is the Solaris Cluster resource type that supports failover of local file systems.
mountpoint
is the mount point for the high-availability local file system that will hold the catalogs and stager queue files.
resourcename
is the name that you have chosen for the resource itself.
In the example, we create a resource named samlocalfs
of type SUNW.HAStoragePlus
and add it to the resource group samrg
. Then we configure the resource by setting the SUNW.HAStoragePlus
extension properties: we set FilesystemMountPoints
to /sam_shared
and AffinityOn
to TRUE
:
[sam1mds-node1]root@solaris:~# clresource create -g samrg -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/sam_shared \
-x AffinityOn=TRUE samlocalfs
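If the resource group named here has not yet been created in your cluster configuration, you can create it first with the clresourcegroup create command; a minimal sketch, assuming the samrg group name and the node names used in this example:
[sam1mds-node1]root@solaris:~# clresourcegroup create -n sam1mds-node1,sam1mds-node2 samrg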
Copy the catalog/
and stager/
directories from their default location in /var/opt/SUNWsamfs/
to a temporary location.
In the example, we recursively copy the directories to /var/tmp/
:
[sam1mds-node1]root@solaris:~# cp -r /var/opt/SUNWsamfs/catalog \
/var/tmp/catalog
[sam1mds-node1]root@solaris:~# cp -r /var/opt/SUNWsamfs/stager \
/var/tmp/stager
Delete catalog/
and stager/
directories from the default location, /var/opt/SUNWsamfs/
.
[sam1mds-node1]root@solaris:~#rm
-rf
/var/opt/SUNWsamfs/catalog
[sam1mds-node1]root@solaris:~#rm
-rf
/var/opt/SUNWsamfs/stager
Create new catalog/
and stager/
directories under the mount point of the high-availability HAStoragePlus
file system.
In the example, we create the new directories under the /sam_shared
mount point:
[sam1mds-node1]root@solaris:~#mkdir
/sam_shared/catalog
[sam1mds-node1]root@solaris:~#mkdir
/sam_shared/stager
In the /var/opt/SUNWsamfs/
directory on the active metadata server, create a symbolic link, catalog
, to the new catalog/
directory on the HAStoragePlus
mount point.
When the Storage Archive Manager application looks for data in its local file system, the symbolic link will automatically redirect it to the highly available file system. In the example, we create a catalog
link that points to the new location /sam_shared/catalog
:
[sam1mds-node1]root@solaris:~# ln -s /sam_shared/catalog \
/var/opt/SUNWsamfs/catalog
In the /var/opt/SUNWsamfs/
directory on the active metadata server, create a symbolic link, stager
, to the new stager/
directory on the HAStoragePlus
mount point.
In the example, we create a stager
link that points to the new location /sam_shared/stager
:
[sam1mds-node1]root@solaris:~#ln
-s
/sam_shared/stager \ /var/opt/SUNWsamfs/stager
On the active metadata server, make sure that symbolic links have replaced the default /var/opt/SUNWsamfs/catalog
and /var/opt/SUNWsamfs/stager
locations and that the links point to the new locations in the high-availability file system.
In the example, the links are correct:
[sam1mds-node1]root@solaris:~#ls
-l
/var/opt/SUNWsamfs/catalog
lrwxrwxrwx 1 root other .../var/opt/SUNWsamfs/catalog -> /sam_shared/catalog
[sam1mds-node1]root@solaris:~#ls
-l
/var/opt/SUNWsamfs/stager
lrwxrwxrwx 1 root other .../var/opt/SUNWsamfs/stager -> /sam_shared/stager
Copy the contents of the catalog/
and stager/
directories from the temporary location to the new, high-availability, shared file system.
In the example, we copy the contents of /var/tmp/catalog/ and /var/tmp/stager/ to the new locations through the symbolic links:
[sam1mds-node1]root@solaris:~# cp -rp /var/tmp/catalog/* \
/var/opt/SUNWsamfs/catalog
[sam1mds-node1]root@solaris:~# cp -rp /var/tmp/stager/* \
/var/opt/SUNWsamfs/stager
Log in to the potential metadata server (the secondary node of the HA-SAM cluster) as root
.
In the example, sam1mds-node2
is the secondary node:
[sam1mds-node2
]root@solaris:~#
In the /var/opt/SUNWsamfs/
directory on the potential metadata server, create a symbolic link, catalog
, to the new catalog/
directory on the HAStoragePlus
mount point.
When the Storage Archive Manager application looks for data in its local file system, the symbolic link will automatically redirect it to the highly available file system. In the example, we create a catalog
link that points to the new location /sam_shared/catalog
:
[sam1mds-node2]root@solaris:~# ln -s /sam_shared/catalog \
/var/opt/SUNWsamfs/catalog
In the /var/opt/SUNWsamfs/
directory on the potential metadata server, create a symbolic link, stager
, to the new stager/
directory on the HAStoragePlus
mount point.
In the example, we create a stager
link that points to the new location /sam_shared/stager
:
[sam1mds-node2]root@solaris:~# ln -s /sam_shared/stager \
/var/opt/SUNWsamfs/stager
On the potential metadata server, make sure that symbolic links have replaced the default /var/opt/SUNWsamfs/catalog
and /var/opt/SUNWsamfs/stager
locations, and make sure that the links point to the new locations in the high-availability file system.
In the example, the links are correct:
[sam1mds-node2]root@solaris:~#ls
-l
/var/opt/SUNWsamfs/catalog
lrwxrwxrwx 1 root other .../var/opt/SUNWsamfs/catalog -> /sam_shared/catalog
[sam1mds-node2]root@solaris:~#ls
-l
/var/opt/SUNWsamfs/stager
lrwxrwxrwx 1 root other .../var/opt/SUNWsamfs/stager -> /sam_shared/stager
When you host a SAM-QFS shared file system in a cluster managed by Solaris Cluster software, you configure failover of the metadata servers by creating a SUNW.qfs
cluster resource, a resource type defined by the SAM-QFS software (see the SUNW.qfs
man page for details). To create and configure the resource for an HA-SAM configuration, proceed as follows:
Log in to the primary cluster node of the HA-SAM cluster as root
.
In the example, the primary node is sam1mds-node1
:
[sam1mds-node1]root@solaris:~#
Define the resource type, SUNW.qfs
, for the Solaris Cluster software using the command clresourcetype register SUNW.qfs
.
[sam1mds-node1]root@solaris:~#clresourcetype
register
SUNW.qfs
If registration fails because the registration file cannot be found, place a symbolic link to the /opt/SUNWsamfs/sc/etc/
directory in the directory where Solaris Cluster keeps resource-type registration files, /opt/cluster/lib/rgm/rtreg/
.
This failure occurs when Oracle Solaris Cluster software was not installed before the SAM-QFS software. Normally, SAM-QFS automatically provides the location of the SUNW.qfs registration file when it detects Solaris Cluster during installation. If it did not, you need to create the link manually.
[sam1mds-node1]root@solaris:~# cd /opt/cluster/lib/rgm/rtreg/
[sam1mds-node1]root@solaris:~# ln -s /opt/SUNWsamfs/sc/etc/SUNW.qfs SUNW.qfs
In the new resource group, set up a virtual hostname for the active metadata server. Use the Solaris Cluster command clreslogicalhostname
create
-g
group-name
virtualMDS
, where group-name
is the name of the QFS resource group and virtualMDS
is the virtual hostname.
Use the same virtual hostname that you used in the hosts files for the shared file system. In the example, we add the virtual hostname sam1mds
to the samrg
resource group:
[sam1mds-node1]root@solaris:~#clreslogicalhostname
create
-g
samrg
sam1mds
Add the SAM-QFS file-system resources to the resource group using the command clresource create
-g
groupname
-t
SUNW.qfs
-x
QFSFileSystem=
mount-point
-y
Resource_dependencies=
virtualMDS
resource-name
, where:
groupname
is the name that you have chosen for the resource group. This resource group will hold all cluster resources required by the HA-SAM configuration.
SUNW.qfs
is the Solaris Cluster resource type that supports failover of the metadata servers and the Storage Archive Manager application.
mount-point
is the mount point for the file system in the cluster, a subdirectory that is not directly beneath the system root directory.
Mounting a shared QFS file system immediately under root can cause failover issues when using the SUNW.qfs
resource type.
virtualMDS
is the virtual hostname of the active metadata server.
resource-name
is the name that you have chosen for the resource itself.
In the example, we create a resource named hasam
of type SUNW.qfs
in the resource group samrg
. We set the SUNW.qfs
extension property QFSFileSystem
to the /global/ha-sam/sam1
mount point, and set the standard property Resource_dependencies
to the virtual hostname for the active metadata server, sam1mds
:
[sam1mds-node1]root@solaris:~# clresource create -g samrg -t SUNW.qfs \
-x QFSFileSystem=/global/ha-sam/sam1 \
-y Resource_dependencies=sam1mds hasam
Create a dependency between the SUNW.HAStoragePlus
resource that supports failover of Storage Archive Manager local files and the SUNW.qfs
resource that manages failover of the active metadata server. Use the Solaris Cluster command clresource set -p Resource_dependencies=
dependency
resource-name
, where resource-name
is the name of the SUNW.qfs
resource and dependency
is the name of the SUNW.HAStoragePlus
resource.
In the example, we specify samlocalfs
as a dependency of the hasam
resource:
[sam1mds-node1]root@solaris:~#clresource
set
-p
Resource_dependencies=
samlocalfs
hasam
Create a dependency between the SUNW.qfs
resource that manages failover of the active metadata server and the entire HA-SAM resource group. Use the Solaris Cluster command clresource set -p Resource_dependencies=
dependency
resource-name
, where resource-name
is the name of the HA-SAM resource group and dependency
is the name of the SUNW.qfs
resource.
In the example, we specify hasam
as a dependency of the samrg
resource group:
[sam1mds-node1]root@solaris:~# clresource set -p Resource_dependencies=hasam samrg
Bring the resource group online. Use the Solaris Cluster commands clresourcegroup
manage
groupname
, and clresourcegroup
online
-emM
groupname
, where groupname
is the name of the QFS resource group.
In the example, we bring the samrg
resource group online:
[sam1mds-node1]root@solaris:~#clresourcegroup
manage
samrg
[sam1mds-node1]root@solaris:~#clresourcegroup
online
-emM
samrg
Make sure that the QFS resource group is online. Use the Solaris Cluster clresourcegroup
status
command.
In the example, the samrg
resource group is online
on the primary node, sam1mds-node1
:
[sam1mds-node1]root@solaris:~# clresourcegroup status
=== Cluster Resource Groups ===
Group Name    Node Name        Suspended   Status
----------    -------------    ---------   ------
samrg         sam1mds-node1    No          Online
              sam1mds-node2    No          Offline
Make sure that the resource group fails over correctly by moving the resource group to the secondary node. Use the Solaris Cluster command clresourcegroup
switch
-n
node2
groupname
, where node2
is the name of the secondary node and groupname
is the name that you have chosen for the resource group. Then use clresourcegroup
status
to check the result.
In the example, we move the samrg
resource group to sam1mds-node2
and confirm that the resource group comes online on the specified node:
[sam1mds-node1]root@solaris:~# clresourcegroup switch -n sam1mds-node2 samrg
[sam1mds-node1]root@solaris:~# clresourcegroup status
=== Cluster Resource Groups ===
Group Name    Node Name        Suspended   Status
----------    -------------    ---------   ------
samrg         sam1mds-node1    No          Offline
              sam1mds-node2    No          Online
Move the resource group back to the primary node. Use the Solaris Cluster command clresourcegroup
switch
-n
node1
groupname
, where node1
is the name of the primary node and groupname
is the name that you have chosen for the resource group. Then use clresourcegroup status
to check the result.
In the example, we successfully move the samrg
resource group back to sam1mds-node1
:
[sam1mds-node1]root@solaris:~# clresourcegroup switch -n sam1mds-node1 samrg
[sam1mds-node1]root@solaris:~# clresourcegroup status
=== Cluster Resource Groups ===
Group Name    Node Name        Suspended   Status
----------    -------------    ---------   ------
samrg         sam1mds-node1    No          Online
              sam1mds-node2    No          Offline
If required, configure High-Availability Network File System (HA-NFS) sharing now.
Detailed procedures for setting up HA-NFS are included in the Oracle Solaris Cluster Data Service for Network File System (NFS) Guide that is included in the Oracle Solaris Cluster online documentation library.
If you plan on using the sideband database feature, go to "Configuring the SAM-QFS Reporting Database".
Otherwise, go to "Configuring Notifications and Logging".
In the Solaris Cluster-Oracle Real Application Cluster (SC-RAC) configuration, Solaris Cluster software manages a QFS shared file system as a SUNW.qfs
resource mounted on nodes that also host Oracle Database and Oracle Real Application Cluster (RAC) software. All nodes are configured as QFS servers, with one the active metadata server and the others potential metadata servers. If the active metadata server node fails, Solaris Cluster software automatically activates a potential metadata server on a healthy node and initiates failover. Since the QFS file system is shared and already mounted on all nodes while I/O is coordinated through Oracle RAC, access to the data remains uninterrupted.
In the SC-RAC configuration, the RAC software coordinates I/O requests, distributes workload, and maintains a single, consistent set of database files for multiple Oracle Database instances running on the cluster nodes. Since file-system integrity is assured under RAC, the QFS potential metadata servers can perform I/O as clients of the shared file system.
For additional information, see the Oracle Solaris Cluster Data Service documentation for Oracle Real Application Clusters in Oracle Solaris Cluster Online Documentation Library.
To configure a SC-RAC file system, carry out the tasks below:
Create a QFS Shared File System Hosts File on All SC-RAC Cluster Nodes
Create Local Hosts Files on the QFS Servers and Clients Outside the HA-COTC Cluster
Configure an Active QFS Metadata Server on the Primary SC-RAC Cluster Node or Configure QFS Metadata Servers on SC-RAC Nodes Using Software RAID Storage
Configure a Potential QFS Metadata Server on the Remaining SC-RAC Cluster Nodes
If required, configure Network File System (NFS) shares, as described in "Accessing File Systems from Multiple Hosts Using NFS and SMB/CIFS". High-Availability NFS (HA-NFS) is not supported.
In a QFS shared file system, you must configure a hosts file on the metadata servers, so that all hosts can access the metadata for the file system. The hosts file is stored alongside the mcf
file in the /etc/opt/SUNWsamfs/
directory. During the initial creation of a shared file system, the sammkfs
-S
command configures sharing using the settings stored in this file. So create it now, using the procedure below.
Log in to the primary cluster node of the SC-RAC cluster as root
.
In the example, the primary node is qfs1rac-node1
:
[qfs1rac-node1]root@solaris:~#
Display the cluster configuration. Use the command /usr/global/bin/cluster
show
. In the output, locate the record for each Node Name
, and then note the privatehostname
and the Transport Adapter
name and ip_address
property of each network adapter.
In the examples, each node has two network interfaces, qfe3
and hme0
:
The hme0
adapters have IP addresses on the private network that the cluster uses for internal communication between nodes. The Solaris Cluster software assigns a privatehostname
corresponding to each private address.
By default, the private hostname of the primary node is clusternode1-priv
, and the private hostname of the secondary node is clusternode2-priv
.
The qfe3
adapters have public IP addresses and public hostnames—qfs1rac-node1
and qfs1rac-node2
—that the cluster uses for data transport.
Note that the display has been abbreviated using ellipsis (...
) marks:
[qfs1rac-node1]root@solaris:~# cluster show
...
=== Cluster Nodes ===
Node Name:                                      qfs1rac-node1
  ...
  privatehostname:                              clusternode1-priv
  ...
  Transport Adapter List:                       qfe3, hme0
  ...
  Transport Adapter:                            qfe3
    ...
    Adapter Property(ip_address):               172.16.0.12
  ...
  Transport Adapter:                            hme0
    ...
    Adapter Property(ip_address):               10.0.0.129
...
Node Name:                                      qfs1rac-node2
  ...
  privatehostname:                              clusternode2-priv
  ...
  Transport Adapter List:                       qfe3, hme0
  ...
    Adapter Property(ip_address):               172.16.0.13
  ...
  Transport Adapter:                            hme0
    Adapter Property(ip_address):               10.0.0.122
...
Node Name:                                      qfs1rac-node3
  ...
  privatehostname:                              clusternode3-priv
  ...
  Transport Adapter List:                       qfe3, hme0
  ...
    Adapter Property(ip_address):               172.16.0.33
  ...
  Transport Adapter:                            hme0
    Adapter Property(ip_address):               10.0.0.092
Using a text editor, create the file /etc/opt/SUNWsamfs/hosts.
family-set-name
, where family-set-name
is the family-set name that the /etc/opt/SUNWsamfs/mcf
file assigns to the file-system equipment.
In the example, we create the file hosts.qfs1rac
using the vi
text editor. We add some optional headings to show the columns in the hosts table, starting each line with a hash sign (#
) to indicate a comment:
[qfs1rac-node1]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.qfs1rac
# /etc/opt/SUNWsamfs/hosts.qfs1rac
#                                                Server   On/  Additional
#Host Name     Network Interface                 Ordinal  Off  Parameters
#------------  --------------------------------  -------  ---  ----------
In the first column of the table, enter the hostnames of the primary and secondary metadata server nodes followed by some spaces. Place each entry on a separate line.
In a hosts file, the lines are rows (records) and spaces are column (field) separators. In the example, the Host Name column of the first three rows lists the hostnames of the cluster nodes qfs1rac-node1, qfs1rac-node2, and qfs1rac-node3:
#                                                Server   On/  Additional
#Host Name     Network Interface                 Ordinal  Off  Parameters
#------------  --------------------------------  -------  ---  ----------
qfs1rac-node1
qfs1rac-node2
qfs1rac-node3
In the second column of each line, start supplying Network Interface information for the hosts listed in the Host Name column. Enter each SC-RAC cluster node's Solaris Cluster private hostname or private network address followed by a comma.
The SC-RAC server nodes use the private hostnames for server-to-server communications within the high-availability cluster. In the example, we use the private hostnames clusternode1-priv
, clusternode2-priv
, and clusternode3-priv
, which are the default names assigned by the Solaris Cluster software:
#                                                Server   On/  Additional
#Host Name     Network Interface                 Ordinal  Off  Parameters
#------------  --------------------------------  -------  ---  ----------
qfs1rac-node1  clusternode1-priv,
qfs1rac-node2  clusternode2-priv,
qfs1rac-node3  clusternode3-priv,
Following the comma in the second column of each line, enter the public hostname for the active metadata server followed by spaces.
The SC-RAC server nodes use the public data network to communicate with the clients, all of which reside outside the cluster. Since the IP address and hostname of the active metadata server changes during failover (from qfs1rac-node1
to qfs1rac-node2
, for example), we represent the active server with a virtual hostname, qfs1rac-mds
. Later, we will configure the Solaris Cluster software to always route requests for qfs1rac-mds
to the node that currently hosts the active metadata server:
#                                                Server   On/  Additional
#Host Name     Network Interface                 Ordinal  Off  Parameters
#------------  --------------------------------  -------  ---  ----------
qfs1rac-node1  clusternode1-priv,qfs1rac-mds
qfs1rac-node2  clusternode2-priv,qfs1rac-mds
qfs1rac-node3  clusternode3-priv,qfs1rac-mds
In the third column of each line, enter the ordinal number of the server (1
for the active metadata server, and 2
for the potential metadata server), followed by spaces.
In this example, the primary node, qfs1rac-node1, is the active metadata server, so it is ordinal 1; the secondary node, qfs1rac-node2, is ordinal 2; and so on:
#                                                Server   On/  Additional
#Host Name     Network Interface                 Ordinal  Off  Parameters
#------------  --------------------------------  -------  ---  ----------
qfs1rac-node1  clusternode1-priv,qfs1rac-mds     1
qfs1rac-node2  clusternode2-priv,qfs1rac-mds     2
qfs1rac-node3  clusternode3-priv,qfs1rac-mds     3
In the fourth column of each line, enter 0
(zero), followed by spaces.
A 0
, -
(hyphen), or blank value in the fourth column indicates that the host is on—configured with access to the shared file system. A 1
(numeral one) indicates that the host is off
—configured but without access to the file system (for information on using these values when administering shared file systems, see the samsharefs
man page).
#                                                Server   On/  Additional
#Host Name     Network Interface                 Ordinal  Off  Parameters
#------------  --------------------------------  -------  ---  ----------
qfs1rac-node1  clusternode1-priv,qfs1rac-mds     1        0
qfs1rac-node2  clusternode2-priv,qfs1rac-mds     2        0
qfs1rac-node3  clusternode3-priv,qfs1rac-mds     3        0
In the fifth column of the line for the primary node, enter the keyword server
. Save the file and close the editor.
The server keyword identifies the default, active metadata server:
#                                                Server   On/  Additional
#Host Name     Network Interface                 Ordinal  Off  Parameters
#------------  --------------------------------  -------  ---  ----------
qfs1rac-node1  clusternode1-priv,qfs1rac-mds     1        0    server
qfs1rac-node2  clusternode2-priv,qfs1rac-mds     2        0
qfs1rac-node3  clusternode3-priv,qfs1rac-mds     3        0
:wq
[qfs1rac-node1]root@solaris:~#
Place a copy of the global /etc/opt/SUNWsamfs/hosts.
family-set-name
file on each node in the SC-RAC cluster.
Now, Configure an Active QFS Metadata Server on the Primary SC-RAC Cluster Node.
Select the cluster node that will serve as both the primary node for the SC-RAC cluster and the active metadata server for the QFS shared file system. Log in as root
.
In the example, the primary node is qfs1rac-node1
:
[qfs1rac-node1]root@solaris:~#
Select the global storage devices that will be used for the QFS file system. Use the command /usr/global/bin/cldevice
list
-v
.
Solaris Cluster software assigns unique Device Identifiers (DIDs) to all devices that attach to the cluster nodes. Global devices are accessible from all nodes in the cluster, while local devices are accessible only from the hosts that mount them. Global devices remain accessible following failover. Local devices do not.
In the example, note that devices d1
, d2
, d6
, d7
, and d8
are not accessible from all of the nodes. So we select from devices d3
, d4
, and d5
when configuring the high-availability QFS shared file system:
[qfs1rac-node1]root@solaris:~# cldevice list -v
DID Device          Full Device Path
----------          ----------------
d1                  qfs1rac-node1:/dev/rdsk/c0t0d0
d2                  qfs1rac-node1:/dev/rdsk/c0t6d0
d3                  qfs1rac-node1:/dev/rdsk/c1t1d0
d3                  qfs1rac-node2:/dev/rdsk/c1t1d0
d3                  qfs1rac-node3:/dev/rdsk/c1t1d0
d4                  qfs1rac-node1:/dev/rdsk/c1t2d0
d4                  qfs1rac-node2:/dev/rdsk/c1t2d0
d4                  qfs1rac-node3:/dev/rdsk/c1t2d0
d5                  qfs1rac-node1:/dev/rdsk/c1t3d0
d5                  qfs1rac-node2:/dev/rdsk/c1t3d0
d5                  qfs1rac-node3:/dev/rdsk/c1t3d0
d6                  qfs1rac-node2:/dev/rdsk/c0t0d0
d7                  qfs1rac-node2:/dev/rdsk/c0t1d0
d8                  qfs1rac-node3:/dev/rdsk/c0t1d0
Create a shared, high-performance ma
file system that uses mr
data devices. In a text editor, open the /etc/opt/SUNWsamfs/mcf
file.
In the example, we configure the file system qfs1rac
. We configure device d3
as the metadata device (equipment type mm
), and use d4
and d5
as data devices (equipment type mr
):
[qfs1rac-node1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment            Equipment  Equipment  Family    Device  Additional
# Identifier           Ordinal    Type       Set       State   Parameters
#------------------    ---------  ---------  --------  ------  -----------------
qfs1rac                100        ma         qfs1rac   -
 /dev/did/dsk/d3s0     101        mm         qfs1rac   -
 /dev/did/dsk/d4s0     102        mr         qfs1rac   -
 /dev/did/dsk/d5s0     103        mr         qfs1rac   -
...
In the /etc/opt/SUNWsamfs/mcf
file, enter the shared
parameter in the Additional Parameters
column of the file system entry. Save the file.
[qfs1rac-node1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment            Equipment  Equipment  Family    Device  Additional
# Identifier           Ordinal    Type       Set       State   Parameters
#------------------    ---------  ---------  --------  ------  -----------------
qfs1rac                100        ma         qfs1rac   -       shared
 /dev/did/dsk/d3s0     101        mm         qfs1rac   -
 /dev/did/dsk/d4s0     102        mr         qfs1rac   -
 /dev/did/dsk/d5s0     103        mr         qfs1rac   -
...
:wq
[qfs1rac-node1]root@solaris:~#
Check the mcf
file for errors. Use the command /opt/SUNWsamfs/sbin/sam-fsd
, and correct any errors found.
The sam-fsd
command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error. In the example, we check the mcf
file on host qfs1rac-node1
:
[qfs1rac-node1]root@solaris:~# sam-fsd
Create the file system. Use the command /opt/SUNWsamfs/sbin/sammkfs
-S
family-set-name
, where family-set-name
is the equipment identifier for the file-system.
The sammkfs
command reads the hosts.
family-set-name
and mcf
files and creates a shared file system with the specified properties.
[qfs1rac-node1]root@solaris:~#sammkfs
-S
qfs1rac
Open the operating system's /etc/vfstab
file in a text editor, and start a line for the new file system. Enter the file system name in the first column, spaces, a hyphen in the second column, and more spaces.
In the example, we use the vi
text editor. We start a line for the qfs1rac
file system. The hyphen keeps the operating system from attempting to check file system integrity using UFS tools:
[qfs1rac-node1]root@solaris:~# vi /etc/vfstab
#File
#Device    Device   Mount                    System  fsck  Mount    Mount
#to Mount  to fsck  Point                    Type    Pass  at Boot  Options
#--------  -------  -----------------------  ------  ----  -------  ----------
/devices   -        /devices                 devfs   -     no       -
/proc      -        /proc                    proc    -     no       -
...
qfs1rac    -
In the third column of the /etc/vfstab
file, enter the mount point of the file system relative to the cluster. Specify a subdirectory that is not directly beneath the system root directory.
Mounting a shared QFS file system immediately under root can cause failover issues when using the SUNW.qfs
resource type. In the example, the mount point for the qfs1rac
file system is /global/sc-rac/qfs1rac
:
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- ----------------------- ------ ---- ------- ----------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
qfs1rac - /global/sc-rac/qfs1rac
Enter the file-system type, samfs
, in the fourth column and -
(hyphen) and no
in the fifth and sixth columns.
#File
#Device    Device   Mount                    System  fsck  Mount    Mount
#to Mount  to fsck  Point                    Type    Pass  at Boot  Options
#--------  -------  -----------------------  ------  ----  -------  ----------
/devices   -        /devices                 devfs   -     no       -
/proc      -        /proc                    proc    -     no       -
...
qfs1rac    -        /global/sc-rac/qfs1rac   samfs   -     no
:wq
[qfs1rac-node1]root@solaris:~#
In the mount options column (the last column) of the /etc/vfstab file, enter the mount options listed below. Then save the file, and close the editor.
The following mount options are recommended for the SC-RAC cluster configuration. They can be specified here, in /etc/vfstab
, or in the file /etc/opt/SUNWsamfs/samfs.cmd
, if more convenient:
shared
stripe=1
sync_meta=1
mh_write
qwrite
forcedirectio
notrace
rdlease=300
wrlease=300
aplease=300
In the example, the list has been abbreviated to fit the page layout:
#File
#Device    Device   Mount                    System  fsck  Mount    Mount
#to Mount  to fsck  Point                    Type    Pass  at Boot  Options
#--------  -------  -----------------------  ------  ----  -------  ------------
/devices   -        /devices                 devfs   -     no       -
/proc      -        /proc                    proc    -     no       -
...
qfs1rac    -        /global/sc-rac/qfs1rac   samfs   -     no       shared,...,aplease=300
:wq
[qfs1rac-node1]root@solaris:~#
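If you prefer to keep the /etc/vfstab entry short, the mount options other than shared can instead be placed in the /etc/opt/SUNWsamfs/samfs.cmd file. A minimal sketch, assuming the qfs1rac family set used in this example (the same file would then be maintained on every node so that all hosts mount with identical options):
# /etc/opt/SUNWsamfs/samfs.cmd (sketch; the directives below apply only to qfs1rac)
fs = qfs1rac
  stripe = 1
  sync_meta = 1
  mh_write
  qwrite
  forcedirectio
  notrace
  rdlease = 300
  wrlease = 300
  aplease = 300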
Create a mount point for the high-availability shared file system.
[qfs1rac-node1]root@solaris:~#mkdir
-p
/global/sc-rac/qfs1rac
Mount the high-availability shared file system on the primary node.
[qfs1rac-node1]root@solaris:~#mount
/global/sc-rac/qfs1rac
Next, Configure a Potential QFS Metadata Server on the Remaining SC-RAC Cluster Nodes.
The remaining nodes of the cluster serve as potential metadata servers. A potential metadata server is a host that can access the metadata devices and can, therefore, assume the duties of a metadata server. So, if the active metadata server on the primary node fails, the Solaris Cluster software can fail over to another node and activate the potential metadata server.
For each remaining node in the SC-RAC cluster, proceed as follows:
Log in to the node as root
.
In the example, the current node is qfs1rac-node2
:
[qfs1rac-node2]root@solaris:~#
Copy the /etc/opt/SUNWsamfs/mcf
file from the primary node to the current node.
Check the mcf
file for errors. Run the command /opt/SUNWsamfs/sbin/sam-fsd
, and correct any errors found.
The sam-fsd
command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error. In the example, we check the mcf
file on host qfs1rac-node2
:
[qfs1rac-node2]root@solaris:~# sam-fsd
Open the operating system's /etc/vfstab
file in a text editor, and start a line for the new file system.
In the example, we use the vi
editor:
[qfs1rac-node2]root@solaris:~# vi /etc/vfstab
#File
#Device    Device   Mount                    System  fsck  Mount    Mount
#to Mount  to fsck  Point                    Type    Pass  at Boot  Options
#--------  -------  -----------------------  ------  ----  -------  ------------
/devices   -        /devices                 devfs   -     no       -
/proc      -        /proc                    proc    -     no       -
...
qfs1rac    -        /global/sc-rac/qfs1rac   samfs   -     no
In the mount options column (the last column) of the /etc/vfstab file, enter the mount options listed below. Then save the file, and close the editor.
The following mount options are recommended for the SC-RAC cluster configuration. They can be specified here, in /etc/vfstab
, or in the file /etc/opt/SUNWsamfs/samfs.cmd
, if more convenient:
shared
stripe=1
sync_meta=1
mh_write
qwrite
forcedirectio
notrace
rdlease=300
wrlease=300
aplease=300
In the example, the list has been abbreviated to fit the page layout:
#File
#Device    Device   Mount                    System  fsck  Mount    Mount
#to Mount  to fsck  Point                    Type    Pass  at Boot  Options
#--------  -------  -----------------------  ------  ----  -------  ------------
/devices   -        /devices                 devfs   -     no       -
/proc      -        /proc                    proc    -     no       -
...
qfs1rac    -        /global/sc-rac/qfs1rac   samfs   -     no       shared,...,aplease=300
:wq
[qfs1rac-node2]root@solaris:~#
Create the mount point for the high-availability shared file system on the secondary node.
[qfs1rac-node2]root@solaris:~#mkdir
-p
/global/sc-rac/qfs1rac
Mount the high-availability shared file system on the secondary node.
[qfs1rac-node2]root@solaris:~#mount
/global/sc-rac/qfs1rac
When you host a SAM-QFS shared file system in a cluster managed by Solaris Cluster software, you configure failover of the metadata servers by creating a SUNW.qfs
cluster resource, a resource type defined by the SAM-QFS software (see the SUNW.qfs
man page for details). To create and configure the resource for an SC-RAC configuration, proceed as follows:
Log in to the primary node in the SC-RAC cluster as root
.
In the example, the primary node is qfs1rac-node1
:
[qfs1rac-node1]root@solaris:~#
Define the QFS resource type, SUNW.qfs
, for the Solaris Cluster software. Use the command clresourcetype
register
SUNW.qfs
.
[qfs1rac-node1]root@solaris:~#clresourcetype
register
SUNW.qfs
If registration fails because the registration file cannot be found, place a symbolic link to the /opt/SUNWsamfs/sc/etc/
directory in the directory where Solaris Cluster keeps resource-type registration files, /opt/cluster/lib/rgm/rtreg/
.
This failure occurs when Oracle Solaris Cluster software was not installed before the SAM-QFS software. Normally, SAM-QFS automatically provides the location of the SUNW.qfs registration file when it detects Solaris Cluster during installation. If it did not, you need to create the link manually.
[qfs1rac-node1]root@solaris:~#cd
/opt/cluster/lib/rgm/rtreg/
[qfs1rac-node1]root@solaris:~#ln
-s
/opt/SUNWsamfs/sc/etc/SUNW.qfs
SUNW.qfs
Create a resource group for the QFS metadata server using the Solaris Cluster command clresourcegroup
create
-n
node-list
group-name
, where node-list
is a comma-delimited list of the cluster nodes and group-name
is the name that we want to use for the resource group.
In the example, we create the resource group qfsracrg
with the SC-RAC server nodes as members:
[qfs1rac-node1]root@solaris:~# clresourcegroup create \
-n qfs1rac-node1,qfs1rac-node2 qfsracrg
In the new resource group, set up a virtual hostname for the active metadata server. Use the Solaris Cluster command clreslogicalhostname create -g group-name virtualMDS, where group-name is the name of the QFS resource group and virtualMDS is the virtual hostname.
Use the same virtual hostname that you used in the hosts files for the shared file system. In the example, we create the virtual host qfs1rac-mds
in the qfsracrg
resource group:
[qfs1rac-node1]root@solaris:~# clreslogicalhostname create \
-g qfsracrg qfs1rac-mds
Add the QFS file-system resources to the resource group using the command clresource
create
-g
group-name
-t
SUNW.qfs
-x
QFSFileSystem=
mount-point
-y
Resource_dependencies=
virtualMDS
resource-name
, where:
group-name
is the name of the QFS resource group.
mount-point
is the mount point for the file system in the cluster, a subdirectory that is not directly beneath the system root directory.
Mounting a shared QFS file system immediately under root can cause failover issues when using the SUNW.qfs
resource type.
virtualMDS
is the virtual hostname of the active metadata server.
resource-name
is the name that you want to give to the resource.
In the example, we create a resource named scrac
of type SUNW.qfs
in the resource group qfsracrg
. We set the SUNW.qfs
extension property QFSFileSystem
to the /global/sc-rac/qfs1rac
mount point, and set the standard property Resource_dependencies
to the logical host for the active metadata server, qfs1rac-mds
:
[qfs1rac-node1]root@solaris:~# clresource create -g qfsracrg -t SUNW.qfs \
-x QFSFileSystem=/global/sc-rac/qfs1rac \
-y Resource_dependencies=qfs1rac-mds scrac
Bring the resource group online. Use the Solaris Cluster commands clresourcegroup
manage
group-name
and clresourcegroup
online
-emM
group-name
, where group-name
is the name of the QFS resource group.
In the example, we bring the qfsracrg
resource group online:
[qfs1rac-node1]root@solaris:~#clresourcegroup
manage
qfsracrg
[qfs1rac-node1]root@solaris:~#clresourcegroup
online
-emM
qfsracrg
Make sure that the QFS resource group is online. Use the Solaris Cluster command clresourcegroup
status
.
In the example, the qfsracrg
resource group is online
on the primary node, qfs1rac-node1
:
[qfs1rac-node1]root@solaris:~# clresourcegroup status
=== Cluster Resource Groups ===
Group Name    Node Name        Suspended   Status
----------    -------------    ---------   ------
qfsracrg      qfs1rac-node1    No          Online
              qfs1rac-node2    No          Offline
              qfs1rac-node3    No          Offline
Make sure that the resource group fails over correctly by moving the resource group to the secondary node. Use the Solaris Cluster command clresourcegroup
switch
-n
node2
group-name
, where node2
is the name of the secondary node and group-name
is the name that you have chosen for the resource group. Then use clresourcegroup
status
to check the result.
In the example, we move the resource group to qfs1rac-node2
and qfs1rac-node3
, confirming that the resource group comes online on the specified node:
[qfs1rac-node1]root@solaris:~# clresourcegroup switch -n qfs1rac-node2 qfsracrg
[qfs1rac-node1]root@solaris:~# clresourcegroup status
=== Cluster Resource Groups ===
Group Name    Node Name        Suspended   Status
----------    -------------    ---------   ------
qfsracrg      qfs1rac-node1    No          Offline
              qfs1rac-node2    No          Online
              qfs1rac-node3    No          Offline
[qfs1rac-node1]root@solaris:~# clresourcegroup switch -n qfs1rac-node3 qfsracrg
[qfs1rac-node1]root@solaris:~# clresourcegroup status
=== Cluster Resource Groups ===
Group Name    Node Name        Suspended   Status
----------    -------------    ---------   ------
qfsracrg      qfs1rac-node1    No          Offline
              qfs1rac-node2    No          Offline
              qfs1rac-node3    No          Online
[qfs1rac-node1]root@solaris:~#
Move the resource group back to the primary node. Use the Solaris Cluster command clresourcegroup
switch
-n
node1
group-name
, where node1
is the name of the primary node and group-name
is the name that you have chosen for the resource group. Then use clresourcegroup
status
to check the result.
In the example, we successfully move the qfsracrg
resource group back to qfs1rac-node1
:
[qfs1rac-node1]root@solaris:~# clresourcegroup switch -n qfs1rac-node1 qfsracrg
[qfs1rac-node1]root@solaris:~# clresourcegroup status
=== Cluster Resource Groups ===
Group Name    Node Name        Suspended   Status
----------    -------------    ---------   ------
qfsracrg      qfs1rac-node1    No          Online
              qfs1rac-node2    No          Offline
              qfs1rac-node3    No          Offline
[qfs1rac-node1]root@solaris:~#
If you plan on using the sideband database feature, go to "Configuring the SAM-QFS Reporting Database".
Otherwise, go to "Configuring Notifications and Logging".
A high-availability file system must store data and metadata on redundant primary storage devices. Redundant disk array hardware can provide this redundancy using RAID-1 or RAID-10 for metadata and RAID-5 for data. But if you need to use plain, dual-port SCSI disk devices or a JBOD (just a bunch of disks) array as primary storage, you need to provide the required redundancy in software.
For this reason, the SC-RAC configuration supports software RAID configurations based on Oracle Solaris Volume Manager (SVM) multi-owner disk sets. This section outlines the basic steps that you need to take when setting up this variant of the SC-RAC file-system configuration.
Note that you should use Solaris Volume Manager purely for managing the redundant storage array. Do not concatenate storage on separate devices. Doing so distributes I/O to the component devices inefficiently and degrades QFS file-system performance.
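For example (a sketch only, using the dataset1 disk set and DID device names from the procedures below), each RAID-0 submirror should be built from a single component rather than from a concatenation of several devices:
# Recommended: one stripe, one component per RAID-0 submirror
metainit -s dataset1 d10 1 1 /dev/did/dsk/d21s0
# Avoid: concatenating two devices into one submirror
# (I/O is distributed inefficiently across the components and QFS performance degrades)
# metainit -s dataset1 d10 2 1 /dev/did/dsk/d21s0 /dev/did/dsk/d22s0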
Carry out the following tasks:
Solaris Volume Manager (SVM) is no longer included with Solaris 11, but the Solaris Cluster 4 software continues to support Solaris Volume Manager on Solaris 11. So, to use the software, you must download and install the version that was included with the Solaris 10 9/10 release. For each node in the cluster, proceed as follows:
Log in to the node as root
.
In the example, we configure cluster node qfs2rac-node1
:
[qfs2rac-node1]root@solaris:~#
Check for locally available Solaris Volume Manager (SVM) packages. Use the command pkg info svm.
[qfs2rac-node1]root@solaris:~# pkg info svm
pkg: info: no packages matching the following patterns you specified are
installed on the system. Try specifying -r to query remotely:
        svm
[qfs2rac-node1]root@solaris:~#
If no packages are found locally, check the Solaris Image Packaging System (IPS) repository. Use the command pkg info -r svm.
[qfs2rac-node1]root@solaris:~# pkg info -r svm
          Name: storage/svm
       Summary: Solaris Volume Manager
   Description: Solaris Volume Manager commands
      Category: System/Core
         State: Not installed
     Publisher: solaris
       Version: 0.5.11
 Build Release: 5.11
        Branch: 0.175.0.0.0.2.1
Packaging Date: October 19, 2011 06:42:14 AM
          Size: 3.48 MB
          FMRI: pkg://solaris/storage/svm@0.5.11,5.11-0.175.0.0.0.2.1:20111019T064214Z
[qfs2rac-node1]root@solaris:~#
Install the package using the command pkg
install
storage/svm
:
[qfs2rac-node1]root@solaris:~# pkg install storage/svm
           Packages to install:   1
       Create boot environment:  No
Create backup boot environment: Yes
            Services to change:   1
DOWNLOAD      PKGS       FILES    XFER (MB)
Completed      1/1     104/104      1.6/1.6
PHASE                         ACTIONS
Install Phase                 168/168
PHASE                           ITEMS
Package State Update Phase        1/1
Image State Update Phase          2/2
[qfs2rac-node1]root@solaris:~#
When the installation finishes, check the location of metadb. Use the command which metadb.
[qfs2rac-node1]root@solaris:~# which metadb
/usr/sbin/metadb
[qfs2rac-node1]root@solaris:~#
Check the installation. Use the command metadb
.
[qfs2rac-node1]root@solaris:~# metadb
[qfs2rac-node1]root@solaris:~#
If metadb
returns an error, see if the kernel/drv/md.conf
file exists.
[qfs2rac-node1]root@solaris:~# metadb
metadb: <HOST>: /dev/md/admin: No such file or directory
[qfs2rac-node1]root@solaris:~# ls -l /kernel/drv/md.conf
-rw-r--r--   1 root     sys          295 Apr 26 15:07 /kernel/drv/md.conf
[qfs2rac-node1]root@solaris:~#
If the kernel/drv/md.conf
file does not exist, create it. Make root
the file's owner, and make sys
the group owner. Set permissions to 644
.
The content of the file should look like this:
[qfs2rac-node1]root@solaris:~#vi
kernel/drv/md.conf
###################################################
#pragma ident "@(#)md.conf 2.1 00/07/07 SMI"
#
# Copyright (c) 1992-1999 by Sun Microsystems, Inc.
# All rights reserved.
#
name="md" parent="pseudo" nmd=128 md_nsets=4;
####################################################
:wq
[qfs2rac-node1]root@solaris:~# chown root:sys kernel/drv/md.conf
[qfs2rac-node1]root@solaris:~# chmod 644 kernel/drv/md.conf
[qfs2rac-node1]root@solaris:~#
Dynamically rescan the md.conf
file and make sure that the device tree is updated. Use the command update_drv
-f
md
:
In the example, the device tree is updated. So Solaris Volume Manager is installed:
[qfs2rac-node1]root@solaris:~#update_drv
-f
md
[qfs2rac-node1]root@solaris:~#ls -l
/dev/md/admin
lrwxrwxrwx 1 root root 31 Apr 20 10:12 /dev/md/admin -> ../../devices/pseudo/md@0:admin
Next, Create Solaris Volume Manager Multi-Owner Disk Groups.
Log in to all nodes in the SC-RAC configuration as root
.
In the example, we log in to node qfs2rac-node1
. We then open new terminal windows and use ssh
to log in to nodes qfs2rac-node2
and qfs2rac-node3
:
[qfs2rac-node1]root@solaris:~#
[qfs2rac-node1]root@solaris:~# ssh root@qfs2rac-node2
Password:
[qfs2rac-node2]root@solaris:~#
[qfs2rac-node1]root@solaris:~# ssh root@qfs2rac-node3
Password:
[qfs2rac-node3]root@solaris:~#
If you are using Oracle Solaris Cluster 4.x on Solaris 11.x and have not already done so, Install Solaris Volume Manager on Solaris 11 on each node before proceeding further.
Solaris Volume Manager is not installed on Solaris 11 by default.
On each node, attach a new state database device and create three state database replicas. Use the command metadb
-a
-f
-c3
device-name
, where device-name
is a physical device name of the form cXtYdZ.
Do not use Solaris Cluster Device Identifiers (DIDs). Use the physical device name. In the example, we create state database devices on all three cluster nodes:
[qfs2rac-node1]root@solaris:~# metadb -a -f -c3 /dev/rdsk/c0t0d0
[qfs2rac-node2]root@solaris:~# metadb -a -f -c3 /dev/rdsk/c0t6d0
[qfs2rac-node3]root@solaris:~# metadb -a -f -c3 /dev/rdsk/c0t4d0
Create a Solaris Volume Manager multi-owner disk group on one node. Use the command metaset
-s
diskset
-M
-a
-h
host-list
, where host-list
is a space-delimited list of owners.
Solaris Volume Manager supports up to four hosts per disk set. In the example, we create the disk group dataset1
on qfs2rac-node1
and specify the three nodes qfs2rac-node1
, qfs2rac-node2
, and qfs2rac-node3
as owners:
[qfs2rac-node1]root@solaris:~# metaset -s dataset1 -M -a -h qfs2rac-node1 \
qfs2rac-node2 qfs2rac-node3
List the devices on one of the nodes. Use the Solaris Cluster command cldevice
list
-n
-v
.
[qfs2rac-node1]root@solaris:~# cldevice list -n -v
DID Device          Full Device Path
----------          ----------------
d13                 qfs2rac-node1:/dev/rdsk/c6t600C0FF00000000000332B62CF3A6B00d0
d14                 qfs2rac-node1:/dev/rdsk/c6t600C0FF0000000000876E950F1FD9600d0
d15                 qfs2rac-node1:/dev/rdsk/c6t600C0FF0000000000876E9124FAF9C00d0
...
[qfs2rac-node1]root@solaris:~#
In the output of the cldevice
list
-n
-v
command, select the devices that will be mirrored.
In the example, we select four pairs of devices for four mirrors: d21
and d13
, d14
and d17
, d23
and d16
, and d15
and d19
.
[qfs2rac-node1]root@solaris:~# cldevice list -n -v
DID Device          Full Device Path
----------          ----------------
d13                 qfs2rac-node1:/dev/rdsk/c6t600C0FF00000000000332B62CF3A6B00d0
d14                 qfs2rac-node1:/dev/rdsk/c6t600C0FF0000000000876E950F1FD9600d0
d15                 qfs2rac-node1:/dev/rdsk/c6t600C0FF0000000000876E9124FAF9C00d0
d16                 qfs2rac-node1:/dev/rdsk/c6t600C0FF00000000000332B28488B5700d0
d17                 qfs2rac-node1:/dev/rdsk/c6t600C0FF000000000086DB474EC5DE900d0
d18                 qfs2rac-node1:/dev/rdsk/c6t600C0FF0000000000876E975EDA6A000d0
d19                 qfs2rac-node1:/dev/rdsk/c6t600C0FF000000000086DB47E331ACF00d0
d20                 qfs2rac-node1:/dev/rdsk/c6t600C0FF0000000000876E9780ECA8100d0
d21                 qfs2rac-node1:/dev/rdsk/c6t600C0FF000000000004CAD5B68A7A100d0
d22                 qfs2rac-node1:/dev/rdsk/c6t600C0FF000000000086DB43CF85DA800d0
d23                 qfs2rac-node1:/dev/rdsk/c6t600C0FF000000000004CAD7CC3CDE500d0
d24                 qfs2rac-node1:/dev/rdsk/c6t600C0FF000000000086DB4259B272300d0
....
[qfs2rac-node1]root@solaris:~#
Add the selected devices to the disk set on the same node. Use the command metaset -s diskset -a devicelist, where devicelist is a space-delimited list of one or more cluster device identifiers.
In the example, we add the listed disks to multi-owner disk set dataset1:
[qfs2rac-node1]root@solaris:~# metaset -s dataset1 -a /dev/did/rdsk/d21 \
/dev/did/rdsk/d13 /dev/did/rdsk/d14 /dev/did/rdsk/d17 /dev/did/rdsk/d23 ...
[qfs2rac-node1]root@solaris:~#
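If you want to confirm the membership of the disk set before continuing, running metaset with only the set name prints the current owners and drives. This check is a suggestion and is not required by the procedure:
[qfs2rac-node1]root@solaris:~# metaset -s dataset1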
Next, Create Mirrored Volumes for the QFS Data and Metadata.
To keep the relationships between components clear, decide on a naming scheme for the RAID-0 logical volumes and RAID-1 mirrors that you will create.
Commonly, RAID-1 mirrors are named dn, where n is an integer. The RAID-0 volumes that make up the RAID-1 mirrors are named dnX, where X is an integer representing the device's position within the mirror (usually 0 or 1 for a two-way mirror).
In the examples throughout this procedure, we create two-way RAID-1 mirrors from pairs of RAID-0 logical volumes. So we name the mirrors d1, d2, d3, d4, and so on. Then we name each pair of RAID-0 volumes for the RAID-1 mirror that includes it: d10 and d11, d20 and d21, d30 and d31, d40 and d41, and so on.
Log in to the node where you created the multi-owner disk set as root.
In the examples above, we created the disk set on qfs2rac-node1:
[qfs2rac-node1]root@solaris:~#
Create the first RAID-0 logical volume. Use the command metainit -s diskset-name device-name number-of-stripes components-per-stripe component-name, where:
diskset-name is the name that you have chosen for the disk set.
device-name is the name that you have chosen for the RAID-0 logical volume.
number-of-stripes is 1.
components-per-stripe is 1.
component-name is the device name of the disk set component to use in the RAID-0 volume.
In the example, we use the cluster (DID) device /dev/did/dsk/d21s0 in multi-owner disk set dataset1 to create RAID-0 logical volume d10:
[qfs2rac-node1]root@solaris:~# metainit -s dataset1 d10 1 1 /dev/did/dsk/d21s0
Create the remaining RAID-0 logical volumes.
[qfs2rac-node1]root@solaris:~# metainit -s dataset1 d11 1 1 /dev/did/dsk/d13s0
[qfs2rac-node1]root@solaris:~# metainit -s dataset1 d20 1 1 /dev/did/dsk/d14s0
[qfs2rac-node1]root@solaris:~# metainit -s dataset1 d21 1 1 /dev/did/dsk/d17s0
[qfs2rac-node1]root@solaris:~# metainit -s dataset1 d30 1 1 /dev/did/dsk/d23s0
[qfs2rac-node1]root@solaris:~# metainit -s dataset1 d31 1 1 /dev/did/dsk/d16s0
[qfs2rac-node1]root@solaris:~# metainit -s dataset1 d40 1 1 /dev/did/dsk/d15s0
[qfs2rac-node1]root@solaris:~# metainit -s dataset1 d41 1 1 /dev/did/dsk/d19s0
...
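Before building the mirrors, you can optionally confirm that the RAID-0 volumes were created. The -p option prints the configuration of the disk set in metainit command form, which is easy to compare against the commands entered above (a suggested check, not part of the original procedure):
[qfs2rac-node1]root@solaris:~# metastat -s dataset1 -p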
Create the first RAID-1 mirror. Use the command metainit -s diskset-name RAID-1-mirrorname -m RAID-0-volume0, where:
diskset-name is the name of the multi-owner disk set
RAID-1-mirrorname is the name of the RAID-1 mirrored volume
RAID-0-volume0 is the first RAID-0 logical volume that you are adding to the mirror.
In the example, we create mirror d1 and add the first RAID-0 volume in the mirror, d10:
[qfs2rac-node1]root@solaris:~# metainit -s dataset1 d1 -m d10
Add the remaining RAID-0 volumes to the first RAID-1 mirror. Use the command metattach -s diskset-name RAID-1-mirrorname RAID-0-volume, where:
diskset-name is the name of the multi-owner disk set
RAID-1-mirrorname is the name of the RAID-1 mirrored volume
RAID-0-volume is the RAID-0 logical volume that you are adding to the mirror.
In the example, d1 is a two-way mirror, so we add a single RAID-0 volume, d11:
[qfs2rac-node1]root@solaris:~# metattach -s dataset1 d1 d11
Create the remaining mirrors.
In the example, we create mirrors d2, d3, d4, and so on:
[qfs2rac-node1]root@solaris:~# metainit -s dataset1 d2 -m d20
[qfs2rac-node1]root@solaris:~# metattach -s dataset1 d2 d21
[qfs2rac-node1]root@solaris:~# metainit -s dataset1 d3 -m d30
[qfs2rac-node1]root@solaris:~# metattach -s dataset1 d3 d31
[qfs2rac-node1]root@solaris:~# metainit -s dataset1 d4 -m d40
[qfs2rac-node1]root@solaris:~# metattach -s dataset1 d4 d41
...
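RAID-1 mirrors resynchronize in the background after a submirror is attached. Before placing file-system metadata and data on the mirrors, you may want to confirm that each mirror reports an Okay state (a suggested check, not part of the original procedure):
[qfs2rac-node1]root@solaris:~# metastat -s dataset1 d1 d2 d3 d4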
Select the mirrors that will hold the QFS file-system metadata.
For the examples below, we choose mirrors d1 and d2.
In the selected mirrors, create soft partitions to hold the QFS metadata. For each mirror, use the command metainit -s diskset-name partition-name -p RAID-1-mirrorname size, where:
diskset-name is the name of the multi-owner disk set.
partition-name is the name of the new partition.
RAID-1-mirrorname is the name of the mirror.
size is the size of the partition.
In the example, we create two 500-gigabyte partitions: d53 on mirror d1 and d63 on mirror d2:
[qfs2rac-node1]root@solaris:~# metainit -s dataset1 d53 -p d1 500g
[qfs2rac-node1]root@solaris:~# metainit -s dataset1 d63 -p d2 500g
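The mcf file created in the next procedure refers to these volumes by their paths under /dev/md/dataset1/dsk/. An optional listing confirms that device nodes exist for the soft partitions and the data mirrors before you edit the mcf file; the exact entries depend on the volumes that you created:
[qfs2rac-node1]root@solaris:~# ls -l /dev/md/dataset1/dsk/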
Next, Create a QFS Shared File System on the SC-RAC Cluster Using Mirrored Volumes.
If you have not already done so, carry out the procedure "Create a QFS Shared File System Hosts File on All SC-RAC Cluster Nodes". When finished, return here.
Select the cluster node that will serve as both the primary node for the SC-RAC cluster and the active metadata server for the QFS shared file system. Log in as root.
In the example, we select node qfs2rac-node1:
[qfs2rac-node1]root@solaris:~#
On the primary node, create a shared, high-performance, ma file system. Use Solaris Volume Manager mirrored-disk volumes as mm metadata devices and mr data devices. In a text editor, open the /etc/opt/SUNWsamfs/mcf file, make the required edits, and save the file.
In the example, we use the vi text editor to create the file system qfs2rac. Partitions on mirrored volumes d1 and d2 serve as the file system's two mm metadata devices, 110 and 120. Mirrored volumes d3 and d4 serve as the file system's two mr data devices, 130 and 140.
[qfs2rac-node1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# /etc/opt/SUNWsamfs/mcf file:
#
# Equipment                 Equipment  Equipment  Family   Device  Additional
# Identifier                Ordinal    Type       Set      State   Parameters
# ------------------------- ---------  ---------  -------  ------  ----------
qfs2rac                     100        ma         qfs2rac  on      shared
/dev/md/dataset1/dsk/d53    110        mm         qfs2rac  on
/dev/md/dataset1/dsk/d63    120        mm         qfs2rac  on
/dev/md/dataset1/dsk/d3     130        mr         qfs2rac  on
/dev/md/dataset1/dsk/d4     140        mr         qfs2rac  on
:wq
[qfs2rac-node1]root@solaris:~#
Check the mcf file for errors. Use the command /opt/SUNWsamfs/sbin/sam-fsd, and correct any errors found.
The sam-fsd command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error. In the example, we check the mcf file on host qfs2rac-node1:
[qfs2rac-node1]root@solaris:~# sam-fsd
Create the file system. Use the command /opt/SUNWsamfs/sbin/sammkfs -S family-set-name, where family-set-name is the equipment identifier for the file system.
The sammkfs command reads the hosts.family-set-name and mcf files and creates a shared file system with the specified properties.
[qfs2rac-node1]root@solaris:~# sammkfs -S qfs2rac
Open the operating system's /etc/vfstab file in a text editor, and start a line for the new file system. Enter the file system name in the first column, spaces, a hyphen in the second column, and more spaces.
In the example, we use the vi text editor to start a line for the qfs2rac file system. The hyphen keeps the operating system from attempting to check file-system integrity using UFS tools:
[qfs2rac-node1]root@solaris:~# vi /etc/vfstab
#File
#Device    Device   Mount                    System  fsck  Mount    Mount
#to Mount  to fsck  Point                    Type    Pass  at Boot  Options
#--------  -------  -----------------------  ------  ----  -------  ------------
/devices   -        /devices                 devfs   -     no       -
/proc      -        /proc                    proc    -     no       -
...
qfs2rac    -
In the third column of the /etc/vfstab file, enter the mount point of the file system relative to the cluster, the file system type (samfs), the fsck pass option (-), and the mount-at-boot option (no). Specify a mount-point subdirectory that is not directly beneath the system root directory.
Mounting a shared QFS file system immediately under root can cause failover issues when using the SUNW.qfs resource type. In the example, the mount point for the qfs2rac file system is /global/sc-rac/qfs2rac:
#File
#Device    Device   Mount                    System  fsck  Mount    Mount
#to Mount  to fsck  Point                    Type    Pass  at Boot  Options
#--------  -------  -----------------------  ------  ----  -------  ------------
/devices   -        /devices                 devfs   -     no       -
/proc      -        /proc                    proc    -     no       -
...
qfs2rac    -        /global/sc-rac/qfs2rac   samfs   -     no
In the sixth column of the /etc/vfstab file, enter the sw_raid mount option and the recommended mount options for the SC-RAC configuration. Then save the file, and close the editor.
The following mount options are recommended. They can be specified here, in /etc/vfstab, or, if more convenient, in the file /etc/opt/SUNWsamfs/samfs.cmd (a sketch of the samfs.cmd approach follows the example below):
shared
stripe=1
sync_meta=1
mh_write
qwrite
forcedirectio
notrace
rdlease=300
wrlease=300
aplease=300
In the example, the list has been abbreviated to fit the page layout:
#File
#Device    Device   Mount                    System  fsck  Mount    Mount
#to Mount  to fsck  Point                    Type    Pass  at Boot  Options
#--------  -------  -----------------------  ------  ----  -------  ------------
/devices   -        /devices                 devfs   -     no       -
/proc      -        /proc                    proc    -     no       -
...
qfs2rac    -        /global/sc-rac/qfs2rac   samfs   -     no       shared,...sw_raid
:wq
[qfs2rac-node1]root@solaris:~#
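If you prefer to keep the /etc/vfstab entry short, the performance and lease options can be placed in /etc/opt/SUNWsamfs/samfs.cmd instead, scoped to this file system. The following is a minimal sketch of that approach, assuming that the shared and sw_raid options remain on the vfstab line and that only the remaining recommended options move to samfs.cmd:
[qfs2rac-node1]root@solaris:~# vi /etc/opt/SUNWsamfs/samfs.cmd
fs = qfs2rac
  stripe = 1
  sync_meta = 1
  mh_write
  qwrite
  forcedirectio
  notrace
  rdlease = 300
  wrlease = 300
  aplease = 300
:wq
[qfs2rac-node1]root@solaris:~#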
Create the mount point for the high-availability shared file system.
[qfs2rac-node1]root@solaris:~# mkdir -p /global/sc-rac/qfs2rac
Mount the high-availability shared file system on the primary node.
[qfs2rac-node1]root@solaris:~# mount /global/sc-rac/qfs2rac
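As an optional check before configuring the remaining nodes, confirm that the file system is mounted at the expected mount point and with the samfs file-system type:
[qfs2rac-node1]root@solaris:~# mount -v | grep qfs2rac
[qfs2rac-node1]root@solaris:~# df -h /global/sc-rac/qfs2rac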
Set up the second node using the procedure "Configure a Potential QFS Metadata Server on the Remaining SC-RAC Cluster Nodes".
Configure failover using the procedure "Configure Failover of the SC-RAC Metadata Servers". Then return here.
You have successfully completed the SC-RAC configuration using Solaris Volume Manager mirrored volumes.
If you plan on using the sideband database feature, go to "Configuring the SAM-QFS Reporting Database".
Otherwise, go to "Configuring Notifications and Logging".