CHAPTER 2
After you have set up the installation environment, you are ready to manually install the Solaris Operating System and the Foundation Services on the master-eligible nodes of the cluster. The master-eligible nodes take on the roles of master node and vice-master node in the cluster. For more information about the types of nodes, see the Netra High Availability Suite 3.0 1/08 Foundation Services Overview.
To manually install and configure Netra HA Suite software on the master-eligible nodes of your cluster, see the following sections:
Installing the Solaris Operating System on the Master-Eligible Nodes
Configuring Solaris Volume Manager With Reliable NFS and Shared Disk
The master-eligible nodes store current data for all nodes in the cluster, whether the cluster has diskless nodes or dataless nodes. One master-eligible node becomes the master node, while the other becomes the vice-master node. The vice-master node takes over the role of master if the master node fails or is taken offline for maintenance. Therefore, the disks of both of these nodes must have exactly the same partitions. Create the disk partitions of the master-eligible nodes according to the needs of your cluster. For example, the disks of the master-eligible nodes must be configured differently if diskless nodes are part of the cluster.
The following table lists the space requirements for example disk partitions of master-eligible nodes in a cluster with diskless nodes.
Disk Partition | File System Name | Description | Example Size |
---|---|---|---|
0 | / | The root file system, boot partition, and volume management software. This partition must be mounted with the logging option. | 2 Gbytes minimum |
1 | swap | Swap space. The example size is the minimum when physical memory is less than 1 Gbyte. | 1 Gbyte |
2 | overlap | Entire disk. | Size of entire disk |
3 | /export | Exported file system reserved for diskless nodes. This partition must be mounted with the logging option. This partition is further partitioned if diskless nodes are added to the cluster. | 1 Gbyte + 100 Mbytes per diskless node |
4 | /SUNWcgha/local | This partition is reserved for NFS status files, services, and configuration files. This partition must be mounted with the logging option. | 2 Gbytes |
5 | Reserved for Reliable NFS internal use | Bitmap partition reserved for the nhcrfsd daemon. This partition is associated with the /export file system. | See TABLE 2-3 |
6 | Reserved for Reliable NFS internal use | Bitmap partition reserved for the nhcrfsd daemon. This partition is associated with the /SUNWcgha/local file system. | See TABLE 2-3 |
7 | /mypartition | For any additional applications. | The remaining space |
For replication, create a bitmap partition for each partition containing an exported, replicated file system on the master-eligible nodes. The bitmap partition must be at least the following size.
1 Kbyte + 4 Kbytes per Gbyte of data in the associated data partition
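For example, a 2-Gbyte data partition such as /SUNWcgha/local requires a bitmap partition of at least 1 Kbyte + 2 x 4 Kbytes = 9 Kbytes. In practice, the bitmap slice is larger because slices are created on cylinder boundaries.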
In this example, the bitmaps are created on partitions 5 and 6. The bitmap partition sizes can be as shown in the following table.
For information, see the Sun StorageTek Availability Suite 3.1 Remote Mirror Software Installation Guide in the Sun StorageTek Availability Suite 3.1 documentation set.
Note - In a cluster without diskless nodes, the /export file system and the associated bitmap partition are not required. |
To install the Solaris Operating System on each master-eligible node, use the Solaris JumpStart tool on the installation server. The Solaris JumpStart tool requires the Solaris distribution to be on the installation server. For information about creating a Solaris distribution, see Netra High Availability Suite 3.0 1/08 Foundation Services Installation Guide.
Create the Solaris JumpStart environment on the installation server by using the appropriate document for the Solaris release:
You can access these documents on http://www.oracle.com/technetwork/indexes/documentation/index.html.
In the /etc/hosts file, add the names and IP addresses of the master-eligible nodes.
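For example, using the example cluster addresses from this chapter (the host names and addresses shown here are illustrations only):
10.250.1.10 netraMEN1
10.250.1.20 netraMEN2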
Share the Solaris-distribution-dir and Jumpstart-dir directories by adding these lines to the /etc/dfs/dfstab file:
share -F nfs -o ro,anon=0 Solaris-distribution-dir
share -F nfs -o ro,anon=0 Jumpstart-dir
Share the directories that are defined in the /etc/dfs/dfstab file:
# shareall |
Change to the directory where the add_install_client command is located:
# cd Solaris-dir/Solaris_x/Tools |
Run the add_install_client command for each master-eligible node.
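For example, a command of the following form can be used. The MAC address, host name, and platform group (sun4u) shown here are illustrations only, and the exact options depend on your setup; see the add_install_client man page.
# ./add_install_client -e 8:0:20:ab:cd:ef \
-s installation-server:/Solaris-distribution-dir \
-c installation-server:/Jumpstart-dir \
netraMEN1 sun4u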
Boot each master-eligible node with the appropriate command using a network boot.
If you are unsure of the appropriate command, refer to the hardware documentation for your platform. The common command for SPARC systems is shown in the following example:
ok> boot net - install |
If the installation server is connected to the second Ethernet interface, type:
ok> boot net2 - install |
This command installs the Solaris Operating System on the master-eligible nodes.
For information about performing this task on the AMD64 platform, refer to the hardware documentation.
To prepare the master-eligible nodes for the installation of the Netra HA Suite, you must configure the master-eligible nodes. You must also mount the installation server directory that contains the Netra HA Suite distribution.
# touch /etc/notrouter |
Modify the /etc/default/login file so that you can connect to a node from a remote system as superuser:
# mv /etc/default/login /etc/default/login.orig
# chmod 644 /etc/default/login.orig
# sed '1,$s/^CONSOLE/#CONSOLE/' /etc/default/login.orig > /etc/default/login
# chmod 444 /etc/default/login
# touch /noautoshutdown |
Modify the .rhosts file according to the security policy for your cluster:
# touch /.rhosts
# cp /.rhosts /.rhosts.orig
# echo "+ root" > /.rhosts
# chmod 444 /.rhosts
# /usr/sbin/eeprom local-mac-address?=true
# /usr/sbin/eeprom auto-boot?=true
# /usr/sbin/eeprom diag-switch?=false
The preceding example is for SPARC-based hardware. For the commands required on the x64 platform, refer to the hardware documentation.
If you are using the Network Time Protocol (NTP) to run an external clock, configure the master-eligible node as an NTP server.
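One possible way to do this, shown here only as a sketch to be adapted to your NTP setup, is to create /etc/inet/ntp.conf from the server template supplied with the Solaris OS and then start the NTP daemon:
# cp /etc/inet/ntp.server /etc/inet/ntp.conf
# /etc/init.d/xntpd start
On the Solaris 10 OS, start the daemon with svcadm enable svc:/network/ntp instead.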
If your master-eligible node has an IDE disk, edit the /usr/kernel/drv/sdbc.conf file.
Change the value of the sdbc_max_fbas parameter from 1024 to 256.
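After the change, the relevant line of the /usr/kernel/drv/sdbc.conf file looks similar to the following (the rest of the file is not shown):
sdbc_max_fbas=256;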
Create the data/etc and data/var/dhcp directories in the /SUNWcgha/local/export/ file system on the master-eligible node:
# mkdir -p /SUNWcgha/local/export/data/etc
# mkdir -p /SUNWcgha/local/export/data/var/dhcp
Repeat Step 1 through Step 9 on the second master-eligible node.
Check that the mountd and nfsd daemons are running on the installation server.
For example, use the ps command:
# ps -ef | grep mountd
    root   184     1  0   Aug 03 ?        0:01 /usr/lib/autofs/automountd
    root   290     1  0   Aug 03 ?        0:00 /usr/lib/nfs/mountd
    root  2978  2974  0 17:40:34 pts/2    0:00 grep mountd
# ps -ef | grep nfsd
    root   292     1  0   Aug 03 ?        0:00 /usr/lib/nfs/nfsd -a 16
    root  2980  2974  0 17:40:50 pts/2    0:00 grep nfsd
#
If a process ID is not returned for the mountd and nfsd daemons, start the NFS daemons as follows:
On the Solaris 9 OS, use the following command
# /etc/init.d/nfs.server start |
On the Solaris 10 OS, use the following command
# svcadm enable svc:/network/nfs/server |
Share the directory containing the distributions for the Netra HA Suite and the Solaris Operating System by adding the following lines to the /etc/dfs/dfstab file:
share -F nfs -o ro,anon=0 software-distribution-dir |
where software-distribution-dir is the directory that contains the Netra HA Suite packages and Solaris patches.
Share the directories that are defined in the /etc/dfs/dfstab file:
# shareall |
Create the mount point directories Solaris and NetraHASuite on the master-eligible node:
# mkdir /NetraHASuite
# mkdir /Solaris
Mount the Netra HA Suite and Solaris distribution directories on the installation server:
# mount -F nfs installation-server-IP-address:/software-distribution-dir/Product/NetraHASuite_3.0/FoundationServices/Solaris_x/sparc /NetraHASuite
# mount -F nfs installation-server-IP-address:/Solaris-distribution-dir /Solaris
installation-server-IP-address is the IP address of the cluster network interface that is connected to the installation server.
software-distribution-dir is the directory that contains the Netra HA Suite packages. (Note that in the preceding example, this line wraps, due to space constraints; however, when typing this path, it should contain no spaces or line breaks.)
Solaris-distribution-dir is the directory that contains the Solaris distribution.
Repeat Step 5 through Step 7 on the other master-eligible node.
After you have completed the Solaris installation, you must install the Solaris patches delivered in the Netra HA Suite distribution. See the Netra High Availability Suite 3.0 1/08 Foundation Services README for the list of patches.
Note - Some of these patches are required for CGTP. If you do not plan to install CGTP, do not install the CGTP patches. For more information about the impact of not installing CGTP, see Choosing a Cluster Network. |
The following procedures explain how to install the Netra HA Suite on the master-eligible nodes:
As superuser, install the following CMM packages on each master-eligible node:
# pkgadd -d /NetraHASuite/Packages/ SUNWnhas-common-libs \
SUNWnhas-common SUNWnhas-cmm-libs SUNWnhas-cmm
For instructions on configuring the CMM, see Configuring the Netra HA Suite on the Master-Eligible Nodes.
For information about the CMM, see the Netra High Availability Suite 3.0 1/08 Foundation Services Overview.
The nhadm tool is a cluster administration tool that can verify that the installation was completed correctly. You can run this tool when your cluster is up and running.
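For example, once the cluster is running, you can perform a basic verification with a command similar to the following. This assumes the nhadm tool is installed in its default /opt/SUNWcgha/sbin location; see the nhadm man page for the exact subcommands available in your release.
# /opt/SUNWcgha/sbin/nhadm check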
CGTP enables a redundant network for your cluster.
Note - If you do not require CGTP, do not install the CGTP packages. For more information about the impact of not installing CGTP, see Choosing a Cluster Network. |
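As superuser, install the CGTP packages on each master-eligible node. The following command is a sketch only: CGTP-packages is a placeholder for the CGTP package names delivered in the Packages directory of your Netra HA Suite distribution.
# pkgadd -d /NetraHASuite/Packages/ CGTP-packages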
As superuser, install the Node State Manager packages on each master-eligible node:
# pkgadd -d /NetraHASuite/Packages/ SUNWnhas-nsm |
For more information about the Node State Manager, see the Netra High Availability Suite 3.0 1/08 Foundation Services Overview.
Install the Reliable NFS packages to enable the Reliable NFS service and data-replication features of Netra HA Suite. For a description of the Reliable NFS service, see “File Sharing and Data Replication” in Netra High Availability Suite 3.0 1/08 Foundation Services Overview. The Reliable NFS feature is enabled by the StorEdge Network Data Replicator (SNDR), which is provided with the Reliable NFS packages.
Note - SNDR is supplied for use only with the Netra HA Suite. Any use of this product other than on a Netra HA Suite cluster is not supported. |
As superuser, install the following Reliable NFS and SNDR packages on a master-eligible node in the following order:
# pkgadd -d /NetraHASuite/Packages/ SUNWscmr SUNWscmu SUNWspsvr \
SUNWspsvu SUNWrdcr SUNWrdcu SUNWnhas-rnfs-client \
SUNWnhas-rnfs-server
Repeat Step 1 on the second master-eligible node.
Install the SNDR patches on each master-eligible node.
See the Netra High Availability Suite 3.0 1/08 Foundation Services README for a list of SNDR patches.
Edit the /usr/kernel/drv/rdc.conf file on each master-eligible node to change the value of the rdc_bitmap_mode parameter.
To have changes to the bitmaps written on the disk at each update, change the value of the rdc_bitmap_mode parameter to 1.
To have changes to the bitmaps stored in memory at each update, change the value of the rdc_bitmap_mode parameter to 2. In this case, changes are written on the disk when the node is shut down. However, if both master-eligible nodes fail, both disks must be synchronized.
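For example, to have bitmap changes written to disk at each update, the /usr/kernel/drv/rdc.conf file would contain a line similar to the following (the surrounding lines of the file are not shown):
rdc_bitmap_mode=1;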
Install the Reliable NFS packages to enable the Reliable NFS service and disk mirroring features of Netra HA Suite. For a description of the Reliable NFS service, see “File Sharing and Data Replication” in Netra High Availability Suite 3.0 1/08 Foundation Services Overview.
As superuser, install the following Reliable NFS packages on a master-eligible node in the following order:
# pkgadd -d /NetraHASuite/Packages/ SUNWnhas-rnfs-client \
SUNWnhas-rnfs-server
Repeat Step 1 on the second master-eligible node.
Install the Node Management Agent (NMA) packages to gather statistics on Reliable NFS, CGTP, and CMM. For a description of the NMA, see the Netra High Availability Suite 3.0 1/08 Foundation Services Overview.
The NMA installation involves four packages. Two packages are installed locally on both master-eligible nodes. The other two packages are installed only on the first master-eligible node, under a shared root path that the other nodes NFS-mount as shared middleware software. The first master-eligible node is the node that is booted first after you complete installing and configuring all the services on the master-eligible nodes.
The NMA requires the Java DMK packages, SUNWjsnmp and SUNWjdrt, to run. For information about installing the entire Java DMK software, see the Java Dynamic Management Kit 5.0 Installation Guide.
The following table describes the packages that are required on each type of node.
Package | Description | Installed On |
---|---|---|
SUNWjsnmp | Java DMK 5.0 Simple Network Management Protocol (SNMP) manager API classes | Both master-eligible nodes |
SUNWjdrt | Java DMK 5.0 dynamic management runtime classes | First master-eligible node |
SUNWnhas-nma-local | NMA configuration and startup script | Both master-eligible nodes |
SUNWnhas-nma-shared | NMA shared component | First master-eligible node |
As superuser, install the following NMA package and Java DMK package on both master-eligible nodes:
# pkgadd -d /NetraHASuite/Packages/ SUNWnhas-nma-local SUNWjsnmp |
Note - If you plan to use shared disks, do not advance to Step 2 until the metadevice used for shared disks has been created. See Step 2 in To Set Up File Systems on the Master-Eligible Nodes. |
On the first master-eligible node, install the following shared Java DMK package and NMA packages:
# pkgadd -d /NetraHASuite/Packages/ \
-M -R /SUNWcgha/local/export/services/ha_3.0 \
SUNWjdrt SUNWnhas-nma-shared
The packages are installed with a predefined root path in the /SUNWcgha/local/export/services/ha_3.0 directory.
Note - Ignore error messages related to packages that have not been installed. Always answer Y to continue the installation. |
To configure the NMA, see the Netra High Availability Suite 3.0 1/08 Foundation Services NMA Programming Guide.
As superuser, install the Process Monitor Daemon (PMD) packages on each master-eligible node, as follows:
If you installed Reliable NFS with SNDR (data replication), install the following packages:
# pkgadd -d /NetraHASuite/Packages/ SUNWnhas-pmd \
SUNWnhas-pmd-avs SUNWnhas-pmd-solaris
If you installed Reliable NFS for use with shared disk, install the following packages:
# pkgadd -d /NetraHASuite/Packages/ SUNWnhas-pmd \
SUNWnhas-pmd-solaris
For a description of the PMD, see the Netra High Availability Suite 3.0 1/08 Foundation Services Overview.
Before assigning IP addresses to the network interfaces of the master-eligible nodes, see “Cluster Addressing and Networking” in the Netra High Availability Suite 3.0 1/08 Foundation Services Overview.
In the Netra HA Suite, three IP addresses must be configured for each master-eligible node:
An IP address for the first physical interface, NIC0, corresponding to the first network interface. This interface could be hme0.
An IP address for the second physical interface, NIC1, corresponding to the second network interface. This interface could be hme1.
An IP address for the virtual physical interface, cgtp0
Do not configure the cgtp0 address on a physical interface; the cgtp0 interface is configured automatically when you configure Reliable NFS. For more information about the cgtp0 interface, see the cgtp(7D) man page.
The IP addresses can be IPv4 addresses of any class with the following structure:
network_id.host_id |
When you configure the IP addresses, make sure that the node ID, nodeid, is the decimal equivalent of host_id. You define the nodeid in the cluster_nodes_table file and the nhfs.conf file. For more information, see Configuring the Netra HA Suite on the Master-Eligible Nodes.
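For example, the first master-eligible node in this chapter uses the addresses 10.250.1.10, 10.250.2.10, and 10.250.3.10. Its host_id is therefore 10, and its nodeid in the cluster_nodes_table file and the nhfs.conf file must also be 10.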
The following procedures explain how to create and configure IP addresses for master-eligible nodes.
Examples in these procedures use IPv4 Class C addresses.
In the /etc/hosts file on each master-eligible node, add the three IP addresses, followed by the name of each interface:
10.250.1.10 netraMEN1-nic0
10.250.2.10 netraMEN1-nic1
10.250.3.10 netraMEN1-cgtp
10.250.1.20 netraMEN2-nic0
10.250.2.20 netraMEN2-nic1
10.250.3.20 netraMEN2-cgtp
10.250.1.1 master-nic0
10.250.2.1 master-nic1
10.250.3.1 master-cgtp
In the rest of this book, the node netraMEN1 is the first master-eligible node. The first master-eligible node is the node that is booted first after you complete installing the Netra HA Suite. The node netraMEN2 is the second master-eligible node that is booted after the first master-eligible node has completed booting.
In the /etc directory on each master-eligible node, you must create a hostname file for each of the three interfaces. In addition, update the nodename and netmasks files.
Create or update the file /etc/hostname.NIC0 for the NIC0 interface.
This file must contain the name of the master-eligible node on the first interface, for example, netraMEN1-nic0.
Create or update the file /etc/hostname.NIC1 for the NIC1 interface.
This file must contain the name of the master-eligible node on the second interface, for example, netraMEN1-nic1.
Create or update the file /etc/hostname.cgtp0 for the cgtp0 interface.
This file must contain the name of the master-eligible node on the cgtp0 interface, for example, netraMEN1-cgtp.
Update the /etc/nodename file with the name of the master-eligible node on the cgtp0 interface (or on the NIC0 interface if you do not use CGTP), for example, netraMEN1-cgtp.
Create a /etc/netmasks file with a netmask of 255.255.255.0 for all subnetworks in the cluster.
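For example, with the Class C addresses used in this chapter, the /etc/netmasks file would contain entries similar to the following:
10.250.1.0 255.255.255.0
10.250.2.0 255.255.255.0
10.250.3.0 255.255.255.0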
To configure external IP addresses for a master-eligible node, the node must have an extra physical network interface or logical network interface. An extra physical network interface is an unused interface on an existing Ethernet card or a supplemental Ethernet card, for example, hme2. A logical network interface is an interface that is configured on an existing Ethernet card, for example, hme1:101.
Note - The procedure described in this section requires the installation of the External Address Manager. |
If required, add the host name associated with the external floating address to the /etc/hosts file on each master-eligible node.
129.253.1.13 ext-float |
Add, if required, the associated netmask for the subnetwork in /etc/netmasks on each master-eligible node.
129.253.1.0 255.255.255.0 |
Create or update the file /etc/hostname.interface for the interface supporting the external floating address on each master-eligible node.
If the file does not exist, create the following lines (the file must contain at least two lines for the arguments to be taken into account):
ext-float netmask + broadcast + |
If the file already exists, add the following line:
addif ext-float netmask + broadcast + down |
Configure the external floating address parameter in the nhfs.conf file on each master-eligible node.
Note - The procedure described in this section requires the installation of the External Address Manager. |
To configure the external floating address, the node must have two network interfaces that are not already used for a CGTP network. If no network interfaces are available, you can consider using a different VLAN. Each interface must be configured with a special IP address used for monitoring. The external floating address must be configured on one of these interfaces, and all of these IP addresses must be part of the same subnetwork.
Add, if required, the host names associated with the test IP addresses and the external floating address to the /etc/hosts file on each master-eligible node.
IP addresses for testing must be different on each node.
129.253.1.11 test-ipmp-1 |
Add, if required, the associated netmask for the subnetwork in /etc/netmasks on each master-eligible node.
129.253.1.0 255.255.255.0 |
Create or update the file /etc/hostname.interface for the first interface on each master-eligible node.
The file must contain the definition of the test IP address for this interface and the external floating address in this format:
test-IP-address netmask + broadcast + -failover deprecated group group-name up
addif floating-address netmask + broadcast + failover down
test-ipmp-1 netmask + broadcast + -failover deprecated group ipmp-group up
addif ext-float netmask + broadcast + failover down
Create or update the file /etc/hostname.interface for the second interface on each master-eligible node.
The file must contain the definition of the test IP address for this interface in this format:
test-IP-address netmask + broadcast + -failover deprecated group group-name up
test-ipmp-2 netmask + broadcast + -failover deprecated group ipmp-group up |
Configure the external floating address parameters (floating address and IPMP group to be monitored) in the nhfs.conf file on each master-eligible node.
Configure the services that are installed on the master-eligible nodes by modifying the nhfs.conf and the cluster_nodes_table files on each master-eligible node in the cluster. Master-eligible nodes have read-write access to these files. Diskless nodes or dataless nodes in the cluster have read-only access to these files.
This file contains configurable parameters for each node and for the Netra HA Suite. This file must be configured on each node in the cluster.
This file contains information about nodes in the cluster, such as nodeid and domainid. This file is used to elect the master node in the cluster. Therefore, this file must contain the most recent information about the nodes in the cluster.
There is one line in the table for each peer node. When the cluster is running, the table is updated by the nhcmmd daemon on the master node. The file is copied to the vice-master node every time the file is updated. The cluster_nodes_table must be located on a local partition that is not exported. For information about the nhcmmd daemon, see the nhcmmd(1M) man page.
The following procedures describe how to configure the nhfs.conf file.
To Create the Floating Address Triplet Assigned to the Master Role
To Configure a Direct Link Between the Master-Eligible Nodes
For more information, including parameter descriptions, see the nhfs.conf(4) man page.
The nhfs.conf file enables you to configure the node after you have installed the Netra HA Suite on the node. This file provides parameters for configuring the node, CMM, Reliable NFS, the direct link between the master-eligible nodes, the Node State Manager, and daemon scheduling.
As superuser, copy the template /etc/opt/SUNWcgha/nhfs.conf.template file:
# cp /etc/opt/SUNWcgha/nhfs.conf.template \ /etc/opt/SUNWcgha/nhfs.conf |
For each property that you want to change, uncomment the associated parameter (delete the comment mark at the beginning of the line).
Modify the value of each parameter that you want to change.
For descriptions of each parameter, see the nhfs.conf(4) man page.
If you have not installed the CGTP patches and packages, do the following:
The floating address triplet is a set of three logical addresses that are active on the node holding the master role. When the cluster is started, the floating address triplet is activated on the master node. In the event of a switchover or a failover, these addresses are activated on the new master node. Simultaneously, the floating address triplet is deactivated on the old master node, that is, the new vice-master node.
To create the floating address triplet, you must define the master ID in the nhfs.conf file.
The floating address triplet is calculated from the master ID, the netmask, and the network interface addresses.
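For example, assuming the example Class C addresses used in this chapter and a master ID of 1, the floating address triplet is 10.250.1.1, 10.250.2.1, and 10.250.3.1, which correspond to the master-nic0, master-nic1, and master-cgtp entries in the /etc/hosts example.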
For more information about the floating address triplet of the master node, see “Cluster Addressing and Networking” in Netra High Availability Suite 3.0 1/08 Foundation Services Overview.
You can configure a direct link between the master-eligible nodes to prevent a split brain cluster. A split brain cluster is a cluster that has two master nodes because the network between the master node and the vice-master node has failed.
The cluster_nodes_table file contains the configuration data for each node in the cluster. Create this file on each master-eligible node. Once the cluster is running, this file is accessed by all nodes in the cluster. Therefore, the cluster_nodes_table on both master-eligible nodes must be exactly the same.
Copy the template file from /etc/opt/SUNWcgha/cluster_nodes_table.template to /etc/opt/SUNWcgha/cluster_nodes_table.
You can save the cluster_nodes_table file in a directory other than the /etc/opt/SUNWcgha directory. By default, the cluster_nodes_table file is located in the /etc/opt/SUNWcgha directory.
Edit the cluster_nodes_table file to add a line for each node in the cluster.
For more information, see the cluster_nodes_table(4) man page.
Edit the nhfs.conf file to specify the directory that contains the cluster_nodes_table file:
CMM.LocalConfig.Dir=/etc/opt/SUNWcgha |
Copy the /etc/opt/SUNWcgha/cluster_nodes_table file from the first master-eligible node to the same directory on the second master-eligible node.
If you saved the cluster_nodes_table file in a directory other than /etc/opt/SUNWcgha, copy the file to that other directory on the second master-eligible node. The cluster_nodes_table file must be available in the same directory on both master-eligible nodes.
Repeat Step 4 on the second master-eligible node.
This procedure uses the following values in its code examples: the diskset name nhas_diskset, the shared disks c1t8d0 and c1t9d0, the local slice c0t0d0s7 for the state database replicas, and the CGTP host names netraMEN1-cgtp and netraMEN2-cgtp.
Detailed information about Solaris VM and how to set up a shared disk can be found in the Solaris Volume Manager Administration Guide.
On the first master-eligible node, set the node name to the name of the host associated with the CGTP interface:
# uname -S netraMEN1-cgtp |
Repeat for the second master-eligible node:
# uname -S netraMEN2-cgtp |
On the first master-eligible node, restart the rpcbind daemon to make it use the new node name:
# pkill -x -u 0 rpcbind |
Repeat Step 3 on the second master-eligible node.
Create the database replicas for the dedicated root disk slice on each master-eligible node:
# metadb -a -c 3 -f /dev/rdsk/c0t0d0s7 |
Repeat Step 9 for the second master-eligible node:
# cat /etc/nodename |
(Optional) If you plan to use CGTP, configure a temporary network interface on the first private network and make it match the name and IP address of the CGTP interface on the first master-eligible node:
# ifconfig hme0:111 plumb |
(Optional) If you plan to use CGTP, repeat Step 7 for the second master-eligible node:
# ifconfig hme0:111 plumb |
On the first master-eligible node, verify that the /etc/nodename file matches the name of the CGTP interface (or the name of the private network interface, if CGTP is not used):
# cat /etc/nodename |
Note - The rest of the procedure only applies to the first master-eligible node. |
Create the Solaris VM diskset that manages the shared disks:
# metaset -s nhas_diskset -a -h netraMEN1-cgtp netraMEN2-cgtp |
Remove any possible existing SCSI3-PGR keys from the shared disks.
In the following example, there are no keys on the disks:
# /opt/SUNWcgha/sbin/nhscsitool /dev/rdsk/c1t8d0s2
Performing a SCSI bus reset ... done.
There are no keys on disk '/dev/rdsk/c1t8d0s2'.
# /opt/SUNWcgha/sbin/nhscsitool /dev/rdsk/c1t9d0s2
On x64 master-eligible nodes, repartition the shared disk as described in .
Add the names of the shared disks to the previously created diskset:
# metaset -s nhas_diskset -a /dev/rdsk/c1t8d0 /dev/rdsk/c1t9d0 |
Note - This step will reformat the shared disks, and all existing data on the shared disks will be lost. |
Verify that the SVM configuration is set up correctly:
# metaset |
Note - If you do not plan to install diskless nodes, skip to Step 17 |
On SPARC master-eligible nodes, repartition the shared disk as described in .
Create the metadevices for partition mapping and mirroring.
Create the metadevices on the primary disk:
# metainit -s nhas_diskset d11 1 1 /dev/rdsk/c1t8d0s0 |
Create the metadevices on the secondary disk:
# metainit -s nhas_diskset d21 1 1 /dev/rdsk/c1t9d0s0 |
Create the mirror metadevice d1 with d11 as its first submirror, and then attach d21 as the second submirror:
# metainit -s nhas_diskset d1 -m d11
# metattach -s nhas_diskset d1 d21
Note - This ends the section specific to the configuration for diskless installation. To complete diskless installation, jump to Step 19. |
Create your specific Solaris VM RAID configuration (refer to the Solaris Volume Manager Administration Guide for information on specific configurations).
In the following example, the two disks form a mirror called d0:
# metainit -s nhas_diskset d18 1 1 /dev/rdsk/c1t8d0s0
# metainit -s nhas_diskset d19 1 1 /dev/rdsk/c1t9d0s0
# metainit -s nhas_diskset d0 -m d18
# metattach -s nhas_diskset d0 d19
Create soft partitions to host the shared data.
These soft partitions are the file systems managed by Reliable NFS. In the following example, d1 and d2 are managed by Reliable NFS (the 2-Gbyte sizes are examples only):
# metainit -s nhas_diskset d1 -p d0 2g
# metainit -s nhas_diskset d2 -p d0 2g
The devices managed by Reliable NFS are now accessible through /dev/md/nhas_diskset/dsk/d1 and /dev/md/nhas_diskset/dsk/d2.
Create the file systems on the soft partitions:
# newfs /dev/md/nhas_diskset/rdsk/d1
# newfs /dev/md/nhas_diskset/rdsk/d2
Create the following directories on both master-eligible nodes:
# mkdir /SUNWcgha |
Mount the file systems on the metadevice on the first node:
# mount /dev/md/nhas_diskset/dsk/d1 /export |
Retrieve disk geometry information using the prtvtoc command.
A known problem in the diskless management tool, smosservice, prevents the creation of the diskless environment on a metadevice. To avoid this problem, mount the /export directory on a physical partition during the diskless environment creation.
To support access to /export through a metadevice without preventing access through a physical partition, the disk must be repartitioned in a particular way after it has been inserted into a diskset. This repartitioning preserves the data already stored by SVM, because the created partitions are not formatted.
The following example shows prtvtoc command output after a disk has been inserted into a diskset.
# prtvtoc /dev/rdsk/c1t8d0s0
* /dev/rdsk/c1t8d0s0 partition map
*
* Dimensions:
*     512 bytes/sector
*     107 sectors/track
*      27 tracks/cylinder
*    2889 sectors/cylinder
*   24622 cylinders
*   24620 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      4    00        8667  71118513  71127179
       7      4    01           0      8667      8666
Create the data file using the fmthard command.
The fmthard command (see its man page for more information) is used to create physical partitions. It requires you to input a data file describing the partitions to be created. There is one entry per partition, using the following format:
slice # tag flag starting sector size in sectors |
starting sector and size in sectors values must be rounded to a cylinder boundary and must be computed as explained below.
starting sector = starting sector of the previous slice + size in sectors of the previous slice
size in sectors = the required partition size in bytes divided by bytes per sector, the result being rounded to sectors per cylinder (upper value)
Three particular slices must be created:
Slice 7, containing the metadevice state database (also called metadb). This slice must be created with the same size as the slice created by Solaris VM so that it overlaps the existing slice (to preserve its data).
A slice to support /export (the exported file system reserved for diskless nodes)
A slice to support /SUNWcgha/local (shared Netra HA Suite packages and files)
Other slices can be added depending on your application requirements. The following table gives an example for partitioning:
Slice Number | Usage | Size in MBytes |
---|---|---|
0 | /export | 4096 |
1 | /SUNWcgha/local | 2048 |
7 | metadb | Not Applicable |
The following slice constraints must be respected:
Slice 7 (metadb) is the first slice of the disk, starting at sector 0, with the same size as the slice 7 created by Solaris VM, with tag 4 (user partition) and flag 0x01 (unmountable)
Slice 2: On SPARC, slice 2 maps the whole disk: size in bytes = accessible cylinders * sectors per cylinder * bytes per sector, with tag 5 (backup) and flag 0x01 (unmountable). On x64, slice 2 maps the whole disk except slice 7, with tag 5 (backup) and flag 0x01 (unmountable). This particular partitioning can cause a warning message to be displayed when the disk is formatted.
Other slices use tag 0 (unassigned) and flag 0x00 (mountable in R/W)
An example of computing for slice 0 (located after slice 7):
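The following computation is an illustration only, based on the example disk geometry shown by prtvtoc (512 bytes/sector, 2889 sectors/cylinder) and the 4096-Mbyte /export slice from the example table:
starting sector = 8667 (slice 7 occupies sectors 0 through 8666)
size in bytes = 4096 * 1024 * 1024 = 4294967296
size in sectors = 4294967296 / 512 = 8388608
rounded up to a cylinder boundary = 2904 cylinders * 2889 sectors/cylinder = 8389656 sectors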
These values would display the following content in the data file (datafile.txt):
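The following content is an illustration only, computed from the same example geometry and the example sizes in the partitioning table (a 4096-Mbyte /export slice and a 2048-Mbyte /SUNWcgha/local slice); your values will differ:
0 0 00 8667 8389656
1 0 00 8398323 4194828
2 5 01 0 71127180
7 4 01 0 8667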
On x64, the content of the file appears as follows:
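Again as an illustration computed from the same example values, the x64 file differs only in slice 2, which excludes slice 7:
0 0 00 8667 8389656
1 0 00 8398323 4194828
2 5 01 8667 71118513
7 4 01 0 8667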
Note that this example leaves some unallocated spaces on the disk that can be used for user-specific partitions.
Execute the following commands for the primary and for the secondary disk:
# fmthard -s datafile.txt /dev/rdsk/c1t8d0s2
# fmthard -s datafile.txt /dev/rdsk/c1t9d0s2
Ensure that the following directories exist on the first master-eligible node:
# mkdir /SUNWcgha/local/export
# mkdir /SUNWcgha/local/export/data
# mkdir /SUNWcgha/local/export/services
# mkdir /SUNWcgha/local/export/services/NetraHASuite_version/opt
where NetraHASuite_version is the version of the Netra HA Suite you install, for example, ha_3.0.
These directories contain packages and data shared between the master-eligible nodes.
If you are using shared disks, install the shared Java DMK package and NMA packages onto the first master-eligible node as explained in Step 2 of To Install the Node Management Agent.
Create the following mount points on each master-eligible node:
# mkdir /SUNWcgha/services
# mkdir /SUNWcgha/remote
# mkdir /SUNWcgha/swdb
These directories are used as mount points for the directories that contain shared data.
Add the following lines to the /etc/vfstab file on each master-eligible node:
If you have configured the CGTP, use the floating IP address for the cgtp0 interface that is assigned to the master role to define the mount points.
master-cgtp:/SUNWcgha/local/export/data - /SUNWcgha/remote nfs - no rw,hard,fg,intr,noac
master-cgtp:/SUNWcgha/local/export/services/ha_3.0/opt - /SUNWcgha/services nfs - no rw,hard,fg,intr,noac
master-cgtp:/SUNWcgha/local/export/services/ha_3.0 - /SUNWcgha/swdb nfs - no rw,hard,fg,intr,noac
where master-cgtp is the host name associated with the floating address of the cgtp0 interface of the master node. For more information, see To Create the Floating Address Triplet Assigned to the Master Role.
If you have not configured the CGTP, use the floating IP address for the NIC0 interface that is assigned to the master role.
master-nic0:/SUNWcgha/local/export/data - /SUNWcgha/remote nfs - no rw,hard,fg,intr,noac
master-nic0:/SUNWcgha/local/export/services/ha_3.0/opt - /SUNWcgha/services nfs - no rw,hard,fg,intr,noac
master-nic0:/SUNWcgha/local/export/services/ha_3.0 - /SUNWcgha/swdb nfs - no rw,hard,fg,intr,noac
where master-nic0 is the host name associated with the floating address of the NIC0 interface of the master node. For more information, see To Create the Floating Address Triplet Assigned to the Master Role.
Note - The noac mount option suppresses data and attribute caching. Use the noac option only if the impact on performance is acceptable. |
Check the following in the /etc/vfstab file:
The mount at boot field is set to no for all RNFS-managed partitions.
/dev/dsk/c0t0d0s1 /dev/rdsk/c0t0d0s1 /SUNWcgha/local ufs 2 no logging |
/dev/md/nhas_diskset/dsk/d1 /dev/md/nhas_diskset/rdsk/d1 /SUNWcgha/local ufs 2 no logging
The root file system (/) has the logging option.
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no logging
Note - Only partitions identified in the nhfs.conf file can be managed by RNFS. For more information about the nhfs.conf file, see Configuring the nhfs.conf File. |
(Only applicable to SNDR) Create the file systems on the replicated partitions:
# newfs /dev/rdsk/c0t0d0s3 |
The Reliable NFS daemon, nhcrfsd, is installed on each master-eligible node. To determine which partitions are managed by this daemon, do the following:
Check the RNFS.Slice parameters of the /etc/opt/SUNWcgha/nhfs.conf file.
# grep -i RNFS.slice /etc/opt/SUNWcgha/nhfs.conf |
This means that slice /dev/rdsk/c0t0d0s3 is being replicated and slice /dev/rdsk/c0t0d0s5 is the corresponding bitmap partition.
# grep -i RNFS.slice /etc/opt/SUNWcgha/nhfs.conf |
This means that soft partition d1 of diskset nhas_diskset is being managed by Reliable NFS.
The /etc/opt/SUNWcgha/not_configured file was installed automatically when you installed the CMM packages. This file enables you to reboot a cluster node during the installation process without starting the Netra HA Suite. Before you perform the final reboots described in the following steps, remove this file from each master-eligible node so that the Foundation Services start when the nodes are rebooted.
Unmount the shared file system, /NetraHASuite, on each master-eligible node by using the umount command.
See the umount(1M) man page and To Mount an Installation Server Directory on the Master-Eligible Nodes.
Reboot the first master-eligible node, which becomes the master node:
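For example, assuming the Solaris OS is running in multiuser mode, you can reboot with the init command (use the reboot method appropriate for your platform):
# init 6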
After the first master-eligible node has completed rebooting, reboot the second master-eligible node:
This node becomes the vice-master node. To check the role of each node in the cluster, see the nhcmmrole(1M) man page.
Create the INST_RELEASE file to allow patching of shared packages:
# /opt/SUNWcgha/sbin/nhadm confshare |
Copyright © 2008, Sun Microsystems, Inc. All rights reserved.