CHAPTER 5
If the impact on performance is acceptable, do not use data and attribute caches when writing data to shared file systems. If you must use data and attribute caching to improve performance, ensure that your applications minimize the risk of using inconsistent data. If the cluster is running the Solaris OS, consider opening some files with the O_SYNC or O_DSYNC flags. For information about these flags, see the fcntl(3HEAD) man page.
The noac mount option disables data and attribute caching. The following procedure describes how to enable or disable this option.
Open the /etc/vfstab file in a text editor.
If data and attribute caching is disabled, the file should contain the noac option, as follows:
master-cgtp:/SUNWcgha/local/export/data - \
/SUNWcgha/remote nfs - no rw,hard,fg,intr,noac
master-cgtp:/SUNWcgha/local/export/services/ha_3.0/opt \
- /SUNWcgha/services nfs - no rw,hard,fg,intr,noac
master-cgtp:/SUNWcgha/local/export/services/ha_3.0 - \
/SUNWcgha/swdb nfs - no rw,hard,fg,intr,noac
If data and attribute caching is enabled, the file should not contain the noac option, as follows:
master-cgtp:/SUNWcgha/local/export/data - \
/SUNWcgha/remote nfs - no rw,hard,fg,intr
master-cgtp:/SUNWcgha/local/export/services/ha_3.0/opt \
- /SUNWcgha/services nfs - no rw,hard,fg,intr
master-cgtp:/SUNWcgha/local/export/services/ha_3.0 - \
/SUNWcgha/swdb nfs - no rw,hard,fg,intr
# uadmin 1 1
Trigger a switchover, as described in To Trigger a Switchover With nhcmmstat.
Log in to each of the diskless peer nodes or dataless peer nodes and repeat Step 2 through Step 5.
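After editing the file on each node, you can confirm the resulting caching state mechanically. The following sketch defines a small check that reports whether the noac option is present in a vfstab-style file; the sample entry is for illustration only, and on a real node you would run the check against /etc/vfstab (or /etc/fstab on Linux) instead.

```shell
#!/bin/sh
# Sketch: report whether data and attribute caching is disabled (noac set)
# in a vfstab- or fstab-style file.

caching_state() {
    # $1: path to the vfstab or fstab file to inspect
    if grep -q 'noac' "$1"; then
        echo "disabled"
    else
        echo "enabled"
    fi
}

# Demonstration on a sample entry; on a real node, run:
#   caching_state /etc/vfstab
sample=/tmp/vfstab.sample
printf '%s\n' \
  'master-cgtp:/SUNWcgha/local/export/data - /SUNWcgha/remote nfs - no rw,hard,fg,intr,noac' \
  > "$sample"
echo "data and attribute caching is $(caching_state "$sample")"
```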
Open the /etc/fstab file in a text editor.
If data and attribute caching is disabled, the file should contain the noac option, as follows:
master-cgtp:/SUNWcgha/local/export/data \
/SUNWcgha/remote nfs noauto,rw,hard,fg,intr,noac 0 0
master-cgtp:/SUNWcgha/local/export/services/ha_3.0/opt \
/SUNWcgha/services nfs noauto,rw,hard,fg,intr,noac 0 0
master-cgtp:/SUNWcgha/local/export/services/ha_3.0 \
/SUNWcgha/swdb nfs noauto,rw,hard,fg,intr,noac 0 0
If data and attribute caching is enabled, the file should not contain the noac option, as follows:
master-cgtp:/SUNWcgha/local/export/data \
/SUNWcgha/remote nfs noauto,rw,hard,fg,intr 0 0
master-cgtp:/SUNWcgha/local/export/services/ha_3.0/opt \
/SUNWcgha/services nfs noauto,rw,hard,fg,intr 0 0
master-cgtp:/SUNWcgha/local/export/services/ha_3.0 \
/SUNWcgha/swdb nfs noauto,rw,hard,fg,intr 0 0
# reboot -n -f
Trigger a switchover, as described in To Trigger a Switchover With nhcmmstat.
Log in to each of the diskless peer nodes or dataless peer nodes and repeat Step 2 through Step 5.
When data is written to the master node, a write is made to the replicated partition on the disk and to the corresponding scoreboard bitmap.
The scoreboard bitmap can be configured in two ways:
The scoreboard bitmap can be stored on a replicated partition and updated every time that the corresponding data partition is updated.
The scoreboard bitmap can be stored in memory and updated every time that the corresponding data partition is updated. The scoreboard bitmap is written to a replicated partition only when the node is shut down gracefully.
The scoreboard bitmap is only needed for IP-replicated systems. Systems using shared disk do not need it.
For examples of the two methods available for storing the scoreboard bitmaps, see “IP Mirroring” in the Netra High Availability Suite 3.0 1/08 Foundation Services Overview. For information about how to reconfigure the scoreboard bitmap, see the following section and procedure.
The "bitmaps on disk" and "bitmaps in memory" setting is a system-wide tunable; it cannot be set per slice. The setting is changed in the /usr/kernel/drv/rdc.conf file. Netra HA Suite software supports two of the available modes for the rdc_bitmap_mode parameter:
rdc_bitmap_mode=1 (store the bitmap on the replicated partition) forces bitmap writes for every write operation, so an update resync can be performed after a crash or reboot.
rdc_bitmap_mode=2 (store the bitmap in memory) writes the bitmap to disk only at shutdown, so a full resync is required after a crash, but only an update resync is required after a graceful reboot.
These options have the following advantages and drawbacks:
Storing the bitmap in memory (mode 2) is preferred over storing it on the replicated partition (mode 1) when throughput matters most: mode 2 is about 50 percent faster than mode 1.
Mode 2 is also preferred over mode 1 when there are many writers. During failover recovery, the time required to become "Synchro Ready" (that is, the time required for the replicated slices to become synchronized again) can be significantly longer with mode 1 than with mode 2.
Storing the bitmap on the replicated partition (mode 1) is preferred over storing it in memory (mode 2) when faster recovery from a dual failure (that is, both the master and the vice-master failing) matters most. With mode 2, a full synchronization takes place when a node is elected master at boot; with mode 1, only a regular synchronization occurs. Future enhancements to the product should improve the resynchronization time for mode 2.
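The change itself is a one-line edit to the rdc.conf file. The following sketch switches rdc_bitmap_mode on a sample copy of the file; the sample content and the exact parameter syntax are illustrative, and on a real node you would edit /usr/kernel/drv/rdc.conf itself and then reboot for the setting to take effect.

```shell
#!/bin/sh
# Sketch: switch rdc_bitmap_mode between mode 1 (bitmap on the replicated
# partition) and mode 2 (bitmap in memory). Operates on a sample copy for
# illustration; on a real node, operate on /usr/kernel/drv/rdc.conf instead.
conf=/tmp/rdc.conf.sample
printf 'rdc_bitmap_mode=1;\n' > "$conf"

set_bitmap_mode() {
    # $1: desired mode (1 or 2); $2: path to the rdc.conf file
    sed "s/^rdc_bitmap_mode=[012];/rdc_bitmap_mode=$1;/" "$2" > "$2.new" \
        && mv "$2.new" "$2"
}

set_bitmap_mode 2 "$conf"
grep '^rdc_bitmap_mode' "$conf"
```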
# uadmin 1 1
Trigger a switchover, as described in To Trigger a Switchover With nhcmmstat.
Verify that the master node and vice-master node are synchronized, as described in To Verify That the Master Node and Vice-Master Node Are Synchronized.
Files on a shared file system have the same content whether viewed from the master node or from the vice-master node. The following files are stored locally on both the master node and the vice-master node. These files must contain identical information, but they are not shared.
cluster_nodes_table - Contains the nodeid and node name of each peer node. For more information, see the cluster_nodes_table(4) man page.
/etc/hosts - Contains the host names of all nodes on the cluster network. For more information, see the hosts(4) man page.
nhfs.conf - Describes the cluster configuration, including network interfaces, mirrored disk partitions, and the floating external address. For more information, see the nhfs.conf(4) man page.
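A simple way to spot divergence in these local files is to compare the master node's copy against one fetched from the vice-master. The sketch below compares two local copies with cmp; copying the vice-master's file to the master (for example, with rcp) is left out, and the file paths are illustrative.

```shell
#!/bin/sh
# Sketch: report whether two copies of a non-shared file are identical.
# On a real cluster you would first copy the vice-master's file locally,
# for example to /tmp/hosts.vicemaster, before comparing.

files_identical() {
    # $1, $2: paths to the two copies to compare
    if cmp -s "$1" "$2"; then
        echo "identical"
    else
        echo "DIFFERENT"
    fi
}

# Demonstration with two sample copies of an /etc/hosts entry.
printf '10.0.0.1 master-cgtp\n' > /tmp/hosts.master
printf '10.0.0.1 master-cgtp\n' > /tmp/hosts.vicemaster
echo "/etc/hosts copies are $(files_identical /tmp/hosts.master /tmp/hosts.vicemaster)"
```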
To manage differences between files that are not shared, perform the following procedure.
Open or create the /SUNWcgha/remote/etc/nhadmsync.conf file in a text editor.
Specify the names of the files that you want to compare by adding them to the nhadmsync.conf file.
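As an illustration, an nhadmsync.conf file covering the three local files described earlier might list them one per line, as sketched below. The exact syntax accepted by nhadm, and the paths shown for cluster_nodes_table and nhfs.conf, are assumptions; see the nhadm(1M) man page for the authoritative format.

```
/etc/hosts
/etc/opt/SUNWcgha/nhfs.conf
/etc/opt/SUNWcgha/cluster_nodes_table
```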
Verify that the listed files are the same on the master node and the vice-master node:
# nhadm synccheck
If the files are not identical on the master node and vice-master node, analyze the differences between the copies of the files.
If the differences between the files are acceptable, accept them:
# nhadm syncgen
If you accept the differences between two files, the nhadm synccheck command no longer reports them.
For more information about the nhadm command, see the nhadm(1M) man page.
This section provides guidelines for using naming services with the Foundation Services.
If you use a naming service such as the Network Information Service (NIS) or the Domain Name System (DNS), avoid conflicts between the names of nodes and services by doing the following:
Verify that the node names specified in the /etc/hosts file take precedence over node names generated by your naming service.
The name assigned to a node during cluster configuration must not conflict with the name assigned to a node by the naming service.
Verify that the hosts, networks, and services entries in the /etc/nsswitch.conf file are set as follows:
[...]
hosts: files [...]
networks: files [...]
[...]
services: files [...]
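The required ordering can be checked mechanically: for each of the three databases, files should be the first source listed. The sketch below runs the check against a sample nsswitch.conf fragment; on a real node you would point NSSWITCH at /etc/nsswitch.conf instead.

```shell
#!/bin/sh
# Sketch: verify that 'files' is the first source for the hosts, networks,
# and services databases in an nsswitch.conf-style file. Uses a sample
# fragment for illustration; on a real node, set NSSWITCH=/etc/nsswitch.conf.
NSSWITCH="${NSSWITCH:-/tmp/nsswitch.sample}"
cat > /tmp/nsswitch.sample <<'EOF'
hosts: files dns
networks: files nis
services: files nis
EOF

for db in hosts networks services; do
    if grep -q "^$db:[ ]*files" "$NSSWITCH"; then
        echo "$db: OK (files listed first)"
    else
        echo "$db: WARNING (files is not the first source)"
    fi
done
```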
Copyright © 2008, Sun Microsystems, Inc. All rights reserved.