Oracle Solaris Cluster Concepts Guide, Oracle Solaris Cluster 4.1
Cluster Time
Time must be synchronized among all nodes in a cluster. Whether you also synchronize the cluster nodes with an outside time source is not important to cluster operation. The Oracle Solaris Cluster software employs the Network Time Protocol (NTP) to synchronize the clocks between nodes.
In general, a change in the system clock of a fraction of a second causes no problems. However, if you run date or rdate on an active cluster to synchronize the system clock to a time source, you can force a time change that is much larger than a fraction of a second. This forced change might cause problems with file modification timestamps or confuse the NTP service.
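Rather than forcing a step change on a running cluster, you can first inspect how far a node's clock has drifted from its peers. The following sketch uses the standard ntpq query tool that accompanies the NTP implementation; it is an illustrative check, not a procedure from this guide.

    # Show the offset (in milliseconds) between this node's clock and
    # each configured NTP peer, without changing the clock.
    ntpq -p

    # If a correction is needed, let the NTP daemon slew the clock
    # gradually rather than stepping it with date or rdate.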
When you install the Oracle Solaris Operating System on each cluster node, you have an opportunity to change the default time and date setting for the node. In general, you can accept the factory default.
When you install Oracle Solaris Cluster software by using the scinstall command, the software supplies template files (see /etc/inet/ntp.conf and /etc/inet/ntp.conf.sc on an installed cluster node) that establish a peer relationship between all cluster nodes. One node is designated the “preferred” node. Nodes are identified by their private host names, such as clusternode1-priv and clusternode2-priv, and time synchronization occurs across the cluster interconnect. For instructions on how to configure NTP, see Configuring Network Time Protocol (NTP) in Oracle Solaris Cluster Software Installation Guide.
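As a rough illustration, the peer entries in such a template might look like the following for a two-node cluster; the exact contents of /etc/inet/ntp.conf.sc on your nodes can differ.

    # Illustrative peer entries for cluster time synchronization.
    # The node marked "prefer" is the preferred time peer; peers are
    # reached over the cluster interconnect via private host names.
    peer clusternode1-priv prefer
    peer clusternode2-priv

Because every node peers with the others over the private interconnect, the cluster keeps a consistent notion of time even when no outside time source is reachable.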
In normal operation, you should never need to adjust the time on the cluster. However, if the time was set incorrectly when you installed the Oracle Solaris Operating System and you want to change it, the procedure for doing so is included in Chapter 9, Administering the Cluster, in Oracle Solaris Cluster System Administration Guide.