Known Issues and Bugs

The following known issues and bugs affect the operation of the Oracle Solaris Cluster and Oracle Solaris Cluster Geographic Edition 4.1 software, as of the time of release. Bugs and issues are grouped into the following categories: Administration, Data Services, Developer Environment, Geographic Edition, Installation, Runtime, and Upgrade.

Contact your Oracle support representative to see whether a fix becomes available.

Administration

A clzc reboot Command Causes the solaris10 Brand Exclusive-IP Zone Cluster to Panic the Global Zone Nodes (16941521)

Problem Summary: A reboot or halt of a solaris10 brand exclusive-IP zone-cluster node can cause the global-zone nodes to panic. This occurs when the zone-cluster nodes use the base network interface as the primary (public) network interface and VNICs that are configured on that same base interface are used by other zone-cluster nodes in the cluster.

Workaround: Create and use VNICs as primary network interfaces for exclusive-IP zone clusters.
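
For example, assuming an underlying datalink named net0 (the link names here are illustrative), you might create a dedicated VNIC with dladm and then specify that VNIC, rather than net0 itself, as the zone-cluster node's primary network interface:

# dladm create-vnic -l net0 vnic1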

The /usr/sbin/shutdown Command in a Zone of an Exclusive-IP Zone Cluster Can Result in a Halt of Other Running Zones of the Zone Cluster (16963753)

Problem Summary: If you use the /usr/sbin/shutdown command in a zone of an exclusive-IP zone cluster to halt or reboot the zone, any other zones of the zone cluster that are alive and running can be halted by cluster software.

Workaround: Do not use the /usr/sbin/shutdown command inside a zone of an exclusive-IP zone cluster to halt or reboot the zone. Instead, use the /usr/cluster/bin/clzonecluster command in the global zone to halt or reboot a zone of an exclusive-IP zone cluster. The /usr/cluster/bin/clzonecluster command is the correct way to halt or reboot a zone of any type of zone cluster. If you see this problem, use the /usr/cluster/bin/clzonecluster command to boot any such zones that were halted by cluster software.
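
For example, assuming a zone cluster named zc1 with a zone hosted on global-cluster node pnode2 (names are illustrative), the equivalent operations run from the global zone would be:

# /usr/cluster/bin/clzonecluster halt -n pnode2 zc1
# /usr/cluster/bin/clzonecluster reboot -n pnode2 zc1
# /usr/cluster/bin/clzonecluster boot -n pnode2 zc1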

The svc_private_network:default SMF Service Goes Into Maintenance in a solaris10 Brand Exclusive-IP Zone Cluster (16716992)

Problem Summary: When you perform system identification in a zone of a solaris10 brand exclusive-IP zone cluster, the svc_private_network:default SMF service goes into maintenance in that zone. On subsequent reboots of the zone, the problem does not occur.

Workaround: After you perform system identification configuration in a zone of a solaris10 brand exclusive-IP zone cluster, reboot that zone.
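
For example, assuming the zone runs on global-cluster node pnode1 and the zone cluster is named zc1 (names are illustrative), reboot just that zone-cluster node from the global zone:

# clzonecluster reboot -n pnode1 zc1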

Cannot Set the Jumbo Frame MTU Size for the clprivnet Interface (16618736)

Problem Summary: The MTU of the cluster clprivnet interface is always set to the default value of 1500 and does not match the MTU of the underlying private interconnects. Therefore, you cannot set the jumbo frame MTU size for the clprivnet interface.

Workaround: There is no known workaround.

Public Net Failure Does Not Fail Over DB Server Resource with SCAN Listener (16231523)

Problem Summary: When the HA-Oracle database is configured to use the Oracle Grid Infrastructure SCAN listener, the HA-Oracle database resource does not fail over when the public network fails.

Workaround: When using the Oracle Grid Infrastructure SCAN listener with an HA-Oracle database, add a logical host with an IP address that is on the same subnet as the SCAN listener to the HA-Oracle database resource group.
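
As a minimal sketch, assuming an HA-Oracle resource group named ora-rg and a hostname ora-lh that resolves to an address on the same subnet as the SCAN listener (both names are hypothetical), the logical host could be added as follows:

# clreslogicalhostname create -g ora-rg -h ora-lh ora-lh-rs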

The Data Service Configuration Wizards Do Not Support Storage Resources and Resource Groups for Scalable HAStoragePlus (7202824)

Problem Summary: The existing data service configuration wizards do not support configuring scalable HAStoragePlus resources and resource groups. In addition, the wizards are not able to detect existing scalable HAStoragePlus resources and resource groups.

For example, while configuring HA for WebLogic Server in multi-instance mode, the wizard displays the message No highly available storage resources are available for selection, even when scalable HAStoragePlus resources and resource groups already exist on the cluster.

Workaround: Configure data services that use scalable HAStoragePlus resources and resource groups in the following way:

  1. Use the clresourcegroup and clresource commands to configure HAStoragePlus resource groups and resources in scalable mode.

  2. Use the clsetup wizard to configure the data services as if they were on local file systems, that is, as if no storage resources were involved.

  3. Use the CLI to create an offline-restart dependency on the scalable HAStoragePlus resources that you configured in Step 1, and a strong positive affinity on the scalable HAStoragePlus resource groups, as shown in the sketch after this list.
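
The following is a minimal sketch of Step 3, assuming a scalable HAStoragePlus resource hasp-rs in resource group hasp-rg and an application resource app-rs in resource group app-rg (all names are hypothetical):

# clresource set -p Resource_dependencies_offline_restart=hasp-rs app-rs
# clresourcegroup set -p RG_affinities=++hasp-rg app-rg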

Removing a Node From an Exclusive-IP Zone Cluster Panics Cluster Nodes (7199744)

Problem Summary: When a zone-cluster node is removed from an exclusive-IP zone cluster, the global-cluster nodes that host the exclusive-IP zone cluster panic. The issue is seen only on a global cluster with InfiniBand interconnects.

Workaround: Halt the exclusive-IP zone cluster before you remove the zone-cluster node.
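
For example, assuming a zone cluster named zc1 (the name is illustrative), halt the entire zone cluster from the global zone before removing the node:

# clzonecluster halt zc1

After the zone-cluster node has been removed from the configuration, boot the zone cluster again with clzonecluster boot zc1.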

Nonexisting privnet Stops Zone Clusters From Booting Despite Good privnet (7199431)

Problem Summary: If invalid or nonexistent network links are specified as privnet resources in an exclusive-IP zone cluster configuration (ip-type=exclusive), the zone-cluster node fails to join the zone cluster, despite the presence of valid privnet resources.

Workaround: Remove the invalid privnet resource from the zone cluster configuration, then reboot the zone-cluster node.

# clzonecluster reboot -n nodename zone-cluster

Alternatively, create the missing network link that corresponds to the invalid privnet resource, then reboot the zone. See the dladm(1M) man page for more information.
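
For example, if the zone cluster zc1 references a nonexistent link on node pnode1 that should have been a VNIC named vnic1 over net0 (all names are illustrative), you could create the missing link and then reboot the zone-cluster node:

# dladm create-vnic -l net0 vnic1
# clzonecluster reboot -n pnode1 zc1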

The clzonecluster Command Fails to Verify That defrouter Cannot Be Specified Without allowed-addr, CCR Has Failed Configuration (7199135)

Problem Summary: In an exclusive-IP zone cluster, if you configure a net resource in the node scope with the defrouter property specified but the allowed-address property unspecified, the Oracle Solaris software issues an error. For an exclusive-IP zone cluster, the Oracle Solaris software requires that you always specify the allowed-address property if you specify the defrouter property. If you do not, the Oracle Solaris software reports the proper error message, but the cluster will already have populated the CCR with the zone-cluster information. This action leaves the zone cluster in the Unknown state.

Workaround: Specify the allowed-address property for the zone cluster.
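
The following is a rough sketch of a valid net resource in the node scope that sets both properties together; the link name, addresses, and interactive prompts are illustrative and might differ in your configuration:

# clzonecluster configure zc1
clzc:zc1> select node physical-host=pnode1
clzc:zc1:node> add net
clzc:zc1:node:net> set physical=net1
clzc:zc1:node:net> set allowed-address=192.168.10.11/24
clzc:zc1:node:net> set defrouter=192.168.10.1
clzc:zc1:node:net> end
clzc:zc1:node> end
clzc:zc1> commit
clzc:zc1> exit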

clzonecluster boot, reboot, and halt Subcommands Fail if Any One of the Cluster Nodes Is Not in the Cluster (7193998)

Problem Summary: The clzonecluster boot, reboot, and halt subcommands fail if even one of the cluster nodes is not in the cluster. An error similar to the following is displayed:

root@pnode1:~# clzc reboot zoneclustername 
clzc:  (C827595) "pnode2" is not in cluster mode.
clzc:  (C493113) No such object.

root@pnode1:~# clzc halt zoneclustername
clzc:  (C827595) "pnode2" is not in cluster mode.
clzc:  (C493113) No such object.

The clzonecluster boot, reboot, and halt subcommands should skip over nodes that are in noncluster mode, rather than fail.

Workaround: Use the following option with the clzonecluster boot or clzonecluster halt commands to specify the list of nodes for the subcommand:

-n nodename[,…]

The -n option allows running the subcommands on the specified subset of nodes. For example, in a three-node cluster with the nodes pnode1, pnode2, and pnode3, if the node pnode2 is down, you could run the following clzonecluster subcommands to exclude the down node:

clzonecluster halt -n pnode1,pnode3 zoneclustername
clzonecluster boot -n pnode1,pnode3 zoneclustername
clzonecluster reboot -n pnode1,pnode3 zoneclustername

Cluster File System Does Not Support Extended Attributes (7167470)

Problem Summary: Extended attributes are not currently supported by cluster file systems. When a user mounts a cluster file system with the xattrmount option, extended-attribute operations do not behave as expected, so any program that accesses the extended attributes of files in a cluster file system might not get the expected results.

Workaround: Mount the cluster file system with the noxattrmount option.

Using chmod to Set setuid Permission Returns Error in a Non-Global Zone on PxFS Secondary Server (7020380)

Problem Summary: The chmod command might fail to change setuid permissions on a file in a cluster file system. If the chmod command is run in a non-global zone and that non-global zone is not on the PxFS primary server, the chmod command fails to change the setuid permission.

For example:

# chmod 4755 /global/oracle/test-file
chmod: WARNING: can't change /global/oracle/test-file

Workaround: Run the chmod command from a location where the operation succeeds, such as from a global-cluster node or from a non-global zone that resides on the PxFS primary server.

Cannot Create a Resource From a Configuration File With Non-Tunable Extension Properties (6971632)

Problem Summary: When you use an XML configuration file to create resources, if any of the resources have extension properties that are not tunable (that is, the Tunable resource property attribute is set to None), the command fails to create the resource.

Workaround: Edit the XML configuration file to remove the non-tunable extension properties from the resource.

Disabling Device Fencing While Cluster Is Under Load Results in Reservation Conflict (6908466)

Problem Summary: Turning off fencing for a shared device with an active I/O load might result in a reservation conflict panic for one of the nodes that is connected to the device.

Workaround: Quiesce I/O to a device before you turn off fencing for that device.
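
For example, assuming the shared device is DID instance d12 (hypothetical), after quiescing I/O you might turn off fencing for only that device:

# cldevice set -p default_fencing=nofencing d12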

EMC SRDF Rejects Switchover When Replicated Device-Group Status Will Cause Switchover and Switchback to Fail (6798901)

Problem Summary: If an EMC SRDF device group whose replica pair is split attempts to switch over to another node, the switchover fails. Furthermore, the device group is unable to come back online on the original node until the replica pair has been returned to a paired state.

Workaround: Verify that the SRDF replicas are not split before you attempt to switch the associated Oracle Solaris Cluster global-device group to another cluster node.
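
For example, assuming an SRDF device group dg1 that backs the Oracle Solaris Cluster device group oradg (names are hypothetical), you might confirm the replica state before switching:

# symrdf -g dg1 query
# cldevicegroup switch -n phys-schost-2 oradg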

Removing Nodes From the Cluster Configuration Can Result in Node Panics (6735924)

Problem Summary: Changing a cluster configuration from a three-node cluster to a two-node cluster might result in complete loss of the cluster, if one of the remaining nodes leaves the cluster or is removed from the cluster configuration.

Workaround: Immediately after removing a node from a three-node cluster configuration, run the cldevice clear command on one of the remaining cluster nodes.
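
For example, run the following on one of the remaining cluster nodes immediately after the node removal:

# cldevice clear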

More Validation Checks Needed When Combining DIDs (6605101)

Problem Summary: The cldevice command is unable to verify that replicated SRDF devices that are being combined into a single DID device are, in fact, replicas of each other and belong to the specified replication group.

Workaround: Take care when combining DID devices for use with SRDF. Ensure that the specified DID device instances are replicas of each other and that they belong to the specified replication group.

Data Services

Active-Standby Configuration Not Supported for HA for TimesTen (16861602)

Problem Summary: The TimesTen active-standby configuration requires an integration of Oracle Solaris Cluster methods in the TimesTen ttCWadmin utility. This integration has not yet occurred, even though it is described in the Oracle Solaris Cluster Data Service for Oracle TimesTen Guide. Therefore, do not use the TimesTen active-standby configuration with Oracle Solaris Cluster HA for TimesTen and do not use the TimesTen ttCWadmin utility on Oracle Solaris Cluster.

The Oracle Solaris Cluster TimesTen data service comes with a set of resource types. Most of these resource types are meant to be used with TimesTen active-standby configurations. You must use only the ORCL.TimesTen_server resource type for your highly available TimesTen configurations with Oracle Solaris Cluster.

Workaround: Do not use the TimesTen active-standby configuration.

Failure to Update Properties of SUNW.ScalMountPoint Resource Configured with NAS for Zone Cluster (7203506)

Problem Summary: The update of any properties in a SUNW.ScalMountPoint resource that is configured with a NAS file system for a zone cluster can fail with an error message similar to the following:

clrs:   hostname:zone-cluster : Bad address

Workaround: Use the clresource command to delete the resource, and then recreate the resource with all of the required properties.
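
A minimal sketch, assuming a zone cluster zc1, a resource group scal-mp-rg, and illustrative NAS file-system property values (verify the correct values for your configuration before you recreate the resource):

# clresource delete -Z zc1 scal-mp-rs
# clresource create -Z zc1 -g scal-mp-rg -t SUNW.ScalMountPoint \
-p TargetFileSystem=nas-dev:/export/data -p FileSystemType=nas \
-p MountPointDir=/data scal-mp-rs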





Global File System Configured in Zone Cluster's Scalable HAStoragePlus Resource Is Not Accessible (7197623)

Problem Summary: Consider a cluster file system with the following entry in the global cluster's /etc/vfstab file, with a mount-at-boot value of no:

# cat /etc/vfstab
/dev/md/datadg/dsk/d0   /dev/md/datadg/rdsk/d0 /global/fs-data ufs   5  no   logging,global

When an HAStoragePlus resource is created in a zone cluster's scalable resource group and the above cluster file system has the mount-at-boot value set to no, the cluster file system data might not be visible through the zone-cluster node mount point.

Workaround: Perform the following steps to avoid the problem:

  1. From one global-cluster node, take offline the zone cluster's scalable resource group that contains HAStoragePlus.

    # clresourcegroup offline -Z zonecluster scalable-resource-group
  2. In the /etc/vfstab file on each global-cluster node, change the mount-at-boot value of the cluster file system entry to yes.

    /dev/md/datadg/dsk/d0   /dev/md/datadg/rdsk/d0 /global/fs-data ufs   5  yes   logging,global
  3. From one global-cluster node, bring online the zone cluster's scalable resource group that contains HAStoragePlus.

    # clresourcegroup online -Z zonecluster scalable-resource-group

RAC Wizard Failing With "ERROR: Oracle ASM is either not installed or the installation is invalid!" (7196184)

Problem Summary: The Oracle RAC configuration wizard fails with the message, ERROR: Oracle ASM is either not installed or the installation is invalid!.

Workaround: Ensure that the ASM entry appears first in the /var/opt/oracle/oratab file, as follows:

root@phys-schost-1:~# more /var/opt/oracle/oratab
…
+ASM1:/u01/app/11.2.0/grid:N            # line added by Agent
MOON:/oracle/ora_base/home:N

clsetup Wizard Fails While Configuring WebLogic Server Domain in the Zones/Zone Cluster With WebLogic Server Installed in the NFS (7196102)

Problem Summary: Configuration of the HA-WebLogic Server resource by using the clsetup wizard inside a zone or zone cluster fails if WebLogic Server is installed on an NFS mount point.

This issue does not occur with NFS storage on the global cluster, or if storage other than NFS is used.

The condition for this issue to occur is that the NFS storage with WebLogic Server installed is mounted inside the zones and WebLogic Server is configured by using the clsetup wizard.

The wizard reports the error ERROR: The specified path is not a valid WebLogic Server domain location. A similar message is displayed for the Home Location, Start Script, and Environment File.

Finally, the wizard fails during Administration/Managed/RPS server discovery with messages similar to the following:

Not able to find the WebLogic Administration Server Instance. 
Make sure the provided WebLogic Domain Location (<DOMAIN_LOCATION_PROVIDED>) 
is the valid one.

No Reverse Proxy Server Instances found. You can't proceed further.

No Managed Server instances found. You can't proceed further.

Workaround: Configure the WebLogic Server resource manually.

With a Large Number of Non-Network-Aware GDS Resources, Some Fail to Restart and Remain Offline (7189659)

Problem Summary: This problem can affect Generic Data Service (GDS) resources that are not network aware when a large number of such resources are configured.

If such a resource fails to start, GDS would normally keep restarting it indefinitely. However, the error Restart operation failed: cluster is reconfiguring can be produced, which prevents the GDS resource from being restarted automatically and leaves it offline.

Workaround: Manually disable and then re-enable the affected GDS resources.

SUNW.Proxy_SMF_failover sc_delegated_restarter File Descriptor Leak (7189211)

Problem Summary: Each time the SMF proxy resource SUNW.Proxy_SMF_failover is disabled or enabled, the file descriptor count increases by one. Repeated switches can grow the file descriptor count to the limit of 256, at which point the resource can no longer be brought online.

Workaround: Disable and re-enable the sc_restarter SMF service.

# svcadm disable sc_restarter
# svcadm enable sc_restarter

When set Debug_level=1, pas-rg Fails Over to Node 2 And Cannot Start on Node 1 Anymore (7184102)

Problem Summary: If you set the Debug_level property to 1, the dialogue instance resource cannot be started on any node.

Workaround: Use Debug_level=2, which is a superset of Debug_level=1.

Scalable Applications Are Not Isolated Between Zone Clusters (6911363)

Problem Summary: If scalable applications configured to run in different zone clusters bind to INADDR_ANY and use the same port, then scalable services cannot distinguish between the instances of these applications that run in different zone clusters.

Workaround: Do not configure scalable applications to bind to INADDR_ANY as the local IP address, or else bind them to a port that does not conflict with any other scalable application.

Running clnas add or clnas remove Command on Multiple Nodes at the Same Time Could Cause Problem (6791618)

Problem Summary: When adding or removing a NAS device, running the clnas add or clnas remove command on multiple nodes at the same time might corrupt the NAS configuration file.

Workaround: Run the clnas add or clnas remove command on one node at a time.

Developer Environment

clresource show -p Command Returns Wrong Information (7200960)

Problem Summary: In a solaris10 brand non-global zone, the clresource show -p property command returns the wrong information.

Workaround: This bug is caused by pre-Oracle Solaris Cluster 4.1 binaries in the solaris10 brand zone. Run the following command from the global zone to get the correct information about local non-global zone resources:

# clresource show -p property -Z zone-name

Geographic Edition

Cluster Node Does Not Have Access to Sun ZFS Storage Appliance Projects or iSCSI LUNs (15924240)

Problem Summary: If a node leaves the cluster while the site is the primary, the projects or iSCSI LUNs are fenced off from that node. However, after a switchover or takeover, when the node joins the new secondary, the projects or iSCSI LUNs are not unfenced, and the applications on this node are not able to access the file system after it is promoted to the primary.

Workaround: Reboot the node.

DR State Stays Reporting unknown on One Partner (7189050)

Problem Summary: The DR state continues to report unknown, even though the DR resources correctly report the replication state.

Workaround: Run the geopg validate protection-group command to force a resource-group state notification to the protection group.
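
For example, assuming a protection group named app-pg (hypothetical):

# geopg validate app-pg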

Takeover to the Secondary Is Failing Because fs umount Failed On the Primary (7182720)

Problem Summary: Takeover of a protection group fails if umount of the file system fails on the primary site.

Workaround: Perform the following steps:

  1. Issue fuser -cu file-system.

  2. Check for non-application process IDs, like cd, on the primary site.

  3. Terminate such processes before you perform a takeover operation.

ZFS Storage Appliance Protection Group Creation And Validation Fail if Project Replication Is Stopped by Using the BUI (7176292)

Problem Summary: If you use the browser user interface (BUI) to stop replication, the protection group goes to a configuration error state when protection-group validation fails.

Workaround: From the BUI, perform the following actions to stop replication:

  1. Under the Shares tab, select the project being replicated.

  2. Click on the Replication tab and select the Scheduled option.

  3. Wait until the status changes to manual, then click the Enable/Disable button.

Multiple Notification Emails Sent From Global Cluster When Zone Clusters Are in Use (7098290)

Problem Summary: If Oracle Solaris Cluster Geographic Edition is configured in a zone cluster, duplicate notification emails about loss of connection to partner clusters are sent from both the zone cluster and the global cluster. The emails should only be sent from the zone cluster.

Workaround: This is a side effect of the cluster event handling. It is harmless, and the duplicates should be ignored.

Installation

Unable to Install Data Service Agents on Existing 3.3 5/11 solaris10 Brand Zone Without Specifying Patch Options (7197399)

Problem Summary: When installing agents in a solaris10 brand non-global zone from an Oracle Solaris Cluster 3.3 or 3.3 5/11 DVD, the clzonecluster install-cluster command fails if you do not specify the patches that support solaris10 branded zones.

Workaround: Perform the following steps to install agents from an Oracle Solaris Cluster 3.3 or 3.3 5/11 DVD to a solaris10 brand zone:

  1. Reboot the zone cluster into offline mode.

    # clzonecluster reboot -o zonecluster
  2. Run the clzonecluster install-cluster command, specifying the information for the core patch that supports solaris10 branded zones.

    # clzonecluster install-cluster -d dvd -p patchdir=patchdir[,patchlistfile=patchlistfile] \
    -n node[,…]] zonecluster
  3. After installation is complete, reboot the zone cluster to bring it online.

    # clzonecluster reboot zonecluster

clzonecluster Does Not Report Errors When install Is Used Instead of install-cluster for solaris10 Branded Zones (7190439)

Problem Summary: When the clzonecluster install command is used to install from an Oracle Solaris Cluster release DVD, the command does not report any errors, but nothing is installed onto the nodes.

Workaround: To install the Oracle Solaris Cluster release in a solaris10 branded zone, do not use the clzonecluster install command, which is used to install the Oracle Solaris 10 image. Instead, use the clzonecluster install-cluster command.

ASM Instance Proxy Resource Creation Errored When a Hostname Has Uppercase Letters (7190067)

Problem Summary: The use of uppercase letters in the cluster node hostname causes the creation of ASM instance proxy resources to fail.

Workaround: Use only lowercase letters for the cluster-node hostnames when installing Oracle Solaris Cluster software.

Wizard Won't Discover the ASM SID (7190064)

Problem Summary: When using the clsetup utility to configure the HA for Oracle or HA for Oracle RAC database, the Oracle ASM System Identifier screen is not able to discover or configure the Oracle ASM SID when a cluster node hostname is configured with uppercase letters.

Workaround: Use only lowercase letters for the cluster-node hostnames when installing Oracle Solaris Cluster software.

RAC Proxy Resource Creation Fails When the Cluster Node's Hostname Has Uppercase Letters (7189565)

Problem Summary: The use of uppercase letters in the cluster node hostname causes the creation of RAC database proxy resources to fail.

Workaround: Use only lowercase letters for the cluster-node hostnames when you install Oracle Solaris Cluster software.

Hard to Get Data Service Names for solaris10 Brand Zone Noninteractive Data Service Installation (7184714)

Problem Summary: It is hard to know which agent names to specify when you use the clzonecluster install-cluster command with the -s option to install agents.

Workaround: When you use the clzonecluster install-cluster -d dvd -s {all | software-component[,…]} options zone-cluster command to create a solaris10 brand zone cluster, specify either all or the names of the individual cluster software components (data service agents) with the -s option.

cacao Cannot Communicate on Machines Running Trusted Extensions (7183625)

Problem Summary: If the Trusted Extensions feature of Oracle Solaris software is enabled before the Oracle Solaris Cluster software is installed and configured, the Oracle Solaris Cluster setup procedures are unable to copy the common agent container security keys from one node to the other nodes of the cluster. Identical copies of the security keys on all cluster nodes are required for the container to function properly on cluster nodes.

Workaround: Manually copy the security keys from one global-cluster node to all other nodes of the global cluster.

  1. On each node, stop the security file agent.

    phys-schost# /usr/sbin/cacaoadm stop
  2. On one node, change to the /etc/cacao/instances/default/ directory.

    phys-schost-1# cd /etc/cacao/instances/default/
  3. Create a tar file of the /etc/cacao/instances/default/ directory.

    phys-schost-1# tar cf /tmp/SECURITY.tar security
  4. Copy the /tmp/SECURITY.tar file to each of the other cluster nodes.

  5. On each node to which you copied the /tmp/SECURITY.tar file, extract the security files.

    Any security files that already exist in the /etc/cacao/instances/default/ directory are overwritten.

    phys-schost-2# cd /etc/cacao/instances/default/
    phys-schost-2# tar xf /tmp/SECURITY.tar
  6. Delete the /tmp/SECURITY.tar file from each node in the cluster.


    Note - You must delete each copy of the tar file to avoid security risks.


    phys-schost-1# rm /tmp/SECURITY.tar
    phys-schost-2# rm /tmp/SECURITY.tar
  7. On each node, restart the security file agent.

    phys-schost# /usr/sbin/cacaoadm start

The Command clnode remove -F nodename Fails to Remove the Node nodename From Solaris Volume Manager Device Groups (6471834)

Problem Summary: When a node is removed from the cluster by using the command clnode remove -F nodename, a stale entry for the removed node might remain in Solaris Volume Manager device groups.

Workaround: Remove the node from the Solaris Volume Manager device group by using the metaset command before you run the clnode remove -F nodename command.

If you ran the clnode remove -F nodename command before you removed the node from the Solaris Volume Manager device group, run the metaset command from an active cluster node to remove the stale node entry from the Solaris Volume Manager device group. Then run the clnode clear -F nodename command to completely remove all traces of the node from the cluster.
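
A rough sketch, assuming the removed node is phys-schost-3 and the Solaris Volume Manager disk set is named oraset (both names are hypothetical); if the removed node is no longer reachable, the metaset command might also require its force option:

# metaset -s oraset -d -h phys-schost-3
# clnode clear -F phys-schost-3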

Autodiscovery Should Find Only One Interconnect Path for Each Adapter (6299097)

Problem Summary: If there are redundant paths in the network hardware between interconnect adapters, the scinstall utility might fail to configure the interconnect path between them.

Workaround: If autodiscovery discovers multiple interconnect paths, manually specify the adapter pairs for each path.

Runtime

Logical Hostname Failover Could Create Duplicate Addresses, Lead To Outage (7201091)

Problem Summary: For a shared-IP zone-cluster (ip-type=shared), if the underlying non-global zone of a zone-cluster node is shut down by using the uadmin 1 0 or uadmin 2 0 command, the resulting failover of LogicalHostname resources might result in duplicate IP addresses being configured on a new primary node. The duplicate address is marked with the DUPLICATE flag until five minutes later, during which time the address is not usable by the application. See the ifconfig(1M) man page for more information about the DUPLICATE flag.

Workaround: Use either of the following methods:

sc_delegated_restarter Does Not Take Into Account Environment Variable Set in Manifest (7173159)

Problem Summary: Any environment variables that are specified in the service manifest are not recognized when the service is put under SUNW.Proxy_SMF_failover resource type control.

Workaround: There is no workaround.

Unable to Re-enable Transport Interface After Disabling With ipadm disable-if -t interface (7141828)

Problem Summary: Cluster transport paths go offline if the ipadm disable-if command is accidentally used on a private transport interface.

Workaround: Disable and re-enable the cable that the disabled interface is connected to.

  1. Determine the cable to which the interface is connected.

    # /usr/cluster/bin/clinterconnect show | grep Cable
  2. Disable the cable for this interface on this node.

    # /usr/cluster/bin/clinterconnect disable cable
  3. Re-enable the cable to bring the path online.

    # /usr/cluster/bin/clinterconnect enable cable

Failure of Logical Hostname to Fail Over Caused by getnetmaskbyaddr() (7075347)

Problem Summary: Logical hostname failover requires getting the netmask from the network if nis is enabled for the netmasks name service. This call to getnetmaskbyaddr() hangs for a while due to CR 7051511, which might be long enough for the Resource Group Manager (RGM) to put the resource in the FAILED state. This occurs even though the correct netmask entries are in the /etc/netmasks local file. This issue affects only multi-homed clusters, that is, clusters whose nodes reside on multiple subnets.

Workaround: Configure the /etc/nsswitch.conf file, which is handled by an SMF service, to use only files for netmasks lookups.

# /usr/sbin/svccfg -s svc:/system/name-service/switch setprop config/netmask = astring:\"files\"
# /usr/sbin/svcadm refresh svc:/system/name-service/switch

Upgrade

x86: scinstall -u update Sometimes Fails to Upgrade the Cluster Packages on an x86 Node (7201491)

Problem Summary: Running scinstall -u update on an x86 cluster node sometimes fails to upgrade the cluster packages. The following error messages are reported:

root@phys-schost-1:~# scinstall -u update

Calling "scinstall -u preupgrade"

Renamed "/.alt.s11u1_24a-2/etc/cluster/ccr" to "/.alt.s11u1_24a-2/etc/cluster/ccr.upgrade".
Log file - /.alt.s11u1_24a-2/var/cluster/logs/install/scinstall.upgrade.log.12037

** Upgrading software **
Startup: Linked image publisher check ... Done
Startup: Refreshing catalog 'aie' ... Done
Startup: Refreshing catalog 'solaris' ... Done
Startup: Refreshing catalog 'ha-cluster' ... Done
Startup: Refreshing catalog 'firstboot' ... Done
Startup: Checking that pkg(5) is up to date ... Done
Planning: Solver setup ... Done
Planning: Running solver ... Done
Planning: Finding local manifests ... Done
Planning: Fetching manifests:  0/26  0% complete
Planning: Fetching manifests: 26/26  100% complete
Planning: Package planning ... Done
Planning: Merging actions ... Done
Planning: Checking for conflicting actions ... Done
Planning: Consolidating action changes ... Done
Planning: Evaluating mediators ... Done
Planning: Planning completed in 16.30 seconds
Packages to update: 26

Planning: Linked images: 0/1 done; 1 working: zone:OtherNetZC
pkg: update failed (linked image exception(s)):

A 'update' operation failed for child 'zone:OtherNetZC' with an unexpected
return value of 1 and generated the following output:
pkg: 3/4 catalogs successfully updated:
 
Framework stall:
URL: 'http://bea100.us.oracle.com:24936/versions/0/' 

Workaround: Before you run the scinstall -u update command, run pkg refresh --full.
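
For example:

# pkg refresh --full
# scinstall -u update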