3 Oracle Extended Clusters

You can extend an Oracle RAC cluster across two or more geographically separate sites, each equipped with its own storage.

Note:

Starting with Oracle Grid Infrastructure 23ai, Domain Services Clusters (DSCs), which are part of the Oracle Cluster Domain architecture, are desupported.

Oracle Cluster Domains consist of a Domain Services Cluster (DSC) and Member Clusters. Member Clusters were deprecated in Oracle Grid Infrastructure 19c. The DSC continues to be available to provide services to production clusters. However, because most of those services no longer require the DSC for hosting, installation of DSCs is desupported in Oracle Database 23ai. Oracle recommends that you use any cluster or system of your choice for services previously hosted on the DSC, if applicable. Oracle will continue to support the DSC for hosting shared services until each service can be used on alternative systems.

About Oracle Extended Clusters

An Oracle Extended Cluster consists of nodes that are located in multiple locations called sites. In the event that one of the sites fails, the other site acts as an active standby.

Both Oracle ASM and the Oracle Database stack, in general, are designed to use enterprise-class shared storage in a data center. Fibre Channel technology, however, enables you to distribute compute and storage resources across two or more data centers, connecting them through Ethernet and Fibre Channel for compute and storage needs, respectively.

While you can configure Oracle Extended Clusters when you install Oracle Grid Infrastructure, you can also do so after installation by using the ConvertToExtended script. You manage your Oracle Extended Cluster by using CRSCTL.

Converting to Oracle Extended Cluster

This procedure is supported only for clusters that were installed with, or upgraded to, Oracle Grid Infrastructure 12c release 2 (12.2) or later. Such clusters are typically configured with one site (the default site).

Note:

This procedure requires that all nodes in the cluster be accessible.
You can configure an Oracle Extended Cluster with one or many disk groups, and with multiple failure groups. Using the ConvertToExtended script, you can create multiple data sites and associate a node with each data site. All Oracle Flex ASM storage remains associated with the default cluster site, because there is no mechanism to convert an existing disk group to an extended disk group. After you convert your cluster to an Oracle Extended Cluster, the voting file membership remains flat, not hierarchical.
To take advantage of the site-specific hierarchical voting file algorithm, you must also add an extended disk group and migrate the voting files to that disk group.
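Adding an extended disk group and moving the voting files into it might look like the following sketch. The disk group name (EXTDATA), site names (la, ny, q1), disk paths, and compatibility attribute are hypothetical examples; confirm the exact CREATE DISKGROUP syntax for extended redundancy against the Oracle ASM documentation for your release.

```
SQL> CREATE DISKGROUP extdata EXTENDED REDUNDANCY
       SITE la FAILGROUP fg1 DISK '/dev/disk1'
               FAILGROUP fg2 DISK '/dev/disk2'
       SITE ny FAILGROUP fg3 DISK '/dev/disk3'
               FAILGROUP fg4 DISK '/dev/disk4'
       SITE q1 QUORUM FAILGROUP fgq DISK '/dev/disk5'
       ATTRIBUTE 'compatible.asm' = '19.0.0.0.0';

$ crsctl replace votedisk +EXTDATA
```

The quorum failure group on a third site stores only a voting file copy, no user data, which is what allows either data site to survive the loss of the other.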
Use CRSCTL to query the cluster, as follows, to determine its extended status:
$ crsctl get cluster extended
CRS-6579: "The cluster is 'NOT EXTENDED'"
$ crsctl query cluster site -all
Site 'crsclus' identified by '7b7b3bef4c1f5ff9ff8765bceb45433a' in state 'ENABLED',
 and contains nodes 'node1,node2,node3,node4', and disks ''.

The preceding example identifies a cluster called crsclus, which has four nodes (node1, node2, node3, and node4) and a disk group (datadg). The cluster has one site configured.

  1. As the root user, perform a complete backup of the OCR and voting files.
    # ocrconfig -manualbackup
  2. Log in to the first node, and run the following command.
    # rootcrs.sh -converttoextended -first -sites list_of_sites -site node_site

    list_of_sites is the comma-separated list of sites in the extended cluster and node_site is the name of the site with which the local node is associated.

    Note:

    The node on which you run the converttoextended command becomes unavailable, which may disrupt database access.
  3. Run the following command on all other cluster nodes:
    # rootcrs.sh -converttoextended -site node_site

    node_site is the name of the site with which the local node is associated.

    Note:

    The node on which you run the converttoextended command becomes unavailable, which may disrupt database access.
  4. Delete the default site after the associated nodes and storage are migrated.
    # crsctl delete cluster site site_name
  5. Associate every Oracle ASM disk with a site. To do so, mount the disk groups in restricted mode, and then run the ALTER DISKGROUP SQL statement as the SYSASM user on the Oracle ASM instance.
    SQL> ALTER DISKGROUP diskgroup_name RENAME DISK disk_name SITE site_name;

    Note:

    If a disk group contains Oracle Clusterware data, such as the voting files or the Oracle ASM SPFILE, then use the standard procedure to migrate the voting files and the Oracle ASM SPFILE to a different disk group before you assign sites to the disks. For a disk group to store voting files, a normal redundancy disk group requires a minimum of three disk devices. After you assign sites to the disks, you can migrate the voting files and the Oracle ASM SPFILE back to the original disk group.
    After the disk groups are modified successfully, you can remount them in normal mode.
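The first three steps above (the backup, the conversion on the first node, and the conversion on each remaining node) can be sketched as a single script. The hostnames (node1 through node4), site names (la, ny), and use of ssh as root are illustrative assumptions; the script echoes each command rather than executing it, so the sequence can be reviewed first.

```shell
#!/bin/sh
# Sketch of the per-node conversion sequence. Hostnames, site names,
# and ssh usage are illustrative assumptions, not fixed requirements.

SITES="la,ny"              # comma-separated list of all sites
FIRST_NODE="node1"         # node where -first is run
OTHER_NODES="node2 node3 node4"

# Map each node to its site (hypothetical layout: la hosts node1, node2).
site_for() {
  case "$1" in
    node1|node2) echo "la" ;;
    node3|node4) echo "ny" ;;
  esac
}

# Echo instead of executing, so the plan can be inspected before running.
run() { echo "$@"; }

run ssh root@"$FIRST_NODE" ocrconfig -manualbackup
run ssh root@"$FIRST_NODE" rootcrs.sh -converttoextended \
  -first -sites "$SITES" -site "$(site_for "$FIRST_NODE")"
for node in $OTHER_NODES; do
  run ssh root@"$node" rootcrs.sh -converttoextended -site "$(site_for "$node")"
done
```

Because each conversion makes the local node temporarily unavailable, running the remaining nodes sequentially, as here, limits the disruption at any one time.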
After you finish configuring the Oracle Extended Cluster, run the following command to verify the configuration:
$ crsctl get cluster extended
CRS-XXXX: "The cluster is 'EXTENDED'"

$ crsctl query cluster site -all
Site 'la' identified by GUID '7b7b3bef4c1f5ff9ff8765bceb45433a' in state 'ENABLED' contains nodes 'node1,node2' and disks 'disk1, disk2, disk3'.
Site 'ny' identified by GUID '888b3bef4c1f5ff9ff8765bceb45433a' in state 'ENABLED' contains nodes 'node3,node4' and disks 'disk4, disk5, disk6'.
Site 'nj' identified by GUID '999b3bef4c1f5ff9ff8765bceb45433a' in state 'ENABLED' contains nodes 'node5,node6' and disks 'disk7, disk8, disk9'.
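For scripted post-conversion checks, the site listing can be reduced to a site-to-nodes summary with standard text tools. A minimal sketch, assuming the output format shown above; the sample text and GUIDs are abbreviated stand-ins for live output:

```shell
#!/bin/sh
# Sketch: summarize `crsctl query cluster site -all` output as
# "site: nodes" lines. A captured sample stands in for live output.
sample="Site 'la' identified by GUID '7b7b' in state 'ENABLED' contains nodes 'node1,node2' and disks 'disk1, disk2, disk3'.
Site 'ny' identified by GUID '888b' in state 'ENABLED' contains nodes 'node3,node4' and disks 'disk4, disk5, disk6'."

# On a live cluster, pipe the crsctl output in place of the sample.
printf '%s\n' "$sample" |
  sed -n "s/^Site '\([^']*\)'.*nodes '\([^']*\)'.*/\1: \2/p"
# Prints:
#   la: node1,node2
#   ny: node3,node4
```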