4 Oracle Flex Clusters
An Oracle Flex Cluster scales Oracle Clusterware to large numbers of nodes.
This chapter includes the following topics:
- Overview of Oracle Flex Clusters
- Managing Oracle Flex Clusters
- Oracle Extended Clusters
4.1 Overview of Oracle Flex Clusters
Oracle Grid Infrastructure installed in an Oracle Flex Cluster configuration is a scalable, dynamic, robust network of nodes.
Oracle Flex Clusters provide a platform for a variety of applications, including Oracle Real Application Clusters (Oracle RAC) databases with large numbers of nodes. Oracle Flex Clusters also provide a platform for other service deployments that require coordination and automation for high availability.
All nodes in an Oracle Flex Cluster belong to a single Oracle Grid Infrastructure cluster. This architecture centralizes policy decisions for deployment of resources based on application needs, to account for various service levels, loads, failure responses, and recovery.
Oracle Flex Clusters contain two types of nodes arranged in a hub and spoke architecture: Hub Nodes and Leaf Nodes. The number of Hub Nodes in an Oracle Flex Cluster can be as many as 64. The number of Leaf Nodes can be many more. Hub Nodes and Leaf Nodes can host different types of applications.
Hub Nodes are similar to Oracle Grid Infrastructure nodes in an Oracle Clusterware standard Cluster configuration: they are tightly connected, and have direct access to shared storage. Use Hub Nodes to host read-write database instances.
Leaf Nodes are different from standard Oracle Grid Infrastructure nodes, in that they do not require direct access to shared storage, but instead request data through Hub Nodes. Use Leaf Nodes to host read-only database instances.
Note:
Read-write and read-only database instances of the same primary database can coexist in an Oracle Flex Cluster.

Hub Nodes can run in an Oracle Flex Cluster configuration without having any Leaf Nodes as cluster member nodes, but Leaf Nodes must be members of a cluster that includes at least one Hub Node.
Note:
If you upgrade an Oracle Flex Cluster, then Oracle recommends that you upgrade the Hub Nodes first, and that you also have any upgraded Hub Nodes up and running as part of the upgrade process.
Reader Nodes
You can use Leaf Nodes to host Oracle RAC database instances that run in read-only mode; these become reader nodes. You can optimize these nodes for parallel query operations by provisioning them with a large amount of memory, so that data is cached in the Leaf Node.
A Leaf Node sends periodic heartbeat messages to its associated Hub Node; these differ from the heartbeat messages that occur between Hub Nodes. During a planned shutdown of its Hub Node, a Leaf Node attempts to connect to another Hub Node, unless it is connected to only one Hub Node. If its Hub Node is evicted, then the Leaf Node is also evicted from the cluster.
4.2 Managing Oracle Flex Clusters
Use CRSCTL to manage Oracle Flex Clusters after successful installation of Oracle Grid Infrastructure for either a small or large cluster.
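For example, CRSCTL reports whether a cluster is running in standard or flex mode. The sketch below parses a sample output line; the exact message wording is an assumption and can vary by release, so on a real cluster you would capture the actual output of the command instead:

```shell
# Dry-run sketch: extract the mode from a line like the one
# 'crsctl get cluster mode status' prints. The sample wording below is
# an assumption; on a cluster node you would instead run:
#   sample=$(crsctl get cluster mode status)
sample='Cluster is running in "flex" mode'

# Pull out the quoted word ("standard" or "flex").
mode=$(printf '%s\n' "$sample" | sed -n 's/.*"\([^"]*\)".*/\1/p')
echo "cluster mode: $mode"
```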
4.2.1 Changing the Cluster Mode
You can change the mode of an existing Oracle Clusterware standard Cluster to be an Oracle Flex Cluster.
4.2.1.1 Changing an Oracle Clusterware Standard Cluster to an Oracle Flex Cluster
Use CRSCTL to change an existing Oracle Clusterware standard Cluster to an Oracle Flex Cluster.
Perform the following steps:

1. Run the following command to determine the current mode of the cluster:

   $ crsctl get cluster mode status

2. Run the following command to ensure that the Grid Naming Service (GNS) is configured with a fixed VIP:

   $ srvctl config gns

   This procedure cannot succeed unless GNS is configured with a fixed VIP. If there is no GNS, then, as root, create one, as follows:

   # srvctl add gns -vip vip_name | ip_address

   Run the following command as root to start GNS:

   # srvctl start gns

3. Use the Oracle Automatic Storage Management Configuration Assistant (ASMCA) to enable Oracle Flex ASM in the cluster before you change the cluster mode.

4. Run the following command as root to change the mode of the cluster to be an Oracle Flex Cluster:

   # crsctl set cluster mode flex

5. Stop Oracle Clusterware by running the following command as root on each node in the cluster:

   # crsctl stop crs

6. Start Oracle Clusterware by running the following command as root on each node in the cluster:

   # crsctl start crs -wait

   Note: Use the -wait option to display progress and status messages.
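The steps above can be strung together as a script. The sketch below is a dry run: the run helper only prints each command, because crsctl and srvctl exist only on a cluster node, and the privileged commands must be run as root during a real conversion:

```shell
#!/bin/sh
# Dry-run helper: print each command instead of executing it.
# On a real cluster node, replace the body with: "$@"
run() { echo "+ $*"; }

run crsctl get cluster mode status   # step 1: check the current mode
run srvctl config gns                # step 2: confirm GNS has a fixed VIP
run crsctl set cluster mode flex     # step 4: change the mode (as root)
run crsctl stop crs                  # step 5: stop Clusterware on each node (as root)
run crsctl start crs -wait           # step 6: restart and show progress (as root)
```

Step 3, enabling Oracle Flex ASM with ASMCA, is interactive and is therefore omitted from the sketch.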
4.3 Oracle Extended Clusters
You can extend an Oracle RAC cluster across two, or more, geographically separate sites, each equipped with its own storage. In the event that one of the sites fails, the other site acts as an active standby.
Both Oracle ASM and the Oracle Database stack, in general, are designed to use enterprise-class shared storage in a data center. Fibre Channel technology, however, enables you to distribute compute and storage resources across two or more data centers, connecting them through Ethernet (for compute) and Fibre Channel (for storage).
While you can configure Oracle Extended Clusters when you install Oracle Grid Infrastructure, you can also do so post installation using the ConvertToExtended script. You manage your Oracle Extended Cluster using CRSCTL.
4.3.1 Configuring Oracle Extended Clusters
This procedure is only supported for clusters that have been installed with or upgraded to Oracle Grid Infrastructure 12c release 2 (12.2), or later, which are typically configured with one site (default site).
Note:
This procedure requires that all nodes in the cluster be accessible. There will also be a cluster outage, during which time database access is disrupted.

Using the ConvertToExtended script, you can create multiple data sites and associate a node with each data site. All Oracle Flex ASM storage remains associated with the default cluster site because there is no mechanism to convert an existing disk group to an extended disk group. After you convert your cluster to an Oracle Extended Cluster, the voting file membership remains flat, and not hierarchical.

Run the following commands to view the current cluster and site configuration:
$ crsctl get cluster extended
CRS-6579: The cluster is 'NOT EXTENDED'
$ crsctl query cluster site -all
Site 'crsclus' identified by '7b7b3bef4c1f5ff9ff8765bceb45433a' in state 'ENABLED',
and contains nodes 'node1,node2,node3,node4', and disks ''.
The preceding example identifies a cluster called crsclus that has four nodes (node1, node2, node3, and node4) and a disk group (datadg). The cluster has one site configured.
Perform the following steps to convert the cluster to an Oracle Extended Cluster:

1. Shut down the Oracle Clusterware stack. This prevents some components, such as cssd and crsd, from considering that the cluster mode is not extended while other components consider that it is extended. The advantage to keeping the Grid Plug and Play daemon (gpnpd) online is that the profile gets updated on the remote nodes. When you next start the Oracle Clusterware stack, the cluster mode will be extended.

   # crsctl stop cluster -all

2. Extend the cluster to the specific node, as follows:

   $ rootcrs.sh -converttoextended -site site_name

3. Ensure that CSS is not running on any remote nodes.

4. Look up the new sites and the site GUIDs using the previous checkpoint information.

5. Add the sites to the local configuration, as follows:

   $ crsctl add crs site site_name -guid site_guid -local

6. Update the node-to-site mapping in the local configuration for this node, as follows:

   $ crsctl modify cluster site site_name -n local_host -local

7. Stop and then start the Oracle High Availability Services stack, as follows:

   # crsctl stop crs
   # crsctl start crs
After you restart the stack, verify the extended configuration:

$ crsctl get cluster extended
CRS-XXXX: The cluster is 'EXTENDED'
$ crsctl query cluster site -all
Site 'crsclus' identified by '7b7b3bef4c1f5ff9ff8765bceb45433a' is 'ONLINE', and
contains nodes '', and disks ''.
Site 'ny' identified by '888b3bef4c1f5ff9ff8765bceb45433a' is 'ONLINE', and \
contains nodes 'node1,node2', and disks ''.
Site 'nj' identified by '999b3bef4c1f5ff9ff8765bceb45433a' is 'ONLINE', and \
contains nodes 'node3,node4', and disks ''.
The output shown in the preceding examples is similar to what CRSCTL displays when you run the commands.
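The conversion steps can likewise be sketched as a dry run. The run helper below only prints each command; the site name ny is taken from the example output above, site_guid and local_host stand for values from your own checkpoint information, and in a real conversion the commands must be run with the privileges shown in the procedure:

```shell
#!/bin/sh
# Dry-run helper: print each command instead of executing it.
run() { echo "+ $*"; }

run crsctl stop cluster -all                        # stop the stack cluster-wide
run rootcrs.sh -converttoextended -site ny          # extend this node into site 'ny'
run crsctl add crs site ny -guid site_guid -local   # site_guid: from checkpoint info
run crsctl modify cluster site ny -n local_host -local
run crsctl stop crs                                 # restart the OHAS stack
run crsctl start crs
run crsctl get cluster extended                     # verify: expect 'EXTENDED'
```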