2 Oracle Clusterware Configuration and Administration
Configuring and administering Oracle Clusterware and its various components involves managing applications and databases, and networking within a cluster.
Note:
Starting with Oracle Grid Infrastructure 23ai, Domain Services Clusters (DSC), which is part of the Oracle Cluster Domain architecture, are desupported.
Oracle Cluster Domains consist of a Domain Services Cluster (DSC) and Member Clusters. Member Clusters were deprecated in Oracle Grid Infrastructure 19c. The DSC continues to be available to provide services to production clusters. However, because most of those services no longer require the DSC for hosting, installation of DSCs is desupported in Oracle Database 23ai. Oracle recommends that you use any cluster or system of your choice for services previously hosted on the DSC, if applicable. Oracle will continue to support the DSC for hosting shared services, until each service can be used on alternative systems.
Administrator-managed clusters require that you manually configure how the cluster resources are deployed and where the workload is managed. Typically, this means that you must configure which database instances run on which cluster nodes, by preference, and where those instances restart in case of failures. By configuring where the database instances reside, you configure the workloads across the cluster.
Note:
The policy-managed database deployment option is desupported in Oracle Database 23ai.
Role-Separated Management
Role-separated management is an approach to managing cluster resources and workloads in a coordinated fashion in order to reduce the risks of resource conflicts and shortages.
Role-separated management uses operating system security and role definitions, and Oracle Clusterware access permissions to separate resource and workload management according to the user’s role. This is particularly important for those working in consolidated environments, where there is likely to be competition for computing resources, and a degree of isolation is required for resource consumers and management of those resources. By default, this feature is not implemented during installation.
Configuring role-separated management consists of establishing the operating system users and groups that will administer the cluster resources (such as databases) according to the intended roles, and then adding permissions on the cluster resources as necessary. In addition, Oracle Automatic Storage Management (Oracle ASM) provides the capability to extend these role-separation constructs to the storage management functions.
Role-separated management in Oracle Clusterware no longer depends on a cluster administrator (although Oracle maintains backward compatibility). By default, the user who installed Oracle Clusterware in the Oracle Grid Infrastructure home (Grid home) and root are permanent cluster administrators. Primary group privileges (oinstall, by default) enable database administrators to create databases using the Oracle Database Configuration Assistant (Oracle DBCA), but do not enable role separation.
Configuring Role Separation
Role separation is the determination of the roles that are needed, the resources that they will administer, and what their access privileges should be.
After you determine the roles, you then create or modify the operating system user accounts for group privileges (such as oinstall or grid), using the ACLs and the CRSCTL utility. The most basic case is to create two operating system users as part of the oinstall group and create the cluster.
This requires careful planning, and disciplined, detail-oriented execution, but you can modify the configuration after implementation, to correct mistakes or make adjustments over time.
Note:
You cannot apply role separation techniques to ora.* resources (Oracle RAC database resources). You can only apply these techniques to user-defined cluster resources and types.
You create the resources under the root or grid accounts. For the designated operating system users to administer these resources, they must then be given the correct permissions, enabling them to fulfill their roles.
Use the crsctl setperm command to configure horizontal role separation using ACLs, resources, or both. The CRSCTL utility is located in the path Grid_home/bin, where Grid_home is the Oracle Grid Infrastructure for a cluster home.
The command uses the following syntax, where the access control list (ACL) string is indicated by italics:
crsctl setperm {resource | type | serverpool} name {-u acl_string |
-x acl_string | -o user_name | -g group_name}
The flag options are:
- -u: Update the entity ACL
- -x: Delete the entity ACL
- -o: Change the entity owner
- -g: Change the entity primary group
The ACL strings are:
{user:user_name[:readPermwritePermexecPerm] |
group:group_name[:readPermwritePermexecPerm] |
other[::readPermwritePermexecPerm] }
In the preceding syntax example:
- user: Designates the user ACL (access permissions granted to the designated user)
- group: Designates the group ACL (permissions granted to the designated group members)
- other: Designates the other ACL (access granted to users or groups not granted particular access permissions)
- readPerm: Location of the read permission (r grants permission and "-" forbids permission)
- writePerm: Location of the write permission (w grants permission and "-" forbids permission)
- execPerm: Location of the execute permission (x grants permission and "-" forbids permission)
For cluster resources, to set permissions on an application (resource) called MyProgram (administered by Maynard) for the group crsadmin, where the administrative user has read, write, and execute privileges, the members of the crsadmin group have read and execute privileges, and users outside of the group are granted only read access (for status and configuration checks), enter the following command as whichever user originally created the resource (root or the grid owner):
# crsctl setperm resource MyProgram -u user:Maynard:rwx,group:crsadmin:r-x,other::r--
Related Topics
Configuring Oracle Grid Infrastructure Using Grid Setup Wizard
Using the Configuration Wizard, you can configure a new Oracle Grid Infrastructure on one or more nodes, or configure an upgraded Oracle Grid Infrastructure. You can also run the Grid Setup Wizard in silent mode.
After performing a software-only installation of the Oracle Grid Infrastructure, you can configure the software using Grid Setup Wizard. This Wizard performs various validations of the Grid home and inputs before and after you run through the wizard.
Note:
-
Before running the Grid Setup Wizard, ensure that the Oracle Grid Infrastructure home is current, with all necessary patches applied.
-
To launch the Grid Setup Wizard in the subsequent procedures:
On Linux and UNIX, run the following command:
Oracle_home/gridSetup.sh
On Windows, run the following command:
Oracle_home\gridSetup.bat
Configuring a Single Node
You can configure a single node by using the Configuration Wizard.
To configure a single node:
Configuring Multiple Nodes
You can use the Configuration Wizard to configure multiple nodes in a cluster.
It is not necessary that Oracle Grid Infrastructure software be installed on nodes you want to configure using the Configuration Wizard.
Note:
Before you launch the Configuration Wizard, ensure the following:
While software is not required to be installed on all nodes, if it is installed, then the software must be installed in the same Grid_home path and be at the identical level on all the nodes.
To use the Configuration Wizard to configure multiple nodes:
Upgrading Oracle Grid Infrastructure
You use the Grid Setup Wizard to upgrade a cluster’s Oracle Grid Infrastructure.
To upgrade Oracle Grid Infrastructure for a cluster:
See Also:
Oracle Database Installation Guide for your platform for Oracle Restart procedures
Running the Configuration Wizard in Silent Mode
You can run the Configuration Wizard in silent mode by specifying the -silent parameter.
To use the Configuration Wizard in silent mode to configure or upgrade nodes:
- Start the Configuration Wizard from the command line, as follows:
$ $ORACLE_HOME/gridSetup.sh -silent -responseFile file_name
The Configuration Wizard validates the response file and proceeds with the configuration. If any of the inputs in the response file are found to be invalid, then the Configuration Wizard displays an error and exits.
- Run the root and Grid_home/gridSetup -executeConfigTools scripts as prompted.
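For instance, a complete silent configuration run might look like the following sketch, where the Grid home and response file paths are placeholder values rather than paths from this guide. Run the root script as root on each node when prompted, then run the configuration tools step:
$ /u01/app/23ai/grid/gridSetup.sh -silent -responseFile /home/grid/grid_config.rsp
# /u01/app/23ai/grid/root.sh
$ /u01/app/23ai/grid/gridSetup.sh -executeConfigTools -responseFile /home/grid/grid_config.rsp -silent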
Moving and Patching an Oracle Grid Infrastructure Home
You use the Grid Setup Wizard to move and patch an Oracle Grid Infrastructure home.
The Oracle Installer script gridSetup.sh supports the -switchGridHome switch for this purpose. This feature enables you to move and patch an Oracle Grid Infrastructure home to a newer or the same patch level.
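As a hedged sketch, assuming the target home has already been installed software-only at a placeholder path, you run the installer from that home with the switch, and then run the root script as prompted:
$ /u01/app/23ai/grid_new/gridSetup.sh -switchGridHome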
See Also:
-
Oracle Grid Infrastructure Installation and Upgrade Guide for Linux for information about switching the Oracle Grid Infrastructure home after patching
Server Weight-Based Node Eviction
You can configure the Oracle Clusterware failure recovery mechanism to choose which cluster nodes to terminate or evict in the event of a private network (cluster interconnect) failure.
In a split-brain situation, where a cluster experiences a network split, partitioning the cluster into disjoint cohorts, Oracle Clusterware applies certain rules to select the remaining cohort, potentially evicting a node that is running a critical, singleton resource.
You can affect the outcome of these decisions by adding value to a database instance or node so that, when Oracle Clusterware must decide whether to evict or terminate, it will consider these factors and attempt to ensure that all critical components remain available. You can configure weighting functions to add weight to critical components in your cluster, giving Oracle Clusterware added input when deciding which nodes to evict when resolving a split-brain situation.
You may want to ensure that specific nodes survive the tie-breaking process, perhaps because of certain hardware characteristics, or that certain resources remain, perhaps because of particular databases or services. You can assign weight to particular nodes, resources, or services, based on the following criteria:
- You can assign weight only to administrator-managed nodes.
- You can assign weight to servers or applications that are registered Oracle Clusterware resources.
Weight contributes to the importance of the component and influences the choice that Oracle Clusterware makes when managing a split-brain situation. With other critical factors being equal between the various cohorts, Oracle Clusterware chooses the heaviest cohort to remain active.
You can assign weight to various components, as follows:
- To assign weight to database instances or services, use the -css_critical yes parameter with the srvctl add database or srvctl add service commands when adding a database instance or service. You can also use the parameter with the srvctl modify database and srvctl modify service commands.
- To assign weight to non ora.* resources, use the -attr "CSS_CRITICAL=yes" parameter with the crsctl add resource and crsctl modify resource commands when you are adding or modifying resources.
- To assign weight to a server, use the -css_critical yes parameter with the crsctl set server command.
Note:
You must restart the Oracle Clusterware stack on the node for the values to take effect. This does not apply to resources, for which the changes take effect without restarting the resource.
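As an illustrative sketch, where the database name sales and the resource name myApp are placeholders rather than names from this guide:
$ srvctl modify database -db sales -css_critical yes
$ crsctl modify resource myApp -attr "CSS_CRITICAL=yes"
$ crsctl set server css_critical YES
Run the crsctl set server command on the server whose weight you want to raise, and then restart the Oracle Clusterware stack on that node for the new value to take effect.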
Overview of Grid Naming Service
Oracle Clusterware uses Grid Naming Service (GNS) for address resolution in a cluster environment.
Note:
The Highly Available Grid Naming Service feature of Grid Naming Service (GNS) in Oracle Grid Infrastructure is deprecated in Oracle Database 23ai. The highly available GNS provides the ability to run multiple GNS instances in a multi-cluster environment with different roles. This feature is being deprecated. There is no replacement.
Network Administration Tasks for GNS and GNS Virtual IP Address
To implement GNS, your network administrator must configure the DNS to set up a domain for the cluster, and delegate resolution of that domain to the GNS VIP. You can use a separate domain, or you can create a subdomain of an existing domain for the cluster.
GNS distinguishes between nodes by using cluster names and individual node identifiers as part of the host name for that cluster node, so that cluster node 123 in cluster A is distinguishable from cluster node 123 in cluster B.
However, if you configure host names manually, then the subdomain you delegate to GNS should have no subdomains. For example, if you delegate the subdomain mydomain.example.com to GNS for resolution, then there should be no other subdomains of mydomain.example.com. Oracle recommends that you delegate to GNS a subdomain that GNS uses exclusively.
Note:
You can use GNS without DNS delegation in configurations where static addressing is being done, such as in Oracle Flex ASM or Oracle Flex Clusters. However, GNS requires a domain be delegated to it if addresses are assigned using DHCP.
Example 2-1 shows the DNS entries required to delegate a domain called myclustergns.example.com to a GNS VIP address of 10.9.8.7.
The GNS daemon and the GNS VIP run on one node in the server cluster. The GNS daemon listens on the GNS VIP using port 53 for DNS requests. Oracle Clusterware manages the GNS daemon and the GNS VIP to ensure that they are always available. If the server on which the GNS daemon is running fails, then Oracle Clusterware fails over the GNS daemon and the GNS VIP to a remaining cluster member node.
Note:
Oracle Clusterware does not fail over GNS addresses to different clusters. Failovers occur only to members of the same cluster.
Example 2-1 DNS Entries
# Delegate to gns on mycluster
mycluster.example.com NS myclustergns.example.com
# Let the world know to go to the GNS vip
myclustergns.example.com. 10.9.8.7
Understanding Grid Naming Service Configuration Options
GNS can run in either automatic or standard cluster address configuration mode.
Automatic configuration uses either the Dynamic Host Configuration Protocol (DHCP) for IPv4 addresses or the Stateless Address Autoconfiguration Protocol (autoconfig) (RFC 2462 and RFC 4862) for IPv6 addresses.
Automatic Configuration Option for Addresses
With automatic configurations, a DNS administrator delegates a domain on the DNS to be resolved through the GNS subdomain. During installation, Oracle Universal Installer assigns names for each cluster member node interface designated for Oracle Grid Infrastructure use during installation or configuration. SCANs and all other cluster names and addresses are resolved within the cluster, rather than on the DNS.
Automatic configuration occurs in one of the following ways:
- For IPv4 addresses, Oracle Clusterware assigns unique identifiers for each cluster member node interface allocated for Oracle Grid Infrastructure, and generates names using these identifiers within the subdomain delegated to GNS. A DHCP server assigns addresses to these interfaces, and GNS maintains address and name associations with the IPv4 addresses leased from the IPv4 DHCP pool.
- For IPv6 addresses, Oracle Clusterware automatically generates addresses with autoconfig.
Static Configuration Option for Addresses
With static configurations, no subdomain is delegated. A DNS administrator configures the GNS VIP to resolve to a name and address configured on the DNS, and a DNS administrator configures a SCAN name to resolve to three static addresses for the cluster.
A DNS administrator also configures a static public IP name and address, and virtual IP name and address for each cluster member node. A DNS administrator must also configure new public and virtual IP names and addresses for each node added to the cluster. All names and addresses are resolved by DNS.
GNS without subdomain delegation using static VIP addresses and SCANs enables Oracle Flex Cluster and CloudFS features that require name resolution information within the cluster. However, any node additions or changes must be carried out as manual administration tasks.
Administering Grid Naming Service
Use SRVCTL to administer Grid Naming Service (GNS) in a cluster environment.
Note:
The Highly Available Grid Naming Service feature of Grid Naming Service (GNS) in Oracle Grid Infrastructure is deprecated in Oracle Database 23ai. The highly available GNS provides the ability to run multiple GNS instances in a multi-cluster environment with different roles. This feature is being deprecated. There is no replacement.
Starting and Stopping GNS with SRVCTL
You use the srvctl command to start and stop GNS.
Start and stop GNS on the server cluster by running the following commands as root, respectively:
# srvctl start gns
# srvctl stop gns
Changing the GNS Subdomain when Moving from IPv4 to IPv6 Network
When you move from an IPv4 network to an IPv6 network, you must change the GNS subdomain.
Rolling Conversion from DNS to GNS Cluster Name Resolution
You can convert Oracle Grid Infrastructure cluster networks that use DNS for name resolution to cluster networks that obtain name resolution through Grid Naming Service (GNS).
Use the following procedure to convert from a standard DNS name resolution network to a GNS name resolution network, with no downtime:
Node Failure Isolation
Failure isolation is a process by which a failed node is isolated from the rest of the cluster to prevent the failed node from corrupting data.
Note:
You must configure the IPMI driver either on all or none of the cluster nodes.
When a node fails, isolating it involves an external mechanism capable of restarting a problem node without cooperation either from Oracle Clusterware or from the operating system running on that node. To provide this capability, Oracle Clusterware supports the Intelligent Platform Management Interface (IPMI) specification (also known as Baseboard Management Controller (BMC)), an industry-standard management protocol.
Typically, you configure failure isolation using IPMI during Oracle Grid Infrastructure installation, when you are provided with the option of configuring IPMI from the Failure Isolation Support screen. If you do not configure IPMI during installation, then you can configure it after installation using the Oracle Clusterware Control utility (CRSCTL), as described in a subsequent section.
To use IPMI for failure isolation, each cluster member node must be equipped with an IPMI device running firmware compatible with IPMI version 2.0, which supports IPMI over a local area network (LAN). In addition to the BMC, ipmitool or ipmiutil must be installed on each node. The ipmiutil utility is not distributed with the Oracle Grid Infrastructure installation and must be downloaded. You can download ipmiutil from ipmiutil.sourceforge.net or other repositories. The minimum version of the ipmiutil utility required is 3.08. Running ipmiutil displays its version number, which is also displayed from the help prompt.
During database operation, failure isolation is accomplished by communication from the evicting Cluster Synchronization Services daemon to the failed node's IPMI device over the LAN. The IPMI-over-LAN protocol is carried over an authenticated session protected by a user name and password, which are obtained from the administrator during installation.
To support dynamic IP address assignment for IPMI using DHCP, the Cluster Synchronization Services daemon requires direct communication with the local IPMI device during Cluster Synchronization Services startup to obtain the IP address of the IPMI device. (This is not true for HP-UX and Solaris platforms, however, which require that the IPMI device be assigned a static IP address.) This is accomplished using an IPMI probe command (OSD), which communicates with the IPMI device through an IPMI driver, which you must install on each cluster system.
If you assign a static IP address to the IPMI device, then the IPMI driver is not strictly required by the Cluster Synchronization Services daemon. The driver is required, however, to use ipmitool or ipmiutil to configure the IPMI device, but you can also do this with management consoles on some platforms.
Server Hardware Configuration for IPMI
You must first install the ipmitool or ipmiutil binary, install and enable the IPMI driver, and configure the IPMI device, as described in the Oracle Grid Infrastructure Installation and Upgrade Guide for your platform.
Note:
You must configure the IPMI driver either on all or none of the cluster nodes.
Post-installation Configuration of IPMI-based Failure Isolation Using CRSCTL
You use the crsctl command to configure IPMI-based failure isolation, after installing Oracle Clusterware. You can also use this command to modify or remove the IPMI configuration.
IPMI Post-installation Configuration with Oracle Clusterware
After you install the ipmitool or ipmiutil binary, install and enable the IPMI driver, configure the IPMI device, and complete the server configuration, you can use the CRSCTL command to complete IPMI configuration.
Before you started the installation, you installed the ipmitool or ipmiutil binary, installed and enabled the IPMI driver in the server operating system, and configured the IPMI hardware on each node (IP address mode, admin credentials, and so on), as described in the Oracle Grid Infrastructure Installation Guide. When you install Oracle Clusterware, the installer collects the IPMI administrator user ID and password, and stores them in an Oracle wallet in node-local storage, in the OLR. In addition, the installer also collects information about the ipmitool or ipmiutil binary location.
After you complete the server configuration and the configuration of the ipmitool or ipmiutil binary location, complete the following procedure on each cluster node to register IPMI administrators and passwords on the nodes.
Note:
If IPMI is configured to obtain its IP address using DHCP, it may be necessary to reset IPMI or restart the node to cause it to obtain an address.
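A minimal sketch of the registration commands follows, run as the Grid user on each node; the administrator name and address are placeholders, and the address step applies only when the IPMI device uses a static address. The crsctl set css ipmiadmin command prompts for the IPMI administrator password and validates the credentials:
$ crsctl set css ipmiadmin ipmi_admin_name
$ crsctl set css ipmiaddr 192.0.2.244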
Modifying IPMI Configuration Using CRSCTL
You may need to modify an existing IPMI-based failure isolation configuration to change IPMI passwords, or to configure IPMI for failure isolation in an existing installation. You may also need to modify the path to the ipmiutil binary file.
You use CRSCTL with the IPMI configuration tool appropriate to your platform to accomplish these modifications.
For example, to change the administrator password for IPMI, you must first modify the IPMI configuration as described in Oracle Grid Infrastructure Installation and Upgrade Guide, and then use CRSCTL to change the password in OLR.
The configuration data needed by Oracle Clusterware for IPMI is kept in an Oracle Wallet in OLR. Because the configuration information is kept in a secure store, it must be written by the Oracle Grid Infrastructure installation owner account (the Grid user), so you must log in as that installation user.
Use the following procedure to modify an existing IPMI configuration:
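As a hedged illustration of the CRSCTL portion of the change, after you update the password on the IPMI device itself, re-register the administrator (which prompts for the new password) and verify the stored address; the administrator name is a placeholder:
$ crsctl set css ipmiadmin ipmi_admin_name
$ crsctl get css ipmiaddr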
Removing IPMI Configuration Using CRSCTL
You can remove an IPMI configuration from a cluster using CRSCTL if you want to stop using IPMI completely or if IPMI was initially configured by someone other than the user who installed Oracle Clusterware.
If the latter is true, then Oracle Clusterware cannot access the IPMI configuration data and IPMI is not usable by the Oracle Clusterware software, and you must reconfigure IPMI as the user who installed Oracle Clusterware.
To completely remove IPMI, perform the following steps. To reconfigure IPMI as the user who installed Oracle Clusterware, perform steps 3 and 4, then repeat steps 3 and 4 in the previous section.
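As a sketch of the CRSCTL portion of the removal, the following command, run as the user who installed Oracle Clusterware, clears the IPMI configuration from the OLR:
$ crsctl unset css ipmiconfig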
Understanding Network Addresses on Manually Configured Networks
It is helpful to understand the concepts and requirements for network addresses on manually configured networks.
Understanding Network Address Configuration Requirements
An Oracle Clusterware configuration requires at least one public network interface and one private network interface.
- A public network interface connects users and application servers to access data on the database server.
- A private network interface is for internode communication and is used exclusively by Oracle Clusterware.
You can configure a public network interface for either IPv4, IPv6, or both types of addresses on a given network. If you use redundant network interfaces (bonded or teamed interfaces), then be aware that Oracle does not support configuring one interface to support IPv4 addresses and the other to support IPv6 addresses. You must configure network interfaces of a redundant interface pair with the same IP protocol.
You can configure one or more private network interfaces, using either IPv4 or IPv6 addresses for all the network adapters. You cannot mix IPv4 and IPv6 addresses for any private network interfaces.
Note:
You can only use IPv6 for private networks in clusters using Oracle Clusterware 12c release 2 (12.2) or later.
All the nodes in the cluster must use the same IP protocol configuration. Either all the nodes use only IPv4, or all the nodes use only IPv6, or all the nodes use both IPv4 and IPv6. You cannot have some nodes in the cluster configured to support only IPv6 addresses, and other nodes in the cluster configured to support only IPv4 addresses.
The VIP agent supports the generation of IPv6 addresses using the Stateless Address Autoconfiguration Protocol (RFC 2462), and advertises these addresses with GNS. Run the srvctl config network command to determine whether DHCP or stateless address autoconfiguration is being used.
About IPv6 Address Formats
Each node in an Oracle Grid Infrastructure cluster can support both IPv4 and IPv6 addresses on the same network. The preferred IPv6 address format is as follows, where each x represents a hexadecimal character:
xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
The IPv6 address format is defined by RFC 2460, and Oracle Grid Infrastructure supports IPv6 addresses as follows:
- Global and site-local IPv6 addresses as defined by RFC 4193.
Note: Link-local and site-local IPv6 addresses as defined in RFC 1884 are not supported.
- The leading zeros compressed in each field of the IP address.
- Empty fields collapsed and represented by a '::' separator. For example, you could write the IPv6 address 2001:0db8:0000:0000:0000:8a2e:0370:7334 as 2001:db8::8a2e:370:7334.
- The four lower order fields containing 8-bit pieces (standard IPv4 address format). For example, 2001:db8:122:344::192.0.2.33.
Name Resolution and the Network Resource Address Type
You can review the network configuration and control the network address type using the srvctl config network command (to review the configuration) and the srvctl modify network -iptype command, respectively.
You can configure how addresses are acquired using the srvctl modify network -nettype command. Set the value of the -nettype parameter to dhcp or static to control how IPv4 network addresses are acquired. Alternatively, set the value of the -nettype parameter to autoconfig or static to control how IPv6 addresses are generated.
The -nettype and -iptype parameters are not directly related, but you can use -nettype dhcp with -iptype ipv4, and -nettype autoconfig with -iptype ipv6.
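For example, a hedged sketch of reviewing the current settings and then pairing IPv6 with autoconfig, as described above:
$ srvctl config network
$ srvctl modify network -iptype ipv6 -nettype autoconfig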
Note:
If a network is configured with both IPv4 and IPv6 subnets, then Oracle does not support both subnets having -nettype set to mixed.
Oracle does not support making transitions from IPv4 to IPv6 while -nettype is set to mixed. You must first finish the transition from static to dhcp before you add IPv6 into the subnet.
Similarly, Oracle does not support starting a transition to IPv4 from IPv6 while -nettype is set to mixed. You must first finish the transition from autoconfig to static before you add IPv4 into the subnet.
Related Topics
Understanding SCAN Addresses and Client Service Connections
Public network addresses are used to provide services to clients.
If your clients are connecting to the Single Client Access Name (SCAN) addresses, then you may need to change public and virtual IP addresses as you add or remove nodes from the cluster, but you do not need to update clients with new cluster addresses.
Note:
You can edit the listener.ora file to make modifications to the Oracle Net listener parameters for SCAN and the node listener. For example, you can set TRACE_LEVEL_listener_name. However, you cannot set protocol address parameters to define listening endpoints, because the listener agent dynamically manages them.
SCANs function like a cluster alias. However, SCANs are resolved on any node in the cluster, so unlike a VIP address for a node, clients connecting to the SCAN no longer require updated VIP addresses as nodes are added to or removed from the cluster. Because the SCAN addresses resolve to the cluster, rather than to a node address in the cluster, nodes can be added to or removed from the cluster without affecting the SCAN address configuration.
The SCAN is a fully qualified name (host name and domain) that is configured to resolve to all the addresses allocated for the SCAN. The SCAN resolves to all three addresses configured for the SCAN name on the DNS server, or resolves within the cluster in a GNS configuration. SCAN listeners can run on any node in the cluster. SCANs provide location independence for the databases, so that client configuration does not have to depend on which nodes run a particular database.
Oracle Database instances only register with SCAN listeners as remote listeners. Upgraded databases register with SCAN listeners as remote listeners, and also continue to register with all node listeners.
Note:
Because of the Oracle Clusterware installation requirement that you provide a SCAN name during installation, if you resolved at least one IP address using the server /etc/hosts file to bypass the installation requirement but you do not have the infrastructure required for SCAN, then, after the installation, you can ignore the SCAN and connect to the databases in the cluster using VIPs.
Oracle does not support removing the SCAN address.
Related Topics
SCAN Listeners and Service Registration Restriction With Valid Node Checking
You can use valid node checking to specify the nodes and subnets from which the SCAN listener accepts registrations.
SRVCTL stores the node and subnet information in the SCAN listener resource profile. The SCAN listener agent reads that information from the resource profile and writes it to the listener.ora file.
Database instance registration with a listener succeeds only when the request originates from a valid node. The network administrator can specify a list of valid nodes, excluded nodes, or disable valid node checking altogether. The list of valid nodes explicitly lists the nodes and subnets that can register with the database. The list of excluded nodes explicitly lists the nodes that cannot register with the database. The control of dynamic registration results in increased manageability and security of Oracle RAC deployments.
By default, the SCAN listener agent sets REMOTE_ADDRESS_REGISTRATION_listener_name to a private IP endpoint. The SCAN listener accepts registration requests only from the private network. Remote nodes that are not accessible to the private network of the SCAN listener must be included in the list of valid nodes by using the registration_invited_nodes_alias parameter in the listener.ora file, or by modifying the SCAN listener using the command-line interface, SRVCTL.
Note:
Starting with Oracle Grid Infrastructure 12c, for a SCAN listener, if the VALID_NODE_CHECKING_REGISTRATION_listener_name and REGISTRATION_INVITED_NODES_listener_name parameters are set in the listener.ora file, then the listener agent overwrites these parameters.
If you use the SRVCTL utility to set the invitednodes and invitedsubnets values, then the listener agent automatically sets VALID_NODE_CHECKING_REGISTRATION_listener_name to SUBNET and sets REGISTRATION_INVITED_NODES_listener_name to the specified list in the listener.ora file.
For other listeners managed by CRS, the listener agent sets VALID_NODE_CHECKING_REGISTRATION_listener_name to SUBNET in the listener.ora file only if it is not already set in the listener.ora file. The SRVCTL utility does not support setting the invitednodes and invitedsubnets values for a non-SCAN listener. The listener agent does not update REGISTRATION_INVITED_NODES_listener_name in the listener.ora file for a non-SCAN listener.
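As a hedged illustration, using placeholder node names and a placeholder subnet, you might add entries to the valid node list and then restart the SCAN listener for the change to take effect:
$ srvctl modify scan_listener -invitednodes "node1,node2" -invitedsubnets "192.0.2.0/24"
$ srvctl stop scan_listener
$ srvctl start scan_listener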
Configuring Shared Single Client Access Names
A shared single client access name (SCAN) enables you to share one set of SCAN virtual IPs (VIPs) and listeners on a dedicated cluster with other clusters.
About Configuring Shared Single Client Access Names
You must configure the shared single client access name (SCAN) on both the database server and the database client.
Note:
Starting with Oracle Grid Infrastructure 23ai, Domain Services Clusters (DSC), which is part of the Oracle Cluster Domain architecture, are desupported.
The use of a shared SCAN enables multiple clusters to use a single common set of SCAN virtual IP (VIP) addresses to manage user connections, instead of deploying a set of SCAN VIPs per cluster. For example, instead of 10 clusters deploying 3 SCAN VIPs per cluster using a total of 30 IP addresses, with shared SCAN deployments, you only deploy 3 SCAN VIPs for those same 10 clusters, requiring only 3 IP addresses.
Be aware that SCAN VIPs (shared or otherwise) are required for Oracle Real Application Clusters (Oracle RAC) database clusters.
The general procedure for configuring shared SCANs is to use the srvctl utility to configure first on the server (that is, the cluster that hosts the shared SCAN), and then on the client (the Oracle RAC cluster that will use this shared SCAN). On the server, in addition to the configuration using srvctl, you must set environment variables, create a credential file, and ensure that the Oracle Notification Service (ONS) process that is specific to a SCAN cluster can access its own configuration directory to create and manage the ONS configuration.
Changing Network Addresses on Manually Configured Systems
You can perform network address maintenance on manually configured systems.
Changing the Virtual IP Addresses Using SRVCTL
You can use SRVCTL to change a virtual IP address.
Clients configured to use public VIP addresses for Oracle Database releases before Oracle Database 11g release 2 (11.2) can continue to use their existing connection addresses. Oracle recommends that you configure clients to use SCANs, but you are not required to use SCANs. When an earlier version of Oracle Database is upgraded, it is registered with the SCAN, and clients can start using the SCAN to connect to that database, or continue to use VIP addresses for connections.
If you continue to use VIP addresses for client connections, you can modify the VIP address while Oracle Database and Oracle ASM continue to run. However, you must stop services while you modify the address. When you restart the VIP address, services are also restarted on the node.
You cannot use this procedure to change a static public subnet to use DHCP. Only the srvctl add network -subnet command creates a DHCP network.
Note:
The following instructions describe how to change only a VIP address, and assume that the host name associated with the VIP address does not change. Note that you do not need to update VIP addresses manually if you are using GNS, and VIPs are assigned using DHCP.
If you are changing only the VIP address, then update the DNS and the client hosts files. Also, update the server hosts files, if those are used for VIP addresses.
Perform the following steps to change a VIP address:
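The steps follow this general shape; the following is a hedged sketch with placeholder database, service, node, and address values, not a substitute for the full procedure. Update the DNS entry and any client or server hosts file entries for the new VIP address before restarting the VIP; the srvctl modify nodeapps command must be run as root:
$ srvctl stop service -db mydb -service sales -node node1
$ srvctl stop vip -node node1 -force
# srvctl modify nodeapps -node node1 -address 192.0.2.125/255.255.255.0/eth0
$ srvctl start vip -node node1
$ srvctl start service -db mydb -service sales -node node1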
Changing Oracle Clusterware Private Network Configuration
You can make changes to the Oracle Clusterware private network configuration.
About Private Networks and Network Interfaces
Oracle Clusterware requires that each node is connected through a private network (in addition to the public network). The private network connection is referred to as the cluster interconnect.
Table 2-1 describes how the network interface card and the private IP address are stored.
Oracle only supports clusters in which all of the nodes use the same network interface connected to the same subnet (defined as a global interface with the oifcfg command). You cannot use different network interfaces for each node (node-specific interfaces).
Table 2-1 Storage for the Network Interface, Private IP Address, and Private Host Name

| Entity | Stored In... | Comments |
|---|---|---|
| Network interface name | Operating system. For example: eth1 | You can use wildcards when specifying network interface names. For example: eth* |
| Private network interfaces | Oracle Clusterware, in the Grid Plug and Play (GPnP) profile | Configure an interface for use as a private interface during installation by marking the interface as Private, or use the oifcfg setif command to designate an interface as a private interface. |
Redundant Interconnect Usage
You can define multiple interfaces for Redundant Interconnect Usage by classifying the role of interfaces as private, either during installation or after installation by using the oifcfg setif command.
When you do, Oracle Clusterware creates from one to four (depending on the number of interfaces you define) highly available IP (HAIP) addresses, which Oracle Database and Oracle ASM instances use to ensure highly available and load balanced communications.
The Oracle software (including Oracle RAC, Oracle ASM, and Oracle ACFS), by default, uses the HAIP addresses of the interfaces designated with the private role for all of its traffic, enabling load balancing across the provided set of cluster interconnect interfaces. If one of the defined cluster interconnect interfaces fails or becomes non-communicative, then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.
For example, if, after installation, you add a new interface to a server named eth3 with the subnet number 172.16.2.0, then use the following command to make this interface available to Oracle Clusterware for use as a private interface:
$ oifcfg setif -global eth3/172.16.2.0:cluster_interconnect
While Oracle Clusterware brings up a HAIP address on eth3 of 169.254.*.* (which is the reserved subnet for HAIP), and the database, Oracle ASM, and Oracle ACFS use that address for communication, Oracle Clusterware also uses the 172.16.2.0 address for its own communication.
Caution:
Do not use OIFCFG to classify HAIP subnets (169.254.*.*). You can use OIFCFG to record the interface name, subnet, and type (public, cluster interconnect, or Oracle ASM) for Oracle Clusterware. However, you cannot use OIFCFG to modify the actual IP address for each interface.
Note:
Oracle Clusterware uses at most four interfaces at any given point, regardless of the number of interfaces defined. If one of the interfaces fails, then the HAIP address moves to another one of the configured interfaces in the defined set.
When there is only a single HAIP address and multiple interfaces from which to select, the interface to which the HAIP address moves is no longer the original interface upon which it was configured. Oracle Clusterware selects the interface with the lowest numeric subnet to which to add the HAIP address.
Consequences of Changing Interface Names Using OIFCFG
The consequences of changing interface names depend on which name you are changing, and whether you are also changing the IP address.
In cases where you are only changing the interface names, the consequences are minor. If you change the name for the public interface that is stored in OCR, then you also must modify the node applications for the cluster. Therefore, you must stop the node applications for this change to take effect.
Changing a Network Interface
You can change a network interface and its associated subnet address by using the OIFCFG command.
This procedure changes the network interface and IP address on each node in the cluster used previously by Oracle Clusterware and Oracle Database.
Caution:
The interface that the Oracle RAC (RDBMS) interconnect uses must be the same interface that Oracle Clusterware uses with the host name. Do not configure the private interconnect for Oracle RAC on a separate interface that is not monitored by Oracle Clusterware.
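A hedged sketch of the typical flow follows, using placeholder interface names and subnets; consult the OIFCFG reference for the authoritative, ordered procedure, because Oracle Clusterware must be restarted on each node for interconnect changes to take effect:
$ oifcfg getif
$ oifcfg setif -global eth2/192.0.2.0:cluster_interconnect
$ oifcfg delif -global eth1/10.0.0.0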
Creating a Network Using SRVCTL
You can use SRVCTL to create a network for a cluster member node, and to add application configuration information.
Create a network for a cluster member node, as follows:
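For example, a hedged sketch that creates an additional network with a placeholder network number, subnet, and interface, run as root:
# srvctl add network -netnum 3 -subnet 192.0.2.0/255.255.255.0/eth2 -nettype static
You can then add VIPs and application configuration that reference the new network number.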
Network Address Configuration in a Cluster
You can configure a network interface for either IPv4, IPv6, or both types of addresses on a given network.
If you configure redundant network interfaces using a third-party technology, then Oracle does not support configuring one interface to support IPv4 addresses and the other to support IPv6 addresses. You must configure network interfaces of a redundant interface pair with the same IP address type. If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.
All the nodes in the cluster must use the same IP protocol configuration. Either all the nodes use only IPv4, or all the nodes use only IPv6, or all the nodes use both IPv4 and IPv6. You cannot have some nodes in the cluster configured to support only IPv6 addresses, and other nodes in the cluster configured to support only IPv4 addresses.
The local listener listens on endpoints based on the address types of the subnets configured for the network resource. Possible types are IPV4, IPV6, or both.
Changing Static IPv4 Addresses To Static IPv6 Addresses Using SRVCTL
When you change from IPv4 static addresses to IPv6 static addresses, you add an IPv6 address and modify the network to briefly accept both IPv4 and IPv6 addresses, before switching to using static IPv6 addresses, only.
Note:
If the IPv4 network is in mixed mode with both static and dynamic addresses, then you cannot perform this procedure. You must first transition all addresses to static.
To change a static IPv4 address to a static IPv6 address:
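The numbered steps reduce to the following hedged sketch, where the IPv6 subnet is a placeholder; update your DNS and SCAN entries between the second and third commands:
$ srvctl modify network -subnet 2001:db8:122:344::/64
$ srvctl modify network -iptype both
$ srvctl modify network -iptype ipv6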
Changing Dynamic IPv4 Addresses To Dynamic IPv6 Addresses Using SRVCTL
You change dynamic IPv4 addresses to dynamic IPv6 addresses by using the SRVCTL command.
Note:
If the IPv4 network is in mixed mode with both static and dynamic addresses, then you cannot perform this procedure. You must first transition all addresses to dynamic.
To change dynamic IPv4 addresses to dynamic IPv6 addresses:
Related Topics
Changing an IPv4 Network to an IPv4 and IPv6 Network
You can change an IPv4 network to an IPv4 and IPv6 network by adding an IPv6 network to an existing IPv4 network.
This process is described in Steps 1 through 5 of the procedure documented in "Changing Static IPv4 Addresses To Static IPv6 Addresses Using SRVCTL".
After you complete those steps, log in as the Grid user, and run the following command:
$ srvctl status scan
Review the output to confirm the changes to the SCAN VIPs.
Transitioning from IPv4 to IPv6 Networks for VIP Addresses Using SRVCTL
You use the SRVCTL command to remove an IPv4 address type from a combined IPv4 and IPv6 network.
Enter the following command:
# srvctl modify network -iptype ipv6
This command starts the removal process of IPv4 addresses configured for the cluster.