Before You Begin
Ensure that the /etc/netmasks file has IP-address subnet and netmask entries for all logical hostnames. If necessary, edit the /etc/netmasks file to add any missing entries.
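For example, if the logical hostnames were on a hypothetical 192.168.10.0 subnet with a 24-bit netmask (both values are assumptions for illustration only), the corresponding /etc/netmasks entry would look like this:

192.168.10.0    255.255.255.0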
# clresourcetype register ORCL.otd
You can optionally select the set of nodes or zones on which the data service can run with the -n option.
# clresourcegroup create [-n node[,…]] resource-group
resource-group
Specifies the name of the failover resource group. This name can be your choice but must be unique for resource groups within the cluster.
-n node[,…]
Specifies a comma-separated, ordered list of nodes that can master this resource group.
This list is optional. If you omit this list, the global zone of each cluster node can master the resource group.
You should have performed this verification during the Oracle Solaris Cluster installation. See the planning chapter in the Oracle Solaris Cluster Software Installation Guide for details.
# clressharedaddress create -g resource-group \
-h shared-address[,…] \
[-N netiflist] \
resource
-g resource-group
Specifies the name of the failover resource group.
-h shared-address[,…]
Specifies a comma-separated list of shared addresses that this resource is to make available.
-N netiflist
Specifies an optional, comma-separated list that identifies the IPMP groups that are on each node or zone. The format of each entry in the list is netif@node. The replaceable items in this format are as follows:
netif
Specifies an IPMP group name, such as sc_ipmp0, or a public network interface card (NIC). If you specify a public NIC, Oracle Solaris Cluster attempts to create the required IPMP groups.
node
Specifies the name or ID of a node. To specify the global zone, or to specify a node without non-global zones, specify only node[,…].
This list is optional. If you omit this list, Oracle Solaris Cluster attempts to create the required IPMP groups.
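For example, assuming a hypothetical two-node cluster with nodes phys-schost-1 and phys-schost-2, each with an IPMP group named sc_ipmp0 (the node names here are assumptions for illustration), the netiflist would be specified as:

-N sc_ipmp0@phys-schost-1,sc_ipmp0@phys-schost-2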
resource
Specifies the name of the resource.
# clresourcegroup online -eM resource-group
resource-group
Specifies the name of the failover resource group.
Create a resource group to hold a data service application resource. You must specify the maximum and desired number of primary nodes as well as a dependency between this resource group and the failover resource group that you created in Step 3. This dependency ensures that in the event of failover, the resource manager starts the network resource before starting any data services that depend on the network resource.
# clresourcegroup create -p Maximum_primaries=m \
-p Desired_primaries=n \
scalable-resource-group
Maximum_primaries=m
Specifies the maximum number of active primary nodes allowed for this resource group. If you do not assign a value to this property, the default is 1.
Desired_primaries=n
Specifies the desired number of active primary nodes allowed for this resource group. If you do not assign a value to this property, the default is 1.
scalable-resource-group
Specifies the name of the scalable resource group.
Alternatively, use the -S option to create the scalable resource group in one step:

# clresourcegroup create -S resource-group
You can repeat this step to add multiple application resources, such as secure and insecure versions, to the same resource group.
To set load balancing for the data service, use the two standard resource properties Load_balancing_policy and Load_balancing_weights. See the r_properties(5) man page for a description of these properties. Additionally, see the examples that follow this section.
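As a sketch, assuming an existing scalable resource named otd-rs on a cluster whose nodes have IDs 1 and 2 (both names and IDs are assumptions for illustration), you could direct three times as many client connections to node 1 as to node 2 under the default weighted policy:

# clresource set -p Load_balancing_weights=3@1,1@2 otd-rs

Note that Load_balancing_policy (for example, LB_WEIGHTED, the default, or LB_STICKY) is tunable only at creation, whereas Load_balancing_weights can be adjusted on a running resource.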
# clresource create -g scalable-resource-group \
-t resource-type \
-p ORACLE_HOME=oracle-traffic-director-installation-directory \
-p INSTANCE_HOME=instance-directory \
-p Resource_dependencies=shared-address[,…] \
-p Port_list=port-number/protocol[,…] \
-p Scalable=True \
resource
-g scalable-resource-group
Specifies the name of the scalable resource group into which the resources are to be placed.
-t resource-type
Specifies the type of the resource to add.
ORACLE_HOME=oracle-traffic-director-installation-directory
Specifies the directory where the Oracle Traffic Director software is installed. This is a per-node setting; if the location differs between nodes, qualify each value with the node name. For example:
-p ORACLE_HOME{node1}=oracle-traffic-director-installation-directory-node1 \
-p ORACLE_HOME{node2}=oracle-traffic-director-installation-directory-node2
INSTANCE_HOME=instance-directory
Specifies the directory where the Oracle Traffic Director instance configuration is located. This is a per-node setting; if the location differs between nodes, qualify each value with the node name. For example:
-p INSTANCE_HOME{node1}=instance-directory-node1 \
-p INSTANCE_HOME{node2}=instance-directory-node2
Resource_dependencies=shared-address[,…]
Specifies a comma-separated list of network resources that identify the shared addresses that the data service uses.
Port_list=port-number/protocol[,…]
Specifies a comma-separated list of port-number/protocol pairs to be used, for example, 80/tcp,81/tcp.
Scalable=True
Specifies that the resource is scalable. This Boolean property is required for scalable services.
resource
Specifies the name of the resource to add.
The resource is created in the enabled state.
# clresourcegroup online -eM resource-group
resource-group
Specifies the name of the scalable resource group.
This example creates an Oracle Traffic Director resource named otd-rs in a scalable resource group named otd-rg, which is configured to run simultaneously on all four nodes of a four-node cluster.
The Oracle Traffic Director instances are configured to listen on port 80 and use the IP addresses configured in a SharedAddress resource named sa-rs, which is contained in the resource group sa-rg. The hostname otd-a-sa is configured in the naming service used by the cluster and by any clients that will access the server instances.
To create the shared address resource group and resource for this example, do the following:
# clresourcegroup create sa-rg
# clressharedaddress create -g sa-rg -h otd-a-sa sa-rs
# clresourcegroup online -eM sa-rg
To create the Oracle Traffic Director resource group and resource, do the following:
# clresourcegroup create -S otd-rg
# clresourcetype register ORCL.otd
# clresource create -g otd-rg -t ORCL.otd \
-p ORACLE_HOME=/global/otd/otd-home \
-p INSTANCE_HOME{node1}=/global/otd/otd-1/net-otd-a \
-p INSTANCE_HOME{node2}=/global/otd/otd-2/net-otd-a \
-p INSTANCE_HOME{node3}=/global/otd/otd-3/net-otd-a \
-p INSTANCE_HOME{node4}=/global/otd/otd-4/net-otd-a \
-p Resource_dependencies_offline_restart=otd-gfs-rs \
-p Resource_dependencies=sa-rs \
-p Port_List=80/tcp \
-p Scalable=True \
otd-rs
# clresourcegroup online -eM otd-rg