Reference for Oracle Solaris Cluster 4.4

Updated: August 2018

scinstall (8)

Name

scinstall - initialize Oracle Solaris Cluster software and establish new cluster nodes

Synopsis

/usr/cluster/bin/scinstall -i -F [-C clustername] 
     [-T authentication-options] [-o] [-A adapter-options] 
     [-B switch-options] [-m cable-options] [-w netaddr-options]
/usr/cluster/bin/scinstall -i -N cluster-member [-C clustername] 
     [-A adapter-options] [-B switch-options] [-m cable-options]
/usr/cluster/bin/scinstall -c net-image-source -U password-file 
     -h nodename -n nodeip-mac-options -W software-specs -F 
     [-C clustername] [-T authentication-options] [-A adapter-options] 
     [-B switch-options] [-m cable-options] [-w netaddr-options]
/usr/cluster/bin/scinstall -c net-image-source -U password-file 
     -h nodename -n nodeip-mac-options -W software-specs 
     -N cluster-member [-C clustername] [-A adapter-options] 
     [-B switch-options] [-m cable-options]
/usr/cluster/bin/scinstall -c archive=archive-location[::cert=cert-file::
     key=key-file],action=initial -U password-file -h nodename
     -n nodeip-mac-options -F [-C clustername] [-f hostnames-map-file]
     [-T authentication-options] [-A adapter-options] 
     [-B switch-options] [-m cable-options] [-o] [-w netaddr-options]
/usr/cluster/bin/scinstall -c archive=archive-location[::cert=cert-file::
     key=key-file],action=initial -U password-file -h nodename
     -n nodeip-mac-options -N cluster-member [-C clustername] [-f hostnames-map-file]
     [-T authentication-options] [-A adapter-options] 
     [-B switch-options] [-m cable-options] [-o] [-w netaddr-options]
/usr/cluster/bin/scinstall -c archive=archive-location[::cert=cert-file::
     key=key-file],action=restore -h nodename [-F [-o]]
     -C clustername -n nodeip-mac-options [-T secureAI=yes]
/usr/cluster/bin/scinstall -c archive=archive-location[::cert=cert-file::
     key=key-file],action=replicate -h nodename [-F [-o]] 
     -C clustername -n nodeip-mac-options
     [-T node=archive-source-node::node-to-install[,…][,secureAI=yes]]
     [-f hostnames-map-file] [-w netaddr-options] -U password-file
/usr/cluster/bin/scinstall -u update-modes [update-options]
/usr/cluster/bin/scinstall -u update update-options [pkg_fmri_pattern …]
/usr/cluster/bin/scinstall -r [-N cluster-member] [-b be-name]
/usr/cluster/bin/scinstall -p [-v]

Description

The scinstall command performs a number of Oracle Solaris Cluster node creation and update tasks, as follows.

  • The "initialize" form (–i) of scinstall establishes a node as a new Oracle Solaris Cluster configuration member. It either establishes the first node in a new cluster (–F) or adds a node to an already-existing cluster (–N). Always run this form of the scinstall command from the node that is creating the cluster or is being added to the cluster.

  • The "add install client" form (–c) of scinstall establishes the specified nodename as a custom Automated Installer (AI) client on the AI install server from which the command is run. Always run this form of the scinstall command from the AI install server.

  • The "remove" form (–r) of scinstall removes cluster configuration information and uninstalls Oracle Solaris Cluster software from a cluster node.

  • The "update" form (–u) of scinstall, which has multiple modes and options, updates an Oracle Solaris Cluster node to a new release. This process was formerly called a software upgrade. Always run this form of the scinstall command from the node being updated.

  • The "print release" form (–p) of scinstall prints release and package versioning information for the Oracle Solaris Cluster software that is installed on the node from which the command is run.

Without options, the scinstall command attempts to run in interactive mode.

Run all forms of the scinstall command other than the "print release" form (–p) as the root role.

The ha-cluster/system/install software package includes a copy of the scinstall command.

You can run this command only from the global zone.

Options

Basic Options

The following options direct the basic form and function of the command.

None of the following options can be combined on the same command line.

–c

Specifies the "add install client" form of the scinstall command. This option establishes the specified nodename as a custom Automated Installer (AI) client on the AI server where you issued the command. This –c option accepts two specifications: -c net-image-source and -c archive=archive-location[::cert=cert-file::key=key-file],action={initial/restore|replicate}.

You can use this option only in the global zone.

You must specify the net-image-source when you use AI to install the Oracle Solaris and Oracle Solaris Cluster software packages from IPS repositories and configure a new cluster. The source can be a repository from which the install-image/solaris-auto-install IPS package is retrieved, based on the architecture of the cluster nodes (SPARC or i386):

-c publisher=repo[::cert=cert-file::key=key-file],arch={sparc|i386}

The net-image-source can also be an AI ISO image file for the Oracle Solaris release. The file must be accessible from an already-established AI server that is configured to install the cluster nodes: -c iso-file.

Use the archive=archive-location,action={initial|restore|replicate} form when you use Unified Archives to automatically install a cluster or restore cluster nodes. This form specifies the location of the Unified Archives, which can be the full path to an archive file on a file system that is accessible from the AI server, or an HTTP or HTTPS location. If you are accessing an HTTPS location, you must specify the SSL key and certificate files. You must also specify the intended use of the archive: to configure a new cluster (action=initial), restore a node (action=restore), or replicate a new cluster from an existing cluster that has the same hardware configuration (action=replicate). When you use the restore action, the archive must be a recovery type of archive that was previously created on the same node that you want to restore.
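For example, the following archive specification (the server name and file paths are placeholders) points to a recovery archive at an HTTPS location and supplies the SSL certificate and key files that are needed to access it:

-c archive=https://archive-server.example.com/export/phys-schost-1-recovery.uar::cert=/var/tmp/cert.pem::key=/var/tmp/key.pem,action=restore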

This form of the command enables fully-automated cluster installation from an AI server by helping to establish each cluster node, or nodename, as a custom AI client on an already-established Automated Installer install server.

For Oracle Solaris Cluster, you can customize the AI manifest file. See How to Install and Configure Oracle Solaris and Oracle Solaris Cluster Software (IPS Repositories) in Installing and Configuring an Oracle Solaris Cluster 4.4 Environment and Automatically Installing Oracle Solaris 11.4 Systems.

Before you use the scinstall command to set up a node as a custom Oracle Solaris Cluster AI client, you must first establish the AI installation server. For more information about setting up an AI install server, see Chapter 3, Setting Up the AI Server in Automatically Installing Oracle Solaris 11.4 Systems.
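As an illustration only, an AI install service might be created on the AI server with a command such as the following, where the service name and ISO path are placeholders:

# installadm create-service -n sol11u4-sparc -s /export/isos/sol-11_4-ai-sparc.iso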

–i

Specifies the "initialize" form of the scinstall command. This form of the command establishes a node as a new cluster member. The new node is the node from which you issue the scinstall command.

You can use this option only in the global zone.

If the –F option is used with –i, scinstall establishes the node as the first node in a new cluster.

If the –o option is used with the –F option, scinstall establishes a single-node cluster.

If the –N option is used with –i, scinstall adds the node to an already-existing cluster.

–p

Prints release and package versioning information for the Oracle Solaris Cluster software that is installed on the node from which the command is run. This is the only form of scinstall that you can run as a non-root user.

You can use this option only in the global zone.

–r

Removes cluster configuration information and uninstalls Oracle Solaris Cluster framework and data-service software from a cluster node. You can then reinstall the node or remove the node from the cluster. You must run the command on the node that you are uninstalling, from a directory that is not used by the cluster software. The node must be in noncluster mode.

You can use this option only in the global zone.

–u

Updates Oracle Solaris Cluster software on the node from which you invoke the scinstall command. The update form of scinstall has multiple modes of operation, as specified by update-mode. See Update Options below for information specific to the type of update that you intend to perform.

You can use this option only in the global zone.

Additional Options

You can combine additional options with the basic options to modify the default behavior of each form of the command. Refer to the SYNOPSIS section for additional details about which of these options are legal with which forms of the scinstall command.

The following additional options are supported:

–b be-name

Specifies the name to assign the new boot environment (BE). The –b option is only legal with the "remove" (–r) form of the command and with the "update" (–u update) form of the command. When used with the –u update form of the command, the –b option is not legal combined with the –R option.

If you do not specify this option, scinstall assigns the name of the new BE. This name is based on the name of the current BE, of the form currentBE-N, where the suffix -N is an incremented number.

The first new BE is named currentBE-1, the next new BE is named currentBE-2, and so forth. If a BE is deleted, its name is not reused for the next new BE when a BE name with a higher suffix number exists. For example, if BEs sc4.4, sc4.4-1, and sc4.4-2 exist, and sc4.4-1 is deleted, the next new BE is named sc4.4-3.

If you specify a BE name that already exists, the command exits with an error.
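For reference, a beadm list run on the node shows the existing BE names and which BE is active. The following output is purely illustrative; the names, sizes, and dates are invented, and the R flag marks the BE that becomes active at the next reboot:

# beadm list
BE       Active Mountpoint Space Policy Created
sc4.4    N      /          4.21G static 2018-08-01 10:05
sc4.4-1  R      -          4.53G static 2018-08-15 09:30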

–h nodename

Specifies the node name. The –h option is only legal with the "add install client" (–c) form of the command.

The nodename is the name of the cluster node (that is, AI install client) to set up for custom AI installation.

–v

Prints release information in verbose mode. The –v option is only legal with the "print release" (–p) form of the command to specify verbose mode.

In the verbose mode of "print release," the version string for each installed Oracle Solaris Cluster software package is also printed.
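For example, the following command prints the release string followed by the version of each installed Oracle Solaris Cluster package:

# /usr/cluster/bin/scinstall -p -v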

–F [config-options]

Establishes the first node in the cluster. The –F option is only legal with the "initialize" (–i) or "add install client" (–c) forms of the command.

The establishment of secondary nodes will be blocked until the first node is fully instantiated as a cluster member and is prepared to perform all necessary tasks that are associated with adding new cluster nodes. If the –F option is used with the –o option, a single-node cluster is created and no additional nodes can be added during the cluster-creation process.

–f hostnames-map-file

Specifies a text file that contains pairs of old and new hostnames. Use this file when you replicate a cluster from another cluster's archives, or when you use a recovery archive with the initial action to form a new cluster. The file can contain multiple lines, each with two columns. The first column is the hostname or IP address used in the source cluster where the archives were created. The second column is the corresponding hostname or IP address for the new cluster. These hostnames can be used for logical hostnames, shared address resources, and zone clusters. For example:

source-cluster-zc-hostname1    target-cluster-zc-hostname1
source-cluster-zc-hostname2    target-cluster-zc-hostname2
source-cluster-lh1             target-cluster-lh1
source-cluster-lh2             target-cluster-lh2

You can use this option only in the global zone.

–N cluster-member [config-options]

Specifies the cluster member. The –N option is only legal with the "initialize" (–i), "add install client" (–c), or "remove" (–r) forms of the command.

Before you use the –N option with the –i option, you must first run the clauth enable -n control-node command on the cluster-member to be specified to the –N option. This command authorizes acceptance of commands from the control-node. The clauth command does not need to be run before using the –N option with the –c option. For more information, see the clauth(8CL) man page.

When used with the –i or –c option, the –N option is used to add additional nodes to an existing cluster. The specified cluster-member is typically the name of the first cluster node that is established for the cluster. However, the cluster-member can be the name of any cluster node that already participates as a cluster member. The node that is being initialized is added to the cluster of which cluster-member is already an active member. The process of adding a new node to an existing cluster involves updating the configuration data on the specified cluster-member, as well as creating a copy of the configuration database onto the local file system of the new node.

When used with the –r option, the –N option specifies the cluster-member, which can be any other node in the cluster that is an active cluster member. The scinstall command contacts the specified cluster-member to make updates to the cluster configuration. If the –N option is not specified, scinstall makes a best attempt to find an existing node to contact.

Configuration Options

The following config-options are used with the –F or –N option.

/usr/cluster/bin/scinstall {–i | –c net-image-source –U password-file 
     –h nodename -n nodeip-mac-options -W software-specs} –F 
     [–C clustername] [–T authentication-options] [–A adapter-options] 
     [–B switch-options] 
     [–m endpoint=[this-node]:name[@port],endpoint=[node:]name[@port]] 
     [–o] [–w netaddr-options]
/usr/cluster/bin/scinstall {–i | –c net-image-source –U password-file 
     –h nodename -n nodeip-mac-options -W software-specs} 
     –N cluster-member [–C clustername] [–A adapter-options] 
     [–B switch-options] [–m cable-options]

–m cable-options

Specifies the cluster interconnect connections. This option is only legal when the –F or –N option is also specified.

The –m option helps to establish the cluster interconnect topology by configuring the cables connecting the various ports found on the cluster transport adapters and switches. Each new cable configured with this form of the command establishes a connection from a cluster transport adapter on the current node to either a port on a cluster transport switch or an adapter on another node already in the cluster.

If you specify no –m options, the scinstall command attempts to configure a default cable. However, if you configure more than one transport adapter or switch with a given instance of scinstall, it is not possible for scinstall to construct a default. The default is to configure a cable from the singly-configured transport adapter to the singly-configured (or default) transport switch.

The –m cable-options are as follows.

–m endpoint=[this-node]:name[@port],endpoint=[node:]name[@port]

As the syntax for the –m option shows, at least one of the two endpoints must be an adapter on the node that is being configured. For that endpoint, you do not need to specify this-node explicitly. The following example adds a cable:

–m endpoint=:net1,endpoint=switch1

In this example, port 0 of the net1 transport adapter on this node, the node that scinstall is configuring, is cabled to a port on transport switch switch1. The port number that is used on switch1 defaults to the node ID number of this node.

You must always specify two endpoint options with each occurrence of the –m option. The name component of the option argument specifies the name of either a cluster transport adapter or a cluster transport switch at one of the endpoints of a cable.

  • If you specify the node component, the name is the name of a transport adapter.

  • If you do not specify the node component, the name is the name of a transport switch.

If you specify no port component, the scinstall command attempts to assume a default port name. The default port for an adapter is always 0. The default port name for a switch endpoint is equal to the node ID of the node being added to the cluster.

Refer to the clinterconnect(8CL) man page for more information regarding port assignments and other requirements.

Before you can configure a cable, you must first configure the adapters and/or switches at each of the two endpoints of the cable (see –A and –B).
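For example, the following combination of options (adapter and switch names are illustrative) configures two adapters and two switches, then cables each adapter to its switch with one –m option per cable:

-A trtype=dlpi,name=net1 -A trtype=dlpi,name=net2 \
-B type=switch,name=switch1 -B type=switch,name=switch2 \
-m endpoint=:net1,endpoint=switch1 \
-m endpoint=:net2,endpoint=switch2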

–n nodeip-mac-options

Specifies the IP address and MAC address of the node. This option is only legal when the –c option is also specified.

The –n nodeip-mac-options syntax is as follows:

-n ip=node-ipaddr/N,mac=mac-address
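For example, with placeholder address values, the following specifies that the install client's IP address is 192.168.10.21 with a 24-bit prefix and gives the MAC address of its network interface:

-n ip=192.168.10.21/24,mac=00:14:4f:0a:0b:0c
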
–o

Specifies the configuration of a single-node cluster. This option is only legal when the –i and –F options are also specified.

Other –F options are supported but are not required. If the cluster name is not specified, the name of the node is used as the cluster name. You can specify transport configuration options, which will be stored in the CCR. Once a single-node cluster is established, it is not necessary to configure a quorum device or to disable installmode.

–w netaddr-options

Specifies the network address for the private interconnect, or cluster transport. This option is only legal when the –F option is also specified.

Use this option to specify a private-network address for use on the private interconnect. You can use this option when the default private-network address collides with an address that is already in use within the enterprise. You can also use this option to customize the size of the IP address range that is reserved for use by the private interconnect. For more information, see the networks(5) and netmasks(5) man pages.

If not specified, the default network address for the private interconnect is 172.16.0.0. The default netmask is 255.255.240.0. This IP address range supports up to 62 nodes, 10 private networks, and 12 zone clusters.

The –w netaddr-options are as follows:

-w netaddr=netaddr[,netmask=netmask]

-w netaddr=netaddr[,maxnodes=nodes,maxprivatenets=maxprivnets,\
numvirtualclusters=zoneclusters]

-w netaddr=netaddr[,netmask=netmask,maxnodes=nodes,\
maxprivatenets=maxprivnets,numvirtualclusters=zoneclusters]

netaddr=netaddr

Specifies the private network address. The last two octets of this address must always be zero.

[netmask=netmask]

Specifies the netmask. The specified value must provide an IP address range that is greater than or equal to the default.

To assign a smaller IP address range than the default, specify the maxnodes, maxprivatenets, and numvirtualclusters operands.

[,maxnodes=nodes,maxprivatenets=maxprivnets,numvirtualclusters=zoneclusters]

Specifies the maximum number of nodes, private networks, and zone clusters that the cluster is ever expected to have. The command uses these values to calculate the minimum netmask that the private interconnect requires to support the specified number of nodes, private networks, and zone clusters. The maximum value for nodes is 62 and the minimum value is 2. The maximum value for maxprivnets is 128 and the minimum value is 2. You can set a value of 0 for zoneclusters.

[,netmask=netmask,maxnodes=nodes,maxprivatenets=maxprivnets,numvirtualclusters=zoneclusters]

Specifies the netmask and the maximum number of nodes, private networks, and zone clusters that the cluster is ever expected to have. You must specify a netmask that can sufficiently accommodate the specified number of nodes, privnets, and zoneclusters. The maximum value for nodes is 62 and the minimum value is 2. The maximum value for privnets is 128 and the minimum value is 2. You can set a value of 0 for zoneclusters.

If you specify only the netaddr suboption, the command assigns the default netmask of 255.255.240.0. The resulting IP address range accommodates up to 62 nodes, 10 private networks, and 12 zone clusters.

To change the private-network address or netmask after the cluster is established, use the cluster command or the clsetup utility.
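For example, the following hypothetical specification reserves an IP address range sized for at most 16 nodes, 4 private networks, and 2 zone clusters; the command calculates the minimum netmask that accommodates these values:

-w netaddr=172.16.0.0,maxnodes=16,maxprivatenets=4,numvirtualclusters=2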

–A adapter-options

Specifies the transport adapter and, optionally, its transport type. This option is only legal when the –F or –N option is also specified. Tagged-VLAN adapters are not supported when the –c option is also specified.

Each occurrence of the –A option configures a cluster transport adapter that is attached to the node from which you run the scinstall command.

If no –A options are specified, an attempt is made to use a default adapter and transport type. The default transport type is dlpi. On the SPARC platform, the default adapter is hme1.

When the adapter transport type is dlpi, you do not need to specify the trtype suboption. In this case, you can use either of the following two forms to specify the –A adapter-options:

–A [trtype=type,]name=adaptername[,vlanid=vlanid][,other-options]
–A adaptername

[trtype=type]

Specifies the transport type of the adapter. Use the trtype option with each occurrence of the –A option for which you want to specify the transport type of the adapter. An example of a transport type is dlpi.

The default transport type is dlpi.

name=adaptername

Specifies the adapter name. You must use the name suboption with each occurrence of the –A option to specify the adaptername. An adaptername is constructed from a device name that is immediately followed by a physical-unit number, for example, hme0.

If you specify no other suboptions with the –A option, you can specify the adaptername as a standalone argument to the –A option, as –A adaptername.

vlanid=vlanid

Specifies the VLAN ID of the tagged-VLAN adapter.
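For example, the following option (the adapter name and VLAN ID are illustrative) configures tagged-VLAN adapter net2 with VLAN ID 53 as a cluster transport adapter:

-A trtype=dlpi,name=net2,vlanid=53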

[other-options]

Specifies additional adapter options. When a particular adapter provides any other options, you can specify them by using the –A option.

–B switch-options

Specifies the transport switch, also called transport junction. This option is only legal when the –F or –N option is also specified.

Each occurrence of the –B option configures a cluster transport switch. Examples of such devices can include, but are not limited to, Ethernet switches, other switches of various types, and rings.

If you specify no –B options, scinstall attempts to add a default switch at the time that the first node is instantiated as a cluster node. When you add additional nodes to the cluster, no additional switches are added by default. However, you can add them explicitly. The default switch is named switch1, and it is of type switch.

When the switch type is type switch, you do not need to specify the type suboption. In this case, you can use either of the following two forms to specify the –B switch-options.

-B [type=type,]name=name[,other-options]
-B name

If a cluster transport switch is already configured for the specified switch name, scinstall prints a message and ignores the –B option.

If you use directly-cabled transport adapters, you are not required to configure any transport switches. To avoid configuring default transport switches, use the following special –B option:

–B type=direct

[type=type]

Specifies the transport switch type. You can use the type option with each occurrence of the –B option. An Ethernet switch is an example of a cluster transport switch of type switch. See the clinterconnect(8CL) man page for more information.

You can specify the type suboption as direct to suppress the configuration of any default switches. Switches do not exist in a transport configuration that consists of only directly connected transport adapters. When the type suboption is set to direct, you do not need to use the name suboption.

name=name

Specifies the transport switch name. Unless the type is direct, you must use the name suboption with each occurrence of the –B option to specify the transport switch name. The name can be up to 256 characters in length and is made up of either letters or digits, with the first character being a letter. Each transport switch name must be unique across the namespace of the cluster.

If no other suboptions are needed with –B, you can give the switch name as a standalone argument to –B (that is, –B name).

[other-options]

Specifies additional transport switch options. When a particular switch type provides other options, you can specify them with the –B option. Refer to the clinterconnect(8CL) man page for information about any special options that you might use with the switches.

–C clustername

Specifies the name of the cluster. This option is only legal when the –F or –N option is also specified.

  • If the node that you configure is the first node in a new cluster, the default clustername is the same as the name of the node that you are configuring.

  • If the node that you configure is being added to an already-existing cluster, the default clustername is the name of the cluster to which cluster-member already belongs.

It is an error to specify a clustername that is not the name of the cluster to which cluster-member belongs.

–T authentication-options

Specifies node-authentication options for the cluster. This option is only legal when the –F option is also specified.

Use this option to establish the authentication list of nodes that can add themselves to the cluster configuration. Specifically, when a machine requests to be added to the cluster as a cluster node, a check is made to determine whether the node has permission to join. If the joining node has permission, it is authenticated and allowed to join the cluster.

You can only use the –T option with the scinstall command when you set up the very first node in the cluster. If the authentication list or policy needs to be changed on an already-established cluster, use the claccess command.

The –T authentication-options are as follows.

–T node=nodename[,…][,secureAI=yes]
–T node=archive-source-node::node-to-install[,…][,secureAI=yes]
–T secureAI=yes

node=nodename[,…]

Specifies node names to add to the node authentication list. You must specify at least one node suboption to the –T option. This option is used to add node names to the list of nodes that are able to configure themselves as nodes in the cluster. If the authentication list is empty, any node can request that it be added to the cluster configuration. However, if the list has at least one name in it, all such requests are authenticated by using the authentication list. You can modify or clear this list of nodes at any time by using the claccess command or the clsetup utility from one of the active cluster nodes.

node=archive-source-node::node-to-install[,…]

The node=archive-source-node::node-to-install option specifies a pair of node names. You must specify node pairs for all the nodes that you want to replicate. The first node name is the node where the archive was created, and the second node name is the node in the new cluster that you want to install from that archive. Use this specification only when replicating a cluster from the archives created on another cluster; the new cluster nodes must have the same hardware configuration as the source cluster nodes (or a superset of it).
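For example, with hypothetical node names, the following specifies that new nodes phys-newhost-1 and phys-newhost-2 are installed from the archives that were created on source nodes phys-oldhost-1 and phys-oldhost-2, respectively:

-T node=phys-oldhost-1::phys-newhost-1,node=phys-oldhost-2::phys-newhost-2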

[secureAI=yes]

Specifies using secure installation with AI and is effective only when using AI to install the cluster software. Without the secureAI=yes specification, the default action performs a traditional AI installation. When restoring a node from an archive using the secure installation method, you only need to specify -T secureAI=yes; you do not need to specify node=nodename[,…].

–U password-file

Specifies the name of the file that contains the root-user password. This option is only legal when the –c option is also specified.

This option enables automated setting of the root password during initial Oracle Solaris installation and configuration. The user creates a file that contains the text to use as the root user password for the system being installed. Typically, the password-file is located on, or accessible from, an already-established AI install server that is configured to install the nodename install client. The scinstall utility retrieves the contents of this file and supplies it to the Oracle Solaris configuration utility.
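A minimal sketch of creating such a file on the AI server follows; the path is a placeholder, and restrictive permissions are advisable because the file holds a clear-text password:

# echo "root-password-text" > /export/pwdfile
# chmod 400 /export/pwdfile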

–W software-specs

Specifies the location of one or more publishers and package repositories. Also, specifies the public key and the SSL certificate information needed for a secure install using AI. This option is only legal when the –c option is specified to install from an IPS repository.

The –W software-specs are as follows:

–W publisher=repo[::key=key-file::cert=certificate-file] \
::pkg[,…][:::publisher=repo[::key=key-file::cert=certificate-file]::pkg[,…]]…

Note that the –W option is broken into multiple lines for readability, but should be specified in a single unbroken string.

In the –W option syntax, publisher is the publisher name (ha-cluster or solaris), repo is the repository location, key-file and certificate-file are the public key and SSL certificate files that are required for a secure installation from an HTTPS repository, and pkg is a software package name.

To install Oracle Solaris or Oracle Solaris Cluster from the secure HTTPS repository, you must provide the public key and the SSL certificate. You can request and download the public key and the SSL certificate from the http://pkg-register.oracle.com site.
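For example, the following single –W argument (shown on two lines for readability; the repository URL and the key and certificate paths are placeholders) installs the ha-cluster-framework-full package from a secure HTTPS repository:

-W ha-cluster=https://pkg.oracle.com/ha-cluster/release::key=/var/pkg/ssl/ha-cluster.key.pem\
::cert=/var/pkg/ssl/ha-cluster.certificate.pem::ha-cluster-framework-full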

Update Options

The –u update-modes and the update-options for standard (nonrolling) update, rolling update, and dual-partition update are as follows.

Standard (Nonrolling) and Rolling Update

Use the –u update mode to update a cluster node to a later Oracle Solaris Cluster software release in standard (nonrolling) or rolling update mode.

  • A standard, or nonrolling, update process updates an existing mounted boot environment (BE) or an inactive BE while your cluster node continues to serve cluster requests. If you do not specify an existing inactive or mounted active BE, the scinstall utility automatically creates a new BE. Once the update is complete, if an inactive BE was updated, the scinstall utility activates the updated BE and notifies the user to reboot the node into the updated BE.

  • A rolling update process takes only one cluster node out of production at a time. This process can only be used to update Oracle Solaris or Oracle Solaris Cluster software, or both, to an update release of the versions that are already installed. While you update one node, cluster services continue on the rest of the cluster nodes. After a node is updated, you bring it back into the cluster and repeat the process on the next node to update. After all nodes are updated, you must run the scversions command on one cluster node to commit the cluster to the updated version. Until this command is run, some new functionality that is introduced in the update release might not be available.

  • Optionally, you can specify package FMRIs that are already installed in the current image.

The update-options to –u update for standard and rolling mode are as follows.

[–g | –Z excluded-zone-cluster-name …] \
[–b be-name | –R mounted-be-path] \
[–L {accept | licenses | accept,licenses | licenses,accept}] \
[pkg_fmri_pattern …]

–g

Updates only the global zone, but does not update any zone clusters.

The –g option is not legal with the –Z option.

–L {accept | licenses | accept,licenses | licenses,accept}

Specifies whether to accept or display, or both, the licenses of the packages you update to.

The accept argument corresponds to the --accept option of the pkg command and the licenses argument corresponds to the --licenses option.

Specifying the –L accept option indicates that you agree to and accept the licenses of the packages that are updated. If you do not provide this option, and any package licenses require acceptance, the update operation fails.

Specifying –L licenses displays all of the licenses for the packages that are updated.

When both accept and licenses are specified to the –L option, the licenses of the packages that are updated are displayed as well as accepted. The order in which you specify the accept and licenses arguments does not affect the behavior of the command.

–R mounted-be-path

Specifies an existing, mounted boot environment (BE) to update.

The –R option is not legal with the –b option.

–Z excluded-zone-cluster-name […]

Specifies a zone cluster that must not be updated. You can specify the –Z option multiple times. The update is performed only on the global cluster and on those zone clusters that are not specified with the –Z option.

The –Z option is not legal with the –g option.

Running a zone cluster with a different version of the cluster software than the global cluster is not supported. The behavior of a zone cluster that is not updated is undetermined.

The scinstall -u update command supports the ability to specify the pkg_fmri_patterns for the packages you are updating:

[pkg_fmri_pattern …]

Specifies the packages to update. These packages must be installed in the current image. If an asterisk (*) is one of the pkg_fmri_pattern patterns provided, all packages installed in the current image are updated.
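For example, the following command accepts any required package licenses and updates every package that is installed in the current image; the asterisk is quoted so that the shell does not expand it:

# /usr/cluster/bin/scinstall -u update -L accept '*'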

Dual-Partition Update

Use the –u update-modes and update-options for dual-partition update to perform the multiple stages of a dual-partition update.

The dual-partition update process first involves assigning cluster nodes into two groups, or partitions. Next, you update one partition while the other partition provides cluster services. You then switch services to the updated partition, update the remaining partition, and rejoin the updated nodes of the second partition to the cluster formed by the updated first partition. The update-modes for dual-partition update also include a mode for recovery after a failure during a dual-partition update.

Dual-partition update modes are used in conjunction with the –u update update mode. See Updating Your Oracle Solaris Cluster 4.4 Environment for more information.

The update-modes and update-options to –u for dual-partition update are as follows:

/usr/cluster/bin/scinstall -u begin -h nodelist
/usr/cluster/bin/scinstall -u plan
/usr/cluster/bin/scinstall -u recover
/usr/cluster/bin/scinstall -u status
/usr/cluster/bin/scinstall -u apply

apply

Specifies that update of a partition is completed. Run this form of the command from any node in the updated partition, after all nodes in that partition are updated.

The apply update mode performs the following tasks:

First partition

When run from a node in the first partition, the apply update mode prepares all nodes in the first partition to run the new software.

When the nodes in the first partition are ready to support cluster services, the command remotely executes the scripts /etc/cluster/ql/cluster_pre_halt_apps and /etc/cluster/ql/cluster_post_halt_apps that are on the nodes in the second partition. These scripts are used to call user-written scripts that stop applications that are not under Resource Group Manager (RGM) control, such as Oracle Real Application Clusters (Oracle RAC).

  • The cluster_pre_halt_apps script is run before applications that are under RGM control are stopped.

  • The cluster_post_halt_apps script is run after applications that are under RGM control are stopped, but before the node is halted.


Note -  Before you run the apply update mode, modify the script templates as needed to call other scripts that you write to stop certain applications on the node. Place the modified scripts and the user-written scripts that they call on each node in the first partition. These scripts are run from one arbitrary node in the first partition. To stop applications that are running on more than one node in the first partition, modify the user-written scripts accordingly. The unmodified scripts perform no default actions.

After all applications on the second partition are stopped, the command halts the nodes in the second partition. The shutdown initiates the switchover of applications and data services to the nodes in the first partition. Then the command boots the nodes in the second partition into cluster mode.

If a resource group was offline because its node list contains only members of the first partition, the resource group comes back online. If the node list of a resource group has no nodes that belong to the first partition, the resource group remains offline.

Second partition

When run from a node in the second partition, the apply update mode prepares all nodes in the second partition to run the new software. The command then boots the nodes into cluster mode. The nodes in the second partition rejoin the active cluster that was formed by the nodes in the first partition.

If a resource group was offline because its node list contains only members of the second partition, the resource group comes back online.

After all nodes have rejoined the cluster, the command performs final processing, reconfigures quorum devices, and restores quorum vote counts.

begin

Specifies the nodes to assign to the first partition that you update and initiates the dual-partition update process. Run this form of the command from any node of the cluster. Use this update mode after you use the plan update mode to determine the possible partition schemes.

First the begin update mode records the nodes to assign to each partition. Next, all applications are stopped on one node, then the update mode shuts down the node. The shutdown initiates switchover of each resource group on the node to a node that belongs to the second partition, provided that the node is in the resource-group node list. If the node list of a resource group contains no nodes that belong to the second partition, the resource group remains offline.

The command then repeats this sequence of actions on each remaining node in the first partition, one node at a time.

The nodes in the second partition remain in operation during the update of the first partition. Quorum devices are temporarily unconfigured and quorum vote counts are temporarily changed on the nodes.

plan

Queries the cluster storage configuration and displays all possible partition schemes that satisfy the shared-storage requirement. Run this form of the command from any node of the cluster. This is the first command that you run in a dual-partition update.

Dual-partition update requires that each shared storage array must be physically accessed by at least one node in each partition.

The plan update mode can return zero, one, or multiple partition solutions. If no solutions are returned, the cluster configuration is not suitable for dual-partition update. Use instead the standard update method.

For any partition solution, you can choose either partition group to be the first partition that you update.

recover

Recovers the cluster configuration on a node if a fatal error occurs during dual-partition update processing. Run this form of the command on each node of the cluster.

You must shut down the cluster and boot all nodes into noncluster mode before you run this command.

Once a fatal error occurs, you cannot resume or restart a dual-partition update, even after you run the recover update mode.

The recover update mode restores the Cluster Configuration Repository (CCR) database to the original state, before the start of the dual-partition update.

The following list describes in which circumstances to use the recover update mode and in which circumstances to take other steps.

  • If the failure occurred during –u begin processing, run the –u recover update mode.

  • If the failure occurred after –u begin processing completed but before the shutdown warning for the second partition was issued, determine where the error occurred:

    • If the failure occurred on a node in the first partition, run the –u recover update mode.

    • If the failure occurred on a node in the second partition, no recovery action is necessary.

  • If the failure occurred after the shutdown warning for the second partition was issued but before –u apply processing started on the second partition, determine where the error occurred:

    • If the failure occurred on a node in the first partition, run the –u recover update mode.

    • If the failure occurred on a node in the second partition, reboot the failed node into noncluster mode.

  • If the failure occurred after –u apply processing was completed on the second partition but before the update completed, determine where the error occurred:

    • If the failure occurred on a node in the first partition, run the –u recover update mode.

    • If the failure occurred on a node in the first partition but the first partition stayed in service, reboot the failed node.

    • If the failure occurred on a node in the second partition, run the –u recover update mode.

In all cases, you can continue the update manually by using the standard update method, which requires the shutdown of all cluster nodes.

status

Displays the status of the dual-partition update. The following are the possible states:

Update is in progress

The scinstall -u begin command has been run but dual-partition update has not completed.

The cluster also reports this status if a fatal error occurred during the dual-partition update. In this case, the state is not cleared even after recovery procedures are performed and the cluster update is completed by using the standard update method.

Update not in progress

Either the scinstall -u begin command has not yet been issued, or the dual-partition update has completed successfully.

Run the status update mode from one node of the cluster. The node can be in either cluster mode or noncluster mode.

The reported state is valid for all nodes of the cluster, regardless of which stage of the dual-partition update the issuing node is in.

The following option is supported with the dual-partition update mode:

–h nodelist

Specifies a space-delimited list of all nodes that you assign to the first partition. You choose these nodes from the output of the plan update mode, as valid members of a partition in the partition scheme that you use. The remaining nodes in the cluster, which you do not specify to the begin update mode, are assigned to the second partition.

This option is only valid with the begin update mode.

Examples

Establishing a Two-Node Cluster

The following example establishes a typical two-node cluster with Oracle Solaris Cluster software on SPARC based platforms. The example assumes that Oracle Solaris Cluster software packages are already installed on the nodes.

On node1, issue the following command:

node1# /usr/cluster/bin/scinstall -i -F

On node2, issue the following command:

node2# /usr/cluster/bin/scinstall -i -N node1

Establishing a Single-Node Cluster

The following command establishes a single-node cluster with Oracle Solaris Cluster software on SPARC based platforms, with all defaults accepted. The example assumes that Oracle Solaris Cluster software packages are already installed on the node.

# /usr/cluster/bin/scinstall -i -F -o

Adding Install Clients with a Net Image ISO File on an AI Server

The following example sets up an AI install server to install and initialize Oracle Solaris Cluster software on SPARC based platforms in a two-node cluster.

On the install server, issue the following commands. Note that the –W option is broken into multiple lines for readability, but should be specified in a single unbroken string.

# /usr/cluster/bin/scinstall -c /export/home/11-ga-ai-sparc.iso -h phys-schost-1 \ 
-U /export/pwdfile \ 
-C schost \ 
-F \ 
-W solaris=http://ipkg.us.oracle.com/solaris11/release::\
entire,server_install:::ha-cluster=cluster-repository::\
ha-cluster-framework-full,ha-cluster-data-services-full,\
ha-cluster-geo-full \ 
-n ip=10.255.85.163/24,mac=12:34:56:78:90:ab \ 
-T node=phys-schost-1,node=phys-schost-2 \ 
-w netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=62,\
maxprivatenets=10,numvirtualclusters=12 \ 
-A trtype=dlpi,name=e1000g1 -A trtype=dlpi,name=nxge1 \ 
-B type=switch,name=switch1 -B type=switch,name=switch2 \ 
-m endpoint=:e1000g1,endpoint=switch1 \ 
-m endpoint=:nxge1,endpoint=switch2 

# /usr/cluster/bin/scinstall -c /export/home/11-ga-ai-sparc.iso -h phys-schost-2 \ 
-U /export/pwdfile \ 
-C schost \ 
-N phys-schost-1 \ 
-W solaris=http://ipkg.us.oracle.com/solaris11/release::\
entire,server_install:::ha-cluster=cluster-repository::\
ha-cluster-framework-full,ha-cluster-data-services-full,\ 
ha-cluster-geo-full \ 
-n ip=10.255.85.164/24,mac=12:34:56:78:90:cd \ 
-A trtype=dlpi,name=e1000g1 -A trtype=dlpi,name=nxge1 \ 
-m endpoint=:e1000g1,endpoint=switch1 \ 
-m endpoint=:nxge1,endpoint=switch2

Performing a Dual-Partition Update

The following example uses the dual-partition method to update the framework and data service software of a cluster to the next Oracle Solaris Cluster release. This example uses Oracle Solaris Cluster on SPARC based platforms. The example queries the cluster for valid partition schemes, assigns nodes to partitions, reboots the node in the first partition, returns the first partition to operation after update, reboots the node in the second partition, and returns the second partition to the cluster after update.

# /usr/cluster/bin/scinstall -u plan
  Option 1
    First partition
      phys-schost-1
    Second partition
      phys-schost-2
…
# /usr/cluster/bin/scinstall -u begin -h phys-schost-1

ok boot -x

(Update the node in the first partition)

phys-schost-1# /usr/cluster/bin/scinstall -u apply
ok boot -x

(Update the node in the second partition)

phys-schost-2# /usr/cluster/bin/scinstall -u apply

Updating the Framework and Data Service Software (Standard or Rolling Update)

The following example updates the framework and data service software of a cluster to the next Oracle Solaris Cluster release. Perform these operations on each cluster node.


Note -  For a rolling update, perform these operations on one node at a time, after you use the clnode evacuate command to move all resource groups and device groups to the other nodes, which will remain in the cluster.
# /usr/cluster/bin/scinstall -u update
# init 6

Restoring the First Node from an Archive File

The following example uses a secure AI installation to restore the first node from an archive file that is saved on a file system that is accessible from the AI server.

# /usr/cluster/bin/scinstall -c archive=file:///net/storagenode/export/archive
     /phys-schost-1-recovery-archive,action=restore \
-h phys-schost-1 \
-C schost \
-F \
-n ip=10.255.85.163/24,mac=12:34:56:78:90:ab \
-T secureAI=yes

Restoring Other Nodes from Archives

The following example uses a secure AI installation to restore the other nodes from archives that were previously created on those nodes.

# /usr/cluster/bin/scinstall -c archive=file:///net/storagenode/export/archive
     /phys-schost-2-recovery-archive,action=restore \
-h phys-schost-2 \
-C schost \
-n ip=10.255.85.164/24,mac=12:34:56:78:90:cd \
-T secureAI=yes

Performing a Non-Secure Replication

The following example performs a non-secure replication.

# /usr/cluster/bin/scinstall -c archive=file:///net/storagenode/export/archive
     /source-node-1-archive,action=replicate \
-h phys-schost-1 \
-C schost \
-F \
-n ip=10.255.85.163/24,mac=12:34:56:78:90:ab \
-T node=phys-schost-1,node=phys-schost-2 \
-U /export/pwdfile
# /usr/cluster/bin/scinstall -c archive=file:///net/pnass3/export/archive
     /vzono1a.clone,action=replicate \
-h phys-schost-2 \
-C schost \
-n ip=10.255.85.164/24,mac=12:34:56:78:90:cd \
-U /export/pwdfile

Adding Install Clients with IPS Repositories on an AI Server

The following example uses a secure AI installation to install and configure a two-node x86 cluster from IPS repositories.

# /usr/cluster/bin/scinstall -c solaris=http://ipkg.us.oracle.com/solaris11
     /release::arch=i386 -h phys-schost-1 \
-C schost \
-F \
-W solaris=http://ipkg.us.oracle.com/solaris11/release::entire,server_install:::
     ha-cluster=http://ipkg.us.oracle.com/ha-cluster/release::ha-cluster-framework-full \
-n ip=10.255.85.163/24,mac=12:34:56:78:90:ab \
-T node=phys-schost-1,node=phys-schost-2,secureAI=yes \
-w netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=32,maxprivatenets=10,
     numvirtualclusters=12 \
-A trtype=dlpi,name=net1 -A trtype=dlpi,name=net3 \
-B type=switch,name=switch1 -B type=switch,name=switch2 \
-m endpoint=:net1,endpoint=switch1 \
-m endpoint=:net3,endpoint=switch2 \
-P task=quorum,state=INIT -P task=security,state=SECURE \
-U /export/pwdfile
# /usr/cluster/bin/scinstall -c solaris=http://ipkg.us.oracle.com/solaris11
     /release::arch=i386 -h phys-schost-2 \
-C schost \
-N phys-schost-1 \
-W solaris=http://ipkg.us.oracle.com/solaris11/release::entire,server_install:::
     ha-cluster=http://ipkg.us.oracle.com/ha-cluster/release::ha-cluster-framework-full \
-n ip=10.255.85.164/24,mac=12:34:56:78:90:cd \
-A trtype=dlpi,name=net1 -A trtype=dlpi,name=net3 \
-m endpoint=:net1,endpoint=switch1 \
-m endpoint=:net3,endpoint=switch2 \
-U /export/pwdfile

Exit Status

The following exit values are returned:

0

Successful completion.

non-zero

An error occurred.

Files

  • /etc/cluster/ql/cluster_post_halt_apps
  • /etc/cluster/ql/cluster_pre_halt_apps

Attributes

See attributes(7) for descriptions of the following attributes:

ATTRIBUTE TYPE         ATTRIBUTE VALUE
Availability           ha-cluster/system/install
Interface Stability    Evolving

See Also

lofi(4D), netmasks(5), networks(5), Intro(8CL), claccess(8CL), clauth(8CL), clinterconnect(8CL), clnode(8CL), clsetup(8CL), cluster(8CL), newfs(8), scversions(8)

Installing and Configuring an Oracle Solaris Cluster 4.4 Environment, Administering an Oracle Solaris Cluster 4.4 Configuration, Updating Your Oracle Solaris Cluster 4.4 Environment