Oracle Solaris Cluster 4.3 Reference Manual

Updated: September 2015

scinstall (1M)

Name

scinstall - initialize Oracle Solaris Cluster software and establish new cluster nodes

Synopsis

/usr/cluster/bin/scinstall -i -F [-C clustername] 
     [-T authentication-options] [-o] [-A adapter-options] 
     [-B switch-options] [-m cable-options] [-w netaddr-options]
/usr/cluster/bin/scinstall -i -N cluster-member [-C clustername] 
     [-A adapter-options] [-B switch-options] [-m cable-options]
/usr/cluster/bin/scinstall -c net-image-source -U password-file 
     -h nodename -n nodeip-mac-options -W software-specs -F 
     [-C clustername] [-T authentication-options] [-A adapter-options] 
     [-B switch-options] [-m cable-options] [-w netaddr-options]
/usr/cluster/bin/scinstall -c net-image-source -U password-file 
     -h nodename -n nodeip-mac-options -W software-specs 
     -N cluster-member [-C clustername] [-A adapter-options] 
     [-B switch-options] [-m cable-options]
/usr/cluster/bin/scinstall -c archive=archive-location[::cert=cert-file::
     key=key-file],action=initial -U password-file -h nodename
     -n nodeip-mac-options -F [-C clustername] [-f hostnames-map-file]
     [-T authentication-options] [-A adapter-options] 
     [-B switch-options] [-m cable-options] [-o] [-w netaddr-options]
/usr/cluster/bin/scinstall -c archive=archive-location[::cert=cert-file::
     key=key-file],action=initial -U password-file -h nodename
     -n nodeip-mac-options -N cluster-member [-C clustername] [-f hostnames-map-file]
     [-T authentication-options] [-A adapter-options] 
     [-B switch-options] [-m cable-options] [-o] [-w netaddr-options]
/usr/cluster/bin/scinstall -c archive=archive-location[::cert=cert-file::
     key=key-file],action=restore -h nodename [-F [-o]]
     -C clustername -n nodeip-mac-options [-T secureAI=yes]
/usr/cluster/bin/scinstall -c archive=archive-location[::cert=cert-file::
     key=key-file],action=replicate -h nodename [-F [-o]] 
     -C clustername -n nodeip-mac-options
     [-T node=archive-source-node::node-to-install[,...][,secureAI=yes]]
     [-f hostnames-map-file] [-w netaddr-options] -U password-file
/usr/cluster/bin/scinstall -u upgrade-modes [upgrade-options]
/usr/cluster/bin/scinstall -u update upgrade-options [pkg_fmri_pattern ...]
/usr/cluster/bin/scinstall -r [-N cluster-member]
/usr/cluster/bin/scinstall -p [-v]

Description


Note -  Oracle Solaris Cluster software includes an object-oriented command set. Although Oracle Solaris Cluster software still supports the original command set, Oracle Solaris Cluster procedural documentation uses only the object-oriented command set. For more information about the object-oriented command set, see the Intro(1CL) man page.

The scinstall command performs a number of Oracle Solaris Cluster node creation and upgrade tasks, as follows.

  • The “initialize” form (–i) of scinstall establishes a node as a new Oracle Solaris Cluster configuration member. It either establishes the first node in a new cluster (–F) or adds a node to an already-existing cluster (–N). Always run this form of the scinstall command from the node that is creating the cluster or is being added to the cluster.

  • The “add install client” form (–c) of scinstall establishes the specified nodename as a custom Automated Installer (AI) client on the AI install server from which the command is run. Always run this form of the scinstall command from the AI install server.

  • The “remove” form (–r) of scinstall removes cluster configuration information and uninstalls Oracle Solaris Cluster software from a cluster node.

  • The “upgrade” form (–u) of scinstall, which has multiple modes and options, upgrades an Oracle Solaris Cluster node. Always run this form of the scinstall command from the node being upgraded.

  • The “print release” form (–p) of scinstall prints release and package versioning information for the Oracle Solaris Cluster software that is installed on the node from which the command is run.

Without options, the scinstall command attempts to run in interactive mode.

Run all forms of the scinstall command other than the “print release” form (–p) as superuser.

The ha-cluster/system/install software package includes a copy of the scinstall command.

You can run this command only from the global zone.

Options

Basic Options

The following options direct the basic form and function of the command.

None of the following options can be combined on the same command line.

–c

Specifies the “add install client” form of the scinstall command. This option establishes the specified nodename as a custom Automated Installer (AI) client on the AI server where you issued the command. This –c option accepts two specifications: -c net-image-source and -c archive=archive-location[::cert=cert-file::key=key-file],action={initial|restore|replicate}.

You can use this option only in the global zone.

You must specify the net-image-source when you use AI to install the Oracle Solaris and Oracle Solaris Cluster software packages from IPS repositories and configure a new cluster. It can be a repository from which to retrieve the install-image/solaris-auto-install IPS package that matches the architecture of the cluster nodes (SPARC or i386):

-c publisher=repo[::cert=cert-file::key=key-file],arch={sparc|i386}

The net-image-source can also be an AI ISO image file for the Oracle Solaris release. The file must be accessible from an already-established AI server that is configured to install the cluster nodes: -c iso-file.

Use the archive=archive-location,action={initial|restore|replicate} form when you use Unified Archives to automatically install a cluster or restore cluster nodes. This form specifies the location of the Unified Archive, which can be the full path to an archive file on a file system that is accessible from the AI server, or an HTTP or HTTPS location. If you access an HTTPS location, you must also specify the SSL certificate and key files. You must also specify the intended use of the archive: to configure a new cluster (action=initial), restore a node (action=restore), or replicate a new cluster from an existing cluster that has the same hardware configuration (action=replicate). When you use the restore action, the archive must be a recovery archive that was previously created on the same node that you want to restore.
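For example, the following fragment specifies a recovery archive on an HTTPS server for a restore operation; the archive URL and the certificate and key file paths are hypothetical:

-c archive=https://archive-server.example.com/phys-schost-1-recovery.uar::\
cert=/var/tmp/archive.cert.pem::key=/var/tmp/archive.key.pem,action=restore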

This form of the command enables fully-automated cluster installation from an AI server by helping to establish each cluster node, or nodename, as a custom AI client on an already-established Automated Installer install server.

For Oracle Solaris Cluster, you can customize the AI manifest file. See How to Install and Configure Oracle Solaris and Oracle Solaris Cluster Software (IPS Repositories) in Oracle Solaris Cluster 4.3 Software Installation Guide and Installing Oracle Solaris 11.3 Systems.

Before you use the scinstall command to set up a node as a custom Oracle Solaris Cluster AI client, you must first establish the AI install server. For more information about setting up an AI install server, see Chapter 8, Setting Up an AI Server, in Installing Oracle Solaris 11.3 Systems.

–i

Specifies the “initialize” form of the scinstall command. This form of the command establishes a node as a new cluster member. The new node is the node from which you issue the scinstall command.

You can use this option only in the global zone.

If the –F option is used with –i, scinstall establishes the node as the first node in a new cluster.

If the –o option is used with the –F option, scinstall establishes a single-node cluster.

If the –N option is used with –i, scinstall adds the node to an already-existing cluster.

–p

Prints release and package versioning information for the Oracle Solaris Cluster software that is installed on the node from which the command is run. This is the only form of scinstall that you can run as a non-superuser.

You can use this option only in the global zone.

–r

Removes cluster configuration information and uninstalls Oracle Solaris Cluster framework and data-service software from a cluster node. You can then reinstall the node or remove the node from the cluster. You must run the command on the node that you uninstall, from a directory that is not used by the cluster software. The node must be in noncluster mode.

You can use this option only in the global zone.

–u

Upgrades Oracle Solaris Cluster software on the node from which you invoke the scinstall command. The upgrade form of scinstall has multiple modes of operation, as specified by upgrade-mode. See Upgrade Options below for information specific to the type of upgrade that you intend to perform.

You can use this option only in the global zone.

Additional Options

You can combine additional options with the basic options to modify the default behavior of each form of the command. Refer to the SYNOPSIS section for additional details about which of these options are legal with which forms of the scinstall command.

The following additional options are supported:

–h nodename

Specifies the node name. The –h option is only legal with the “add install client” (–c) form of the command.

The nodename is the name of the cluster node (that is, AI install client) to set up for custom AI installation.

–v

Prints release information in verbose mode. The –v option is only legal with the “print release” (–p) form of the command to specify verbose mode.

In the verbose mode of “print release,” the version string for each installed Oracle Solaris Cluster software package is also printed.

–F [config-options]

Establishes the first node in the cluster. The –F option is only legal with the “initialize” (–i) or “add install client” (–c) forms of the command.

The establishment of secondary nodes will be blocked until the first node is fully instantiated as a cluster member and is prepared to perform all necessary tasks that are associated with adding new cluster nodes. If the –F option is used with the –o option, a single-node cluster is created and no additional nodes can be added during the cluster-creation process.

–f hostnames-map-file

Specifies a text file that contains a list of old-hostname and new-hostname pairs, used either to replicate a cluster from the archives of another cluster or to form a brand-new cluster from a recovery archive with the initial action. Each line of the file contains two columns. The first column is the hostname or IP address used in the source cluster where the archives were created. The second column is the corresponding hostname or IP address for the new cluster. These hostnames can be used for logical hostnames, shared address resources, and zone clusters. For example:

source-cluster-zc-hostname1    target-cluster-zc-hostname1
source-cluster-zc-hostname2    target-cluster-zc-hostname2
source-cluster-lh1             target-cluster-lh1
source-cluster-lh2             target-cluster-lh2

You can use this option only in the global zone.
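For example, the following replication command supplies such a file; the archive location and the map-file path are illustrative:

# /usr/cluster/bin/scinstall -c archive=file:///net/storagenode/export/archive
     /source-node-1-archive,action=replicate \
-h phys-schost-1 \
-C schost \
-F \
-n ip=10.255.85.163/24,mac=12:34:56:78:90:ab \
-f /var/cluster/hostnames.map \
-U /export/pwdfile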

–N cluster-member [config-options]

Specifies the cluster member. The –N option is only legal with the “initialize” (–i), “add install client” (–c) or “remove” (–r) forms of the command.

Before you use the –N option with the –i option, you must first run the clauth enable -n control-node command on the cluster-member to be specified to the –N option. This command authorizes acceptance of commands from the control-node. The clauth command does not need to be run before using the –N option with the –c option. For more information, see the clauth(1CL) man page.

When used with the –i or –c option, the –N option is used to add additional nodes to an existing cluster. The specified cluster-member is typically the name of the first cluster node that is established for the cluster. However, the cluster-member can be the name of any cluster node that already participates as a cluster member. The node that is being initialized is added to the cluster of which cluster-member is already an active member. The process of adding a new node to an existing cluster involves updating the configuration data on the specified cluster-member, as well as creating a copy of the configuration database onto the local file system of the new node.

When used with the –r option, the –N option specifies the cluster-member, which can be any other node in the cluster that is an active cluster member. The scinstall command contacts the specified cluster-member to make updates to the cluster configuration. If the –N option is not specified, scinstall makes a best attempt to find an existing node to contact.

Configuration Options

The config-options are used with the –F and –N options.

/usr/cluster/bin/scinstall {–i | –c net-image-source –U password-file –h 
     nodename -n nodeip-mac-options -W software-specs} –F [–C 
     clustername] [–T authentication-options] [–A adapter-options] [–B 
     switch-options] [–m endpoint=[this-node]:name[@port],endpoint=
     [node:]name[@port]] [–o] [–w netaddr-options]
/usr/cluster/bin/scinstall {–i | –c net-image-source –U password-file –h 
     nodename -n nodeip-mac-options -W software-specs} –N 
     cluster-member [–C clustername] [–A adapter-options] [–B 
     switch-options] [–m cable-options]
–m cable-options

Specifies the cluster interconnect connections. This option is only legal when the –F or –N option is also specified.

The –m option helps to establish the cluster interconnect topology by configuring the cables connecting the various ports found on the cluster transport adapters and switches. Each new cable configured with this form of the command establishes a connection from a cluster transport adapter on the current node to either a port on a cluster transport switch or an adapter on another node already in the cluster.

If you specify no –m options, the scinstall command attempts to configure a default cable: a cable from the singly configured transport adapter to the singly configured (or default) transport switch. However, if you configure more than one transport adapter or switch with a given instance of scinstall, scinstall cannot construct a default.

The –m cable-options are as follows.

–m endpoint=[this-node]:name[@port],endpoint=[node:]name[@port]

The syntax for the –m option demonstrates that at least one of the two endpoints must be an adapter on the node that is being configured. For that endpoint, it is not required to specify this-node explicitly. The following is an example of adding a cable:

–m endpoint=:net1,endpoint=switch1

In this example, port 0 of the net1 transport adapter on this node, the node that scinstall is configuring, is cabled to a port on transport switch switch1. The port number that is used on switch1 defaults to the node ID number of this node.

You must always specify two endpoint options with each occurrence of the –m option. The name component of the option argument specifies the name of either a cluster transport adapter or a cluster transport switch at one of the endpoints of a cable.

  • If you specify the node component, the name is the name of a transport adapter.

  • If you do not specify the node component, the name is the name of a transport switch.

If you specify no port component, the scinstall command attempts to assume a default port name. The default port for an adapter is always 0. The default port name for a switch endpoint is equal to the node ID of the node being added to the cluster.

Refer to the clinterconnect(1CL) man page for more information regarding port assignments and other requirements.

Before you can configure a cable, you must first configure the adapters and/or switches at each of the two endpoints of the cable (see –A and –B).
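For example, a two-node cluster that uses back-to-back cables instead of transport switches might combine –B type=direct with adapter-to-adapter cables; the adapter and node names here are illustrative:

-B type=direct \
-m endpoint=:net1,endpoint=phys-schost-1:net1 \
-m endpoint=:net2,endpoint=phys-schost-1:net2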

–n nodeip-mac-options

Specifies the IP address and MAC address of the node. This option is only legal when the –c option is also specified.

The –n nodeip-mac-options syntax is as follows:

-n ip=node-ipaddr/N,mac=mac-address
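For example (the address and MAC values are illustrative):

-n ip=10.255.85.163/24,mac=12:34:56:78:90:ab
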
–o

Specifies the configuration of a single-node cluster. This option is only legal when the –i and –F options are also specified.

Other –F options are supported but are not required. If the cluster name is not specified, the name of the node is used as the cluster name. You can specify transport configuration options, which will be stored in the CCR. Once a single-node cluster is established, it is not necessary to configure a quorum device or to disable installmode.

–w netaddr-options

Specifies the network address for the private interconnect, or cluster transport. This option is only legal when the –F option is also specified.

Use this option to specify a private-network address for use on the private interconnect. You can use this option when the default private-network address collides with an address that is already in use within the enterprise. You can also use this option to customize the size of the IP address range that is reserved for use by the private interconnect. For more information, see the networks(4) and netmasks(4) man pages.

If not specified, the default network address for the private interconnect is 172.16.0.0. The default netmask is 255.255.240.0. This IP address range supports up to 62 nodes, 10 private networks, 12 zone clusters, and three exclusive-IP zone clusters.

The –w netaddr-options are as follows:

-w netaddr=netaddr[,netmask=netmask]
-w netaddr=netaddr[,maxnodes=nodes,maxprivatenets=maxprivnets,\
     numvirtualclusters=zoneclusters,numxipvirtualclusters=xipzoneclusters]
-w netaddr=netaddr[,netmask=netmask,maxnodes=nodes,maxprivatenets=maxprivnets,\
     numvirtualclusters=zoneclusters]
netaddr=netaddr

Specifies the private network address. The last two octets of this address must always be zero.

[netmask=netmask]

Specifies the netmask. The specified value must provide an IP address range that is greater than or equal to the default.

To assign a smaller IP address range than the default, specify the maxnodes, maxprivatenets, and numvirtualclusters operands.

[,maxnodes=nodes,maxprivatenets=maxprivnets,numvirtualclusters=zoneclusters]

Specifies the maximum number of nodes, private networks, and zone clusters that the cluster is ever expected to have. The command uses these values to calculate the minimum netmask that the private interconnect requires to support the specified number of nodes, private networks, and zone clusters. The maximum value for nodes is 62 and the minimum value is 2. The maximum value for maxprivnets is 128 and the minimum value is 2. You can set a value of 0 for zoneclusters.

[,netmask=netmask,maxnodes=nodes,maxprivatenets=maxprivnets,numvirtualclusters=zoneclusters]

Specifies the netmask and the maximum number of nodes, private networks, and zone clusters that the cluster is ever expected to have. You must specify a netmask that can sufficiently accommodate the specified numbers. The maximum value for nodes is 62 and the minimum value is 2. The maximum value for maxprivnets is 128 and the minimum value is 2. You can set a value of 0 for zoneclusters.

If you specify only the netaddr suboption, the command assigns the default netmask of 255.255.240.0. The resulting IP address range accommodates up to 62 nodes, 10 private networks, and 12 zone clusters.

To change the private-network address or netmask after the cluster is established, use the cluster command or the clsetup utility.
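For example, the following fragment uses a custom private-network address and lets scinstall calculate the minimum netmask for the stated maximums; the values are illustrative:

-w netaddr=192.168.0.0,maxnodes=8,maxprivatenets=4,\
numvirtualclusters=2,numxipvirtualclusters=0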

–A adapter-options

Specifies the transport adapter and, optionally, its transport type. This option is only legal when the –F or –N option is also specified.

Each occurrence of the –A option configures a cluster transport adapter that is attached to the node from which you run the scinstall command.

If no –A options are specified, an attempt is made to use a default adapter and transport type. The default transport type is dlpi. On the SPARC platform, the default adapter is hme1.

When the adapter transport type is dlpi, you do not need to specify the trtype suboption. In this case, you can use either of the following two forms to specify the –A adapter-options:


–A [trtype=type,]name=adaptername[,vlanid=vlanid][,other-options]
–A adaptername

[trtype=type]

Specifies the transport type of the adapter. Use the trtype option with each occurrence of the –A option for which you want to specify the transport type of the adapter. An example of a transport type is dlpi.

The default transport type is dlpi.

name=adaptername

Specifies the adapter name. You must use the name suboption with each occurrence of the –A option to specify the adaptername. An adaptername is constructed from a device name that is immediately followed by a physical-unit number, for example, hme0.

If you specify no other suboptions with the –A option, you can specify the adaptername as a standalone argument to the –A option, as –A adaptername.

vlanid=vlanid

Specifies the VLAN ID of the tagged-VLAN adapter.

[other-options]

Specifies additional adapter options. When a particular adapter provides any other options, you can specify them by using the –A option.
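For example, the following fragments configure one dlpi adapter by name alone and one tagged-VLAN adapter; the adapter names and the VLAN ID are illustrative:

-A net1
-A name=net2,vlanid=53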

–B switch-options

Specifies the transport switch, also called transport junction. This option is only legal when the –F or –N option is also specified.

Each occurrence of the –B option configures a cluster transport switch. Examples of such devices can include, but are not limited to, Ethernet switches, other switches of various types, and rings.

If you specify no –B options, scinstall attempts to add a default switch at the time that the first node is instantiated as a cluster node. When you add additional nodes to the cluster, no additional switches are added by default. However, you can add them explicitly. The default switch is named switch1, and it is of type switch.

When the switch type is type switch, you do not need to specify the type suboption. In this case, you can use either of the following two forms to specify the –B switch-options.

-B [type=type,]name=name[,other-options]
-B name

If a cluster transport switch is already configured for the specified switch name, scinstall prints a message and ignores the –B option.

If you use directly-cabled transport adapters, you are not required to configure any transport switches. To avoid configuring default transport switches, use the following special –B option:

–B type=direct

[type=type]

Specifies the transport switch type. You can use the type option with each occurrence of the –B option. An Ethernet switch is an example of a cluster transport switch of type switch. See the clinterconnect(1CL) man page for more information.

You can specify the type suboption as direct to suppress the configuration of any default switches. Switches do not exist in a transport configuration that consists of only directly connected transport adapters. When the type suboption is set to direct, you do not need to use the name suboption.

name=name

Specifies the transport switch name. Unless the type is direct, you must use the name suboption with each occurrence of the –B option to specify the transport switch name. The name can be up to 256 characters in length and is made up of either letters or digits, with the first character being a letter. Each transport switch name must be unique across the namespace of the cluster.

If no other suboptions are needed with –B, you can give the switch name as a standalone argument to –B (that is, –B name).

[other-options]

Specifies additional transport switch options. When a particular switch type provides other options, you can specify them with the –B option. Refer to the clinterconnect(1CL) man page for information about any special options that you might use with the switches.

–C clustername

Specifies the name of the cluster. This option is only legal when the –F or –N option is also specified.

  • If the node that you configure is the first node in a new cluster, the default clustername is the same as the name of the node that you are configuring.

  • If the node that you configure is being added to an already-existing cluster, the default clustername is the name of the cluster to which cluster-member already belongs.

It is an error to specify a clustername that is not the name of the cluster to which cluster-member belongs.

–T authentication-options

Specifies node-authentication options for the cluster. This option is only legal when the –F option is also specified.

Use this option to establish the authentication list of nodes that are permitted to add themselves to the cluster configuration. Specifically, when a machine requests to be added to the cluster as a cluster node, a check is made to determine whether the node has permission to join. If the joining node has permission, it is authenticated and allowed to join the cluster.

You can only use the –T option with the scinstall command when you set up the very first node in the cluster. If the authentication list or policy needs to be changed on an already-established cluster, use the claccess command.

The –T authentication-options are as follows.

–T node=nodename[,…][,secureAI=yes]
–T node=archive-source-node::node-to-install[,…][,secureAI=yes]
–T secureAI=yes

node=nodename[,…]

Specifies node names to add to the node authentication list. You must specify at least one node suboption to the –T option. This option is used to add node names to the list of nodes that are able to configure themselves as nodes in the cluster. If the authentication list is empty, any node can request that it be added to the cluster configuration. However, if the list has at least one name in it, all such requests are authenticated by using the authentication list. You can modify or clear this list of nodes at any time by using the claccess command or the clsetup utility from one of the active cluster nodes.

node=archive-source-node::node-to-install[,…]

The node=archive-source-node::node-to-install option specifies a pair of node names. You must specify node pairs for all the nodes that you want to replicate. The first node name is the node where the archive was created, and the second node name is the node in the new cluster that you want to install from that archive. Use this specification only when replicating a cluster from archives created on another cluster; the new cluster nodes must have the same hardware configuration (or a superset of it) as the source cluster nodes where the archives were created.

[secureAI=yes]

Specifies use of a secure installation with AI; it is effective only when you use AI to install the cluster software. Without the secureAI=yes specification, the default action performs a traditional AI installation. When you restore a node from an archive by using the secure installation method, you need to specify only -T secureAI=yes; you do not need node=nodename[,…].

–U password-file

Specifies the name of the file that contains the root-user password. This option is only legal when the –c option is also specified.

This option enables automated setting of the root password during initial Oracle Solaris installation and configuration. The user creates a file that contains the text to use as the root user password for the system being installed. Typically, the password-file is located on, or accessible from, an already-established AI install server that is configured to install the nodename install client. The scinstall utility retrieves the contents of this file and supplies it to the Oracle Solaris configuration utility.

–W software-specs

Specifies the location of one or more publishers and package repositories. Also, specifies the public key and the SSL certificate information needed for a secure install using AI. This option is only legal when the –c option is specified to install from an IPS repository.

The –W software-specs are as follows:

–W publisher=repo[::key=key-file::cert=certificate-file]::pkg[,…]\
     [:::publisher=repo[::key=key-file::cert=certificate-file]::pkg[,…]]…

Note that the –W option is broken into multiple lines for readability, but should be specified in a single unbroken string.

In the –W option syntax, publisher is the publisher name (ha-cluster or solaris), repo is the repository location, key-file and certificate-file are the public key and the SSL certificate that are required for a secure installation from an HTTPS repository, and pkg is a software package name.

To install Oracle Solaris or Oracle Solaris Cluster from a secure HTTPS repository, you must provide the public key and the SSL certificate. You can request and download both from the http://pkg-register.oracle.com site.
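For example, the following specification installs from secure HTTPS repositories; the repository URLs and the key and certificate file paths are hypothetical, and the option is shown on multiple lines for readability but must be passed as a single unbroken string:

-W solaris=https://pkg.oracle.com/solaris/support::\
key=/var/pkg/ssl/solaris.key.pem::cert=/var/pkg/ssl/solaris.cert.pem::\
entire,server_install:::ha-cluster=https://pkg.oracle.com/ha-cluster/support::\
key=/var/pkg/ssl/ha-cluster.key.pem::cert=/var/pkg/ssl/ha-cluster.cert.pem::\
ha-cluster-framework-full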

Upgrade Options

The –u upgrade-modes and the upgrade-options for standard (nonrolling) upgrade, rolling upgrade, and dual-partition upgrade are as follows.

Standard (Nonrolling) and Rolling Upgrade

Use the –u update mode to upgrade a cluster node to a later Oracle Solaris Cluster software release in standard (nonrolling) or rolling upgrade mode.

  • A standard, or nonrolling, upgrade process upgrades an inactive boot environment (BE) while your cluster node continues to serve cluster requests. If you do not specify an existing inactive BE, the scinstall utility automatically creates a new BE. Once the upgrade is complete, the scinstall utility activates the upgraded BE and notifies the user to reboot the node into the upgraded BE.

  • A rolling upgrade process takes only one cluster node out of production at a time. This process can only be used to upgrade Oracle Solaris or Oracle Solaris Cluster software or both to an update release of the versions that are already installed. While you upgrade one node, cluster services continue on the rest of the cluster nodes. After a node is upgraded, you bring it back into the cluster and repeat the process on the next node to upgrade. After all nodes are upgraded, you must run the scversions command on one cluster node to commit the cluster to the upgraded version. Until this command is run, some new functionality that is introduced in the update release might not be available.

  • Optionally, you can specify package FMRIs that are already installed in the current image.

The upgrade-options to –u update for standard and rolling mode are as follows.

/usr/cluster/bin/scinstall -u update [–b be-name] [–L {accept | 
     licenses | accept,licenses | licenses,accept}] [pkg_fmri_pattern ...]
–b be-name

Specifies the name to assign the new boot environment (BE). If you do not specify this option, scinstall assigns the name of the new BE. This name is based on the name of the current BE, of the form currentBE-N, where the suffix -N is an incremented number. The first new BE is named currentBE-1, the next new BE is named currentBE-2, and so forth. If a BE is deleted, its name is not reused for the next new BE if a BE name with a higher suffix number exists. For example, if BEs sc4.0, sc4.0-1, and sc4.0-2 exist, and sc4.0-1 is deleted, the next new BE is named sc4.0-3.

If you specify a BE name that already exists, the command exits with an error.

–L {accept | licenses | accept,licenses | licenses,accept}

Specifies whether to accept or display, or both, the licenses of the packages you upgrade to.

The accept argument corresponds to the --accept option of the pkg command and the licenses argument corresponds to the --licenses option.

Specifying the –L accept option indicates that you agree to and accept the licenses of the packages that are updated. If you do not provide this option, and any package licenses require acceptance, the update operation fails.

Specifying –L licenses displays all of the licenses for the packages that are updated.

When both accept and licenses are specified to the –L option, the licenses of the packages that are updated are displayed as well as accepted. The order you specify the accept and licenses arguments does not affect the behavior of the command.

The scinstall -u update command also accepts pkg_fmri_pattern operands that specify the packages to update:

[pkg_fmri_pattern...]

Specifies the packages to update. These packages must be installed in the current image. If an asterisk (*) is one of the pkg_fmri_pattern patterns provided, all packages installed in the current image are updated.
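For example, the following invocation updates only the cluster framework packages into a new BE and accepts their licenses; the BE name and the package pattern are illustrative:

# /usr/cluster/bin/scinstall -u update -b sc43-update -L accept 'ha-cluster/*'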

Dual-Partition Upgrade

Use the –u upgrade-modes and upgrade-options for dual-partition upgrade to perform the multiple stages of a dual-partition upgrade. The dual-partition upgrade process first involves assigning cluster nodes into two groups, or partitions. Next, you upgrade one partition while the other partition provides cluster services. You then switch services to the upgraded partition, upgrade the remaining partition, and rejoin the upgraded nodes of the second partition to the cluster formed by the upgraded first partition. The upgrade-modes for dual-partition upgrade also include a mode for recovery after a failure during a dual-partition upgrade.

Dual-partition upgrade modes are used in conjunction with the –u update upgrade mode. See Oracle Solaris Cluster 4.3 Upgrade Guide for more information.

The upgrade-modes and upgrade-options to –u for dual-partition upgrade are as follows:

/usr/cluster/bin/scinstall -u begin -h nodelist
/usr/cluster/bin/scinstall -u plan
/usr/cluster/bin/scinstall -u recover
/usr/cluster/bin/scinstall -u status
/usr/cluster/bin/scinstall -u apply
apply

Specifies that the upgrade of a partition is complete. Run this form of the command from any node in the upgraded partition, after all nodes in that partition are upgraded.

The apply upgrade mode performs the following tasks:

First partition

When run from a node in the first partition, the apply upgrade mode prepares all nodes in the first partition to run the new software.

When the nodes in the first partition are ready to support cluster services, the command remotely executes the scripts /etc/cluster/ql/cluster_pre_halt_apps and /etc/cluster/ql/cluster_post_halt_apps that are on the nodes in the second partition. These scripts are used to call user-written scripts that stop applications that are not under Resource Group Manager (RGM) control, such as Oracle Real Application Clusters (Oracle RAC).

  • The cluster_pre_halt_apps script is run before applications that are under RGM control are stopped.

  • The cluster_post_halt_apps script is run after applications that are under RGM control are stopped, but before the node is halted.


Note -  Before you run the apply upgrade mode, modify the script templates as needed to call other scripts that you write to stop certain applications on the node. Place the modified scripts and the user-written scripts that they call on each node in the second partition. These scripts are run from one arbitrary node in the second partition. To stop applications that are running on more than one node in the second partition, modify the user-written scripts accordingly. The unmodified scripts perform no default actions.

After all applications on the second partition are stopped, the command halts the nodes in the second partition. The shutdown initiates the switchover of applications and data services to the nodes in the first partition. Then the command boots the nodes in the second partition into cluster mode.

If a resource group was offline because its node list contains only members of the first partition, the resource group comes back online. If the node list of a resource group has no nodes that belong to the first partition, the resource group remains offline.
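The script templates perform no actions until they are modified. The following is a minimal sketch of a modified cluster_pre_halt_apps script; the /opt/myapp/bin/myapp-stop command is hypothetical, and a real script must also stop instances of the application that run on the partition's other nodes:

#!/bin/ksh
# Sketch only: stop an application that is not under RGM control
# before RGM-controlled applications are stopped.
# /opt/myapp/bin/myapp-stop is a hypothetical stop command.
/opt/myapp/bin/myapp-stop
exit $?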

Second partition

When run from a node in the second partition, the apply upgrade mode prepares all nodes in the second partition to run the new software. The command then boots the nodes into cluster mode. The nodes in the second partition rejoin the active cluster that was formed by the nodes in the first partition.

If a resource group was offline because its node list contains only members of the second partition, the resource group comes back online.

After all nodes have rejoined the cluster, the command performs final processing, reconfigures quorum devices, and restores quorum vote counts.

begin

Specifies the nodes to assign to the first partition that you upgrade and initiates the dual-partition upgrade process. Run this form of the command from any node of the cluster. Use this upgrade mode after you use the plan upgrade mode to determine the possible partition schemes.

First, the begin upgrade mode records the nodes that are assigned to each partition. Next, all applications are stopped on one node of the first partition, and the upgrade mode shuts down the node. The shutdown initiates switchover of each resource group on the node to a node that belongs to the second partition, provided that the node is in the resource-group node list. If the node list of a resource group contains no nodes that belong to the second partition, the resource group remains offline.

The command then repeats this sequence of actions on each remaining node in the first partition, one node at a time.

The nodes in the second partition remain in operation during the upgrade of the first partition. Quorum devices are temporarily unconfigured and quorum vote counts are temporarily changed on the nodes.

plan

Queries the cluster storage configuration and displays all possible partition schemes that satisfy the shared-storage requirement. Run this form of the command from any node of the cluster. This is the first command that you run in a dual-partition upgrade.

Dual-partition upgrade requires that each shared storage array be physically connected to at least one node in each partition.

The plan upgrade mode can return zero, one, or multiple partition solutions. If no solutions are returned, the cluster configuration is not suitable for dual-partition upgrade. Use the standard upgrade method instead.

For any partition solution, you can choose either partition group to be the first partition that you upgrade.

recover

Recovers the cluster configuration on a node if a fatal error occurs during dual-partition upgrade processing. Run this form of the command on each node of the cluster.

You must shut down the cluster and boot all nodes into noncluster mode before you run this command.

Once a fatal error occurs, you cannot resume or restart a dual-partition upgrade, even after you run the recover upgrade mode.

The recover upgrade mode restores the Cluster Configuration Repository (CCR) database to the original state, before the start of the dual-partition upgrade.

The following list describes the circumstances in which to use the recover upgrade mode and the circumstances in which to take other steps.

  • If the failure occurred during –u begin processing, run the –u recover upgrade mode.

  • If the failure occurred after –u begin processing completed but before the shutdown warning for the second partition was issued, determine where the error occurred:

    • If the failure occurred on a node in the first partition, run the –u recover upgrade mode.

    • If the failure occurred on a node in the second partition, no recovery action is necessary.

  • If the failure occurred after the shutdown warning for the second partition was issued but before –u apply processing started on the second partition, determine where the error occurred:

    • If the failure occurred on a node in the first partition, run the –u recover upgrade mode.

    • If the failure occurred on a node in the second partition, reboot the failed node into noncluster mode.

  • If the failure occurred after –u apply processing was completed on the second partition but before the upgrade completed, determine where the error occurred:

    • If the failure occurred on a node in the first partition, run the –u recover upgrade mode.

    • If the failure occurred on a node in the first partition but the first partition stayed in service, reboot the failed node.

    • If the failure occurred on a node in the second partition, run the –u recover upgrade mode.

In all cases, you can continue the upgrade manually by using the standard upgrade method, which requires the shutdown of all cluster nodes.

status

Displays the status of the dual-partition upgrade. The following are the possible states:

Upgrade is in progress

The scinstall -u begin command has been run but dual-partition upgrade has not completed.

The cluster also reports this status if a fatal error occurred during the dual-partition upgrade. In this case, the state is not cleared even after recovery procedures are performed and the cluster upgrade is completed by using the standard upgrade method.

Upgrade not in progress

Either the scinstall -u begin command has not yet been issued, or the dual-partition upgrade has completed successfully.

Run the status upgrade mode from one node of the cluster. The node can be in either cluster mode or noncluster mode.

The reported state is valid for all nodes of the cluster, regardless of which stage of the dual-partition upgrade the issuing node is in.
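For example, to check progress from one node (the node name and output shown are illustrative):

phys-schost-1# /usr/cluster/bin/scinstall -u status
Upgrade is in progress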

The following option is supported with the dual-partition upgrade mode:

–h nodelist

Specifies a space-delimited list of all nodes that you assign to the first partition. You choose these nodes from the output displayed by the plan upgrade mode, as valid members of a partition in the partition scheme that you use. The remaining nodes in the cluster, which you do not specify to the begin upgrade mode, are assigned to the second partition.

This option is only valid with the begin upgrade mode.

Examples

Establishing a Two-Node Cluster

The following example establishes a typical two-node cluster with Oracle Solaris Cluster software for Oracle Solaris 11 on SPARC based platforms. The example assumes that Oracle Solaris Cluster software packages are already installed on the nodes.

On node1, issue the following command:

node1# /usr/cluster/bin/scinstall -i -F

On node2, issue the following command:

node2# /usr/cluster/bin/scinstall -i -N node1

Establishing a Single-Node Cluster

The following command establishes a single-node cluster with Oracle Solaris Cluster software for Oracle Solaris 11 on SPARC based platforms, with all defaults accepted. The example assumes that Oracle Solaris Cluster software packages are already installed on the node.

# /usr/cluster/bin/scinstall -i -F -o

Adding Install Clients with a Net Image ISO File on an AI Server

The following example sets up an AI install server to install and initialize Oracle Solaris Cluster software for Oracle Solaris 11 on x86 based platforms in a two-node cluster.

On the install server, issue the following commands. Note that the –W option is broken into multiple lines for readability, but should be specified in a single unbroken string.

# /usr/cluster/bin/scinstall -c /export/home/11-ga-ai-x86.iso -h phys-schost-1 \ 
-U /export/pwdfile \ 
-C schost \ 
-F \ 
-W solaris=http://ipkg.us.oracle.com/solaris11/release::\
entire,server_install:::ha-cluster=cluster-repository::\
ha-cluster-framework-full,ha-cluster-data-services-full,
ha-cluster-geo-full \ 
-n ip=10.255.85.163/24,mac=12:34:56:78:90:ab \ 
-T node=phys-schost-1,node=phys-schost-2 \ 
-w netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=62,\
maxprivatenets=10,numvirtualclusters=12,numxipvirtualclusters=3 \ 
-A trtype=dlpi,name=e1000g1 -A trtype=dlpi,name=nxge1 \ 
-B type=switch,name=switch1 -B type=switch,name=switch2 \ 
-m endpoint=:e1000g1,endpoint=switch1 \ 
-m endpoint=:nxge1,endpoint=switch2 

# /usr/cluster/bin/scinstall -c /export/home/11-ga-ai-x86.iso -h phys-schost-2 \ 
-U /export/pwdfile \ 
-C schost \ 
-N phys-schost-1 \ 
-W solaris=http://ipkg.us.oracle.com/solaris11/release::\
entire,server_install:::ha-cluster=cluster-repository::\
ha-cluster-framework-full,ha-cluster-data-services-full,\ 
ha-cluster-geo-full \ 
-n ip=10.255.85.164/24,mac=12:34:56:78:90:ab \ 
-A trtype=dlpi,name=e1000g1 -A trtype=dlpi,name=nxge1 \ 
-m endpoint=:e1000g1,endpoint=switch1 \ 
-m endpoint=:nxge1,endpoint=switch2

Performing a Dual-Partition Upgrade

The following example uses the dual-partition method to upgrade the framework and data service software of a cluster to the next Oracle Solaris Cluster release. This example uses the Oracle Solaris Cluster version for Oracle Solaris 11 on SPARC based platforms. The example queries the cluster for valid partition schemes, assigns nodes to partitions, reboots the node in the first partition, returns the first partition to operation after upgrade, reboots the node in the second partition, and returns the second partition to the cluster after upgrade.

# /usr/cluster/bin/scinstall -u plan
  Option 1
    First partition
      phys-schost-1
    Second partition
      phys-schost-2
…
# /usr/cluster/bin/scinstall -u begin -h phys-schost-1

ok boot -x

(Upgrade the node in the first partition)

phys-schost-1# /usr/cluster/bin/scinstall -u apply
ok boot -x

(Upgrade the node in the second partition)

phys-schost-2# /usr/cluster/bin/scinstall -u apply

Upgrading the Framework and Data Service Software (Standard or Rolling Upgrade)

The following example upgrades the framework and data service software of a cluster to the next Oracle Solaris Cluster release. Perform these operations on each cluster node.


Note -  For a rolling upgrade, perform these operations on one node at a time, after you use the clnode evacuate command to move all resource groups and device groups to the other nodes that will remain in the cluster.

# /usr/cluster/bin/scinstall -u update
# init 6

Restoring the First Node from an Archive File

The following example uses a secure AI installation to restore the first node from an archive file that is saved on a file system that is accessible from the AI server.

# /usr/cluster/bin/scinstall -c archive=file:///net/storagenode/export/archive
     /phys-schost-1-recovery-archive,action=restore \
-h phys-schost-1 \
-C schost \
-F \
-n ip=10.255.85.163/24,mac=12:34:56:78:90:ab \
-T secureAI=yes

Restoring Other Nodes from Archives

The following example uses a secure AI installation to restore the other nodes from archives that were previously created on those nodes.

# /usr/cluster/bin/scinstall -c archive=file:///net/storagenode/export/archive
     /phys-schost-2-recovery-archive,action=restore \
-h phys-schost-2 \
-C schost \
-n ip=10.255.85.164/24,mac=12:34:56:78:90:cd \
-T secureAI=yes

Performing a Non-Secure Replication

The following example performs a non-secure replication.

# /usr/cluster/bin/scinstall -c archive=file:///net/storagenode/export/archive
     /source-node-1-archive,action=replicate \
-h phys-schost-1 \
-C schost \
-F \
-n ip=10.255.85.163/24,mac=12:34:56:78:90:ab \
-T node=phys-schost-1,node=phys-schost-2 \
-U /export/pwdfile
# /usr/cluster/bin/scinstall -c archive=file:///net/pnass3/export/archive
     /vzono1a.clone,action=replicate \
-h phys-schost-2 \
-C schost \
-n ip=10.255.85.164/24,mac=12:34:56:78:90:cd \
-U /export/pwdfile

Adding Install Clients with IPS Repositories on an AI Server

The following example uses a secure AI installation to install and configure a two-node x86 cluster from IPS repositories.

# /usr/cluster/bin/scinstall -c solaris=http://ipkg.us.oracle.com/solaris11
     /release::arch=i386 -h phys-schost-1 \
-C schost \
-F \
-W solaris=http://ipkg.us.oracle.com/solaris11/release::entire,server_install:::
     ha-cluster=http://ipkg.us.oracle.com/ha-cluster/release::ha-cluster-framework-full \
-n ip=10.255.85.163/24,mac=12:34:56:78:90:ab \
-T node=phys-schost-1,node=phys-schost-2,secureAI=yes \
-w netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=32,maxprivatenets=10,
     numvirtualclusters=12,numxipvirtualclusters=3 \
-A trtype=dlpi,name=net1 -A trtype=dlpi,name=net3 \
-B type=switch,name=switch1 -B type=switch,name=switch2 \
-m endpoint=:net1,endpoint=switch1 \
-m endpoint=:net3,endpoint=switch2 \
-P task=quorum,state=INIT -P task=security,state=SECURE \
-U /export/pwdfile
# /usr/cluster/bin/scinstall -c solaris=http://ipkg.us.oracle.com/solaris11
     /release::arch=i386 -h phys-schost-2 \
-C schost \
-N phys-schost-1 \
-W solaris=http://ipkg.us.oracle.com/solaris11/release::entire,server_install:::
     ha-cluster=http://ipkg.us.oracle.com/ha-cluster/release::ha-cluster-framework-full \
-n ip=10.255.85.164/24,mac=12:34:56:78:90:ab \
-A trtype=dlpi,name=net1 -A trtype=dlpi,name=net3 \
-m endpoint=:net1,endpoint=switch1 \
-m endpoint=:net3,endpoint=switch2 \
-U /export/pwdfile

Exit Status

The following exit values are returned:

0

Successful completion.

non-zero

An error occurred.

Files

/etc/cluster/ql/cluster_post_halt_apps

/etc/cluster/ql/cluster_pre_halt_apps

Attributes

See attributes(5) for descriptions of the following attributes:

ATTRIBUTE TYPE         ATTRIBUTE VALUE
Availability           ha-cluster/system/install
Interface Stability    Evolving

See Also

Intro(1CL), claccess(1CL), clauth(1CL), clinterconnect(1CL), clnode(1CL), clsetup(1CL), cluster(1CL), newfs(1M), scversions(1M), netmasks(4), networks(4), lofi(7D)

Oracle Solaris Cluster 4.3 Software Installation Guide, Oracle Solaris Cluster 4.3 System Administration Guide, Oracle Solaris Cluster 4.3 Upgrade Guide