scinstall - initialize Oracle Solaris Cluster software and establish new cluster nodes

Synopsis

media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall

media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall
-i [-k] [-s srvc[,…]] [-F [-C clustername] [-T authentication-options]
[-G {lofi | special | mount-point}] [-o] [-A adapter-options]
[-B switch-options] [-m cable-options] [-w netaddr-options]]

media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall
-i [-k] [-s srvc[,…]] [-N cluster-member [-C clustername]
[-G {lofi | special | mount-point}] [-A adapter-options]
[-B switch-options] [-m cable-options]]

media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall
-a install-dir [-d dvdimage-dir]

media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall
-c jumpstart-dir -h nodename [-d dvdimage-dir] [-s srvc[,…]]
[-F [-C clustername] [-G {lofi | special | mount-point}]
[-T authentication-options] [-A adapter-options] [-B switch-options]
[-m cable-options] [-w netaddr-options]]

media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall
-c jumpstart-dir -h nodename [-d dvdimage-dir] [-s srvc[,…]]
[-N cluster-member [-C clustername] [-G {lofi | special | mount-point}]
[-A adapter-options] [-B switch-options] [-m cable-options]]

media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall
-u upgrade-mode

/usr/cluster/bin/scinstall -u upgrade-options

/usr/cluster/bin/scinstall -r [-N cluster-member] [-G mount-point]

scinstall -p [-v]

Description


Note - Beginning with the Sun Cluster 3.2 release, Oracle Solaris Cluster software includes an object-oriented command set. Although Oracle Solaris Cluster software still supports the original command set, Oracle Solaris Cluster procedural documentation uses only the object-oriented command set. For more information about the object-oriented command set, see the Intro(1CL) man page.


The scinstall command performs a number of Oracle Solaris Cluster node creation and upgrade tasks, as follows.

Without options, the scinstall command attempts to run in interactive mode.

Run all forms of the scinstall command other than the “print release” form (-p) as superuser.

The scinstall command is located in the Tools directory on the Oracle Solaris Cluster installation media. If the Oracle Solaris Cluster installation media has been copied to a local disk, media-mnt-pt is the path to the copied Oracle Solaris Cluster media image. The SUNWsczu software package also includes a copy of the scinstall command.

Except for the -p option, you can run this command only from the global zone.

Options

Basic Options

The following options direct the basic form and function of the command.

None of the following options can be combined on the same command line.

-a

Specifies the “set up install server” form of the scinstall command. This option is used to create an install-dir on any Solaris machine from which the command is run and then make a copy of the Oracle Solaris Cluster media in that directory.

You can use this option only in the global zone.

If the install-dir already exists, the scinstall command returns an error message. Typically, the target directory is created on an NFS server which has also been set up as a Solaris install server (see the setup_install_server(1M) man page).

-c

Specifies the “add install client” form of the scinstall command. This option establishes the specified nodename as a custom JumpStart client in the jumpstart-dir on the machine from which you issued the command.

You can use this option only in the global zone.

Typically, the jumpstart-dir is located on an already-established Solaris install server that is configured to JumpStart the nodename install client (see the add_install_client(1M) man page).

This form of the command enables fully-automated cluster installation from a JumpStart server by helping to establish each cluster node, or nodename, as a custom JumpStart client on an already-established Solaris JumpStart server. The command makes all necessary updates to the rules file in the specified jumpstart-dir. In addition, special JumpStart class files and finish scripts that support cluster initialization are added to the jumpstart-dir, if they are not already installed. Configuration data that is used by the Oracle Solaris Cluster-supplied finish script is established for each node that you set up by using this method.

You can customize the Solaris class file that the -c form of the scinstall command installs by editing the file directly. However, always ensure that the Solaris class file defines an acceptable Solaris installation for an Oracle Solaris Cluster node. Otherwise, the installation might need to be restarted.

Both the class file and finish script that are installed by this form of the command are located in the following directory:

jumpstart-dir/autoscinstall.d/3.1

The class file is installed as autoscinstall.class, and the finish script is installed as autoscinstall.finish.

For each cluster nodename that you set up with the -c option as an automated Oracle Solaris Cluster JumpStart install client, this form of the command sets up a configuration directory as the following:

jumpstart-dir/autoscinstall.d/nodes/nodename

Options for specifying Oracle Solaris Cluster node installation and initialization are saved in files that are located in these directories. Never edit these files directly.

You can customize the JumpStart configuration in the following ways:

  • You can add a user-written finish script as the following file name:

    jumpstart-dir/autoscinstall.d/nodes/nodename/finish

    The scinstall command runs the user-written finish scripts after it runs the finish script supplied with the product.

  • If the directory

    jumpstart-dir/autoscinstall.d/nodes/nodename/archive

    exists, the scinstall command copies all files in that directory to the new installation. In addition, if an etc/inet/hosts file exists in that directory, scinstall uses the hosts information found in that file to supply name-to-address mappings when a name service (NIS/NIS+/DNS) is not used.

  • If the directory

    jumpstart-dir/autoscinstall.d/nodes/nodename/patches

    exists, the scinstall command installs all files in that directory by using the patchadd(1M) command. This directory is intended for Solaris software patches and any other patches that must be installed before Oracle Solaris Cluster software is installed.

You can create these files and directories individually or as links to other files or directories that exist under jumpstart-dir.
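For example, the following hypothetical commands (they assume a jumpstart-dir of /export/jumpstart and an install client named node1, as in the Examples section; the finish-script and patch file names are placeholders) supply a user-written finish script and pre-installation patches for that client:

# mkdir -p /export/jumpstart/autoscinstall.d/nodes/node1/patches
# cp /var/tmp/myfinish /export/jumpstart/autoscinstall.d/nodes/node1/finish
# chmod u+x /export/jumpstart/autoscinstall.d/nodes/node1/finish
# cp /var/tmp/patches/* /export/jumpstart/autoscinstall.d/nodes/node1/patches/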

See the add_install_client(1M) man page and related JumpStart documentation for more information about how to set up custom JumpStart install clients.

Run this form of the command from the install-dir (see the -a form of scinstall) on the JumpStart server that you use to initialize the cluster nodes.

Before you use the scinstall command to set up a node as a custom Oracle Solaris Cluster JumpStart client, you must first establish each node as a Solaris install client. The JumpStart directory that you specify with the -c option to the add_install_client command should be the same directory that you specify with the -c option to the scinstall command. However, the scinstall jumpstart-dir does not have a server component to it, since you must run the scinstall command from a Solaris JumpStart server.

To remove a node as a custom Oracle Solaris Cluster JumpStart client, simply remove it from the rules file.

-i

Specifies the “initialize” form of the scinstall command. This form of the command establishes a node as a new cluster member. The new node is the node from which you issue the scinstall command.

You can use this option only in the global zone.

If the -F option is used with -i, scinstall establishes the node as the first node in a new cluster.

If the -o option is used with the -F option, scinstall establishes a single-node cluster.

If the -N option is used with -i, scinstall adds the node to an already-existing cluster.

If the -s option is used and the node is an already-established cluster member, only the specified srvc (data service) is installed.

-p

Prints release and package versioning information for the Oracle Solaris Cluster software that is installed on the node from which the command is run. This is the only form of scinstall that you can run as a non-superuser.

You can use this option in the global zone or in a non-global zone. For ease of administration, use this form of the command in the global zone.

-r

Removes cluster configuration information and uninstalls Oracle Solaris Cluster framework and data-service software from a cluster node. You can then reinstall the node or remove the node from the cluster. You must run the command on the node that you uninstall, from a directory that is not used by the cluster software. The node must be in noncluster mode.

If you used the installer utility to install Oracle Solaris Cluster software packages, you must also run the /var/sadm/prod/SUNWentsysver/uninstall utility to remove the record of the Oracle Solaris Cluster software installation from the product registry. The installer utility does not permit reinstallation of software packages that its product registry still records as an installed product. For more information about using the uninstall program, see Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX.

You can use this option only in the global zone.
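For example, a hypothetical removal sequence (it assumes the node has already been booted into noncluster mode, as in the Uninstalling a Node example, and that ver in the SUNWentsysver directory name corresponds to the installed Java ES version) might look like the following:

# cd /
# /usr/cluster/bin/scinstall -r
# /var/sadm/prod/SUNWentsysver/uninstall

The uninstall utility might prompt for the components whose records you want to remove; see the referenced installation guide for details.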

-u upgrade-mode

Upgrades Oracle Solaris Cluster software on the node from which you invoke the scinstall command. The upgrade form of scinstall has multiple modes of operation, as specified by upgrade-mode. See Upgrade Options below for information specific to the type of upgrade that you intend to perform.

You can use this option only in the global zone.

Additional Options

You can combine additional options with the basic options to modify the default behavior of each form of the command. Refer to the SYNOPSIS section for additional details about which of these options are legal with which forms of the scinstall command.

The following additional options are supported:

-d dvdimage-dir

Specifies an alternate directory location for finding the media images of the Oracle Solaris Cluster product and unbundled Oracle Solaris Cluster data services.

If the -d option is not specified, the default directory is the media image from which the current instance of the scinstall command is started.

-h nodename

Specifies the node name. The -h option is only legal with the “add install client” (-c) form of the command.

The nodename is the name of the cluster node (that is, JumpStart install client) to set up for custom JumpStart installation.

-k

Specifies that scinstall will not install Oracle Solaris Cluster software packages. The -k option is only legal with the “initialize” (-i) form of the command.

In Sun Cluster 3.0 and 3.1 software, if this option was not specified, the default behavior was to install any Oracle Solaris Cluster packages that were not already installed. Beginning with the Sun Cluster 3.2 release, this option is unnecessary. It is provided only for backwards compatibility with user scripts that use this option.

-s srvc[,…]

Specifies a data service. The -s option is only legal with the “initialize” (-i), “upgrade” (-u), or “add install client” (-c) forms of the command to install or upgrade the specified srvc (data service package).

If a data service package cannot be located, a warning message is printed, but installation otherwise continues to completion.

-v

Prints release information in verbose mode. The -v option is only legal with the “print release” (-p) form of the command to specify verbose mode.

In the verbose mode of “print release,” the version string for each installed Oracle Solaris Cluster software package is also printed.

-F [config-options]

Establishes the first node in the cluster. The -F option is only legal with the “initialize” (-i), “upgrade” (-u), or “add install client” (-c) forms of the command.

The establishment of secondary nodes will be blocked until the first node is fully instantiated as a cluster member and is prepared to perform all necessary tasks that are associated with adding new cluster nodes. If the -F option is used with the -o option, a single-node cluster is created and no additional nodes can be added during the cluster-creation process.

-N cluster-member [config-options]

Specifies the cluster member. The -N option is only legal with the “initialize” (-i), “add install client” (-c), “remove” (-r), or “upgrade” (-u) forms of the command.

  • When used with the -i, -c, or -u option, the -N option is used to add additional nodes to an existing cluster. The specified cluster-member is typically the name of the first cluster node that is established for the cluster. However, the cluster-member can be the name of any cluster node that already participates as a cluster member. The node that is being initialized is added to the cluster of which cluster-member is already an active member. The process of adding a new node to an existing cluster involves updating the configuration data on the specified cluster-member, as well as creating a copy of the configuration database onto the local file system of the new node.

  • When used with the -r option, the -N option specifies the cluster-member, which can be any other node in the cluster that is an active cluster member. The scinstall command contacts the specified cluster-member to make updates to the cluster configuration. If the -N option is not specified, scinstall makes a best attempt to find an existing node to contact.

Configuration Options

The config-options that can be used with the -F option or the -N cluster-member option are as follows.

media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall
{-i | -c jumpstart-dir -h nodename}
[-F
   [-C clustername]
   [-G {lofi | special | mount-point} ]
   [-T authentication-options]
   [-A adapter-options]
   [-B switch-options]
   [-m endpoint=[this-node]:name[@port],endpoint=[node:]name[@port] ]
   [-o]
   [-w netaddr-options]
]

media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall
{-i | -c jumpstart-dir -h nodename}
[-N cluster-member
   [-C clustername]
   [-G {lofi | special | mount-point} ]
   [-A adapter-options]
   [-B switch-options]
   [-m endpoint=cable-options]
]
-m cable-options

Specifies the cluster interconnect connections. This option is only legal when the -F or -N option is also specified.

The -m option helps to establish the cluster interconnect topology by configuring the cables connecting the various ports found on the cluster transport adapters and switches. Each new cable configured with this form of the command establishes a connection from a cluster transport adapter on the current node to either a port on a cluster transport switch or an adapter on another node already in the cluster.

If you specify no -m options, the scinstall command attempts to configure a default cable: a cable from the singly-configured transport adapter to the singly-configured (or default) transport switch. However, if you configure more than one transport adapter or switch with a given instance of scinstall, scinstall cannot construct a default cable.

The -m cable-options are as follows.

-m endpoint=[this-node]:name[@port],endpoint=[node:]name[@port]

The syntax for the -m option demonstrates that at least one of the two endpoints must be an adapter on the node that is being configured. For that endpoint, it is not required to specify this-node explicitly. The following is an example of adding a cable:

-m endpoint=:hme1,endpoint=switch1

In this example, port 0 of the hme1 transport adapter on this node, the node that scinstall is configuring, is cabled to a port on transport switch switch1. The port number that is used on switch1 defaults to the node ID number of this node.

You must always specify two endpoint options with each occurrence of the -m option. The name component of the option argument specifies the name of either a cluster transport adapter or a cluster transport switch at one of the endpoints of a cable.

  • If you specify the node component, the name is the name of a transport adapter.

  • If you do not specify the node component, the name is the name of a transport switch.

If you specify no port component, the scinstall command attempts to assume a default port name. The default port for an adapter is always 0. The default port name for a switch endpoint is equal to the node ID of the node being added to the cluster.

Refer to the individual cluster transport adapter and cluster transport switch man pages for more information regarding port assignments and other requirements. The man pages for cluster transport adapters use the naming convention scconf_transp_adap_adapter(1M). The man pages for cluster transport switches use the naming convention scconf_transp_jct_switch(1M).

Before you can configure a cable, you must first configure the adapters and/or switches at each of the two endpoints of the cable (see -A and -B).
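For example, the following hypothetical fragment of a first-node command line (the cluster name, the adapter names qfe1 and qfe2, and the switch names switch1 and switch2 are illustrative assumptions) configures two adapters, two switches, and the two cables that join them:

-F -C mycluster \
   -A trtype=dlpi,name=qfe1 -A trtype=dlpi,name=qfe2 \
   -B switch1 -B switch2 \
   -m endpoint=:qfe1,endpoint=switch1 \
   -m endpoint=:qfe2,endpoint=switch2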

-o

Specifies the configuration of a single-node cluster. This option is only legal when the -i and -F options are also specified.

Other -F options are supported but are not required. If the cluster name is not specified, the name of the node is used as the cluster name. You can specify transport configuration options, which will be stored in the CCR. The -G option is only required if a dedicated global-devices file system, such as /globaldevices, is specified. Once a single-node cluster is established, it is not necessary to configure a quorum device or to disable installmode.

-w netaddr-options

Specifies the network address for the private interconnect, or cluster transport. This option is only legal when the -F option is also specified.

Use this option to specify a private-network address for use on the private interconnect. You can use this option when the default private-network address collides with an address that is already in use within the enterprise. You can also use this option to customize the size of the IP address range that is reserved for use by the private interconnect. For more information, see the networks(4) and netmasks(4) man pages.

If not specified, the default network address for the private interconnect is 172.16.0.0. The default netmask is 255.255.240.0. This IP address range supports up to 62 nodes, 10 private networks, and 12 zone clusters.

The -w netaddr-options are as follows:

-w netaddr=netaddr[,netmask=netmask]
-w netaddr=netaddr[,maxnodes=nodes,maxprivatenets=maxprivnets,numvirtualclusters=zoneclusters]
-w netaddr=netaddr[,netmask=netmask,maxnodes=nodes,maxprivatenets=maxprivnets,numvirtualclusters=zoneclusters]
netaddr=netaddr

Specifies the private network address. The last two octets of this address must always be zero.

[netmask=netmask]

Specifies the netmask. The specified value must provide an IP address range that is greater than or equal to the default.

To assign a smaller IP address range than the default, specify the maxnodes, maxprivatenets, and numvirtualclusters operands.

[,maxnodes=nodes,maxprivatenets=maxprivnets,numvirtualclusters=zoneclusters]

Specifies the maximum number of nodes, private networks, and zone clusters that the cluster is ever expected to have. The command uses these values to calculate the minimum netmask that the private interconnect requires to support the specified number of nodes, private networks, and zone clusters. The maximum value for nodes is 62 and the minimum value is 2. The maximum value for maxprivnets is 128 and the minimum value is 2. You can set a value of 0 for zoneclusters.

[,netmask=netmask,maxnodes=nodes,maxprivatenets=maxprivnets,numvirtualclusters=zoneclusters]

Specifies the netmask and the maximum number of nodes, private networks, and zone clusters that the cluster is ever expected to have. You must specify a netmask that can sufficiently accommodate the specified nodes, maxprivnets, and zoneclusters values. The maximum value for nodes is 62 and the minimum value is 2. The maximum value for maxprivnets is 128 and the minimum value is 2. You can set a value of 0 for zoneclusters.

If you specify only the netaddr suboption, the command assigns the default netmask of 255.255.240.0. The resulting IP address range accommodates up to 62 nodes, 10 private networks, and 12 zone clusters.

To change the private-network address or netmask after the cluster is established, use the cluster command or the clsetup utility.
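For example, the following hypothetical option (the values are illustrative only) keeps the default private-network address but reserves a range sized for at most 16 nodes, 4 private networks, and 2 zone clusters:

-w netaddr=172.16.0.0,maxnodes=16,maxprivatenets=4,numvirtualclusters=2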

-A adapter-options

Specifies the transport adapter and, optionally, its transport type. This option is only legal when the -F or -N option is also specified.

Each occurrence of the -A option configures a cluster transport adapter that is attached to the node from which you run the scinstall command.

If no -A options are specified, an attempt is made to use a default adapter and transport type. The default transport type is dlpi. On the SPARC platform, the default adapter is hme1.

When the adapter transport type is dlpi, you do not need to specify the trtype suboption. In this case, you can use either of the following two forms to specify the -A adapter-options:

-A [trtype=type,]name=adaptername[,vlanid=vlanid][,other-options]
-A adaptername
[trtype=type]

Specifies the transport type of the adapter. Use the trtype option with each occurrence of the -A option for which you want to specify the transport type of the adapter. An example of a transport type is dlpi (see the sctransp_dlpi(7p) man page).

The default transport type is dlpi.

name=adaptername

Specifies the adapter name. You must use the name suboption with each occurrence of the -A option to specify the adaptername. An adaptername is constructed from a device name that is immediately followed by a physical-unit number, for example, hme0.

If you specify no other suboptions with the -A option, you can specify the adaptername as a standalone argument to the -A option, as -A adaptername.

vlanid=vlanid

Specifies the VLAN ID of the tagged-VLAN adapter.

[other-options]

Specifies additional adapter options. When a particular adapter provides any other options, you can specify them by using the -A option. Refer to the individual Oracle Solaris Cluster man page for the cluster transport adapter for information about any special options that you might use with the adapter.
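For example, either of the following hypothetical specifications (the adapter name ce0 and VLAN ID 2 are assumptions) configures a dlpi transport adapter; the second form identifies it as a tagged-VLAN adapter:

-A trtype=dlpi,name=ce0
-A name=ce0,vlanid=2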

-B switch-options

Specifies the transport switch, also called transport junction. This option is only legal when the -F or -N option is also specified.

Each occurrence of the -B option configures a cluster transport switch. Examples of such devices can include, but are not limited to, Ethernet switches, other switches of various types, and rings.

If you specify no -B options, scinstall attempts to add a default switch at the time that the first node is instantiated as a cluster node. When you add additional nodes to the cluster, no additional switches are added by default. However, you can add them explicitly. The default switch is named switch1, and it is of type switch.

When the switch is of type switch, you do not need to specify the type suboption. In this case, you can use either of the following two forms to specify the -B switch-options:

-B [type=type,]name=name[,other-options]
-B name

If a cluster transport switch is already configured for the specified switch name, scinstall prints a message and ignores the -B option.

If you use directly-cabled transport adapters, you are not required to configure any transport switches. To avoid configuring default transport switches, use the following special -B option:

-B type=direct
[type=type]

Specifies the transport switch type. You can use the type option with each occurrence of the -B option. Ethernet switches are an example of a cluster transport switch of type switch. See the individual Oracle Solaris Cluster man page for the cluster transport switch for more information.

You can specify the type suboption as direct to suppress the configuration of any default switches. Switches do not exist in a transport configuration that consists of only directly connected transport adapters. When the type suboption is set to direct, you do not need to use the name suboption.

name=name

Specifies the transport switch name. Unless the type is direct, you must use the name suboption with each occurrence of the -B option to specify the transport switch name. The name can be up to 256 characters in length and is made up of either letters or digits, with the first character being a letter. Each transport switch name must be unique across the namespace of the cluster.

If no other suboptions are needed with -B, you can give the switch name as a standalone argument to -B (that is, -B name).

[other-options]

Specifies additional transport switch options. When a particular switch type provides other options, you can specify them with the -B option. Refer to the individual Oracle Solaris Cluster man page for the cluster transport switch for information about any special options that you might use with the switches.
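For example, the following hypothetical options use the standalone name form described above to configure two Ethernet switches of the default type switch for a redundant interconnect:

-B switch1 -B switch2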

-C clustername

Specifies the name of the cluster. This option is only legal when the -F or -N option is also specified.

  • If the node that you configure is the first node in a new cluster, the default clustername is the same as the name of the node that you are configuring.

  • If the node that you configure is being added to an already-existing cluster, the default clustername is the name of the cluster to which cluster-member already belongs.

It is an error to specify a clustername that is not the name of the cluster to which cluster-member belongs.

-G {lofi | special | mount-point}

Specifies a lofi file, a raw special disk device, or a dedicated file system for the global-devices mount point. This option is only legal when the -F, -N, or -r option is also specified.

  • When used with the -F or -N option, the -G option specifies one of the following locations on which to create the global-devices namespace:

    lofi

    Specifies a lofi device on which to create the global-devices namespace. For more information about lofi devices, see the lofi(7D) man page.

    special

    Specifies the name of the raw special disk device to use in place of the /globaldevices mount point.

    mount-point

    Specifies the name of the file system mount-point to use in place of the /globaldevices mount point.

    Each cluster node must have a local file system that is mounted globally on /global/.devices/node@nodeID before the node can successfully participate as a cluster member. By default, the scinstall command looks for an empty file system that is mounted on /globaldevices or on the mount point that is specified to the -G option. If such a file system is provided, the scinstall command makes the necessary changes to the /etc/vfstab file. Because the node ID is not known until the scinstall command is run, scinstall attempts to add the necessary entry to the vfstab file when it does not find a /global/.devices/node@nodeID mount. If a dedicated partition exists for the global-devices namespace, these changes to the vfstab file create a new /global/.devices/node@nodeID mount point and remove the default /globaldevices mount point (a hypothetical vfstab entry is sketched after this list).

    If /global/.devices/node@nodeID is not mounted and an empty /globaldevices file system is not provided, the command fails.

    If -G lofi is specified, a /.globaldevices file is created. A lofi device is associated with that file, and the global-devices file system is created on the lofi device. No /global/.devices/node@nodeID entry is added to the /etc/vfstab file.

    If a raw special disk device name is specified and /global/.devices/node@nodeID is not mounted, a file system is created on the device by using the newfs command. It is an error to supply the name of a device with an already-mounted file system.

    As a guideline, a dedicated file system should be at least 512 Mbytes in size. If this partition or file system is not available, or is not large enough, it might be necessary to reinstall the Oracle Solaris operating environment.

    For a namespace that is created on a lofi device, 100 MBytes of free space is needed in the root file system.

  • When used with the -r option, if the global-devices namespace is mounted on a dedicated partition, the -G mount-point option specifies the new mount-point name to use to restore the former /global/.devices mount point. If the -G option is not specified and the global-devices namespace is mounted on a dedicated partition, the mount point is renamed /globaldevices by default.
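The following hypothetical /etc/vfstab entry (the device names and the node ID are illustrative assumptions, and the exact fields can differ on your system) sketches the kind of globally mounted entry that results when a dedicated partition is used for the global-devices namespace:

/dev/dsk/c0t0d0s3 /dev/rdsk/c0t0d0s3 /global/.devices/node@2 ufs 2 no global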

-T authentication-options

Specifies node-authentication options for the cluster. This option is only legal when the -F option is also specified.

Use this option to establish authentication policies for nodes that attempt to add themselves to the cluster configuration. Specifically, when a machine requests that it be added to the cluster as a cluster node, a check is made to determine whether or not the node has permission to join. If the joining node has permission, it is authenticated and allowed to join the cluster.

You can only use the -T option with the scinstall command when you set up the very first node in the cluster. If the authentication list or policy needs to be changed on an already-established cluster, use the scconf command.

The default is to allow any machine to add itself to the cluster.

The -T authentication-options are as follows.

-T node=nodename[,…][,authtype=authtype]
node=nodename[,…]

Specifies node names to add to the node authentication list. You must specify at least one node suboption to the -T option. This option is used to add node names to the list of nodes that are able to configure themselves as nodes in the cluster. If the authentication list is empty, any node can request that it be added to the cluster configuration. However, if the list has at least one name in it, all such requests are authenticated by using the authentication list. You can modify or clear this list of nodes at any time by using the scconf command or the clsetup utility from one of the active cluster nodes.

[authtype=authtype]

Specifies the type of node authentication. The only currently supported authtypes are des and sys (or unix). If no authtype is specified, sys is the default.

If you specify des (Diffie-Hellman) authentication, first add entries to the publickey(4) database for each cluster node to be added, before you run the -T option to the scinstall command.

You can change the authentication type at any time by using the scconf command or the clsetup utility from one of the active cluster nodes.
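For example, the following hypothetical option (the node names are assumptions) limits cluster-joining permission to two named nodes and uses the default sys authentication type:

-T node=phys-schost-1,node=phys-schost-2,authtype=sys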

Upgrade Options

The -u upgrade-modes and the upgrade-options for standard (nonrolling) upgrade, rolling upgrade, live upgrade, and dual-partition upgrade are as follows.

Standard (Nonrolling), Rolling, and Live Upgrade

Use the -u update mode to upgrade a cluster node to a later Oracle Solaris Cluster software release in standard (nonrolling), rolling, or live upgrade mode.

The upgrade-options to -u update for standard and rolling mode are as follows.

media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall -u update
  [-s {srvc[,…] | all}] [-d dvdimage-dir] [ -O ]
    [-S {interact | testaddr=testipaddr@adapter[,testaddr=…]} ]

For live upgrade mode, also use the -R BE-mount-point option to specify the inactive boot environment. The upgrade-options to -u update for live upgrade mode are as follows.

media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall -u update
  -R BE-mount-point [-s {srvc[,…] | all}] [-d dvdimage-dir] [ -O ]
    [-S {interact | testaddr=testipaddr@adapter[,testaddr=…]} ]
-R BE-mount-point

Specifies the root for an inactive boot environment. This is the mount point that is specified to the lumount command. This option is required if you are performing a live upgrade.

-s {srvc[,…] | all}

Upgrades data services. If the -s option is not specified, only cluster framework software is upgraded. If the -s option is specified, only the specified data services are upgraded.

The -s option is not compatible with the -S test IP address option.

The following suboptions to the -s option are specific to the update mode of upgrade:

all

Upgrades all data services.

This suboption to -s is only legal with the update mode.

This suboption upgrades all data services currently installed on the node, except those data services for which an update version does not exist in the update release.

srvc

Specifies the upgrade name of an individual data service.

The value of srvc for a data service can be derived from the CLUSTER entry of the .clustertoc file for that data service. The .clustertoc file is located in the media-mnt-pt/components/srvc/Solaris_ver/Packages/ directory of the data service software. The CLUSTER entry takes the form SUNWC_DS_srvc. For example, the value of the CLUSTER entry for the Oracle Solaris Cluster HA for NFS data service is SUNWC_DS_nfs. To upgrade only the Oracle Solaris Cluster HA for NFS data service, you issue the command scinstall -u update -s nfs, where nfs is the upgrade name of the data service.
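For example, assuming the keyword=value layout that the clustertoc(4) man page describes, a command such as the following (with srvc and Solaris_ver replaced by the actual directory names on the data service media) displays the CLUSTER entry from which the upgrade name is derived:

# grep '^CLUSTER=' media-mnt-pt/components/srvc/Solaris_ver/Packages/.clustertoc

For the Oracle Solaris Cluster HA for NFS data service, the entry displayed would be CLUSTER=SUNWC_DS_nfs, which yields the upgrade name nfs.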

-O

Overrides the hardware validation and bypasses the version-compatibility checks.

-S {interact | testaddr=testipaddr@adapter[,testaddr=…]}

In Sun Cluster 3.1 software, this option was specified to convert NAFO groups used in Sun Cluster 3.0 to IPMP groups used beginning in Sun Cluster 3.1. This option is unnecessary for versions that do not support upgrade from Sun Cluster 3.0. It is provided only for backwards compatibility with user scripts that use this option.

Specifies test IP addresses. This option allows the user either to direct the command to prompt the user for the required IP network multipathing (IPMP) addresses or to supply a set of IPMP test addresses on the command line for the conversion of NAFO to IPMP groups. See Chapter 27, Introducing IPMP (Overview), in Oracle Solaris Administration: IP Services for additional information about IPMP.

It is illegal to combine both the interact and the testaddr suboptions on the same command line.


Note - The -S option is only required when one or more of the NAFO adapters in pnmconfig is not already converted to use IPMP.


The suboptions of the -S option are the following:

interact

Prompts the user to supply one or more IPMP test addresses individually.

testaddr=testipaddr@adapter

Directly specifies one or more IPMP test addresses.

testipaddr

The IP address or hostname, in the /etc/inet/hosts file, that will be assigned as the routable, no-failover, deprecated test IP address to the adapter. IPMP uses test addresses to detect failures and repairs. See IPMP Addressing in Oracle Solaris Administration: IP Services for additional information about configuring test IP addresses.

adapter

The name of the NAFO network adapter to add to an IPMP group.
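For example, the following hypothetical option (the host names and adapter names are assumptions, and the test addresses must resolve through the /etc/inet/hosts file) supplies test addresses for two NAFO adapters:

-S testaddr=phys-schost-1-qfe0@qfe0,testaddr=phys-schost-1-qfe1@qfe1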

Dual-Partition Upgrade

Use the -u upgrade-modes and upgrade-options for dual-partition upgrade to perform the multiple stages of a dual-partition upgrade. The dual-partition upgrade process first involves assigning cluster nodes into two groups, or partitions. Next, you upgrade one partition while the other partition provides cluster services. You then switch services to the upgraded partition, upgrade the remaining partition, and rejoin the upgraded nodes of the second partition to the cluster formed by the upgraded first partition. The upgrade-modes for dual-partition upgrade also include a mode for recovery after a failure during a dual-partition upgrade.

Dual-partition upgrade modes are used in conjunction with the -u update upgrade mode. See the upgrade chapter of the Oracle Solaris Cluster Software Installation Guide for more information.

The upgrade-modes and upgrade-options to -u for dual-partition upgrade are as follows:

media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall \
 -u begin -h nodelist
media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall -u plan
media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall -u recover
media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/scinstall -u status
/usr/cluster/bin/scinstall -u apply
/usr/cluster/bin/scinstall -u status
apply

Specifies that upgrade of a partition is completed. Run this form of the command from any node in the upgraded partition, after all nodes in that partition are upgraded.

The apply upgrade mode performs the following tasks:

First partition

When run from a node in the first partition, the apply upgrade mode prepares all nodes in the first partition to run the new software.

When the nodes in the first partition are ready to support cluster services, the command remotely executes the scripts /etc/cluster/ql/cluster_pre_halt_apps and /etc/cluster/ql/cluster_post_halt_apps that are on the nodes in the second partition. These scripts are used to call user-written scripts that stop applications that are not under Resource Group Manager (RGM) control, such as Oracle Real Application Clusters (Oracle RAC).

  • The cluster_pre_halt_apps script is run before applications that are under RGM control are stopped.

  • The cluster_post_halt_apps script is run after applications that are under RGM control are stopped, but before the node is halted.


Note - Before you run the apply upgrade mode, modify the script templates as needed to call other scripts that you write to stop certain applications on the node. Place the modified scripts and the user-written scripts that they call on each node in the first partition. These scripts are run from one arbitrary node in the first partition. To stop applications that are running on more than one node in the first partition, modify the user-written scripts accordingly. The unmodified scripts perform no default actions.


After all applications on the second partition are stopped, the command halts the nodes in the second partition. The shutdown initiates the switchover of applications and data services to the nodes in the first partition. Then the command boots the nodes in the second partition into cluster mode.

If a resource group was offline because its node list contains only members of the first partition, the resource group comes back online. If the node list of a resource group has no nodes that belong to the first partition, the resource group remains offline.

Second partition

When run from a node in the second partition, the apply upgrade mode prepares all nodes in the second partition to run the new software. The command then boots the nodes into cluster mode. The nodes in the second partition rejoin the active cluster that was formed by the nodes in the first partition.

If a resource group was offline because its node list contains only members of the second partition, the resource group comes back online.

After all nodes have rejoined the cluster, the command performs final processing, reconfigures quorum devices, and restores quorum vote counts.

begin

Specifies the nodes to assign to the first partition that you upgrade and initiates the dual-partition upgrade process. Run this form of the command from any node of the cluster. Use this upgrade mode after you use the plan upgrade mode to determine the possible partition schemes.

First, the begin upgrade mode records the nodes to assign to each partition. Next, all applications are stopped on one node, and then the upgrade mode shuts down the node. The shutdown initiates switchover of each resource group on the node to a node that belongs to the second partition, provided that the node is in the resource-group node list. If the node list of a resource group contains no nodes that belong to the second partition, the resource group remains offline.

The command then repeats this sequence of actions on each remaining node in the first partition, one node at a time.

The nodes in the second partition remain in operation during the upgrade of the first partition. Quorum devices are temporarily unconfigured and quorum vote counts are temporarily changed on the nodes.

plan

Queries the cluster storage configuration and displays all possible partition schemes that satisfy the shared-storage requirement. Run this form of the command from any node of the cluster. This is the first command that you run in a dual-partition upgrade.

Dual-partition upgrade requires that each shared storage array be physically accessible from at least one node in each partition.

The plan upgrade mode can return zero, one, or multiple partition solutions. If no solutions are returned, the cluster configuration is not suitable for dual-partition upgrade. Instead, use the standard upgrade method.

For any partition solution, you can choose either partition group to be the first partition that you upgrade.

recover

Recovers the cluster configuration on a node if a fatal error occurs during dual-partition upgrade processing. Run this form of the command on each node of the cluster.

You must shut down the cluster and boot all nodes into noncluster mode before you run this command.

Once a fatal error occurs, you cannot resume or restart a dual-partition upgrade, even after you run the recover upgrade mode.

The recover upgrade mode restores the /etc/vfstab file, if applicable, and the Cluster Configuration Repository (CCR) database to their original state, before the start of the dual-partition upgrade.

The following list describes in which circumstances to use the recover upgrade mode and in which circumstances to take other steps.

  • If the failure occurred during -u begin processing, run the -u recover upgrade mode.

  • If the failure occurred after -u begin processing completed but before the shutdown warning for the second partition was issued, determine where the error occurred:

    • If the failure occurred on a node in the first partition, run the -u recover upgrade mode.

    • If the failure occurred on a node in the second partition, no recovery action is necessary.

  • If the failure occurred after the shutdown warning for the second partition was issued but before -u apply processing started on the second partition, determine where the error occurred:

    • If the failure occurred on a node in the first partition, run the -u recover upgrade mode.

    • If the failure occurred on a node in the second partition, reboot the failed node into noncluster mode.

  • If the failure occurred after -u apply processing was completed on the second partition but before the upgrade completed, determine where the error occurred:

    • If the failure occurred on a node in the first partition, run the -u recover upgrade mode.

    • If the failure occurred on a node in the first partition but the first partition stayed in service, reboot the failed node.

    • If the failure occurred on a node in the second partition, run the -u recover upgrade mode.

In all cases, you can continue the upgrade manually by using the standard upgrade method, which requires the shutdown of all cluster nodes.
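For example, a hypothetical recovery after a fatal error (the media path matches the dual-partition example later in this man page) boots each node into noncluster mode after the cluster is shut down and then runs the recover upgrade mode on each node:

ok boot -x
# media-mnt-pt/Solaris_sparc/Product/sun_cluster/Solaris_10/Tools/scinstall -u recover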

status

Displays the status of the dual-partition upgrade. The following are the possible states:

Upgrade is in progress

The scinstall -u begin command has been run but dual-partition upgrade has not completed.

The cluster also reports this status if a fatal error occurred during the dual-partition upgrade. In this case, the state is not cleared even after recovery procedures are performed and the cluster upgrade is completed by using the standard upgrade method.

Upgrade not in progress

Either the scinstall -u begin command has not yet been issued, or the dual-partition upgrade has completed successfully.

Run the status upgrade mode from one node of the cluster. The node can be in either cluster mode or noncluster mode.

The reported state is valid for all nodes of the cluster, regardless of which stage of the dual-partition upgrade the issuing node is in.

The following option is supported with the dual-partition upgrade mode:

-h nodelist

Specifies a space-delimited list of all nodes that you assign to the first partition. You choose these from output displayed by the plan upgrade mode as valid members of a partition in the partition scheme that you use. The remaining nodes in the cluster, which you do not specify to the begin upgrade mode, are assigned to the second partition.

This option is only valid with the begin upgrade mode.

Examples

Establishing a Two-Node Cluster

The following sequence of commands establishes a typical two-node cluster with Oracle Solaris Cluster software for Solaris 10 on SPARC based platforms. The example assumes that Oracle Solaris Cluster software packages are already installed on the nodes.

Insert the installation media on node1 and issue the following commands:

node1# cd media-mnt-pt/Solaris_sparc/Product/sun_cluster/Solaris_10/Tools/
node1# ./scinstall -i -F

Insert the installation media on node2 and issue the following commands:

node2# cd media-mnt-pt/Solaris_sparc/Product/sun_cluster/Solaris_10/Tools/
node2# ./scinstall -i -N node1

Establishing a Single-Node Cluster

The following sequence of commands establishes a single-node cluster with Oracle Solaris Cluster software for Solaris 10 on SPARC based platforms, with all defaults accepted. The example assumes that Oracle Solaris Cluster software packages are already installed on the node.

Insert the installation media and issue the following commands:

# cd media-mnt-pt/Solaris_sparc/Product/sun_cluster/Solaris_10/Tools/
# ./scinstall -i -F -o

Setting Up a Solaris Install Server

The following sequence of commands sets up a JumpStart install server to install and initialize Oracle Solaris Cluster software for Solaris 10 on SPARC based platforms on a three-node cluster.

Insert the installation media on the install server and issue the following commands:

# cd media-mnt-pt/Solaris_sparc/Product/sun_cluster/Solaris_10/Tools/
# ./scinstall -a /export/sc3.3
# cd /export/sc3.3/Solaris_sparc/Product/sun_cluster/Solaris_10/Tools/
# ./scinstall -c /export/jumpstart -h node1 -F -A hme2
# ./scinstall -c /export/jumpstart -h node2 -N node1 -A hme2
# ./scinstall -c /export/jumpstart -h node3 -N node1 -A hme2

Upgrading the Framework and Data Service Software (Standard or Rolling Upgrade)

The following sequence of commands upgrades the framework and data service software of a cluster to the next Oracle Solaris Cluster release. This example uses the Oracle Solaris Cluster version for Solaris 10 on SPARC based platforms. Perform these operations on each cluster node.


Note - For a rolling upgrade, perform these operations on one node at a time, after you use the clnode evacuate command to move all resource groups and device groups to the other nodes which will remain in the cluster.


Insert the installation media and issue the following commands:

ok> boot -x
# cd media-mnt-pt/Solaris_sparc/Product/sun_cluster/Solaris_10/Tools/
# ./scinstall -u update -S interact
# cd /
# eject cdrom
# /usr/cluster/bin/scinstall -u update -s all -d /cdrom/cdrom0
# reboot

Performing a Dual-Partition Upgrade

The following sequence of commands uses the dual-partition method to upgrade the framework and data service software of a cluster to the next Oracle Solaris Cluster release. This example uses the Oracle Solaris Cluster version for Solaris 10 on SPARC based platforms. The example queries the cluster for valid partition schemes, assigns nodes to partitions, reboots the node in the first partition, returns the first partition to operation after its upgrade, reboots the node in the second partition, and returns the second partition to the cluster after its upgrade.

# media-mnt-pt/Solaris_sparc/Product/sun_cluster/Solaris_10/Tools/scinstall \
-u plan
  Option 1
    First partition
      phys-schost-1
    Second partition
      phys-schost-2
…
# media-mnt-pt/Solaris_sparc/Product/sun_cluster/Solaris_10/Tools/scinstall \
-u begin -h phys-schost-1
ok boot -x
 
(Upgrade the node in the first partition)
 
phys-schost-1# /usr/cluster/bin/scinstall -u apply
ok boot -x
 
(Upgrade the node in the second partition)
 
phys-schost-2# /usr/cluster/bin/scinstall -u apply

Upgrading the Framework and Data Service Software (Live Upgrade)

The following sequence of commands illustrates the process of performing a live upgrade on an inactive boot environment on a SPARC system that runs Solaris 10. In these commands, sc32u3 is the current boot environment and sc33 is the inactive boot environment being upgraded. In this example, the data services that are being upgraded are from the Agents installation media.


Note - The commands shown below typically produce copious output. This output is not shown except where necessary for clarity.


# lucreate -c sc32u3 -m /:/dev/dsk/c0t4d0s0:ufs -n sc33
lucreate: Creation of Boot Environment sc33 successful

# luupgrade -u -n sc33 \
-s /net/installmachine/export/solarisX/OS_image
The Solaris upgrade of the boot environment sc33 is complete.

# lumount sc33 /sc33

# cd media-mnt-pt/Solaris_sparc/Product/sun_cluster/Solaris_10/Tools/
# ./scinstall -u update -R /sc33
# cd /usr/cluster/bin
# ./scinstall -R /sc33 -u update -s all -d /cdrom/cdrom0

# cd /
# eject /cdrom/cdrom0

# luumount -f sc33
# luactivate sc33
Activation of boot environment sc33 successful.
# init 0
ok> boot

Uninstalling a Node

The following sequence of commands places the node in noncluster mode, then removes Oracle Solaris Cluster framework and data-service software and configuration information from the cluster node, renames the global-devices mount point to the default name /globaldevices, and performs cleanup. This example removes an Oracle Solaris Cluster version for SPARC based platforms.

ok> boot -x
# cd /
# /usr/cluster/bin/scinstall -r

Exit Status

The following exit values are returned:

0

Successful completion.

non-zero

An error occurred.

Files

media-mnt-pt/.cdtoc

media-mnt-pt/Solaris_arch/Product/sun_cluster/.producttoc

media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/.clustertoc

media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/.order

media-mnt-pt/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/defaults

media-mnt-pt/components/srvc/Solaris_ver/Packages/.clustertoc

media-mnt-pt/components/srvc/Solaris_ver/Packages/.order

/.globaldevices

/etc/cluster/ql/cluster_post_halt_apps

/etc/cluster/ql/cluster_pre_halt_apps

Attributes

See attributes(5) for descriptions of the following attributes:

ATTRIBUTE TYPE           ATTRIBUTE VALUE
Availability             Java Enterprise System installation media, SUNWsczu
Interface Stability      Evolving

See Also

Intro(1CL), claccess(1CL), clinterconnect(1CL), clnode(1CL), clsetup(1CL), cluster(1CL), add_install_client(1M), luactivate(1M), lucreate(1M), lumount(1M), luupgrade(1M), luumount(1M), newfs(1M), patchadd(1M), scconf(1M), scprivipadm(1M), scsetup(1M), scversions(1M), setup_install_server(1M), clustertoc(4), netmasks(4), networks(4), order(4), packagetoc(4), lofi(7D), sctransp_dlpi(7p)

Oracle Solaris Cluster Software Installation Guide, Oracle Solaris Cluster System Administration Guide, Oracle Solaris Cluster Upgrade Guide, Oracle Solaris Administration: IP Services