Creating and Configuring a Zone Cluster

This section provides the following information and procedures to create and configure a zone cluster:

• Creating a Zone Cluster
• Adding File Systems to a Zone Cluster
• Adding Local File Systems to a Specific Zone-Cluster Node
• Adding Storage Devices to a Zone Cluster

Creating a Zone Cluster

This section provides procedures on how to use the clsetup utility to create a zone cluster, and add a network address, file system, ZFS storage pool, and storage device to the new zone cluster.

If any node is in noncluster mode, changes that you make are propagated when the node returns to cluster mode. Therefore, you can create a zone cluster even if some global-cluster nodes are in noncluster mode. When those nodes return to cluster mode, the system performs zone-cluster creation tasks on those nodes.

You can alternatively use the clzonecluster utility to create and configure a zone cluster. See the clzonecluster(1CL) man page for more information.
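
For reference, a minimal clzonecluster session that creates a shared-IP solaris brand zone cluster might look like the following sketch. The sczone name, zone path, node name, and addresses are illustrative and reuse the values shown in the sample output later in this chapter.

phys-schost# clzonecluster configure sczone
clzc:sczone> create
clzc:sczone> set zonepath=/zones/sczone
clzc:sczone> set brand=solaris
clzc:sczone> set ip-type=shared
clzc:sczone> add node
clzc:sczone:node> set physical-host=phys-schost-1
clzc:sczone:node> set hostname=zc-host-1
clzc:sczone:node> add net
clzc:sczone:node:net> set address=172.1.1.1
clzc:sczone:node:net> set physical=net0
clzc:sczone:node:net> end
clzc:sczone:node> end
clzc:sczone> commit
clzc:sczone> exit
phys-schost# clzonecluster install sczone
phys-schost# clzonecluster boot sczone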

This section contains the following procedures:

• How to Install and Configure Trusted Extensions
• How to Create a Zone Cluster
• How to Configure a Zone Cluster to Use Trusted Extensions

How to Install and Configure Trusted Extensions

This procedure prepares the global cluster to use the Trusted Extensions feature of Oracle Solaris with zone clusters. If you do not plan to enable Trusted Extensions, proceed to Creating a Zone Cluster.

Perform this procedure on each node in the global cluster.

Before You Begin

Perform the following tasks:

  1. Assume the root role on a node of the global cluster.
  2. Install and configure Trusted Extensions software.

    Follow procedures in Chapter 3, Adding the Trusted Extensions Feature to Oracle Solaris (Tasks), in Trusted Extensions Configuration and Administration.

  3. Disable the Trusted Extensions zoneshare and zoneunshare scripts.

    The Trusted Extensions zoneshare and zoneunshare scripts support the ability to export home directories on the system. An Oracle Solaris Cluster configuration does not support this feature.

    Disable this feature by replacing each script with a symbolic link to the /bin/true utility.

    phys-schost# ln -sf /bin/true /usr/lib/zones/zoneshare
    phys-schost# ln -sf /bin/true /usr/lib/zones/zoneunshare
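
    You can optionally verify that both scripts are now symbolic links to /bin/true:

    phys-schost# ls -l /usr/lib/zones/zoneshare /usr/lib/zones/zoneunshare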
  4. Configure all logical-hostname and shared-IP addresses that are to be used in the zone cluster.

    See How to Create a Default Trusted Extensions System in Trusted Extensions Configuration and Administration.

  5. (Optional) Enable remote login by the LDAP server to the global-cluster node.
    1. In the /etc/default/login file, comment out the CONSOLE entry.
    2. Enable remote login.
      phys-schost# svcadm enable rlogin
    3. Modify the /etc/pam.conf file.

      Modify the account management entries by appending a Tab and typing allow_remote or allow_unlabeled respectively, as shown below.

      other   account requisite       pam_roles.so.1        Tab  allow_remote
      other   account required        pam_unix_account.so.1 Tab  allow_unlabeled
  6. Modify the admin_low template.
    1. Assign the admin_low template to each IP address that does not belong to a Trusted Extensions machine that is used by the global zone.
      # tncfg -t admin_low
      tncfg:admin_low> add host=ip-address1
      tncfg:admin_low> add host=ip-address2
      …
      tncfg:admin_low> exit
    2. Remove the wildcard address 0.0.0.0/32 from the tncfg template.
      # tncfg -t admin_low remove host=0.0.0.0
  7. Assign the cipso template to each IP address that does belong to a Trusted Extensions machine that is used by the global zone.
    # tncfg -t cipso
    tncfg:cipso> add host=ip-address1
    tncfg:cipso> add host=ip-address2
    …
    tncfg:cipso> exit
  8. Repeat Step 1 through Step 7 on each remaining node of the global cluster.

    When all steps are completed on all global-cluster nodes, perform the remaining steps of this procedure on each node of the global cluster.

  9. On each global-cluster node, add the IP address of the Trusted Extensions-enabled LDAP server to the /etc/inet/hosts file.

    The LDAP server is used by the global zone and by the nodes of the zone cluster.
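
    For example, with a hypothetical LDAP server named ldapserver at address 192.168.10.25, the entry could be appended as follows. Substitute the name and IP address of your own LDAP server.

    phys-schost# echo "192.168.10.25   ldapserver   # Trusted Extensions LDAP server" >> /etc/inet/hosts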

  10. (Optional) Make the global-cluster node an LDAP client.

    See Make the Global Zone an LDAP Client in Trusted Extensions in Trusted Extensions Configuration and Administration.

  11. Add Trusted Extensions users.

    See Creating Roles and Users in Trusted Extensions in Trusted Extensions Configuration and Administration.

Next Steps

Create the zone cluster. Go to Creating a Zone Cluster.

How to Create a Zone Cluster

Perform this procedure to create a zone cluster.

To modify the zone cluster after it is installed, see Performing Zone Cluster Administrative Tasks in Oracle Solaris Cluster System Administration Guide and the clzonecluster(1CL) man page.

Before You Begin


Tip - While in the clsetup utility, you can press the < key to return to a previous screen.


  1. Assume the root role on an active member node of a global cluster.

    You perform all steps of this procedure from a node of the global cluster.

  2. Ensure that the node of the global cluster is in cluster mode.
    phys-schost# clnode status
    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-2                                   Online
    phys-schost-1                                   Online
  3. Start the clsetup utility.
    phys-schost# clsetup

    The Main Menu is displayed.

  4. Choose the Zone Cluster menu item.
  5. Choose the Create a Zone Cluster menu item.
  6. Type the name of the zone cluster you want to add.

    A zone cluster name can contain ASCII letters (a-z and A-Z), numbers, a dash, or an underscore. The maximum length of the name is 20 characters.

  7. Choose the property you want to change.

    You can set the following properties:


    Property
    Description
    zonepath=zone-cluster-node-path
    Specifies the path to the zone cluster node. For example, /zones/sczone.
    brand=brand-type
    Specifies the solaris, solaris10, or labeled zones brand used in the zone cluster.

    Note - To use Trusted Extensions, you must use only the labeled brand. To create an exclusive-IP zone cluster, you must use only the solaris brand.


    ip-type=value
    Specifies the type of network IP address used by the zone cluster. Valid ip-type values are shared and exclusive.

    The maximum number of exclusive-IP zone clusters is constrained by the cluster property num_xip_zoneclusters, which you can set during initial cluster installation. This value has a default of three. For more information, see the cluster(1CL) man page.

    enable_priv_net=value
    When set to true, Oracle Solaris Cluster private network communication is enabled between the nodes of the zone cluster. The Oracle Solaris Cluster private hostnames and IP addresses for the zone cluster nodes are automatically generated by the system. Private network communication is disabled if the value is set to false. The default value is true.

    When the enable_priv_net property is set to true along with the following properties, private communication occurs in the following ways:

    • ip-type=shared – Communication between zone cluster nodes uses the private networks of the global cluster.

    • ip-type=exclusive (solaris brand only) – Communication between zone cluster nodes uses the specified privnet resources. The privnet resources are either Virtual Network Interfaces (VNICs) for the Ethernet type of private network adapters, or InfiniBand (IB) partitions for the IB type of private network adapters. The VNICs or IB partitions are automatically created by the wizard over each private network adapter of the global cluster, and used to configure a zone cluster.

    The VNICs or IB partitions that the wizard generates use the following naming conventions:

    For the Ethernet type: private-network-interface-name_zone-cluster-name_vnic0.

    For the IB type: private-network-interface-name_zone-cluster-name_ibp0.

    For example, the private network interfaces of the global cluster are net2 and net3, and the zone cluster name is zone1. If net2 and net3 are Ethernet type network interfaces, the two VNICs that are created for the zone cluster will have the names net2_zone1_vnic0 and net3_zone1_vnic0.

    If net2 and net3 are IB type network interfaces, the two IB partitions created for the zone cluster will have the names net2_zone1_ibp0 and net3_zone1_ibp0.

  8. For a solaris10 brand zone cluster, enter a zone root password.

    A root account password is required for a solaris10 brand zone.

  9. (Optional) Choose the Zone System Resource Control property that you want to change.

    You can set the following properties:


    Property
    Description
    max-lwps=value
    Specifies the maximum number of lightweight processes (LWPs) simultaneously available to this zone cluster.
    max-shm-memory=value
    Specifies the maximum amount of shared memory in GBytes allowed for this zone cluster.
    max-shm-ids=value
    Specifies the maximum number of shared memory IDs allowed for this zone cluster.
    max-msg-ids=value
    Specifies the maximum number of message queue IDs allowed for this zone cluster.
    max-sem-ids=value
    Specifies the maximum number of semaphore IDs allowed for this zone cluster.
    cpu-shares=value
    Specifies the number of Fair Share Scheduler (FSS) shares to allocate to this zone cluster.
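
    Outside the clsetup utility, the same limits can also be set with the clzonecluster command, assuming the zonecfg-style property names listed above are accepted in the clzonecluster scope. A minimal sketch with illustrative values for the sczone example:

    phys-schost# clzonecluster configure sczone
    clzc:sczone> set max-lwps=2000
    clzc:sczone> set max-shm-memory=4G
    clzc:sczone> set cpu-shares=2
    clzc:sczone> commit
    clzc:sczone> exit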
  10. (Optional) Choose the Zone CPU Resource Control property that you want to change.

    You can set the following properties:


    Property
    Description
    scope=scope-type
    Specifies whether the ncpus property used in a zone cluster is dedicated-cpu or capped-cpu.
    ncpus=value
    Specifies the limit for the scope type.
    • If the scope property is set to dedicated-cpu, the ncpus property sets a limit on the number of CPUs that should be assigned for this zone's exclusive use. The zone will create a pool and processor set when it boots. See the pooladm(1M) and poolcfg(1M) man pages for more information on resource pools.

    • If the scope property is set to capped-cpu, the ncpus property sets a limit on the amount of CPU time that can be used by a zone cluster. The unit used translates to the percentage of a single CPU that can be used by all user threads in a zone, expressed as a fraction (for example, .75) or a mixed number (whole number and fraction, for example, 1.25). An ncpus value of 1 means 100% of a CPU. See the pooladm(1M) and poolcfg(1M) man pages for more information on resource pools.
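
    For example, the dedicated-cpu scope described above corresponds to a configuration similar to the following sketch. The sczone name and the value of 2 CPUs are illustrative.

    phys-schost# clzonecluster configure sczone
    clzc:sczone> add dedicated-cpu
    clzc:sczone:dedicated-cpu> set ncpus=2
    clzc:sczone:dedicated-cpu> end
    clzc:sczone> commit
    clzc:sczone> exit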

  11. (Optional) Choose the capped-memory property that you want to change.

    You can set the following properties:


    Property
    Description
    physical=value
    Specifies the GByte limit for physical memory.
    swap=value
    Specifies the GByte limit for swap memory.
    locked=value
    Specifies the GByte limit for locked memory.
  12. Choose a physical host from the list of available physical hosts.

    You can select one or all of the available physical nodes (or hosts), and then configure one zone-cluster node at a time.

    You can set the following properties:


    Property
    Description
    hostname=hostname
    Specifies the zone-cluster node hostname. For example, zc-host-1.
    address=public-network-address
    Specifies the public network address for the zone-cluster node on a shared-IP type zone cluster. For example, 172.1.1.1.
    physical=physical-interface
    Specifies a network physical interface for the public network from the available network interfaces that are discovered on the physical nodes. For example, sc_ipmp0 or net0.
    defrouter=default-router
    Specifies the default router for the network address, if your zone is configured in a different subnet. Each zone or set of zones that uses a different defrouter setting must be on a different subnet, for example, 192.168.0.1. See the zonecfg(1M) man page for more information about the defrouter property.
  13. Specify the network addresses for the zone cluster.

    The network addresses can be used to configure a logical hostname or shared IP cluster resources in the zone cluster. The network address is in the zone cluster global scope.

  14. At the Review Configuration screen, press Return to continue and then type c to create the zone cluster.

    The results of your configuration change are displayed, similar to the following:

     >>> Result of the Creation for the Zone Cluster(sczone) <<<
    
        The zone cluster is being created with the following configuration
    
            /usr/cluster/bin/clzonecluster configure sczone
            create
            set brand=solaris
            set zonepath=/zones/sczone
            set ip-type=shared
            set enable_priv_net=true
            add capped-memory
            set physical=2G
            end
            add node
            set physical-host=phys-schost-1
            set hostname=zc-host-1
            add net
            set address=172.1.1.1
            set physical=net0
            end
            end
            add net
            set address=172.1.1.2
            end
    
        Zone cluster sczone has been created and configured successfully.
    
        Continue to install the zone cluster(yes/no) ?
  15. Type yes to continue.

    The clsetup utility performs a standard installation of a zone cluster and you cannot specify any options.

  16. When finished, exit the clsetup utility.
  17. Verify the zone cluster configuration.

    The verify subcommand checks for the availability of the specified resources. If the clzonecluster verify command succeeds, no output is displayed.

    phys-schost-1# clzonecluster verify zone-cluster-name
    phys-schost-1# clzonecluster status zone-cluster-name
    === Zone Clusters ===
    
    --- Zone Cluster Status ---
    
    Name      Node Name   Zone HostName   Status    Zone Status
    ----      ---------   -------------   ------    -----------
    zone       basenode1     zone-1           Offline   Configured
              basenode2     zone-2           Offline   Configured
  18. For Trusted Extensions, make the password files writable on each zone-cluster node.

    From the global zone, launch the txzonemgr GUI.

    phys-schost# txzonemgr

    Select the global zone, then select the item, Configure per-zone name service.

  19. Install the zone cluster.
    phys-schost-1# clzonecluster install options zone-cluster-name
    Waiting for zone install commands to complete on all the nodes 
    of the zone cluster "zone-cluster-name"...
    • For a solaris or labeled brand zone cluster, the following options are valid.
      Option
      Description
      -c config-profile.xml
      Includes system configuration information. The -c config-profile.xml option provides a configuration profile for all non-global zones of the zone cluster. Using this option changes only the hostname of the zone, which is unique for each zone in the zone cluster. All profiles must have a .xml extension.
      -M manifest.xml
      Specifies a custom Automated Installer manifest that you configure to install the necessary packages on all zone-cluster nodes. Use this option if the base global-cluster nodes for the zone-cluster are not all installed with the same Oracle Solaris Cluster packages but you do not want to change which packages are on the base nodes. If the clzonecluster install command is run without the -M option, zone-cluster installation fails on a base node if it is missing a package that is installed on the issuing base node.
    • For a solaris10 brand zone cluster, the following options are valid.

      Use either the -a or -d option to install Geographic Edition software, core packages, and agents that are supported in the zone cluster:


      Note - For a list of agents that are currently supported in a solaris10 brand zone cluster, see Oracle Solaris Cluster 4 Compatibility Guide.



      Option
      Description
      Required – Oracle Solaris Cluster patch 145333-15 for SPARC and 145334-15 for x86
      You must install a minimum of Oracle Solaris Cluster 3.3 patch 145333-15 for SPARC or 145334-15 for x86 before you install the solaris10 brand zone cluster. Log in to My Oracle Support to retrieve the patch. Then from the global zone, use the -p option to install the patch:
      # clzonecluster install-cluster \
      -p patchdir=patchdir[,patchlistfile=filename] \
      [-n phys-schost-1[,…]] \
      [-v] \
      zone-cluster-name

      For additional instructions on installing patches, log in to My Oracle Support and search for ID 1278636.1, How to Find and Download any Revision of a Solaris Patch.

      -a absolute_path_to_archive zone-cluster-name
      Specifies the absolute path to an image archive to be used as the source image.
      # clzonecluster install \
      [-n nodename] \
      -a absolute_path_to_archive \
      zone-cluster-name
      -d dvd-image zone-cluster-name
      Specifies the full directory path to the root directory of an installed solaris10 non-global zone. The cluster software DVD directory must be accessible from the global zone of the node where you run the command.
      # clzonecluster install-cluster \
      -d dvd-image \
      zoneclustername

    For more information, see the clzonecluster(1CL) man page.

  20. If you did not use the -c config-profile.xml option when you installed the zone cluster, perform sysid configuration.

    Otherwise, skip to Step 21.


    Note - In the following steps, zcnode is the name of the non-global zone on a zone-cluster node; it is the same as zone-cluster-name.


    • For an exclusive-IP labeled brand zone cluster, perform the following steps.

      Configure only one zone-cluster node at a time.

      1. Boot the non-global zone of one zone-cluster node.
        phys-schost# zoneadm -z zcnode boot
      2. Unconfigure the Oracle Solaris instance and reboot the zone.
        phys-schost# zlogin zcnode
        zcnode# sysconfig unconfigure
        zcnode# reboot

        The zlogin session terminates during the reboot.

      3. Issue the zlogin command and progress through the interactive screens.
        phys-schost# zlogin -C zcnode
      4. When finished, exit the zone console.

        For information about methods to exit from a non-global zone, see How to Exit a Non-Global Zone in Oracle Solaris 11.1 Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management.

      5. From the global zone, halt the zone-cluster node.
        phys-schost# zoneadm -z zcnode halt
      6. Repeat the preceding steps for each remaining zone-cluster node.
    • For a shared-IP labeled brand zone cluster, perform the following steps on each zone-cluster node.
      1. From one global-cluster node, boot the zone cluster.
        phys-schost# clzonecluster boot zone-cluster-name
      2. Unconfigure the Oracle Solaris instance and reboot the zone.
        phys-schost# zlogin zcnode
        zcnode# sysconfig unconfigure
        zcnode# reboot

        The zlogin session terminates during the reboot.

      3. Issue the zlogin command and progress through the interactive screens.
        phys-schost# zlogin -C zcnode
      4. When finished, exit the zone console.

        For information about methods to exit from a non-global zone, see How to Exit a Non-Global Zone in Oracle Solaris 11.1 Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management.

      5. Repeat Step b through Step d for each remaining zone-cluster node.
    • For a solaris or solaris10 brand zone cluster, perform the following steps on each zone-cluster node.
      1. From one global-cluster node, boot the zone cluster.
        phys-schost# clzonecluster boot zone-cluster-name
      2. Issue the zlogin command and progress through the interactive screens.
        phys-schost# zlogin -C zcnode
      3. When finished, exit the zone console.

        For information about methods to exit from a non-global zone, see How to Exit a Non-Global Zone in Oracle Solaris 11.1 Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management.

      4. Repeat Step b through Step c for each remaining zone-cluster node.
  21. Boot the zone cluster.

    Installation of the zone cluster might take several minutes.

    phys-schost# clzonecluster boot zone-cluster-name
  22. (Exclusive-IP zone clusters) Manually configure an IPMP group.

    The clsetup utility does not automatically configure IPMP groups for exclusive-IP zone clusters. You must create an IPMP group manually before you create a logical-hostname or shared-address resource.

    phys-schost# ipadm create-ipmp -i interface sc_ipmp0
    phys-schost# ipadm delete-addr interface/name
    phys-schost# ipadm create-addr -T static -a IPaddress/prefix sc_ipmp0/name
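
    For example, assuming a hypothetical underlying interface net1, a hypothetical address object name v4, and a hypothetical static address 192.168.10.41/24:

    phys-schost# ipadm create-ipmp -i net1 sc_ipmp0
    phys-schost# ipadm delete-addr net1/v4
    phys-schost# ipadm create-addr -T static -a 192.168.10.41/24 sc_ipmp0/v4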

Next Steps

To configure Oracle Solaris Cluster 3.3 data services that you installed in a solaris10 brand zone cluster, follow procedures for zone clusters in the applicable data-service manual. See Oracle Solaris Cluster 3.3 Documentation.

To complete Trusted Extensions configuration, go to How to Configure a Zone Cluster to Use Trusted Extensions.

Otherwise, add file systems or storage devices to the zone cluster. See the following sections:

• Adding File Systems to a Zone Cluster
• Adding Local File Systems to a Specific Zone-Cluster Node
• Adding Storage Devices to a Zone Cluster

How to Configure a Zone Cluster to Use Trusted Extensions

After you create a labeled brand zone cluster, perform the following steps to finish configuration to use Trusted Extensions.

  1. Complete IP-address mappings for the zone cluster.

    Perform this step on each node of the zone cluster.

    1. From a node of the global cluster, display the node's ID.
      phys-schost# cat /etc/cluster/nodeid
      N
    2. Log in to a zone-cluster node on the same global-cluster node.

      Ensure that the SMF services have been imported and all services are up before you log in.

    3. Determine the IP addresses used by this zone-cluster node for the private interconnect.

      The cluster software automatically assigns these IP addresses when the cluster software configures a zone cluster.

      In the ifconfig -a output, locate the clprivnet0 logical interface that belongs to the zone cluster. The value for inet is the IP address that was assigned to support the use of the cluster private interconnect by this zone cluster.

      zc1# ifconfig -a
      lo0:3: flags=20010008c9<UP,LOOPBACK,RUNNING,NOARP,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
              zone zc1
              inet 127.0.0.1 netmask ff000000
      net0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
              inet 10.11.166.105 netmask ffffff00 broadcast 10.11.166.255
              groupname sc_ipmp0
              ether 0:3:ba:19:fa:b7
      ce0: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
              inet 10.11.166.109 netmask ffffff00 broadcast 10.11.166.255
              groupname sc_ipmp0
              ether 0:14:4f:24:74:d8
      ce0:3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
              zone zc1
              inet 10.11.166.160 netmask ffffff00 broadcast 10.11.166.255
      clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 7
              inet 172.16.0.18 netmask fffffff8 broadcast 172.16.0.23
              ether 0:0:0:0:0:2
      clprivnet0:3: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 7
              zone zc1
              inet 172.16.0.22 netmask fffffffc broadcast 172.16.0.23
    4. Add the following addresses of the zone-cluster node to that node's /etc/inet/hosts file.
      • The hostname for the private interconnect, which is clusternodeN-priv, where N is the global-cluster node ID

        172.16.0.22    clusternodeN-priv 
      • Each net resource that was specified to the clzonecluster command when you created the zone cluster

    5. Repeat on the remaining zone-cluster nodes.
  2. Authorize communication with zone-cluster components.

    Create new entries for the IP addresses used by zone-cluster components and assign each entry a CIPSO template. These IP addresses, which exist in the zone-cluster node's /etc/inet/hosts file, are as follows:

    • Each zone-cluster node private IP address

    • All cl_privnet IP addresses in the zone cluster

    • Each logical-hostname public IP address for the zone cluster

    • Each shared-address public IP address for the zone cluster

    phys-schost# tncfg -t cipso
    tncfg:cipso> add host=ipaddress1
    tncfg:cipso> add host=ipaddress2
    …
    tncfg:cipso> exit

    For more information about CIPSO templates, see How to Configure a Different Domain of Interpretation in Trusted Extensions Configuration and Administration.

  3. Set IP strict multihoming to weak.

    Perform the following commands on each node of the zone cluster.

    phys-schost# ipadm set-prop -p hostmodel=weak ipv4
    phys-schost# ipadm set-prop -p hostmodel=weak ipv6

    For more information about the hostmodel property, see hostmodel (ipv4 or ipv6) in Oracle Solaris 11.1 Tunable Parameters Reference Manual.

Next Steps

To add file systems or storage devices to the zone cluster, see the following sections:

• Adding File Systems to a Zone Cluster
• Adding Local File Systems to a Specific Zone-Cluster Node
• Adding Storage Devices to a Zone Cluster

See Also

If you want to update the software on a zone cluster, follow procedures in Chapter 11, Updating Your Software, in Oracle Solaris Cluster System Administration Guide. These procedures include special instructions for zone clusters, where needed.

Adding File Systems to a Zone Cluster

After a file system is added to a zone cluster and brought online, the file system is authorized for use from within that zone cluster. To mount the file system for use, configure the file system by using cluster resources such as SUNW.HAStoragePlus or SUNW.ScalMountPoint.


Note - To add a file system whose use is limited to a single zone-cluster node, see instead Adding Local File Systems to a Specific Zone-Cluster Node.


This section provides the following procedures to add file systems for use by the zone cluster:

• How to Add a Highly Available Local File System to a Zone Cluster
• How to Add a ZFS Storage Pool to a Zone Cluster
• How to Add a Cluster File System to a Zone Cluster

How to Add a Highly Available Local File System to a Zone Cluster

Perform this procedure to configure a highly available local file system on the global cluster for use by a zone cluster. The file system is added to the zone cluster and is configured with an HAStoragePlus resource to make the local file system highly available.

Perform all steps of the procedure from a node of the global cluster.

  1. Assume the root role on a node of the global cluster that hosts the zone cluster.
  2. On the global cluster, create a file system that you want to use in the zone cluster.

    Ensure that the file system is created on shared disks.
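
    For example, a UFS file system could be created on a shared Solaris Volume Manager metadevice similar to the following. The oracle disk set and d1 metadevice are illustrative and match Example 6-1.

    phys-schost# newfs /dev/md/oracle/rdsk/d1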

  3. Start the clsetup utility.
    phys-schost# clsetup

    The Main Menu is displayed.


    Tip - To return to a previous screen, type the < key and press Return.


  4. Choose the Zone Cluster menu item.

    The Zone Cluster Tasks Menu is displayed.

  5. Choose the Add File System/Storage Device to a Zone Cluster menu item.

    The Select Zone Cluster menu is displayed.

  6. Choose the zone cluster where you want to add the file system.

    The Storage Type Selection menu is displayed.

  7. Choose the File System menu item.

    The File System Selection for the Zone Cluster menu is displayed.

  8. Choose the file system you want to add to the zone cluster.

    The file systems in the list are those that are configured on the shared disks and can be accessed by the nodes where the zone cluster is configured. You can also type e to manually specify all properties for a file system.

    The Mount Type Selection menu is displayed.

  9. Choose the Loopback mount type.

    The File System Properties for the Zone Cluster menu is displayed.

  10. Change the properties that you are allowed to change for the file system you are adding.

    Note - For UFS file systems, enable logging.


    When finished, type d and press Return.

  11. Type c to save the configuration change.

    The results of your configuration change are displayed.

  12. When finished, exit the clsetup utility.
  13. Verify the addition of the file system.
    phys-schost# clzonecluster show -v zone-cluster-name

Example 6-1 Adding a Highly Available Local File System to a Zone Cluster

This example adds the local file system /global/oracle/d1 for use by the sczone zone cluster.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/global/oracle/d1
clzc:sczone:fs> set special=/dev/md/oracle/dsk/d1
clzc:sczone:fs> set raw=/dev/md/oracle/rdsk/d1
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> add options [logging]
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                            fs
    dir:                                       /global/oracle/d1
    special:                                   /dev/md/oracle/dsk/d1
    raw:                                       /dev/md/oracle/rdsk/d1
    type:                                      ufs
    options:                                   [logging]
    cluster-control:                           [true]
…

Next Steps

Configure the file system to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file system on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
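
A minimal sketch of such a configuration, run from the global zone and assuming a hypothetical failover resource group oracle-rg and resource hasp-rs in the sczone zone cluster from Example 6-1:

phys-schost# clresourcetype register -Z sczone SUNW.HAStoragePlus
phys-schost# clresourcegroup create -Z sczone oracle-rg
phys-schost# clresource create -Z sczone -g oracle-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/global/oracle/d1 hasp-rs
phys-schost# clresourcegroup online -eM -Z sczone oracle-rg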

How to Add a ZFS Storage Pool to a Zone Cluster

Perform this procedure to add a ZFS storage pool to a zone cluster. The pool can be local to a single zone-cluster node or configured with HAStoragePlus to be highly available.

The clsetup utility discovers and displays all configured ZFS pools on the shared disks that can be accessed by the nodes where the selected zone cluster is configured. After you use the clsetup utility to add a ZFS storage pool in cluster scope to an existing zone cluster, you can use the clzonecluster command to modify the configuration or to add a ZFS storage pool in node-scope.

Before You Begin

Ensure that the ZFS pool is created on shared disks that are connected to all nodes of the zone cluster. See Oracle Solaris 11.1 Administration: ZFS File Systems for procedures to create a ZFS pool.
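
For example, a mirrored pool could be created on shared disks similar to the following. The myzpool5 pool name matches the sample output later in this procedure; the disk names are illustrative.

phys-schost# zpool create myzpool5 mirror c4t1d0 c5t1d0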

  1. Assume the root role on a node of the global cluster that hosts the zone cluster.

    You perform all steps of this procedure from a node of the global cluster.

  2. Start the clsetup utility.
    phys-schost# clsetup

    The Main Menu is displayed.


    Tip - To return to a previous screen, type the < key and press Return.


  3. Choose the Zone Cluster menu item.

    The Zone Cluster Tasks Menu is displayed.

  4. Choose the Add File System/Storage Device to a Zone Cluster menu item.

    The Select Zone Cluster menu is displayed.

  5. Choose the zone cluster where you want to add the ZFS storage pool.

    The Storage Type Selection menu is displayed.

  6. Choose the ZFS menu item.

    The ZFS Pool Selection for the Zone Cluster menu is displayed.

  7. Choose the ZFS pool you want to add to the zone cluster.

    The ZFS pools in the list are those that are configured on the shared disks and can be accessed by the nodes where the zone cluster is configured. You can also type e to manually specify properties for a ZFS pool.

    The ZFS Pool Dataset Property for the Zone Cluster menu is displayed. The selected ZFS pool is assigned to the name property.

  8. Type d and press Return.

    The Review File Systems/Storage Devices for the Zone Cluster menu is displayed.

  9. Type c to save the configuration change.

    The results of your configuration change are displayed. For example:

     >>> Result of Configuration Change to the Zone Cluster(sczone) <<<
    
        Adding file systems or storage devices to sczone zone cluster...
    
        The zone cluster is being created with the following configuration
    
            /usr/cluster/bin/clzonecluster configure sczone
            add dataset
            set name=myzpool5
            end
    
        Configuration change to sczone zone cluster succeeded.
  10. When finished, exit the clsetup utility.
  11. Verify the addition of the file system.
    phys-schost# clzonecluster show -v zoneclustername
  12. To make the ZFS storage pool highly available, configure the pool with an HAStoragePlus resource.

    The HAStoragePlus resource manages the mounting of file systems in the pool on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
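
    A minimal sketch of such a configuration, run from the global zone and assuming a hypothetical resource group zpool-rg and resource hasp-zpool-rs for the sczone zone cluster and the myzpool5 pool shown above:

    phys-schost# clresourcetype register -Z sczone SUNW.HAStoragePlus
    phys-schost# clresourcegroup create -Z sczone zpool-rg
    phys-schost# clresource create -Z sczone -g zpool-rg -t SUNW.HAStoragePlus \
    -p Zpools=myzpool5 hasp-zpool-rs
    phys-schost# clresourcegroup online -eM -Z sczone zpool-rg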

How to Add a Cluster File System to a Zone Cluster

The clsetup utility discovers and displays the available file systems that are configured on the cluster nodes where the selected zone cluster is configured. When you use the clsetup utility to add a file system, the file system is added in cluster scope.

You can add the following types of cluster file systems to a zone cluster:

Before You Begin

Ensure that the cluster file system you want to add to the zone cluster is configured. See Planning Cluster File Systems and Chapter 5, Creating a Cluster File System.

  1. Assume the root role on a node of the global cluster that hosts the zone cluster.

    You perform all steps of this procedure from a node of the global cluster.

  2. On each node of the global cluster that hosts a zone-cluster node, add an entry to the /etc/vfstab file for the file system that you want to mount on the zone cluster.
    phys-schost# vi /etc/vfstab
    • For a UFS entry, include the global mount option, similar to the following example:
      /dev/md/datadg/dsk/d0 /dev/md/datadg/rdsk/d0 /global/fs ufs 2 no global,logging
  3. On the global cluster, start the clsetup utility.
    phys-schost# clsetup

    The Main Menu is displayed.


    Tip - To return to a previous screen, type the < key and press Return.


  4. Choose the Zone Cluster menu item.

    The Zone Cluster Tasks Menu is displayed.

  5. Choose the Add File System/Storage Device to a Zone Cluster menu item.

    The Select Zone Cluster menu is displayed.

  6. Choose the zone cluster where you want to add the file system.

    The Storage Type Selection menu is displayed.

  7. Choose the File System menu item.

    The File System Selection for the Zone Cluster menu is displayed.

  8. Choose a file system from the list.

    You can also type e to manually specify all properties for a file system.

    The Mount Type Selection menu is displayed.

  9. Choose the Loopback file system mount type for the zone cluster.

    For more information about creating loopback file systems, see How to Create and Mount an LOFS File System in Oracle Solaris 11.1 Administration: Devices and File Systems.

    The File System Properties for the Zone Cluster menu is displayed.

  10. Specify the mount point directory.

    Type the number for the dir property and press Return. Then type the LOFS mount point directory name in the New Value field and press Return.

    When finished, type d and press Return. The Review File Systems/Storage Devices for the Zone Cluster menu is displayed.

  11. Type c to save the configuration change.

    The results of your configuration change are displayed. For example:

      >>> Result of Configuration Change to the Zone Cluster(sczone) <<<
    
        Adding file systems or storage devices to sczone zone cluster...
    
        The zone cluster is being created with the following configuration
    
            /usr/cluster/bin/clzonecluster configure sczone
            add fs
            set dir=/dev/md/ddg/dsk/d9
            set special=/dev/md/ddg/dsk/d10
            set raw=/dev/md/ddg/rdsk/d10
            set type=lofs
            end
    
        Configuration change to sczone zone cluster succeeded.
  12. When finished, exit the clsetup utility.
  13. Verify the addition of the LOFS file system.
    phys-schost# clzonecluster show -v zone-cluster-name

Next Steps

(Optional) Configure the cluster file system to be managed by an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file systems in the global cluster, and later performs a loopback mount on the zone-cluster nodes that currently host the applications that are configured to use the file system. For more information, see Configuring an HAStoragePlus Resource for Cluster File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.

Adding Local File Systems to a Specific Zone-Cluster Node

This section describes how to add file systems that are dedicated to a single zone-cluster node. To instead configure file systems for use by the entire zone cluster, go to Adding File Systems to a Zone Cluster.

This section contains the following procedures:

• How to Add a Local File System to a Specific Zone-Cluster Node
• How to Add a Local ZFS Storage Pool to a Specific Zone-Cluster Node

How to Add a Local File System to a Specific Zone-Cluster Node

Perform this procedure to add a local file system to a single, specific zone-cluster node of a specific zone cluster. The file system is not managed by Oracle Solaris Cluster software but is instead passed to the underlying Oracle Solaris zone.


Note - To add a highly available local file system to a zone cluster, perform procedures in How to Add a Highly Available Local File System to a Zone Cluster.


  1. Assume the root role on a node of the global cluster that hosts the zone cluster.

    Note - Perform all steps of this procedure from a node of the global cluster.


  2. Create the local file system that you want to configure to a specific zone-cluster node.

    Use local disks of the global-cluster node that hosts the intended zone-cluster node.

  3. Add the file system to the zone-cluster configuration in the node scope.
    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> select node physical-host=baseclusternode
    clzc:zoneclustername:node> add fs
    clzc:zoneclustername:node:fs> set dir=mountpoint
    clzc:zoneclustername:node:fs> set special=disk-device-name
    clzc:zoneclustername:node:fs> set raw=raw-disk-device-name
    clzc:zoneclustername:node:fs> set type=FS-type
    clzc:zoneclustername:node:fs> end
    clzc:zoneclustername:node> end
    clzc:zoneclustername> verify
    clzc:zoneclustername> commit
    clzc:zoneclustername> exit
    dir=mountpoint

    Specifies the file-system mount point

    special=disk-device-name

    Specifies the name of the disk device

    raw=raw-disk-device-name

    Specifies the name of the raw-disk device

    type=FS-type

    Specifies the type of file system


    Note - Enable logging for UFS file systems.


  4. Verify the addition of the file system.
    phys-schost# clzonecluster show -v zoneclustername

Example 6-2 Adding a Local File System to a Zone-Cluster Node

This example adds a local UFS file system /local/data for use by a node of the sczone zone cluster. This zone-cluster node is hosted on global-cluster node phys-schost-1.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> select node physical-host=phys-schost-1
clzc:sczone:node> add fs
clzc:sczone:node:fs> set dir=/local/data
clzc:sczone:node:fs> set special=/dev/md/localdg/dsk/d1
clzc:sczone:node:fs> set raw=/dev/md/localdg/rdsk/d1
clzc:sczone:node:fs> set type=ufs
clzc:sczone:node:fs> add options [logging]
clzc:sczone:node:fs> end
clzc:sczone:node> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
     --- Solaris Resources for phys-schost-1 --- 
…
   Resource Name:                                fs
     dir:                                           /local/data
     special:                                       /dev/md/localdg/dsk/d1
     raw:                                           /dev/md/localdg/rdsk/d1
     type:                                          ufs
     options:                                       [logging]
     cluster-control:                               false
…

How to Add a Local ZFS Storage Pool to a Specific Zone-Cluster Node

Perform this procedure to add a local ZFS storage pool to a specific zone-cluster node. The local ZFS pool is not managed by Oracle Solaris Cluster software but is instead passed to the underlying Oracle Solaris zone.


Note - To add a highly available local ZFS pool to a zone cluster, see How to Add a Highly Available Local File System to a Zone Cluster.


Perform all steps of the procedure from a node of the global cluster.

  1. Assume the root role on a node of the global cluster that hosts the zone cluster.
  2. Create the local ZFS pool that you want to configure to a specific zone-cluster node.

    Use local disks of the global-cluster node that hosts the intended zone-cluster node.

  3. Add the pool to the zone-cluster configuration in the node scope.
    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> select node physical-host=baseclusternode
    clzc:zoneclustername:node> add dataset
    clzc:zoneclustername:node:dataset> set name=localZFSpoolname
    clzc:zoneclustername:node:dataset> end
    clzc:zoneclustername:node> end
    clzc:zoneclustername> verify
    clzc:zoneclustername> commit
    clzc:zoneclustername> exit
    set name=localZFSpoolname

    Specifies the name of the local ZFS pool

  4. Verify the addition of the ZFS pool.
    phys-schost# clzonecluster show -v zoneclustername

Example 6-3 Adding a Local ZFS Pool to a Zone-Cluster Node

This example adds the local ZFS pool local_pool for use by a node of the sczone zone cluster. This zone-cluster node is hosted on global-cluster node phys-schost-1.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> select node physical-host=phys-schost-1
clzc:sczone:node> add dataset
clzc:sczone:node:dataset> set name=local_pool
clzc:sczone:node:dataset> end
clzc:sczone:node> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
     --- Solaris Resources for phys-schost-1 --- 
…
   Resource Name:                                dataset
     name:                                          local_pool

Adding Storage Devices to a Zone Cluster

This section describes how to add the direct use of global storage devices by a zone cluster or add storage devices that are dedicated to a single zone-cluster node. Global devices are devices that can be accessed by more than one node in the cluster, either one node at a time or multiple nodes concurrently.

After a device is added to a zone cluster, the device is visible only from within that zone cluster.

This section contains the following procedures:

• How to Add a Global Storage Device to a Zone Cluster
• How to Add a Raw-Disk Device to a Specific Zone-Cluster Node

How to Add a Global Storage Device to a Zone Cluster

Perform this procedure to add one of the following types of storage devices in cluster scope:


Note - To add a raw-disk device to a specific zone-cluster node, go instead to How to Add a Raw-Disk Device to a Specific Zone-Cluster Node.


The clsetup utility discovers and displays the available storage devices that are configured on the cluster nodes where the selected zone cluster is configured. After you use the clsetup utility to add a storage device to an existing zone cluster, use the clzonecluster command to modify the configuration. For instructions on using the clzonecluster command to remove a storage device from a zone cluster, see How to Remove a Storage Device From a Zone Cluster in Oracle Solaris Cluster System Administration Guide.

  1. Assume the root role on a node of the global cluster that hosts the zone cluster.

    You perform all steps of this procedure from a node of the global cluster.

  2. Identify the device to add to the zone cluster and determine whether it is online.
    phys-schost# cldevicegroup status
  3. If the device that you are adding is not online, bring it online.
    phys-schost# cldevicegroup online device
  4. Start the clsetup utility.
    phys-schost# clsetup

    The Main Menu is displayed.


    Tip - To return to a previous screen, type the < key and press Return.


  5. Choose the Zone Cluster menu item.

    The Zone Cluster Tasks Menu is displayed.

  6. Choose the Add File System/Storage Device to a Zone Cluster menu item.

    The Select Zone Cluster menu is displayed.

  7. Choose the zone cluster where you want to add the storage device.

    The Storage Type Selection menu is displayed.

  8. Choose the Device menu item.

    A list of the available devices is displayed.

  9. Choose a storage device from the list.

    You can also type e to manually specify properties for a storage device.

    The Storage Device Property for the Zone Cluster menu is displayed.

  10. Add or change any properties for the storage device you are adding.

    Note - An asterisk (*) is used as a wildcard character in the path name.


    When finished, type d and press Return. The Review File Systems/Storage Devices for the Zone Cluster menu is displayed.

  11. Type c to save the configuration change.

    The results of your configuration change are displayed. For example:

     >>> Result of Configuration Change to the Zone Cluster(sczone) <<<
    
        Adding file systems or storage devices to sczone zone cluster...
    
        The zone cluster is being created with the following configuration
    
            /usr/cluster/bin/clzonecluster configure sczone
            add device
            set match=/dev/md/ddg/*dsk/*
            end
            add device
            set match=/dev/md/shared/1/*dsk/*
            end
    
        Configuration change to sczone zone cluster succeeded.
        The change will become effective after the zone cluster reboots.
  12. When finished, exit the clsetup utility.
  13. Verify the addition of the device.
    phys-schost# clzonecluster show -v zoneclustername

How to Add a Raw-Disk Device to a Specific Zone-Cluster Node

Perform this procedure to add a raw-disk device to a specific zone-cluster node. This device would not be under Oracle Solaris Cluster control. Perform all steps of the procedure from a node of the global cluster.


Note - To add a raw-disk device for use by the full zone cluster, go instead to How to Add a Global Storage Device to a Zone Cluster.


  1. Assume the root role on a node of the global cluster that hosts the zone cluster.

    You perform all steps of this procedure from a node of the global cluster.

  2. Identify the device (cNtXdYsZ) to add to the zone cluster and determine whether it is online.
  3. Add the device to the zone-cluster configuration in the node scope.

    Note - An asterisk (*) is used as a wildcard character in the path name.


    phys-schost# clzonecluster configure zone-cluster-name
    clzc:zone-cluster-name> select node physical-host=baseclusternode
    clzc:zone-cluster-name:node> add device
    clzc:zone-cluster-name:node:device> set match=/dev/*dsk/cNtXdYs*
    clzc:zone-cluster-name:node:device> end
    clzc:zone-cluster-name:node> end
    clzc:zone-cluster-name> verify
    clzc:zone-cluster-name> commit
    clzc:zone-cluster-name> exit
    match=/dev/*dsk/cNtXdYs*

    Specifies the full device path of the raw-disk device

  4. Verify the addition of the device.
    phys-schost# clzonecluster show -v zoneclustername

Example 6-4 Adding a Raw-Disk Device to a Specific Zone-Cluster Node

The following example adds the raw-disk device c1t1d0s0 for use by a node of the sczone zone cluster. This zone-cluster node is hosted on global-cluster node phys-schost-1.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> select node physical-host=phys-schost-1
clzc:sczone:node> add device
clzc:sczone:node:device> set match=/dev/*dsk/c1t1d0s0
clzc:sczone:node:device> end
clzc:sczone:node> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
     --- Solaris Resources for phys-schost-1 --- 
…
   Resource Name:                                device
     name:                                          /dev/*dsk/c1t1d0s0