
Oracle® Solaris Cluster 4.3 Software Installation Guide


Updated: June 2019
 
 

Establishing a New Logical Domain Cluster by Deploying the Oracle Solaris Cluster Oracle VM Server for SPARC Template

This section provides procedures that use the Oracle VM Server for SPARC template for Oracle Solaris Cluster to configure only guest domains or I/O domains as cluster nodes.


Note -  This template cannot be used with control domains. To configure control domains as cluster nodes, instead follow the procedures for physical machines to install the software and establish the cluster. See Finding Oracle Solaris Cluster Installation Tasks.

To add a logical domain to an existing logical-domain cluster by using the Oracle VM Server for SPARC template for Oracle Solaris Cluster, go to How to Add a Logical Domain to an Existing Logical-Domain Cluster by Using the Oracle VM Server for SPARC Template for Oracle Solaris Cluster.

How to Deploy the Oracle VM Server for SPARC Template for Oracle Solaris Cluster to Configure a Logical Domain Cluster

Perform this procedure to create a cluster of guest domains or of I/O domains.


Note -  This procedure cannot be used for the following tasks:
  • Create a cluster that contains both guest domains and I/O domains.

  • Create a cluster of control domains.

  • Add logical-domain nodes to an existing cluster.

To perform these tasks, instead follow the procedures for physical machines. See Finding Oracle Solaris Cluster Installation Tasks.


Before You Begin

  • Ensure that the ovmtutils package is installed in the control domain. You can use the following command to verify whether the package has been installed.

    # pkg info ovmtutils
  • Ensure that the Oracle VM Server for SPARC template file is accessible from the control domains.

  • Ensure that the Oracle VM Server for SPARC services have been defined:

    • Virtual disk service - The ovmtutils utilities create and configure various aspects of the Oracle VM Server for SPARC environment during deployment, but they require certain services to already be present. Some services are also required by subsequent tasks. The following example command, run from the control domain, creates a virtual disk service:

      # /usr/sbin/ldm add-vds primary-vds0 primary
    • Virtual console concentrator service - The following example command, run from the control domain, creates a virtual console concentrator service:

      # /usr/sbin/ldm add-vcc port-range=5000-5100 primary-vcc0 primary

Type all commands in this procedure from the control domain. The template file is located at /export/ovmt/ovm_osc43_sparc.ova.

  1. Remove the target logical domain if it already exists.
    # ovmtdeploy -U newdomain
  2. Create the working directory.

    If the working directory already exists, make sure that the directory is empty.

    # mkdir -p /domains/newdomain
    # ls -l /domains/newdomain
    total 0
  3. List the contents of the template without deploying the template.
    # ovmtdeploy -n -l -d newdomain /export/ovmt/ovm_osc43_sparc.ova
    
    Oracle VM for SPARC Deployment Utility
    ovmtdeploy Version 3.4.0.0.11
    Copyright (c) 2014, 2015, Oracle and/or its affiliates. All rights reserved.
    
    STAGE 1 - EXAMINING SYSTEM AND ENVIRONMENT
    ------------------------------------------
    Checking user privilege
    Performing platform & prerequisite checks
    Checking for required services
    Named resourced available
    
    STAGE 2 - ANALYZING ARCHIVE & RESOURCE REQUIREMENTS
    ---------------------------------------------------
    Checking .ova format and contents
    Validating archive configuration
    Listing archive configuration
    
    Assembly
    ------------------------
    Assembly name: ovm_osc43_sparc.ovf
    Gloabl settings:
    References: system -> System.img.gz
    Disks: system -> system
    Networks: primary-vsw0
    
    Virtual machine 1
    ------------------------
    Name: newdomain
    Description: Oracle Solaris Cluster 4.3 with 2 vCPUs, 4G memory, 1 disk image(s)
    vcpu Quantity: 2
    Memory Quantity: 4G
    Disk image 1: ovf:/disk/system -> system
    Network adapter 1: Ethernet_adapter_0 -> primary-vsw0
    Oracle Solaris Cluster 4.3
        name
    Solaris 11 System
        computer-name
        ifname
        time-zone
        keyboard
        language
    Solaris 11 Root Account
        root-password
    Solaris 11 User Account
        name.0
        real-name.0
        password.0
    Solaris 11 Network
        ipaddr.0
        netmask
        gateway.0
        dns-servers.0
        dns-search-domains.0
        name-service
        domain-name
        nis-servers
        ldap-profile
        ldap-servers
        ldap-search-base
        ldap-proxy-bind-distinguished-name
        ldap-proxy-bind-password
    Oracle Solaris Cluster
        cluster_name
        node_list
        interconnect
        private_netaddr
  4. Prepare the system configuration property files, which are required to configure each domain.

    Use the template Oracle Solaris system configuration file to compose your own file. The template for the Oracle Solaris system configuration property file is available at /opt/ovmtutils/share/props/solaris.properties.

    The system configuration property file is different for each node. A name service must be provided in the Oracle Solaris property file, so that the nodes can resolve the remote sponsor node name when they join the cluster.

  5. Prepare the cluster configuration property file, which is required to add each domain to the cluster.

    The cluster configuration file includes the following Oracle Solaris Cluster properties:

    • com.oracle.hacluster.config.cluster_name – Specifies the cluster name.

    • com.oracle.hacluster.config.node_list – Comma-separated list of the hostnames of the logical domains that form the cluster. The first node in the list is the first one added to the cluster and serves as the sponsor node for the rest of the nodes. All domains deployed by using the template must use exactly the same list, because the order of the list matters: the first hostname is the sponsor node.

    • com.oracle.hacluster.config.interconnect – Comma-separated list of the interconnect adapters, or pkeys if you are using InfiniBand partitions.

    • com.oracle.hacluster.config.private_netaddr – (Optional) Specifies a private network address that is compatible with the netmask 255.255.240.0. The default address is 172.16.0.0. When you use InfiniBand, the default private network address can be used because the pkeys differ among the clusters.

    You can use the same cluster configuration property file for all the new domains.
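Putting these four properties together, you could stage a single shared file before deployment. The following is a minimal sketch; the cluster name, node names, interconnect adapters, and file path are hypothetical examples, so substitute your own values:

```shell
# Sketch: stage one cluster configuration property file to reuse for
# every domain. All values below are hypothetical examples.
mkdir -p /export/ovmt/properties
cat > /export/ovmt/properties/cluster_newdomain.props <<'EOF'
com.oracle.hacluster.config.cluster_name=democluster
com.oracle.hacluster.config.node_list=node1,node2
com.oracle.hacluster.config.interconnect=net1,net2
com.oracle.hacluster.config.private_netaddr=172.16.0.0
EOF
```

Because the order of node_list determines the sponsor node, deploy every domain with this same file, unchanged.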

  6. In the control domains, type the ovmtdeploy command to deploy the new domains.

    You can use different options in the following scenarios:

    • If the control domain is in the vanilla state and does not have the switches created yet, use the –e option to specify the adapters for creating vswitches or vnets.

    • If the switches are already created in the control domain, you can use the order in the template as shown in the output of ovmtdeploy –n –l, or use the –t option to specify the order for using existing vswitches for each of the vnets.

    • Specify the SR-IOV virtual functions by using the –I option.

      The following example deploys a new domain with the specified switches/adapters and disks. The first disk specified by the –v option is the local root disk for the new domain, and the following two disks are shared disks:

      # /opt/ovmtutils/bin/ovmtdeploy -d newdomain -o /domains/newdomain \
       -k -s -c 8 -t primary-vsw0,priv-vsw1,priv-vsw2 -e net0,net2,net3 \
      -v /dev/rdsk/c0tNd0s2,/dev/rdsk/c0tX9d0s2,/dev/rdsk/c0tYd0s2 \
      /export/ovmt/ovm_osc43_sparc.ova

      The following example uses SR-IOV virtual functions for deploying a new domain:

      # /opt/ovmtutils/bin/ovmtdeploy -d newdomain -o /domains/newdomain -k -s -c 8 \
      -I /SYS/PCI-EM0/IOVIB.PF0.VF0,/SYS/PCI-EM4/IOVIB.PF0.VF0 \
      -e net0 \
      -v /dev/rdsk/c0tNd0s2,/dev/rdsk/c0tX9d0s2,/dev/rdsk/c0tYd0s2 \
      /export/ovmt/ovm_osc43_sparc.ova

    The –v option specifies a comma-separated list of target devices. For a cluster, you can specify an Oracle Solaris raw whole-disk device, for example /dev/rdsk/c3t3d0s2, or an iSCSI device, such as /dev/rdsk/c0t600144F00021283C1D7A53609BE10001d0s2. A target device on a slice and a target device on a block device are not supported. Specify the root zpool disk as the very first device. If you specify multiple disks, including local disks and shared devices, specify them in the same order for all the domains. For more information, see the ovmtdeploy(1M) man page.
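Because every domain must receive the same devices in the same order, it can help to build the –v value once from a single list and reuse it for each ovmtdeploy invocation. A minimal sketch, using hypothetical device paths:

```shell
# Hypothetical device paths; the first entry must be the root zpool disk.
ROOT_DISK=/dev/rdsk/c0t0d0s2
SHARED_DISKS="/dev/rdsk/c1t1d0s2 /dev/rdsk/c1t2d0s2"

# Assemble one comma-separated -v value and reuse it for every domain.
V_ARG="$ROOT_DISK"
for d in $SHARED_DISKS; do
  V_ARG="$V_ARG,$d"
done
echo "$V_ARG"
# -> /dev/rdsk/c0t0d0s2,/dev/rdsk/c1t1d0s2,/dev/rdsk/c1t2d0s2
```

Passing the same $V_ARG to every ovmtdeploy run guarantees an identical disk order across all the domains.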

  7. Configure the new domain to form the cluster.

    In all the control domains, use the ovmtconfig command to configure the new domains with the system and Oracle Solaris Cluster configuration property files that you created in Step 4 and Step 5. The ovmtconfig command also boots the domain to complete the remaining configuration operations. During this process, the domain is rebooted twice; the final reboot brings the domain into cluster mode.

    Use the –P option to specify the system and Oracle Solaris Cluster configuration property files, or use the –p option to specify an individual property, which overrides the same property specified inside the property file.

    # ovmtconfig -d newdomain -s -v \
    -P /export/ovmt/properties/system_node1OVM.props,/export/ovmt/properties/cluster_newdomain.props
    # ldm ls

    For more information, see the ovmtconfig(1M) man page.

  8. Identify the console port number of the domain and then connect to the console of that domain.
    # ldm ls newdomain
    # telnet 0 console-port-number-of-newdomain
    1. After all the domains join the cluster, log in to a domain and check the cluster configuration and status.
      # cluster show
      # cluster status
    2. Use the pkg info command to confirm that the cluster packages are installed.
    3. Use the cluster check command to verify the cluster configuration.
    4. Check whether any SMF services are in maintenance mode.
      # svcs -xv
    5. Check the public network configuration.
      # ipmpstat -g
  9. If the svc:/system/cluster/sc-ovm-config:default SMF service has failed and is in maintenance mode, check the deployment log file in /var/cluster/logs/install for the detailed list of errors.
  10. Request and download your own key and certificate files.

    The solaris and ha-cluster publishers that are set in the deployed domain do not work until you perform this step.

    1. Unset the solaris and ha-cluster publishers.
      # pkg unset-publisher solaris
      # pkg unset-publisher ha-cluster
    2. Go to https://pkg-register.oracle.com.
    3. Choose Oracle Solaris Cluster software.
    4. Accept the license.
    5. Request a new certificate by choosing Oracle Solaris Cluster software and submitting a request.

      The certification page is displayed with download buttons for the key and the certificate.

    6. Download the key and certificate files and install them as described in the returned certification page.
    7. Configure the ha-cluster publisher with the downloaded SSL keys and set the location of the Oracle Solaris Cluster 4.3 repository.

      In the following example, the repository location is https://pkg.oracle.com/repository-location/.

      # pkg set-publisher \
      -k /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.key.pem \
      -c /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.certificate.pem \
      -O https://pkg.oracle.com/repository-location/ ha-cluster
      –k /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.key.pem

      Specifies the full path to the downloaded SSL key file.

      –c /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.certificate.pem

      Specifies the full path to the downloaded certificate file.

      –O https://pkg.oracle.com/repository-location/

      Specifies the URL to the Oracle Solaris Cluster 4.3 package repository.

      For more information, see the pkg(1) man page.