Oracle Solaris Cluster Data Service for Apache Tomcat Guide (Oracle Solaris Cluster 4.0)

Example: Creating and Configuring the Failover Zone

  1. Create and configure the zone on all nodes that can host this failover zone.

    The zpool hosting the zonepath must reside on a shared disk. For a two-node cluster, the zone must be configured on both nodes. The following example uses the phys-schost-1 node; perform the same actions on the phys-schost-2 node.
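
    If the ha-zones zpool does not already exist, create it on a shared device so that either node can import it. A minimal sketch, where c0t6d0 is a placeholder for your shared LUN:

    phys-schost-1# zpool create ha-zones c0t6d0
    phys-schost-1# zfs create ha-zones/solaris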

    phys-schost-1# zonecfg -z solarisfz1 'create -b;
    set zonepath=/ha-zones/solaris/solarisfz1;
    set autoboot=false; set ip-type=shared;
    add attr; set name=osc-ha-zone; set type=boolean; set value=true; end;
    add net; set address=zone-hostname; set physical=sc_ipmp0; end;'
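
    Optionally, verify the configuration before proceeding; zonecfg export prints the commands that reproduce it:

    phys-schost-1# zonecfg -z solarisfz1 export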
  2. Verify that the zone is configured on both nodes.
    phys-schost-1# zoneadm list -cv
     ID NAME             STATUS       PATH                            BRAND    IP
      0 global           running       /                              solaris  shared
      - solarisfz1       configured    /ha-zones/solaris/solarisfz1   solaris  shared
    phys-schost-2# zoneadm list -cv
     ID NAME             STATUS       PATH                            BRAND    IP
      0 global           running       /                              solaris  shared
      - solarisfz1       configured    /ha-zones/solaris/solarisfz1   solaris  shared
  3. Install the zone on phys-schost-1, which is where the ha-zones zpool is online.
    phys-schost-1:~# zoneadm -z solarisfz1 install
    Progress being logged to /var/log/zones/zoneadm.20030401T184050Z.solarisfz1.install
        Image:       Preparing at /ha-zones/solaris/solarisfz1/root.
        Install Log: /system/volatile/install.3349/install_log
        AI Manifest: /tmp/manifest.xml.QGa4Gg
        SC Profile:  /usr/share/auto_install/sc_profiles/enable_sci.xml
        Zonename:    solarisfz1
    Installation:    Starting ...
    
        Creating IPS image
        Installing packages from:
           solaris
             origin:  http://pkg.oracle.com/solaris/release/
           ha-cluster
             origin:  http://localhost:1008/ha-cluster/2c76b8fe7512dde39 \
                      c04c11f28f6be4603f39c66/
    DOWNLOAD                                  PKGS       FILES    XFER (MB)
    Completed                              167/167 32062/32062  175.8/175.8
    
    PHASE                                        ACTIONS
    Install Phase                            44313/44313
    
    PHASE                                          ITEMS
    Package State Update Phase                   167/167
    Image State Update Phase                         2/2
    Installation: Succeeded
    
        Note: Man pages can be obtained by installing pkg:/system/manual.
        Done: Installation completed in 550.217 seconds.
    Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete
    the configuration process.
    Log saved as /ha-zones/solaris/solarisfz1/root/var/log/zones/zoneadm.20030401T184050Z.solarisfz1.install.
  4. Verify that the zone was successfully installed and can boot up on phys-schost-1.
    1. Verify that the zone was installed.
      phys-schost-1# zoneadm list -cv
        ID NAME             STATUS      PATH                             BRAND    IP
         0 global           running       /                              solaris  shared
         - solarisfz1       installed     /ha-zones/solaris/solarisfz1   solaris  shared
    2. In a different window (for example, from an ssh, rlogin, or telnet window), log into the zone's console and boot the zone.
      phys-schost-1# zlogin -C solarisfz1
      phys-schost-1# zoneadm -z solarisfz1 boot
    3. Follow the prompts in the interactive screens to configure the zone. (A non-interactive alternative is sketched at the end of this step.)
    4. Shut down the zone, switch the resource group to the other node in the resource group node list, and detach the zone.
      phys-schost-1# zoneadm -z solarisfz1 shutdown
      phys-schost-1# clresourcegroup switch -n phys-schost-2 zone-rg
      phys-schost-1# zoneadm -z solarisfz1 detach -F
      phys-schost-1# zoneadm list -cv
        ID NAME             STATUS      PATH                             BRAND    IP
         0 global           running       /                              solaris  shared
         - solarisfz1       configured    /ha-zones/solaris/solarisfz1   solaris  shared
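
    As an alternative to the interactive configuration screens, you can capture the system configuration in a profile and supply it when the zone is installed (step 3). A sketch, assuming /tmp/sc_profile.xml as the profile path:

    phys-schost-1# sysconfig create-profile -o /tmp/sc_profile.xml
    phys-schost-1# zoneadm -z solarisfz1 install -c /tmp/sc_profile.xml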
  5. Assign the universally unique identifier (UUID) for the active boot environment (BE) from the first node, phys-schost-1, to the active BE on the second node, phys-schost-2.
    1. List the boot environments on both nodes and note the UUID of the active BE on phys-schost-1.
      phys-schost-1:~# beadm list -H
      b175b-fresh;70db96a2-5006-c84e-da77-f8bd430ba914;;;64512;static;1319658138
      s11_175b;b5d7b547-180d-467e-b2c4-87499cfc1e9d;NR;/;8000659456;static;1319650094
      s11_175b-backup-1;aba7a813-feb9-e880-8d7b-9d0e5bcd09af;;;166912;static;1319658479
      phys-schost-2:~# beadm list -H
      b175b-fresh;c37d524b-734a-c1e2-91d9-cf460c94110e;;;65536;static;1319471410
      s11_175b;1d0cca6d-8599-e54a-8afa-beb518b1d87a;NR;/;8096948224;static;1319293680
      s11_175b-backup-1;db2b581a-ea82-6e8c-9a3d-c1b385388fb7;;;167936;static;1319472971
    2. Set the UUID for the active BE of the global zone on phys-schost-2 to be the same as on phys-schost-1. The active BE is marked with the flag N in the third semicolon-separated field of the beadm list -H output. The UUID is set as a property on the BE's dataset, which you can identify by running df -b /.
      phys-schost-2:~# df -b /
      Filesystem            avail
      rpool/ROOT/s11_175b   131328596
      phys-schost-2:~# zfs set \
      org.opensolaris.libbe:uuid=b5d7b547-180d-467e-b2c4-87499cfc1e9d \
      rpool/ROOT/s11_175b
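
      To avoid copying the UUID by hand, you can extract it from the beadm list output and then confirm the dataset property with zfs get. A sketch, assuming (as shown above) that the active BE carries the N flag in the third semicolon-separated field:

      phys-schost-1# beadm list -H | awk -F';' '$3 ~ /N/ {print $2}'
      b5d7b547-180d-467e-b2c4-87499cfc1e9d
      phys-schost-2:~# zfs get org.opensolaris.libbe:uuid rpool/ROOT/s11_175b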
  6. Attach the zone and verify the zone can boot on the second node.
    1. Attach the zone.
      phys-schost-2# zoneadm -z solarisfz1 attach -F
    2. From another session, connect to the zone console.
      phys-schost-2# zlogin -C solarisfz1
    3. Boot the zone and observe the boot messages on the console.
      phys-schost-2# zoneadm -z solarisfz1 boot
  7. If the zone booted successfully, shut it down and detach it.
    phys-schost-2# zoneadm -z solarisfz1 shutdown
    phys-schost-2# zoneadm -z solarisfz1 detach -F
  8. On both nodes, install the failover zone agent (HA for Zones) if it is not already installed.

    The following example shows how to install the agent on phys-schost-1.

    phys-schost-1# pkg install ha-cluster/data-service/ha-zones
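
    To confirm that the agent is present, you can list the package on each node:

    phys-schost-1# pkg list ha-cluster/data-service/ha-zones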
  9. Create the resource from any one node and set the parameters on both nodes.

    The first two substeps below are shown on phys-schost-1.

    1. Register the SUNW.gds resource type.
      phys-schost-1# clresourcetype register SUNW.gds
    2. On both nodes, make a copy of the sczbt_config file and set the parameters in the copy.
      phys-schost-1# cd /opt/SUNWsczone/sczbt/util
      phys-schost-1# cp -p sczbt_config sczbt_config.solarisfz1-rs
      phys-schost-1# vi sczbt_config.solarisfz1-rs
      RS=solarisfz1-rs
      RG=zone-rg
      PARAMETERDIR=/ha-zones/solaris/solarisfz1/params
      SC_NETWORK=false
      SC_LH=
      FAILOVER=true
      HAS_RS=ha-zones-hasp-rs
      Zonename="solarisfz1"
      Zonebrand="solaris"
      Zonebootopt=""
      Milestone="svc:/milestone/multi-user-server"
      LXrunlevel="3"
      SLrunlevel="3"
      Mounts=""
    3. On phys-schost-2, create the params directory that appears in the sczbt_config file.
      phys-schost-2# mkdir /ha-zones/solaris/solarisfz1/params
    4. On one node, configure the zone-boot resource.

      The resource is configured with the parameters that you set in the sczbt_config file.

      phys-schost-2# ./sczbt_register -f ./sczbt_config.solarisfz1-rs
    5. On one node, enable the failover zone resource that was created.
      phys-schost-2# clresource enable solarisfz1-rs
    6. On one node, check the status of the resource groups and resources.
      phys-schost-2# clresource status -g zone-rg
      === Cluster Resources ===
      
      Resource Name         Node Name      State      Status Message
      -------------------   -------------  -----      -------------------
      solarisfz1-rs         phys-schost-1  Offline    Offline
                            phys-schost-2  Online     Online
      
      ha-zones-hasp-rs      phys-schost-1  Offline    Offline
                            phys-schost-2  Online     Online
    7. Verify that the zone boots successfully, and then switch the resource group to the other node to test switchover.
      phys-schost-2# clresourcegroup switch -n phys-schost-1 zone-rg
      phys-schost-2# clresource status -g zone-rg
      === Cluster Resources ===
      
      Resource Name         Node Name      State       Status Message
      -------------------   -------------  -----       -------------------
      solarisfz1-rs         phys-schost-1  Online      Online
                            phys-schost-2  Offline     Offline
      
      ha-zones-hasp-rs      phys-schost-1  Online      Online
                            phys-schost-2  Offline     Offline
    8. Verify that the zone successfully switched over to the other node.
      phys-schost-1# zlogin -C solarisfz1