
B Other ASR Manager Administration

This appendix provides additional or alternative information for managing your ASR Manager environment. Sections include:

  • ASR Manager and High Availability

  • Run OASM or ASR Manager as Non-root User

B.1 ASR Manager and High Availability

This section describes steps used to build an ASR Manager setup that is more recoverable than a single server. It shows one way to do this without complex cluster software; many other approaches are possible.

B.1.1 Using a Solaris 10 Local/Non-global Zone

The concept is to select two identical servers with shared storage. A local/non-global zone path is set up on the shared storage, and the ASR Manager software is installed in that zone. If the primary server fails and cannot be brought back on-line in a timely manner, the local/non-global zone can be moved to the secondary server and brought up there. Because ASR Manager is installed in the local/non-global zone, the application can be moved between the primary and secondary servers.

The shared storage can be direct fiber attached, SAN, iSCSI, and so on. This example uses direct fiber-attached storage and ZFS. The basics apply no matter what the shared storage is.

The basic procedure for moving the local/non-global zone is: shut down the ASR local/non-global zone on the primary server and export the ZFS zpool; then, on the secondary server, import the zpool and boot the local/non-global zone.

Keep the following in mind when preparing the setup and the fail-over process:

  • It is preferred to use identical servers for the primary and secondary hosts. This allows you to move the local/non-global zone from one server to the other without having to run zonecfg to change the network interface or storage devices.

  • Both the primary and secondary servers must have the same Solaris 10 revision and the same patches installed.

  • Set the zone autoboot property to false. This prevents the local/non-global zone from being booted on both servers.

  • If using ZFS, be sure to import the zpool on only one server. ZFS does not support importing a zpool on two separate hosts at the same time.

  • In this example, the local/non-global zone is set up manually on the secondary server. You can use zone detach and attach within a script if preferred, as shown in the sketch after this list.
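
The following is a minimal sketch of the autoboot and detach/attach points above. The host, zone, and pool names match the labels used later in this example; the commands themselves are an illustration, not part of the product.

    # Disable autoboot so the zone is never started automatically on either host
    asrprd-01# zonecfg -z asrmanager
    zonecfg:asrmanager> set autoboot=false
    zonecfg:asrmanager> commit
    zonecfg:asrmanager> exit

    # Optional scripted move using detach/attach (first block on the primary
    # host, second block on the secondary host)
    asrprd-01# zoneadm -z asrmanager halt
    asrprd-01# zoneadm -z asrmanager detach
    asrprd-01# zpool export asr-zones

    asrprd-02# zpool import asr-zones
    asrprd-02# zoneadm -z asrmanager attach
    asrprd-02# zoneadm -z asrmanager boot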

Required hardware setup:

  • Two identical Sun servers that support ASR Manager requirements. See Hardware Requirements for more details.

  • Shared storage with a file system that can be moved between the primary and secondary servers, or that supports being mounted on both hosts at the same time (such as a cluster-supported file system).

  • ASR Manager software.

B.1.1.1 Setup and Overview

Initial setup and overview process of primary and secondary hosts:

  1. Build two Sun servers with Solaris 10 Update 6 (10u6) or later.

  2. Attach shared storage to both primary and secondary host.

  3. Create file system on shared storage and test the move (export/import) between primary and secondary host.

  4. Create the ASR local/non-global zone for ASR Manager.

  5. Copy the zone configuration XML file and the zone index file entry from the primary host to the secondary host (a sketch follows this list).

  6. Verify you can shut down the ASR Manager local/non-global zone on the primary host and bring up the ASR Manager on the secondary host.

  7. Install and verify ASR Manager (see Install ASR).

  8. Finally, configure ASR Manager to monitor systems.
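
The following is a minimal sketch of step 5, assuming the default Solaris 10 zone configuration location (/etc/zones) and that the secondary host is reachable with scp; adjust file names and host names for your environment.

    # On the primary host, copy the zone configuration file to the secondary host
    asrprd-01# scp /etc/zones/asrmanager.xml asrprd-02:/etc/zones/asrmanager.xml

    # Display the asrmanager entry in the primary host's zone index file, then
    # append the same line to /etc/zones/index on the secondary host
    asrprd-01# grep '^asrmanager:' /etc/zones/index
    asrprd-02# vi /etc/zones/index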

The following is an example of moving the zone and ZFS file system from the primary host to the secondary host:

In this example we will use the following labels:

  • Local/non-global hostname: asrmanager

  • Primary server: asrprd-01

  • Secondary server: asrprd-02

  • Zpool name for ZFS: asr-zones

  • Path to ASR zone: /asr-zones/asrmanager

At this point, the primary host has the ZFS zpool imported and the asrmanager local/non-global zone is booted:

  • Show running asrmanager local/non-global zone:

    asrprd-01# zoneadm list -vc
    
    ID NAME        STATUS     PATH                   BRAND    IP
    0 global       running    /                      native   shared
    1 asrmanager   running    /asr-zones/asrmanager  native   shared
    
  • Show ZFS zpool:

    asrprd-01# zpool list
    
    NAME        SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
    asr-zones   272G  1.04G   271G     0%  ONLINE  -
    
  • Show ZFS file systems:

    asrprd-01# zfs list | grep asr
    
    asr-zones             1.03G   267G    23K  /asr-zones
    asr-zones/asrmanager  1.03G   267G  1.03G  /asr-zones/asrmanager
    

B.1.1.2 Moving from Primary Host to Secondary Host

Note:

This procedure is required in the event of issues with, or maintenance work on, the primary server.

Steps used to move from the primary host to the secondary host:

  1. Shut down asrmanager local/non-global zone:

    asrprd-01# zoneadm -z asrmanager halt
    
  2. Verify zone is shut down:

    asrprd-01# zoneadm list -vc
    

    Command output should look like this:

    ID  NAME         STATUS      PATH                    BRAND    IP
    0   global       running     /                       native   shared
    -   asrmanager   installed   /asr-zones/asrmanager   native   shared
    
  3. Export ZFS zpool:

    asrprd-01# zpool export asr-zones
    
  4. Verify ZFS zpool has been exported:

    asrprd-01# zpool list
    

    Expected command output should be:

    no pools available
    

Now that the asrmanager local/non-global zone has been shut down and the ZFS zpool exported, log in to the secondary host and import the zpool and boot the local/non-global zone:

  1. Verify that the ZFS zpool is not imported:

    asrprd-02# zpool list
    
  2. Import the ZFS zpool where the asrmanager zone resides:

    asrprd-02# zpool import asr-zones
    
  3. Verify ZFS zpool has been imported:

    asrprd-02# zpool list
    
    NAME        SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
    asr-zones   272G  1.03G   271G     0%  ONLINE  -
    
  4. Show ZFS file systems:

    asrprd-02# zfs list | grep asr
    
    asr-zones             1.03G   267G    23K  /asr-zones
    asr-zones/asrmanager  1.03G   267G  1.03G  /asr-zones/asrmanager
    
  5. Boot asrmanager local/non-global zone:

    asrprd-02# zoneadm -z asrmanager boot
    
  6. Verify asrmanager local/non-global zone has booted:

    asrprd-02# zoneadm list -vc
    
    ID  NAME         STATUS     PATH                     BRAND    IP
    0   global       running    /                        native   shared
    1   asrmanager   running    /asr-zones/asrmanager    native   shared
    

ASR Manager is now running in a local/non-global zone on the secondary host.
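
As an additional check (not part of the original procedure), you can log in to the zone and confirm that the OASM service is online; a minimal sketch:

    asrprd-02# zlogin asrmanager
    asrmanager# svcs sasm          # STATE should show online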

B.1.2 Using Linux and IP Route Package

The concept is to select two servers that are identical and have shared storage. A virtual IP address is set up using the IP Route utility to send ASR traffic to and from the source IP using the virtual IP. Shared storage is mounted between each host where the ASR Manager software is installed.

Using the IP Route utility, the virtual IP that ASR Manager uses can be moved from the primary server (for example, in the event that the primary server fails and cannot be brought back on-line in a timely manner) to the secondary server where the VIP/source route can be brought up. Finally, the shared storage file systems are mounted, and ASR Manager can be started.

The shared storage can be direct fiber attached, SAN, iSCSI, and so on. The example below uses direct fiber-attached storage and ext3 file systems. The basics apply no matter what shared storage is used.

The basic concept for moving from the primary server to the secondary server is:

  • On the primary server:

    1. Shut down ASR Manager on the primary host (if the primary host is up).

    2. Run the ip route command to remove the source route.

    3. Unplumb the VIP.

    4. Unmount the file systems that ASR Manager uses.

  • On the secondary server:

    1. Plumb the VIP.

    2. Run ip route to add source routing.

    3. Mount file systems.

    4. Start ASR Manager.

Keep the following in mind when preparing the setup and the fail-over process:

  • It is preferred to use identical servers for the primary and secondary host.

  • Both primary and secondary servers must have the same Linux revision and same patches installed.

  • Do not start ASR Manager on boot (see the sketch after this list).

  • If using ext3, do not mount file systems on both hosts at the same time.
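
A minimal sketch of preventing ASR Manager from starting at boot, assuming the sasm service is registered as an init script with chkconfig (verify the service name on your installation):

    # Prevent the OASM/ASR Manager service from starting at boot
    [root@asrprd-01]# chkconfig sasm off
    [root@asrprd-01]# chkconfig --list sasm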

Required hardware setup:

  • Two identical servers that support ASR Manager requirements. See Hardware Requirements for more details.

  • Shared storage with a file system that can be moved between the primary and secondary servers, or that supports being mounted on both hosts at the same time (for example, a cluster-supported file system).

  • ASR Manager software.

B.1.2.1 Setup and Overview

Initial setup and overview process of primary and secondary hosts:

  1. Build two Linux servers with a version such as Oracle Linux Update 7 or later.

  2. Add the IP Route package. The iproute-2.6.18-11.el5.i386.rpm file was used in the example below. This rpm file is located in the "Server" directory on the Oracle Linux DVD (see the installation sketch after this list).

  3. Attach shared storage to both primary and secondary hosts.

  4. Create the /opt and /var/opt file systems on shared storage and test moving them between the primary and secondary hosts.

  5. Plumb the VIP interface and install/test IP Route source routing using the VIP's IP address (see the IP Route documentation).

  6. Install and verify ASR Manager (see Install ASR).
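
A minimal sketch of installing the IP Route package from the Oracle Linux DVD; the mount point /media/cdrom is an assumption, so adjust it to wherever the DVD is mounted on your system:

    # Install the iproute package from the "Server" directory of the DVD
    [root@asrprd-01]# rpm -ivh /media/cdrom/Server/iproute-2.6.18-11.el5.i386.rpm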

The example below shows how to move the ASR Manager from a primary host to a secondary host.

In this example we will use the following labels:

  • Virtual IP: asrmanager / 10.10.0.20

  • Primary server: asrprd-01 / 10.10.0.10

  • Secondary server: asrprd-02 / 10.10.0.11

  • File system mounts for ASR manager: /opt and /var/opt

On the primary host, create the virtual IP, set up the source route with the IP Route utility, and mount the file systems:

  1. Verify file systems /opt and /var/opt are mounted:

    [root@asrprd-01]# df | grep opt
     
    /dev/sdc             281722700    243924 267168072   1% /opt 
    /dev/sdb             281722700    243776 267168220   1% /var/opt
    
  2. Show the source IP:

    [root@asrprd-01]# ip route show
     
    10.10.0.0/24 dev eth0  proto kernel  scope link  src 10.10.0.10 
    default via 10.10.0.1 dev eth0
    
  3. Plumb the virtual IP interface:

    [root@asrprd-01]# /sbin/ifconfig eth0:0 10.10.0.20/24 broadcast 10.10.0.255
    
  4. Change the source IP:

    [root@asrprd-01]# ip route change 10.10.0.0/24 dev eth0 src 10.10.0.20
    
  5. Verify the source IP is set to the virtual IP:

    [root@asrprd-01]# ip route
     
    10.10.0.0/24 dev eth0  scope link  src 10.10.0.20
    default via 10.10.0.1 dev eth0
    

After the source IP is set to the virtual IP, you can ping another host from the primary server; on that host, the traffic should now show the virtual IP as the source rather than the primary server's IP. A sketch of this check follows.
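
The following is a minimal sketch of that check, assuming a remote test host named testhost on the same network (the host name is illustrative):

    # From the primary server, ping the remote host
    [root@asrprd-01]# ping -c 3 testhost

    # On the remote host, capture ICMP traffic; the echo requests should now
    # arrive with the virtual IP (10.10.0.20) as the source address
    [root@testhost]# tcpdump -n icmp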

At this point, install the ASR Manager software, which installs in /opt and /var/opt (see Install ASR).

To move the ASR Manager and the virtual IP to a secondary host:

  1. Log in to the primary server.

  2. Shut down ASR Manager:

    service sasm stop
    
  3. Change source IP route back:

    [root@asrprd-01]# ip route change 10.10.0.0/24 dev eth0 src 10.10.0.10
    
  4. Verify the source IP is back to the primary server IP address:

    [root@asrprd-01]# ip route show
     
    10.10.0.0/24 dev eth0  scope link  src 10.10.0.10 
    default via 10.10.0.1 dev eth0
    
  5. Unplumb the virtual IP interface:

    [root@asrprd-01]# /sbin/ifconfig eth0:0 down
    
  6. Unmount the /opt and /var/opt file systems from shared storage.

  7. Log in to the secondary server.

  8. Show current source IP:

    [root@asrprd-02]# ip route show
     
    10.10.0.0/24 dev eth0  proto kernel  scope link  src 10.10.0.11 
    default via 10.10.0.1 dev eth0
    
  9. Plumb virtual IP interface:

    [root@asrprd-02]# /sbin/ifconfig eth0:0 10.10.0.20/24 broadcast 10.10.0.255
    
  10. Change source IP:

    [root@asrprd-02 ~]# ip route change 10.10.0.0/24 dev eth0 src 10.10.0.20
    
  11. Verify source IP is set to the virtual IP:

    [root@asrprd-02 ~]# ip route show
     
    10.10.0.0/24 dev eth0  scope link  src 10.10.0.20 
    default via 10.10.0.1 dev eth0
    
  12. Mount the /opt and /var/opt file systems from shared storage.

  13. Start ASR Manager on secondary host:

    service sasm start
    

ASR Manager is now running on the secondary host.
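
The steps above can also be collected into two small scripts, one per host, to speed up fail-over. The following is a minimal sketch using the addresses and mount points from this example; the script names are illustrative, and the device names (/dev/sdc for /opt, /dev/sdb for /var/opt) follow the earlier df output and may differ on your systems.

    #!/bin/sh
    # release-asr.sh -- run on the primary host to release ASR Manager
    service sasm stop
    ip route change 10.10.0.0/24 dev eth0 src 10.10.0.10
    /sbin/ifconfig eth0:0 down
    umount /opt /var/opt

    #!/bin/sh
    # takeover-asr.sh -- run on the secondary host to take over ASR Manager
    /sbin/ifconfig eth0:0 10.10.0.20/24 broadcast 10.10.0.255
    ip route change 10.10.0.0/24 dev eth0 src 10.10.0.20
    mount /dev/sdc /opt
    mount /dev/sdb /var/opt
    service sasm start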

B.2 Run OASM or ASR Manager as Non-root User

To run OASM or the ASR Manager as a non-root user:

  1. Stop OASM:

    For Solaris: svcadm disable sasm

    For Linux: service sasm stop

  2. Create the OASM role and assign it to a normal user:

    1. Run: /opt/SUNWswasr/util/oasm_asr_nonroot.sh

    2. Set the password; run: passwd oasm

    3. Assign the OASM role; run: usermod -R oasm <non-root-user>

      Where <non-root-user> is a normal user account.

      Note:

      This step is for Solaris only.
  3. Start OASM:

    For Solaris: svcadm enable sasm

    For Linux: service sasm start

  4. Log in to the ASR Manager system as <non-root-user> and switch to the OASM role:

    su - oasm
    
  5. Once the role is switched, you can perform the following tasks:

    • Check OASM status:

      For Solaris: svcs sasm

      For Linux: service sasm status

    • Disable OASM service:

      For Solaris: svcadm disable sasm

      For Linux: service sasm stop

    • Enable OASM service:

      For Solaris: svcadm enable sasm

      For Linux: service sasm start

    Note:

    When OASM/ASR Manager runs as a non-root user, disable the ASR Auto Update functionality. To disable Auto Update:
    asr> disable_autoupdate