Sun Cluster Data Service for WebSphere MQ Guide for Solaris OS

Verifying the Installation and Configuration of WebSphere MQ

This section contains the procedure that you use to verify the installation and configuration of WebSphere MQ.

Procedure: How to Verify the Installation and Configuration of WebSphere MQ

This procedure does not verify that your application is highly available because you have not yet installed your data service.

Perform this procedure on one node or zone of the cluster, unless a specific step indicates otherwise.

  1. Ensure the zone is booted, if a non-global zone or failover zone is being used.

    Repeat this step on all nodes of the cluster if a non-global zone is being used, and on one node of the cluster if a failover zone is being used.

    Boot the zone if it is not running.


    # zoneadm list -v
    # zoneadm -z zonename boot
    
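    In the zoneadm list -v output, a running zone shows the status running, and a zone that is installed but not booted shows the status installed. The following output is a sketch; the exact columns vary by Solaris release, and zonename and /zonepath are placeholders.


    # zoneadm list -v
      ID NAME             STATUS     PATH
       0 global           running    /
       - zonename         installed  /zonepath
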
  2. Log in to the zone, if a non-global zone or failover zone is being used.


    # zlogin zonename
    
  3. Start the queue manager, create a persistent queue, and put a test message on that queue.


    # su - mqm
    $ strmqm queue-manager
    $ runmqsc queue-manager
    def ql(sc3test) defpsist(yes)
    end
    $
    $ /opt/mqm/samp/bin/amqsput SC3TEST queue-manager
    test test test test test
    ^C
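
    Optionally, verify that the test message is on the queue by displaying the queue's current depth from runmqsc. This is a sketch; a current depth of 1 indicates that the put succeeded.


    $ runmqsc queue-manager
    * Expect CURDEPTH(1) in the response if the test message is queued.
    dis ql(sc3test) curdepth
    end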
  4. Stop the queue manager.


    $ endmqm -i queue-manager
    $ exit
    
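    Optionally, before exiting the mqm user shell, confirm that the queue manager has ended by running the dspmq command. This is a sketch; the exact status text can vary by WebSphere MQ release.


    $ dspmq
    QMNAME(queue-manager)  STATUS(Ended immediately)
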
  5. Log out of the zone, if a non-global zone or failover zone is being used.


    # exit
    
  6. Unmount the highly available local file system.

    Perform this step in the global zone only.

    You should unmount the highly available local file system that you mounted in Step 7 of How to Install and Configure WebSphere MQ.

    1. If a non-ZFS highly available local file system is being used for WebSphere MQ.

      1. If the global zone is being used for WebSphere MQ.


        # umount websphere-mq-highly-available-local-file-system
        
      2. If a non-global zone or failover zone is being used for WebSphere MQ.

        Unmount the highly available local file system from the zone.


        # umount /zonepath/root/websphere-mq-highly-available-local-file-system
        
    2. If a ZFS highly available file system is being used for WebSphere MQ.


      # zpool export -f HAZpool
      
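      After the export, the pool should no longer be visible on this node. This is a sketch; the exact message can vary by Solaris release.


      # zpool list HAZpool
      cannot open 'HAZpool': no such pool
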
  7. Relocate the shared storage to the other node.

    Perform this step on another node of the cluster.

    1. If a non-ZFS highly available local file system is being used for the WebSphere MQ files.

      Ensure the node has ownership of the disk set or disk group.

      For Solaris Volume Manager.


      # metaset -s disk-set -t
      

      For Veritas Volume Manager.


      # vxdg -C import disk-group
      # vxdg -g disk-group startall
      
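      Optionally, for Solaris Volume Manager, confirm that this node now owns the disk set by running metaset without the take option; the output lists the current owner. A sketch:


      # metaset -s disk-set
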
      1. If the global zone is being used for WebSphere MQ.


        # mount websphere-mq-highly-available-local-file-system
        
      2. If a non-global zone or failover zone is being used for WebSphere MQ.

        Create the mount point on all zones of the cluster that are being used for WebSphere MQ.

        Mount the highly available local file system on one of the zones being used.


        # zlogin zonename mkdir websphere-mq-highly-available-local-file-system
        #
        # mount -F lofs websphere-mq-highly-available-local-file-system \
        > /zonepath/root/websphere-mq-highly-available-local-file-system
        
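        Optionally, verify the mount from within the zone; a sketch, using df:


        # zlogin zonename df -k websphere-mq-highly-available-local-file-system
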
    2. If a ZFS highly available file system is being used for WebSphere MQ.


      # zpool import -R /zonepath/root HAZpool
      
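      After the import, the pool should appear in the zpool list output on this node with a HEALTH value of ONLINE. A sketch:


      # zpool list HAZpool
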
  8. Log in to the zone, if a non-global zone or failover zone is being used.

    Perform this step on the other node of the cluster.


    # zlogin zonename
    
  9. Start the queue manager, get the test message, and delete the queue.

    Perform this step on the other node or zone of the cluster.


    # su - mqm
    $ strmqm queue-manager
    $ /opt/mqm/samp/bin/amqsget SC3TEST queue-manager
    ^C
    $ runmqsc queue-manager
    delete ql(sc3test)
    end
    
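    The amqsget sample program should display the test message before you interrupt it. Output similar to the following sketch indicates that the persistent message survived the relocation; the exact wording can vary by WebSphere MQ release.


    Sample AMQSGET0 start
    message <test test test test test>
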
  10. Stop the queue manager.

    Perform this step on the other node or zone of the cluster.


    $ endmqm -i queue-manager
    $ exit
    
  11. Log out of the zone, if a non-global zone or failover zone is being used.


    # exit
    
  12. Unmount the highly available local file system.

    Perform this step in the global zone only.

    You should unmount the highly available local file system that you mounted in Step 7 of How to Install and Configure WebSphere MQ.

    1. If a non-ZFS highly available local file system is being used for WebSphere MQ.

      1. If the global zone is being used for WebSphere MQ.


        # umount websphere-mq-highly-available-local-file-system
        
      2. If a non-global zone or failover zone is being used for WebSphere MQ.

        Unmount the highly available local file system from the zone.


        # umount /zonepath/root/websphere-mq-highly-available-local-file-system
        
    2. If a ZFS highly available file system is being used for WebSphere MQ.


      # zpool export -f HAZpool
      
  13. Shut down the zone, if a failover zone is being used.


    Note –

    This step is only required if a failover zone is being used.



    # zlogin zonename halt
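
    Optionally, confirm that the zone has halted by running zoneadm list in the global zone; by default it lists only running zones, so the halted zone should no longer appear. A sketch:


    # zoneadm list
    global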