6.4 Verifying and Re-applying Oracle VM Manager Tuning after Software Update

During a software update from Release 1.0.2 to Release 1.1.1, certain Oracle VM Manager tuning settings may not be applied properly, in which case default settings are used instead. After updating the Oracle Virtual Compute Appliance software stack, you must verify these tuning settings and re-apply them if necessary. Run the following procedure:

Verifying Oracle VM Manager Tuning Settings

  1. Using SSH and an account with superuser privileges, log in to the master management node.


    The data center IP address used in this procedure is an example.

    # ssh root@
    root@'s password:
    [root@ovcamn05r1 ~]#
  2. Verify that you are logged in to the master management node.

    [root@ovcamn05r1 ~]# ovca-check-master
    NODE:  MASTER: True

    If the command returns MASTER: False, log in to the other management node and run the same command.
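
    The master-node check in steps 1 and 2 can be scripted. A minimal sketch, assuming the `MASTER: True`/`MASTER: False` output format shown above; the command invocation is simulated here with a fixed string:

    ```shell
    # Decide whether the current node is the master management node by
    # inspecting the ovca-check-master output. The variable below is a
    # stand-in for: output=$(ovca-check-master)
    output="NODE:  MASTER: True"
    case "$output" in
      *"MASTER: True"*) echo "this is the master management node" ;;
      *)                echo "log in to the other management node" ;;
    esac
    ```

    On the non-master node the same check prints the second message, telling you to switch nodes before continuing.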

  3. Log in to the Oracle VM shell as the admin user.

    # /usr/bin/ovm_shell.sh -u admin
    OVM Shell: 3.2.<version_id> Interactive Mode
  4. At the Oracle VM shell prompt, enter the following command:

    >>> OvmClient.getOvmManager().getFoundryContext().getModelManager().getMaxCacheSize()

    If the value returned is not 300000, proceed with the next step.

    To exit the Oracle VM shell, press Ctrl+D.
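
    The comparison in step 4 can be expressed as a simple script. A hedged sketch: the cache-size value is hard-coded here as a stand-in for the result of the Oracle VM shell query shown above, which in practice you would capture into the variable yourself:

    ```shell
    # Compare the reported model-manager cache size against the tuned
    # value of 300000. "actual" is a stand-in for the value returned by
    # getMaxCacheSize() in the Oracle VM shell.
    expected=300000
    actual=150000
    if [ "$actual" -ne "$expected" ]; then
        echo "cache size is $actual; re-apply tuning (step 5)"
    else
        echo "cache size already tuned"
    fi
    ```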

  5. From the Oracle Linux command line on the master management node, apply the required Oracle VM Manager tuning settings by running the following Oracle VM shell script as the admin user:

    # /usr/bin/ovm_shell.sh -u admin -i /var/lib/ovca/ovm_scripts/ovmm_tuning.py
    live events max age: 24 hours
    archive events max age: 72 hours
    max cache size: 150000 objects
    live jobs max age: 168 hours
    archive jobs max age: 14 hours
    live jobs max age (after): 24 hours
    archive jobs max age (after): 168 hours
    live events max age (after): 3 hours
    archive events max age (after): 6 hours
    max cache size (after): 300000 objects
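
    The output of the tuning script can be checked mechanically for the tuned cache size. A minimal sketch, assuming the lines shown above were captured into a variable (only the final line is reproduced here as a stand-in):

    ```shell
    # Verify that the "(after)" cache-size line reports the tuned value.
    # "log" is a stand-in for the captured output of ovmm_tuning.py.
    log="max cache size (after): 300000 objects"
    if printf '%s\n' "$log" | grep -q "max cache size (after): 300000"; then
        echo "tuning applied successfully"
    else
        echo "tuning not applied; re-run the script"
    fi
    ```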
  6. When the tuning script completes successfully, log out of the master management node.