Many factors can affect the resources used by Oracle Enterprise Manager Ops Center, and ultimately its ability to scale to manage large deployments.
This chapter includes the following sections:
By default, the Agent Controller heap size is set to 1 GB. If an Agent Controller is not being used for virtualization, you can reduce this heap size to save resources on the managed system. If you later want to use such a system for virtualization, you should increase the heap size.
You can run the OCDoctor script with the --troubleshoot option to check the current heap usage.
An Agent Controller requires a minimum of 512 MB of RAM.
You can check the current heap usage on an Agent Controller system by running the /var/opt/sun/xvm/OCDoctor/OCDoctor.sh script with the --troubleshoot option. An analysis of the current Agent Controller heap usage is displayed.
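For example, run the script as root on the Agent Controller system:

```
# /var/opt/sun/xvm/OCDoctor/OCDoctor.sh --troubleshoot
```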
WARNING: Agent heap stats: configured: 1024MB max. usable: 980MB currently used: 925MB percentage used: 94%
WARNING:
WARNING: The heap usage is above 90% of the maximum usable size!
WARNING:
WARNING: To increase the maximum allowed heap size, you could use e.g. the following command:
WARNING:
WARNING: /var/opt/sun/xvm/OCDoctor/toolbox/changeXmxValue.sh 2048 m agent
WARNING:
INFO: Agent heap stats: configured: 1024MB max. usable: 980MB currently used: 41MB percentage used: 4%
INFO:
INFO: The heap usage is below 5%.
INFO:
INFO: If desired, you can reduce the maximum allowed heap size by running e.g. the following command:
INFO:
INFO: /var/opt/sun/xvm/OCDoctor/toolbox/changeXmxValue.sh 256 m agent
INFO:
You can use the OCDoctor's /var/opt/sun/xvm/OCDoctor/toolbox/changeXmxValue.sh script to change an Agent Controller's heap size, specifying the new heap size as an argument. For example:
/var/opt/sun/xvm/OCDoctor/toolbox/changeXmxValue.sh 256 m agent
When working with agents that access several thousand LUNs, the timeout for fetching all the LUNs can be exceeded.
# mpathadm list lu
To increase the timeout, open the /opt/sun/n1gc/etc/xvmluinfo.properties file on the agent system. In this file, add the following line:
If you are using Dynamic Storage Libraries containing a few thousand LUNs, in some cases not all LUNs are retrieved correctly.
If some LUNs are not displayed in the dynamic library, open the proxy logs in the /var/cacao/instances/scn-proxy/logs directory and check for the following message: "unable to fetch storage element info".
If this message appears in the logs, you can often resolve the issue by opening the /opt/sun/n1gc/lib/XVM_PROXY.properties file, changing the value of the storage.driver.pagination.size property to 100, and restarting the proxy. Allow some time for the LUNs to be fetched.
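For example, the edited line in the /opt/sun/n1gc/lib/XVM_PROXY.properties file would read:

```
# Reduce the page size used when fetching storage elements from the appliance
storage.driver.pagination.size=100
```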
In some Oracle Solaris 11 configurations, ZFS ARC can consume all available memory even when it should be released. You can limit the size of the ZFS ARC cache to prevent this issue.
Use this formula to determine the recommended ZFS ARC cache size:
ZFS ARC cache = (Physical memory - Enterprise Controller heap size - Database memory) x 70%
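As a sketch, the calculation can be scripted for a hypothetical system with 64 GB of physical memory, a 16 GB Enterprise Controller heap, and 16 GB reserved for the database (all sizing values here are assumptions for illustration; substitute your own measurements):

```shell
#!/bin/sh
# Hypothetical sizing values, in MB.
PHYS_MB=65536      # total physical memory (64 GB)
EC_HEAP_MB=16384   # Enterprise Controller heap size (16 GB)
DB_MB=16384        # database memory (16 GB)

# ZFS ARC cache = (Physical memory - EC heap size - Database memory) x 70%
ARC_MB=$(( (PHYS_MB - EC_HEAP_MB - DB_MB) * 70 / 100 ))
echo "Recommended zfs_arc_max: ${ARC_MB} MB"
```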
Once you have calculated the size, you can set it by changing the value of the zfs_arc_max property in a system file. On Oracle Solaris 11.0 or 11.1, edit the /etc/system file. On Oracle Solaris 11.2 or higher, edit the configuration file in the
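On Oracle Solaris 11.0 or 11.1, for example, the /etc/system entry might look like the following. The 24 GB limit shown is an assumption for illustration; the value is specified in bytes:

```
* Limit the ZFS ARC cache to 24 GB (value in bytes)
set zfs:zfs_arc_max=25769803776
```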
If you plan to provision multiple Oracle VM Server for SPARC guests in parallel, the default number of threads configured by the pkg depot may be too low to manage all of the jobs. You can avoid this issue by increasing the number of threads configured by the depot to 500.
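Assuming the depot runs under the stock pkg/server SMF service shipped with Oracle Solaris (the service and property names below are based on that standard configuration, not stated in this section), the thread count can typically be raised with svccfg and the service refreshed and restarted:

```
# svccfg -s application/pkg/server setprop pkg/threads=500
# svcadm refresh application/pkg/server
# svcadm restart application/pkg/server
```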
When discovering and managing a ZFS Storage Appliance, Ops Center can work with one of two drivers: a StorageConnect driver, which uses the StorageConnect API to communicate with the appliance, or an OpsCenter driver, which mostly uses the RESTful API along with some SSH commands. In the ZFS appliance discovery profile, you select which driver is used.
When the ZFS appliance is serving a large number of shares (typically a few thousand LUNs), discovery using the StorageConnect driver frequently fails. You can verify this by checking for the string "list index out of range" in the proxy logs, which are located in the /var/cacao/instances/scn-proxy/logs directory.
In such a case, using the OpsCenter driver often helps. If discovery failed when using the StorageConnect driver and this string appears in the logs, you can edit the discovery profile to use the OpsCenter driver and rediscover the appliance. Alternatively, if the appliance was already discovered using the StorageConnect driver, you can select "Switch Driver" in the Actions pane to switch to the OpsCenter driver.
Note: The OpsCenter driver works only with ZFS appliances running software version 2013.1.2 or later.
In environments with large numbers of assets, the automatic display of the asset hierarchy in the navigation pane and the display of the membership graph showing the relationship between managed assets can cause the user interface to respond slowly. You can disable the automatic asset tree expansion and the membership graph to improve performance.
You can disable automatic asset tree expansion, so that the asset tree in the navigation pane is not expanded automatically.
You can disable the membership graph, so that it is not displayed for selected assets.