When you design a Sun Cluster HA for SAP configuration, consider the following guidelines.
Use an SAP software version that is qualified with Sun Cluster 3.0.
Use an SAP software version that has automatic enqueue reconnect mechanism capability - Sun Cluster HA for SAP relies on this capability. SAP 4.0 software with the appropriate patches, and all later releases, include the automatic enqueue reconnect mechanism. Consult the SAP patch information for your release for details.
Retrieve the latest patch for the sapstart executable - This patch enables Sun Cluster HA for SAP users to configure a lock file. For details on the benefits of this patch in your cluster environment, see "Setting Up a Lock File".
Read all of the SAP Online Service System notes that relate to the SAP software release and the database that you are installing on your Sun Cluster configuration - These notes identify known installation problems and their fixes.
Consult SAP software documentation for memory and swap recommendations - SAP software uses a large amount of memory and swap space.
Generously estimate the total possible load on any node that might host the central instance, the database instance, and an internal application server, if you have one - This guideline is especially important if you configure the cluster so that the central instance, the database instance, and the application server all run on one node after a failover.
Install application servers on either the same cluster that hosts the central instance or on a separate cluster - If you install and configure any application server outside of the cluster environment, Sun Cluster HA for SAP does not perform fault monitoring and does not automatically restart or fail over those application servers. You must manually start and shut down application servers that you install and configure outside of the cluster environment.
Limit node names as outlined in the SAP installation guide - This limitation is an SAP software requirement.
Use the same instance number and the same SID when you create application server instances on multiple cluster nodes - This guideline eases maintenance and administration because you need only one set of commands to maintain all of the application servers on all of the nodes.
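As an illustration of why a common SID and instance number simplify administration, the following sketch drives every node with one command template. The SID (SC3), the instance number (D02), and the node names are hypothetical placeholders, not values from this guide:

```shell
#!/bin/sh
# Sketch only: the SID (SC3), the instance number (D02), and the node
# names are hypothetical. Because every application server instance
# shares the same SID and instance number, one command template serves
# every node.
SID=SC3
INSTANCE=D02
for node in phys-schost-1 phys-schost-2; do
    # On a real cluster you would run the SAP start command as the
    # administrative user (for example, sc3adm) on each node:
    #   rsh $node su - sc3adm -c "startsap r3 ${INSTANCE}"
    echo "maintain ${SID} instance ${INSTANCE} on ${node}"
done
```

If the instances used different SIDs or instance numbers, each node would instead require its own distinct maintenance commands.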
Install the application servers locally on the cluster node instead of on a cluster file system - This guideline ensures that another application server does not overwrite the log/data/work/sec directory for the application server.
Ensure that the SAPSIDadm home directory resides on a cluster file system - This guideline enables you to maintain only one set of scripts for all application server instances that run on all nodes. However, if you have some application servers that need to be configured differently (for example, application servers with different profiles), install those application servers with different instance numbers, and then configure them in a separate resource group.
Place the application servers into multiple resource groups if you want to use RGOffload functionality to shut down one or more application servers when a higher-priority resource fails over - This guideline provides flexibility and availability if you use RGOffload (a separate resource type) to offload one or more application servers in favor of the database. The flexibility you gain from this guideline outweighs the ease of use you gain from placing all of the application servers into one large resource group. See "Freeing Node Resources by Offloading Non-critical Resource Groups" on page 332 for more information on the RGOffload resource type.
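A hedged command sketch of the separate-resource-group approach follows. The resource-group and resource names (sap-db-rg, app1-rg, app2-rg, rgoffload-rs) are hypothetical, and you should verify the rg_to_offload extension property name against the SUNW.RGOffload documentation for your Sun Cluster release before use; these commands are meaningful only on a configured cluster:

```shell
# Register the RGOffload resource type (hypothetical names throughout).
scrgadm -a -t SUNW.RGOffload

# Add an RGOffload resource to the database resource group, naming the
# application server resource groups to shut down when the database
# group fails over onto a loaded node.
scrgadm -a -j rgoffload-rs -g sap-db-rg -t SUNW.RGOffload \
    -x rg_to_offload=app1-rg,app2-rg

# Enable the RGOffload resource.
scswitch -e -j rgoffload-rs
```

Because app1-rg and app2-rg are separate resource groups, they can be offloaded individually without affecting the other application servers.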