8 Preparing the Host Computers for an Enterprise Deployment

This chapter describes the tasks you must perform from each computer or server that will be hosting the enterprise deployment. For example, it explains how to mount the required shared storage systems to the host and how to enable the required virtual IP addresses on each host.

This chapter contains the following sections:

8.1 Verifying the Minimum Hardware Requirements for Each Host

After you have procured the required hardware for the enterprise deployment, log into each host computer and verify the system requirements listed in Section 5.1, "Hardware and Software Requirements for the Enterprise Deployment Topology".

If you are deploying to a virtual server environment, such as Oracle Exalogic, ensure that each of the virtual servers meets the minimum requirements.

Ensure that you have sufficient local disk space and that shared storage is configured as described in Chapter 7, "Preparing the File System for an Enterprise Deployment."

Allow sufficient swap and temporary space. Specifically:

  • Swap Space–The system must have at least 500 MB of swap space.

  • Temporary Space–There must be a minimum of 500 MB of free space in /tmp.
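You can check both values on a Linux host with standard commands, for example:

```shell
# Total swap space in MB (must be at least 500)
free -m | awk '/^Swap:/ {print $2}'

# Free space in /tmp in MB (must be at least 500)
df -m /tmp | awk 'NR==2 {print $4}'
```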

8.2 Verifying Linux Operating System Requirements

To ensure the host computers meet the minimum operating system requirements, be sure you have installed a certified operating system and that you have applied all the necessary patches for the operating system.

In addition, review the following sections for typical Linux operating system settings for an enterprise deployment:

8.2.1 Setting Linux Kernel Parameters

The kernel parameter and shell limit values shown below are recommended values only. Oracle recommends that you tune these values to optimize the performance of the system. See your operating system documentation for more information about tuning kernel parameters.

On all nodes in the topology, set the kernel parameters to at least the minimum values shown below.

The values in the following table are the current Linux recommendations. For the latest recommendations for Linux and other operating systems, see Oracle Fusion Middleware System Requirements and Specifications.

If you are deploying a database onto the host, you might need to modify additional kernel parameters. Refer to the 12c Oracle Grid Infrastructure Installation Guide for your platform.

Table 8-1 UNIX Kernel Parameters

Parameter       Minimum Value

kernel.sem      256 32000 100 142
kernel.shmmax   4294967295


To set these parameters:

  1. Log in as root and add or amend the entries in the /etc/sysctl.conf file.

  2. Save the file.

  3. Activate the changes by issuing the following command:

     /sbin/sysctl -p
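For example, the entries from Table 8-1 can be added and activated as follows. This is a sketch only; if the parameters already exist in /etc/sysctl.conf, edit them in place rather than appending duplicates:

```shell
# Run as root. Append the recommended minimum values to /etc/sysctl.conf.
cat >> /etc/sysctl.conf <<'EOF'
kernel.sem = 256 32000 100 142
kernel.shmmax = 4294967295
EOF

# Load the new values into the running kernel
/sbin/sysctl -p
```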
    

8.2.2 Setting the Open File Limit and Number of Processes Settings on UNIX Systems

On UNIX operating systems, the open file limit is an important system setting, which can affect the overall performance of the software running on the host computer.

For guidance on setting the Open File Limit for an Oracle Fusion Middleware enterprise deployment, see Section 5.1.2, "Host Computer Hardware Requirements".

Note:

The following examples are for Linux operating systems. Consult your operating system documentation to determine the commands to be used on your system.

For more information, see the following topic:

8.2.2.1 Viewing the Number of Currently Open Files

You can see how many files are open with the following command:

/usr/sbin/lsof | wc -l

To check your open file limits, use the commands below.

C shell:

limit descriptors

Bash:

ulimit -n
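In bash you can also distinguish between the soft limit (the value currently in effect) and the hard limit (the ceiling to which a non-root user can raise the soft limit):

```shell
# Soft limit: the open file limit currently in effect for this shell
ulimit -Sn

# Hard limit: the maximum a non-root user can raise the soft limit to
ulimit -Hn
```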

8.2.2.2 Setting the Operating System Open File and Processes Limit

To change the Open File Limit:

  1. Log in as root and edit the following file:

    /etc/security/limits.conf

  2. Add the following lines to the limits.conf file. (The values shown here are for example only):

    * soft  nofile  4096
    * hard  nofile  65536
    * soft  nproc   2047
    * hard  nproc   16384
    

The nofile values represent the open file limit; the nproc values represent the limit on the number of processes.

    For information on the suggested values, see Section 5.1.2.3, "Typical Memory, File Descriptors, and Processes Required for an Oracle SOA Suite Enterprise Deployment".

  3. Save the changes and close the limits.conf file.

  4. Reboot the host computer.

8.2.3 Verifying IP Addresses and Host Names in DNS or hosts File

Before you begin the installation of the Oracle software, ensure that the IP address, fully-qualified host name, and the short name of the host are all registered with your DNS server. Alternatively, you can use the local hosts file and add an entry similar to the following:

IP_Address Fully_Qualified_Name Short_Name

For example:

10.229.188.205  host1.example.com  host1
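You can verify that both names resolve by using getent, which consults DNS and the local hosts file in the order configured in /etc/nsswitch.conf. Using the example host names above:

```shell
# Both commands should print the same IP address
# (10.229.188.205 in the example entry above)
getent hosts host1.example.com
getent hosts host1
```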

8.3 Configuring Operating System Users and Groups

This section lists the users and groups to define on each of the computers that will host the enterprise deployment.

Groups

You must create the following groups on each node.

  • oinstall

  • dba

Users

You must create the following users on each node.

  • nobody–An unprivileged user.

  • oracle–The owner of the Oracle software. You may use a different name. The primary group for this account must be oinstall. The account must also be in the dba group.

Notes:

  • The group oinstall must have write privileges to all the file systems on shared and local storage that are used by the Oracle software.

  • Each group must have the same Group ID on every node.

  • Each user must have the same User ID on every node.
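The groups and the oracle user can be created as shown below. This is a sketch only; the numeric IDs (54321 and 54322) are example values, specified explicitly so that they can be kept identical on every node:

```shell
# Run as root on each node. Fixed GIDs and UIDs keep IDs consistent
# across all nodes, as required by the notes above.
groupadd -g 54321 oinstall
groupadd -g 54322 dba

# oracle: primary group oinstall, secondary group dba
useradd -u 54321 -g oinstall -G dba oracle

# Verify the assignments
id oracle
```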

8.4 Enabling Unicode Support

Your operating system configuration can influence the behavior of characters supported by Oracle Fusion Middleware products.

On UNIX operating systems, Oracle highly recommends that you enable Unicode support by setting the LANG and LC_ALL environment variables to a locale with the UTF-8 character set. This enables the operating system to process any character in Unicode. Oracle SOA Suite technologies, for example, are based on Unicode.

If the operating system is configured to use a non-UTF-8 encoding, Oracle SOA Suite components may function in an unexpected way. For example, a non-ASCII file name might make the file inaccessible and cause an error. Oracle does not support problems caused by operating system constraints.
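For example, you might set a UTF-8 locale in the oracle user's shell profile. The locale name en_US.UTF-8 is an example only; use any UTF-8 locale installed on the host:

```shell
# Add to ~/.bash_profile (or the equivalent for your shell)
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8

# Confirm the active settings; all categories should show a UTF-8 locale
locale
```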

8.5 Mounting the Required Shared File Systems on Each Host

The shared storage described in Section 7.2, "Shared Storage Recommendations When Installing and Configuring an Enterprise Deployment", must be available on the hosts that use it.

In an enterprise deployment, it is assumed that you have a hardware storage filer, which is available and connected to each of the host computers you have procured for the deployment.

You must mount the shared storage to all servers that require access.

Each host must have appropriate privileges set within the Network Attached Storage (NAS) or Storage Area Network (SAN) so that it can write to the shared storage.

Follow the best practices of your organization for mounting shared storage. This section provides an example of how to do this on Linux using NFS storage.

You must create and mount the shared storage locations so that SOAHOST1 and SOAHOST2 can see the same location for the binary installation, even when it is spread across two separate volumes.

For more information, see Section 7.2, "Shared Storage Recommendations When Installing and Configuring an Enterprise Deployment".

Use the following command to mount shared storage from a NAS storage device to a Linux host. If you are using a different type of storage device or operating system, refer to your manufacturer's documentation for information about how to do this.

Note:

The user account used to create a shared storage file system owns and has read, write, and execute privileges for those files. Other users in the operating system group can read and execute the files, but they do not have write privileges.

For more information about installation and configuration privileges, see "Selecting an Installation User" in the Oracle Fusion Middleware Installation Planning Guide.

In these examples, nasfiler represents the shared storage filer. Also note that these are examples only. Typically, you should mount these shared storage locations by using the /etc/fstab file on UNIX systems, so that the mounts survive a reboot. Refer to your operating system documentation for more information.

From SOAHOST1:

Create the /u01/oracle directory on SOAHOST1 and then mount the shared storage. For example:

mount -t nfs nasfiler:VOL1/oracle /u01/oracle

From SOAHOST2:

Repeat the procedure on SOAHOST2. Create the /u01/oracle directory, and then mount the shared storage as follows:

mount -t nfs nasfiler:VOL1/oracle /u01/oracle
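After mounting, you can confirm on each host that the file system is attached where expected, for example:

```shell
# Both commands should show /u01/oracle backed by the NFS filer
mount | grep '/u01/oracle'
df -h /u01/oracle
```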

Validating the Shared Storage Configuration

Ensure that you can read and write files to the newly mounted directories by creating a test file in the shared storage location you just configured.

For example:

$ cd newly_mounted_directory
$ touch testfile

Verify that the owner and permissions are correct:

$ ls -l testfile

Then remove the file:

$ rm testfile

Note:

The shared storage can be a NAS or SAN device. The following example shows how to mount storage from a NAS device from SOAHOST1. The options may differ depending on the specific storage device.

mount -t nfs -o rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768 nasfiler:VOL1/Oracle /u01/oracle

Contact your storage vendor and machine administrator for the correct options for your environment.

8.6 Enabling the Required Virtual IP Addresses on Each Host

To prepare each host for the enterprise deployment, you must enable the virtual IP (VIP) addresses described in Section 5.2, "Reserving the Required IP Addresses for an Enterprise Deployment".

It is assumed that you have already reserved the VIP addresses and host names and that they have been enabled by your network administrator. You can then enable the VIPs on the appropriate host, as described in Section 5.2.3, "Physical and Virtual IP Addresses Required by the Enterprise Topology".

Note that the virtual IP addresses used for the enterprise topology are not persisted because they are managed by Whole Server Migration (for selected Managed Servers and clusters) or by manual failover (for the Administration Server).

To enable the VIP addresses on each host, run the following commands as root:

/sbin/ifconfig interface:index IPAddress netmask netmask
/sbin/arping -q -U -c 3 -I interface IPAddress

where interface is eth0 or eth1, and index is 0, 1, or 2.

For example:

/sbin/ifconfig eth0:1 100.200.140.206 netmask 255.255.255.0

Enable your network to register the new location of the virtual IP address:

/sbin/arping -q -U -c 3 -I eth0 100.200.140.206

Validate that the address is available by pinging it from another node, for example:

/bin/ping 100.200.140.206
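The commands above can be gathered into a small script, run as root once per VIP. This is a sketch only; the interface, alias index, and addresses are the example values used earlier:

```shell
#!/bin/sh
# Enable a single VIP and verify it (run as root; values are examples)
IFACE=eth0            # physical interface carrying the VIP
INDEX=1               # alias index, giving eth0:1
VIP=100.200.140.206
NETMASK=255.255.255.0

# Bring up the interface alias with the VIP address
/sbin/ifconfig "${IFACE}:${INDEX}" "${VIP}" netmask "${NETMASK}"

# Announce the VIP's new location to the network
/sbin/arping -q -U -c 3 -I "${IFACE}" "${VIP}"

# Check reachability (also run this ping from another node)
/bin/ping -c 3 "${VIP}"
```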