High Availability Database (HADB) is a horizontally scalable database designed to provide data availability through load balancing, failover, and state recovery. HADB stores session data so that clustered Portal Server instances can fail over user sessions, including portlet sessions.
Use the following topology when you cluster Portal Server:
Gateway (installed with the SRA components of the portal) : https://gateway.com
LoadBalancer : https://loadbalancer.com/portal
Access Manager : https://accessmanager.com:8080/amconsole
Portal Server instance 1 and Administration Server : http://machine1.com:38080/portal
Portal Server instance 2 : http://machine2.com:38080/portal
where Portal Server instance 1 and instance 2 are clustered, and the Administration Server runs on port 4848 of machine 1. In the following procedure, HADB is installed on machine 1 and machine 2.
All machines should be in the same subnet.
Ensure that name resolution of all servers is correct on each server, either through the hosts file or DNS as required.
Ensure that the fully qualified hostname is the first entry after the IP address in the /etc/hosts file, and add entries for the other machines in the topology as well.
For example, machine1.domainname must be present in the hosts file of machine 2, and vice versa.
cat /etc/hosts on machine 1:
127.0.0.1 localhost
172.12.144.23 machine1.com machine1 loghost
172.12.144.24 machine2.com machine2
Repeat the above procedure for the hosts file on machine 2.
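For reference, the corresponding /etc/hosts on machine 2 might look like the following (the same example addresses are assumed; adjust to your environment):
127.0.0.1 localhost
172.12.144.24 machine2.com machine2 loghost
172.12.144.23 machine1.com machine1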
Remove any previously installed Java Enterprise System components from the system before starting the installation procedure.
HADB runs in a shared memory configuration, so configure shared memory on each node as follows.
Check the physical memory of the nodes.
prtconf | grep Mem
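The output resembles the following; the reported value depends on the node:
Memory size: 1024 Megabytes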
Calculate the value of the shminfo_shmmax parameter using the following formula.
shminfo_shmmax = ( Server's Physical Memory in MB / 256 MB ) * 10000000
For example, if the physical memory is 512 MB, the value of the shminfo_shmmax parameter is 20000000. For 1 GB, it is 40000000 and for 2 GB, it is 80000000.
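As a quick sanity check, the formula can be evaluated from a shell. This is only an illustration; MEM_MB is a placeholder for the node's physical memory in MB:
MEM_MB=1024
echo $(( MEM_MB / 256 * 10000000 ))   # prints 40000000 for a 1 GB node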
Add the following parameters to the /etc/system configuration file.
set shmsys:shminfo_shmmax=0x40000000
set shmsys:shminfo_shmseg=20
set semsys:seminfo_semmni=16
set semsys:seminfo_semmns=128
set semsys:seminfo_semmnu=1000
set ip:dohwcksum=0
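After the reboot performed later in this procedure, one way to confirm on Solaris that the shared memory setting took effect is to inspect the kernel tunables, for example:
sysdef | grep -i shmmax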
Check that hostname lookup and reverse lookup are functioning correctly.
Check the hosts entry in the /etc/nsswitch.conf file.
cat /etc/nsswitch.conf | grep hosts
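The hosts entry normally lists files ahead of dns, for example (your entry may differ depending on the name services in use):
hosts: files dns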
Allow non-console root login by commenting out the CONSOLE=/dev/console entry in the /etc/default/login file.
cat /etc/default/login | grep "CONSOLE="
CONSOLE=/dev/console
If you need to enable remote root FTP, comment out the root entry in the /etc/ftpd/ftpusers file.
cat /etc/ftpd/ftpusers | grep root
Permit ssh root login. Set PermitRootLogin to yes in the /etc/ssh/sshd_config file, and restart the ssh daemon process.
cat /etc/ssh/sshd_config | grep PermitRootLogin
PermitRootLogin yes
/etc/init.d/sshd stop
/etc/init.d/sshd start
Generate the ssh public and private key pair on each machine; in this procedure, on machine 1 and machine 2.
ssh-keygen -t dsa
Enter file in which to save the key (//.ssh/id_dsa): <Return>
Enter passphrase (empty for no passphrase): <Return>
Enter same passphrase again: <Return>
When running the ssh-keygen utility, do not enter a passphrase; just press Return. Otherwise, whenever Application Server 9.1 uses ssh, it prompts for the passphrase, which breaks the automated scripts.
Generate the keys on all Application Server 9.1 nodes before proceeding to the next step where the public key values are combined into the authorized_keys file.
Create the authorized_keys file on each server and copy all the public key values into it.
cd ~/.ssh
cp id_dsa.pub authorized_keys
Run the above commands on machine 2 as well. Then append the public key from the authorized_keys file on machine 1 to the authorized_keys file on machine 2, and vice versa, as in the sketch below.
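A minimal sketch of the exchange, assuming root ssh login is enabled as described above and the keys were generated under ~/.ssh on both machines:
# On machine 1: append machine 2's public key
ssh machine2.com "cat ~/.ssh/id_dsa.pub" >> ~/.ssh/authorized_keys
# On machine 2: append machine 1's public key
ssh machine1.com "cat ~/.ssh/id_dsa.pub" >> ~/.ssh/authorized_keys
# Verify passwordless login in both directions
ssh machine2.com hostname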
Disable the IPv6 interface, which is not supported by HADB. To do this on Solaris, remove the /etc/hostname6.__0 file, where __ is the interface name, such as eri or hme.
Synchronize the date and time on all machines in the topology.
Restart both machines using the init 6 command.
Install Application Server 9.1 including HADB from the Portal Server 7.2 GUI installer on both machine 1 and machine 2.
Run the HADB management agent (ma) on both machine 1 and machine 2.
cd /opt/SUNWhadb/4/bin
./ma
Note that running ma does not install all of the HADB packages; some must be added manually. On machine 1 and machine 2, navigate to the directory where you unzipped the Application Server 9.1 installer, for example, /Application Server 9.1 unzip location/installer-ifr/package/.
Run the pkgadd -d . SUNWhadbe SUNWhadbi SUNWhadbv command.
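To confirm that the packages were added, one option is to list the installed HADB packages:
pkginfo | grep SUNWhadb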
At this point, machine 1 hosts the DAS (Domain Administration Server) and the first Portal Server instance, and machine 2 hosts the second Portal Server instance.
Create the cluster on machine 1.
./asadmin create-cluster --user admin pscluster
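To verify that the cluster was created, one option (assuming the DAS is listening on the default administration port 4848) is:
./asadmin list-clusters --user admin --port 4848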
Create a node agent on machine 1.
./asadmin create-node-agent --host machine1.com --port 4848 --user admin machine1node
Create a node agent on machine 2.
./asadmin create-node-agent --host machine1.com --port 4848 --user admin machine2node
The --host option should point to machine 1, because machine 1 is the Administration Server.
Create an Application Server 9.1 instance on machine 1.
./asadmin create-instance --user admin --cluster pscluster --nodeagent machine1node --systemproperties HTTP_LISTENER_PORT=38080 machine1instance
Create an Application Server 9.1 instance on machine 2.
./asadmin create-instance --user admin --cluster pscluster --host machine1.com --nodeagent machine2node --systemproperties HTTP_LISTENER_PORT=38080 machine2instance
Start both node agents. On machine 1, run:
/opt/SUNWappserver/appserver/bin/asadmin start-node-agent machine1node
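On machine 2, a similar command starts the second node agent, assuming the same Application Server installation path:
/opt/SUNWappserver/appserver/bin/asadmin start-node-agent machine2node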
Run configure-ha-cluster.
./asadmin configure-ha-cluster --user admin --port 4848 --devicesize 256 --hosts machine1.com,machine2.com pscluster
If configure-ha-cluster fails, then before re-running it, run the ps -ef | grep ma command, kill all the ma processes, and also kill any process listening on port 15200. Restart the machines and run configure-ha-cluster again.
If configure-ha-cluster succeeds, you can install Portal Server 7.2 and take advantage of its enhanced cluster capability.
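Before installing Portal Server 7.2, one hedged way to sanity-check the cluster and the HA store from machine 1 (paths follow the earlier examples; the tools may prompt for the admin password) is:
# List the clustered server instances registered with the DAS
./asadmin list-instances --user admin --port 4848
# Check the state of the HADB database created by configure-ha-cluster (named after the cluster by default)
/opt/SUNWhadb/4/bin/hadbm status pscluster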