Perform the following procedure on the pair of back-end servers, called nodes, in each cluster. See 1.2.1 Physical System Names for more details.
Edit the /etc/inet/hosts file on both nodes to contain the following lines. Set the IP addresses appropriately for each cluster:
10.2.0.129   phys-bedgeN-1-ic-privateInterface1
10.2.1.1     phys-bedgeN-1-ic-privateInterface2
10.2.193.1   clusternode1-priv
10.2.0.130   phys-bedgeN-2-ic-privateInterface1
10.2.1.2     phys-bedgeN-2-ic-privateInterface2
10.2.193.2   clusternode2-priv
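Rather than editing by hand, the entries above can be appended with a here-document. The sketch below writes to a scratch file, /tmp/hosts.new, standing in for /etc/inet/hosts; the addresses shown are the example values from above and must be adjusted per cluster:

```shell
# Sketch: append the interconnect entries with a here-document.
# /tmp/hosts.new stands in for /etc/inet/hosts on a real node.
: > /tmp/hosts.new
cat >> /tmp/hosts.new <<'EOF'
10.2.0.129 phys-bedgeN-1-ic-privateInterface1
10.2.1.1   phys-bedgeN-1-ic-privateInterface2
10.2.193.1 clusternode1-priv
10.2.0.130 phys-bedgeN-2-ic-privateInterface1
10.2.1.2   phys-bedgeN-2-ic-privateInterface2
10.2.193.2 clusternode2-priv
EOF
# Confirm both private hostnames landed in the file.
grep clusternode /tmp/hosts.new
```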
Enable host-based ssh authentication from the first node to the second node with the following commands:
Copy the public key:
phys-bedgeN-1# cat /etc/ssh/ssh_host_rsa_key.pub
phys-bedgeN-1# cp -p /etc/ssh/ssh_host_rsa_key /.ssh/id_rsa
Establish an ssh connection to create the file /.ssh/known_hosts:
phys-bedgeN-1# ssh phys-bedgeN-2
Add the public key of the first node to the end of the list of authorized keys on the second node:
phys-bedgeN-2# vi /.ssh/authorized_keys
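The same append can be done without an editor by piping the key across ssh, for example `cat /etc/ssh/ssh_host_rsa_key.pub | ssh phys-bedgeN-2 'cat >> /.ssh/authorized_keys'`. The sketch below demonstrates the append with scratch files (the key string is a fake placeholder, not a real key):

```shell
# Sketch: append one node's public host key to the other's authorized_keys.
# Scratch files stand in for /etc/ssh/ssh_host_rsa_key.pub (on node 1)
# and /.ssh/authorized_keys (on node 2); the key material is fake.
echo 'ssh-rsa AAAAB3...fake... root@phys-bedgeN-1' > /tmp/host_key.pub
: > /tmp/authorized_keys
cat /tmp/host_key.pub >> /tmp/authorized_keys
# The key should now be the last line of the authorized_keys file.
tail -1 /tmp/authorized_keys
```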
Save a backup of the sshd configuration file, then change the value of PermitRootLogin from no to yes:
phys-bedgeN-2# cp -p /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
phys-bedgeN-2# vi /etc/ssh/sshd_config
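The PermitRootLogin change can also be made non-interactively with sed. The sketch below operates on a scratch copy with sample contents; on the node the file is /etc/ssh/sshd_config:

```shell
# Sketch: flip PermitRootLogin from no to yes with sed instead of vi.
# /tmp/sshd_config is a scratch stand-in with illustrative contents.
printf 'Protocol 2\nPermitRootLogin no\n' > /tmp/sshd_config
sed 's/^PermitRootLogin no/PermitRootLogin yes/' /tmp/sshd_config > /tmp/sshd_config.new
# Show the changed setting.
grep PermitRootLogin /tmp/sshd_config.new
```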
Restart the ssh daemon on the second node and exit ssh:
phys-bedgeN-2# /etc/init.d/sshd stop; /etc/init.d/sshd start
phys-bedgeN-2# exit
Connect to the second node with the following command to verify whether ssh is configured properly:
phys-bedgeN-1# ssh root@phys-bedgeN-2 -o "BatchMode yes" \
-o "StrictHostKeyChecking yes" -n "uname -a"
While still connected to the second node, back up the /etc/system file and then edit its contents:
phys-bedgeN-2# cp -p /etc/system /etc/system.bak
phys-bedgeN-2# vi /etc/system
Comment out the following line:
#set c2audit:audit_load = 1 |
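Commenting out the line can likewise be scripted. The sketch below uses sed on a scratch stand-in for /etc/system with illustrative contents:

```shell
# Sketch: comment out the c2audit line non-interactively.
# /tmp/system is a scratch stand-in; the maxusers line is illustrative.
printf 'set maxusers = 512\nset c2audit:audit_load = 1\n' > /tmp/system
sed 's/^set c2audit:audit_load = 1/#&/' /tmp/system > /tmp/system.new
# The c2audit line should now carry a leading "#".
grep c2audit /tmp/system.new
```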
On both nodes, perform the following commands:
phys-bedgeN-[12]# touch /etc/cluster/.installed
phys-bedgeN-[12]# vi /etc/inet/inetd.conf
In the /etc/inet/inetd.conf file, uncomment the lines for rpc.metad and rpc.metamedd, if they are commented out.
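Uncommenting those lines can also be done with sed. In the sketch below, the sample inetd.conf lines are illustrative stand-ins, not the exact entries from a real Solaris inetd.conf:

```shell
# Sketch: strip the leading "#" from the rpc.metad and rpc.metamedd lines.
# /tmp/inetd.conf is a scratch stand-in with illustrative entries.
printf '#100229/1 tli rpc/tcp wait root /usr/sbin/rpc.metad rpc.metad\n#100242/1 tli rpc/tcp wait root /usr/sbin/rpc.metamedd rpc.metamedd\n' > /tmp/inetd.conf
sed '/rpc\.meta/s/^#//' /tmp/inetd.conf > /tmp/inetd.conf.new
# Both entries should now be active (no leading "#").
grep rpc.meta /tmp/inetd.conf.new
```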
Run the Sun Cluster installation script on the first node:
phys-bedgeN-1# /usr/cluster/bin/scinstall

  *** Main Menu ***
  Please select from one of the following (*) options:
    -> * 1) Install a cluster or cluster node

  *** Install Menu ***
  Please select from any one of the following options:
    -> 1) Install all nodes of a new cluster

  *** Installing all Nodes of a New Cluster ***

  >>> Type of Installation <<<
    -> 2) Custom

  >>> Cluster Name <<<
    -> bedgeN

  >>> Cluster Nodes <<<
    -> phys-bedgeN-1, phys-bedgeN-2, Ctrl-D

  >>> Authenticating Requests to Add Nodes <<<
    -> Do you need to use DES authentication (yes/no) [no]? Enter

  >>> Network Address for the Cluster Transport <<<
    -> Is it okay to accept the default network address (yes/no) [yes]? Enter
       Is it okay to accept the default netmask (yes/no) [yes]? Enter

  >>> Point-to-Point Cables <<<
    -> Does this two-node cluster use transport junctions (yes/no) [yes]? no

  >>> Cluster Transport Adapters and Cables <<<
    -> Pick appropriate adapters

  >>> Software Patch Installation <<<
    -> Do you want scinstall to install patches for you (yes/no) [yes]? no

  >>> Global Devices File System <<<
    -> For node "phys-bedgeN-1",
       Is it okay to use this default (yes/no) [yes]? Enter
       For node "phys-bedgeN-2",
       Is it okay to use this default (yes/no) [yes]? Enter

  Is it okay to begin the installation (yes/no) [yes]? Enter
  Interrupt the installation for sccheck errors (yes/no) [no]? Enter
If both nodes do not reboot automatically after the installation, reboot them manually, starting with the second node.
Restore the modified files on the second node, and restart its ssh daemon:
phys-bedgeN-2# mv /etc/system.bak /etc/system
phys-bedgeN-2# mv /etc/ssh/sshd_config.bak /etc/ssh/sshd_config
phys-bedgeN-2# /etc/init.d/sshd stop; /etc/init.d/sshd start
On the first node only, list the DID devices with the following command:
phys-bedgeN-1# /usr/cluster/bin/scdidadm -L
Select the DID number of a shared disk from the list (for example, ld0-00), then, again on the first node, set the quorum device and reset the install mode flag:
phys-bedgeN-1# /usr/cluster/bin/scconf -a -q globaldev=DIDnumber
phys-bedgeN-1# /usr/cluster/bin/scconf -c -q reset
Configure NTP by adding the following lines to the /etc/inet/ntp.conf.cluster file on both nodes. The NTP servers should be those in the same domain as your Edge complex:
peer clusternode1-priv prefer
peer clusternode2-priv
server NTPserver1
server NTPserver2
Then restart NTP with the following command:
phys-bedgeN-[12]# /etc/init.d/xntpd stop; /etc/init.d/xntpd.cluster start
Configure IPMP on both nodes with the appropriate adapters:
phys-bedgeN-[12]# cp /etc/hostname.publicInterface1 /etc/hostname.publicInterface1.bak
phys-bedgeN-[12]# vi /etc/hostname.publicInterface1
Modify the file as follows:
phys-bedgeN-[12] netmask + broadcast + group ipmp1 up \
addif monitoringIP1 netmask + broadcast + deprecated -failover up
Back up and modify the second file on both nodes:
phys-bedgeN-[12]# cp /etc/hostname.publicInterface2 /etc/hostname.publicInterface2.bak
phys-bedgeN-[12]# vi /etc/hostname.publicInterface2
Modify the file as follows:
monitoringIP2 netmask + broadcast + deprecated group ipmp1 \
-failover standby up
Configure the public interfaces on both nodes with the following commands:
phys-bedgeN-M# ifconfig publicInterface1 group ipmp1
phys-bedgeN-M# ifconfig publicInterface2 plumb
phys-bedgeN-M# ifconfig publicInterface2 group ipmp1
phys-bedgeN-M# ifconfig publicInterface1 addif monitoringIP1 \
netmask + broadcast + deprecated -failover up
phys-bedgeN-M# ifconfig publicInterface2 monitoringIP2 netmask \
+ broadcast + deprecated -failover standby up
Set up disksets and file systems on the first node only. Use the following information as a guide. See 2.2 Storage Area Network (SAN) for further details.
Each cluster has one diskset.
Each disk must be labeled via format, which is best done before creating a metaset. A script can be used to perform the labeling.
Disks ending in 04d0s2 are for LUN mapping and do not belong in a metaset, but they should be labeled to avoid errors on boot.
Disks ending in 03d0s2, 02d0s2, and 01d0s2 will be the stores, starting at metadevice d311.
Disks ending in 00d0s2 are the 20GB partitions, subpartitioned into 5GB (s0) and 15GB (s1).
Disks ending in 00d0s2 use metadevices d300, d301, d302, and d303 (5GB conf, 15GB imta, 5GB var, and 15GB dbbackup, respectively).
Reminder: when disks are added to a metaset, metadbs are created automatically and the disk is automatically partitioned.
Mirror across minnows and from the same logical device (ld0 to ld0), using the corresponding partition of the RAID5 logical drive.
Use the following commands on minnows to get information needed in creating metasets:
# sccli minnow show unique
# sccli minnow show logical
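The per-disk labeling with format mentioned above can be scripted by feeding format a command file. The sketch below only echoes the format invocations (drop the echo to run them); the disk names are hypothetical placeholders, and the `format -f cmdfile -d disk` usage is assumed from the Solaris format utility:

```shell
# Sketch: label many disks non-interactively with a format command file.
# Disk names below are hypothetical; substitute the cNtNdN names for
# your arrays.  The format run is echoed rather than executed.
cat > /tmp/format.cmd <<'EOF'
label
quit
EOF
for disk in c6t0d0 c6t1d0; do
    echo format -f /tmp/format.cmd -d "$disk"
done
```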
In general, once a metaset is created on the first mail cluster, the metastat -p output can be reused for clusters 2 and 3; cluster 4 may differ because it uses all the minnows and does not have LDAP on node 2.
Because there is no data yet and newfs will be used, the following example attaches both submirrors at creation time using metainit -m instead of metattach:
# metaset -s bedgeN-ds -a -h phys-bedgeN-1 phys-bedgeN-2
# metaset -s bedgeN-ds -a -m phys-bedgeN-1 phys-bedgeN-2
# metaset -s bedgeN-ds -a /dev/did/dsk/DIDnumber /dev/did/dsk/DIDnumber ..

Sample:

# metainit -s bedgeN-ds d400 1 1 /dev/did/dsk/dAs0
# metainit -s bedgeN-ds d500 1 1 /dev/did/dsk/dBs0
# metainit -s bedgeN-ds d300 -m d400 d500
# metainit -s bedgeN-ds d401 1 1 /dev/did/dsk/dAs1
# metainit -s bedgeN-ds d501 1 1 /dev/did/dsk/dBs1
# metainit -s bedgeN-ds d301 -m d401 d501
# metainit -s bedgeN-ds d402 1 1 /dev/did/dsk/dCs0
# metainit -s bedgeN-ds d502 1 1 /dev/did/dsk/dDs0
# metainit -s bedgeN-ds d302 -m d402 d502
# metainit -s bedgeN-ds d403 1 1 /dev/did/dsk/dCs1
# metainit -s bedgeN-ds d503 1 1 /dev/did/dsk/dDs1
# metainit -s bedgeN-ds d303 -m d403 d503
...
# newfs /dev/md/bedgeN-ds/d300
# newfs /dev/md/bedgeN-ds/d301
# newfs /dev/md/bedgeN-ds/d302
# newfs /dev/md/bedgeN-ds/d303
# newfs -m 3 -i 4096 -o time /dev/md/bedgeN-ds/d311
# newfs -m 3 -i 4096 -o time /dev/md/bedgeN-ds/d312
...
For the messaging clusters, add the following lines to /etc/vfstab on both nodes, then run mkdir on one of the nodes:
/dev/md/disksetName/dsk/d300 /dev/md/disksetName/rdsk/d300 \
/shared/bedgeN/msg/conf ufs 1 no logging,nosuid
/dev/md/disksetName/dsk/d301 /dev/md/disksetName/rdsk/d301 \
/shared/bedgeN/msg/imta ufs 1 no logging,nosuid
/dev/md/disksetName/dsk/d302 /dev/md/disksetName/rdsk/d302 \
/shared/bedgeN/msg/var ufs 1 no logging,nosuid
/dev/md/disksetName/dsk/d303 /dev/md/disksetName/rdsk/d303 \
/shared/bedgeN/msg/dbbackup ufs 1 no logging,nosuid
/dev/md/disksetName/dsk/d311 /dev/md/disksetName/rdsk/d311 \
/shared/bedgeN/msg/partition/store001 ufs 2 no logging,nosuid
/dev/md/disksetName/dsk/d312 /dev/md/disksetName/rdsk/d312 \
/shared/bedgeN/msg/partition/store002 ufs 2 no logging,nosuid
...
# mkdir -p /shared/bedgeN/msg/conf
# mkdir -p /shared/bedgeN/msg/imta
# mkdir -p /shared/bedgeN/msg/var
# mkdir -p /shared/bedgeN/msg/dbbackup
# mkdir -p /shared/bedgeN/msg/partition/store001
# mkdir -p /shared/bedgeN/msg/partition/store002
For the calendar clusters, add the following lines to /etc/vfstab on both nodes, then run mkdir on one of the nodes:
/dev/md/disksetName/dsk/d300 /dev/md/disksetName/rdsk/d300 \
/shared/bedgeN/cal/opt ufs 2 no logging
/dev/md/disksetName/dsk/d301 /dev/md/disksetName/rdsk/d301 \
/shared/bedgeN/cal/dbbackup ufs 2 no logging,nosuid
# mkdir -p /shared/bedgeN/cal/opt
# mkdir -p /shared/bedgeN/cal/dbbackup