Before you begin, make sure that all Solaris systems have access to the following:
All Perl packages (SUNWpl5*)
The SUNWbash package
All language packages
A server (the puppet server) from which the management station will be built
Port 6364 open in the firewall for the puppet web interface
Port 6789 open in the firewall for the Storade interface
The management station is built from a server, called the puppet server, that is originally connected to your corporate network.
Obtain a copy of the management station flash archive from your Sun representative. Save it with the following path and filename:
/export/puppet/world/archive/full-flars/standard/genericmodel/sparc-sunos5.9/mgmt.flar
Assign a new IP address in your corporate network for the management station.
Add the management station's MAC address to your local /etc/ethers file.
Run the /export/puppet/bin/preconfig utility and provide the following information when prompted:
Puppet Host Configuration
=========================
Main Menu:
  [A]dd a new client
  [D]elete clients
  [L]ist clients
  [M]odify a client
=========================
  [Q]uit
Which one [Q]:a
Client name []:mgmt-name-01.domain
Client mac: 0:3:cd:aa:d7:21
Machine Type ("uname -m") []: sun4u
Client processor: sparc
Select a bootable image:
  1: sunos_5.10_74L2a_sparc
  2: sunos_5.10_b72_sparc
  3: sunos_5.10_b73_sparc
  4: sunos_5.9_u5cd1combined_sparc
  5: sunos_5.9_u7cd1combined_sparc
Enter the number of your selection []:5
Model selection:
  1: none
  2: itserver59 [Meta]
  3: desktop [Children]
  4: sunray [Children]
  5: itsunray [Meta]
  6: itdesktop59-nwk [Meta]
  7: itsunray30 [Meta]
  8: itdesktop59 [Meta]
  9: server [Children]
==================================================
[S]elect, [U]nselect, [L]ist selected, [D]escribe, [R]eturn, [Q]uit [R]:1
  1: itdesktop59-nwk [Meta]
  2: server [Children]
  3: itdesktop59 [Meta] ON
  4: none
  5: desktop [Children]
  6: itsunray [Meta]
  7: itserver59 [Meta]
  8: sunray [Children]
  9: itsunray30 [Meta]
==================================================
[S]elect, [U]nselect, [L]ist selected, [D]escribe, [R]eturn, [Q]uit [R]:Enter
Select a flash image:
  1: nfs://server/export/puppet/world/archive/full-flars/standard/genericmodel/sparc-sunos5.9/mgmt.flar
  4: Do not use flash.
Enter the number of your selection []:1
Profile selection:
  1: DEFAULT
  2: LASTPROFILE
  3: all+locales_04GB+_rootdisk
  4: all_04GB+_rootdisk
[S]elect, [C]ustom, [V]iew, [Q]uit: [S]: c
Please select a starting profile:
  1: DEFAULT
  2: LASTPROFILE
  3: all+locales_04GB+_rootdisk
  4: all_04GB+_rootdisk
[S]elect, [R]eturn, [V]iew, [Q]uit: [S]:1
Change filesystem layout only to
filesys rootdisk.s1 8192 swap
filesys rootdisk.s0 10240 / logging
filesys rootdisk.s3 1024 /home logging
filesys rootdisk.s4 free /var logging
filesys rootdisk.s5 16384 /opt logging
filesys c1t2d0s6 free /export logging
filesys rootdisk.s7 512
filesys c1t1d0s7 512
filesys c1t2d0s7 512
filesys c1t3d0s7 512
Summary for mgmt-name-01:
  MAC Address : 0:3:cd:aa:d7:21
  Machine Type: sun4u
  Boot Image  : sunos_5.9_u7cd1combined_sparc
  Flash Image : nfs://server/export/puppet/world/archive/full-flars/standard/genericmodel/sparc-sunos5.9/mgmt.flar
  Platform    : sparc-sunos5.9
  Profile     : custom
  Model Config: none
Correct ([Y]/N):y
Configuring mgmt-name-01....
Performing add_install_client...
cleaning up preexisting install client "mgmt-name-01"
removing mgmt-name-01 from bootparams
updating /etc/bootparams
Using /export/puppet/world/archive/os_images/sunos_5.9_u7cd1combined_sparc/Solaris_9/Misc/jumpstart_sample/check
quickcheck: Only the rule for mgmt-name-01 will be verified.
It is assumed that the rest of the rules in /export/puppet/world/rules are correct.
Validating /tmp/checkrules.7470...
Validating profile hostconfig/mgmt-name-01/jumpstartprofile...
/tmp/checkrules.7470.ok file not created
The custom JumpStart configuration is ok.
Puppet Host Configuration
=========================
Main Menu:
  [A]dd a new client
  [D]elete clients
  [L]ist clients
  [M]odify a client
=========================
  [Q]uit
Which one [Q]:q
Run the following commands:
prtvtoc -h /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
prtvtoc -h /dev/rdsk/c1t2d0s2 | fmthard -s - /dev/rdsk/c1t3d0s2
metadb -afc 3 /dev/dsk/c1t0d0s7 /dev/dsk/c1t1d0s7 /dev/dsk/c1t2d0s7 /dev/dsk/c1t3d0s7
metainit -f d100 1 1 c1t0d0s0
metainit -f d101 1 1 c1t0d0s1
metainit -f d103 1 1 c1t0d0s3
metainit -f d104 1 1 c1t0d0s4
metainit -f d105 1 1 c1t0d0s5
metainit -f d106 1 1 c1t2d0s6
metainit -f d0 -m d100
metainit -f d1 -m d101
metainit -f d3 -m d103
metainit -f d4 -m d104
metainit -f d5 -m d105
metainit -f d6 -m d106
metaroot d0
Edit the /etc/vfstab file and make the following modifications:
Change swap to /dev/md/dsk/d1
Change /home to /dev/md/dsk/d3 /dev/md/rdsk/d3
Change /var to /dev/md/dsk/d4 /dev/md/rdsk/d4
Change /opt to /dev/md/dsk/d5 /dev/md/rdsk/d5
Change /export to /dev/md/dsk/d6 /dev/md/rdsk/d6
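As an illustration, the swap and /var entries would change as shown below. The c1t0d0 slice names on the "before" lines are assumptions inferred from the metainit commands above, so check them against your actual /etc/vfstab:

```
# before:
/dev/dsk/c1t0d0s1  -                   -     swap  -  no  -
/dev/dsk/c1t0d0s4  /dev/rdsk/c1t0d0s4  /var  ufs   1  no  logging

# after:
/dev/md/dsk/d1     -                   -     swap  -  no  -
/dev/md/dsk/d4     /dev/md/rdsk/d4     /var  ufs   1  no  logging
```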
Run the following commands:
metainit -f d200 1 1 c1t1d0s0
metainit -f d201 1 1 c1t1d0s1
metainit -f d203 1 1 c1t1d0s3
metainit -f d204 1 1 c1t1d0s4
metainit -f d205 1 1 c1t1d0s5
metainit -f d206 1 1 c1t3d0s6
Reboot the management station.
Run the following commands:
metattach d0 d200
metattach d1 d201
metattach d3 d203
metattach d4 d204
metattach d5 d205
metattach d6 d206
sys-unconfig
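The metadevice numbering used above follows a convention: dN is the two-way mirror, d1NN is its submirror on the first disk, and d2NN is its submirror on the second disk. As an illustration only (this helper script is not part of the procedure), the attach commands can be derived mechanically from that convention:

```python
# Derive the "metattach" commands from the dN/d1NN/d2NN naming
# convention used in this procedure (illustrative helper only).
def attach_commands(mirror_numbers):
    cmds = []
    for n in mirror_numbers:
        mirror = "d%d" % n          # two-way mirror, e.g. d0
        second = "d%d" % (200 + n)  # second-disk submirror, e.g. d200
        cmds.append("metattach %s %s" % (mirror, second))
    return cmds

# Mirrors created earlier in this procedure: d0, d1, d3, d4, d5, d6
for cmd in attach_commands([0, 1, 3, 4, 5, 6]):
    print(cmd)
```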
Move the management station to the server network, then connect it to the back-end (BE) and front-end (FE) networks.
Boot the management station and set up the default route and both BE and FE interfaces.
Configure the management station as an NTP server for the other Edge hosts, which cannot reach NTP servers on either the corporate network or the Internet. The following is a template for /etc/inet/ntp.conf:
# NTP stratum 4 config
#
# stratum 3 (domainclock) servers nearby
#
server t3-name1.local-domain prefer
server t3-name2.domain2
server t3-name3.domain3
server t3-name4.domain4
#
# stratum 4 (timetone) local peers
#
peer t4-name1.local-domain
peer t4-name2.local-domain
peer t4-name3.local-domain
peer t4-name4.local-domain
#
# Set up for site-wide multicast with one network hop. Increase ttl
# value carefully since you can swamp other sites with multicast traffic.
#
broadcast 224.0.1.1 ttl 1
#
# This line sets up the server so that the server can not be modified
# remotely. In addition remote logging traps are disabled.
#
restrict default nomodify notrap
#
# This re-enables all functions locally so that you can change stuff
# locally on the fly.
#
restrict 127.0.0.1
#
enable monitor
driftfile /var/ntp/ntp.drift
statsdir /var/ntp/ntpstats/
filegen peerstats file peerstats type day enable
filegen loopstats file loopstats type day enable
#
# Clockstats is only needed if a reference clock is attached to the server.
#filegen clockstats file clockstats type day enable
Once /etc/inet/ntp.conf is modified, restart xntpd with the following commands to enable the new configuration:
# /etc/init.d/xntpd stop; /etc/init.d/xntpd start
Check /var/adm/messages with the following command to see if the process has started without problems:
# grep ntp /var/adm/messages
Use the following procedures to prepare for jump-starting the servers and to configure them afterwards. Jump-starting a server reinstalls its operating system and file systems, destroying any existing files. Before you begin, create a list of all servers, recording the MAC address, IP address, and hostname of each. Group the servers into the following categories:
Front-end (FE) servers
FE servers with Message Transfer Agent (MTA)
Back-end (BE) servers
Administration station
Backup server
Then initialize the management station as follows:
Add the MAC address of all servers to /etc/ethers.
Add the IP address and hostname of all servers to /etc/hosts.
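For example, the entries for a single server might look like the following (the MAC address, IP address, and hostname here are hypothetical placeholders):

```
# /etc/ethers
8:0:20:aa:bb:cc    be-server-01

# /etc/hosts
192.168.10.21      be-server-01
```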
For each server, perform the preparation below according to its category, and jump-start each server before preparing the next one. The post-installation configuration may be performed either immediately after each jump-start or after all servers have been jump-started.
Log into the puppet web interface at https://puppetServer.domain:6364/config.
Select the Advanced Client link. On the Advanced Client page, enter the hostname of the server to jump-start, and press Enter. The server's MAC address should automatically be entered in the Ethernet Address field. Fill in the other fields as follows:
Machine Type: sun4u
Boot Image: sparc-sunos5.9_u7
Model Configuration: for a BE server, select PSC_META; for all other servers, select FE_PSC_META
Click Add when all fields are complete.
Click on Update Screen, and then on Verify Configuration. In the next screen, click on Commit Changes, and then on Go Back.
Under Base Jumpstart Profile, select the profile corresponding to the type of server:
For a BE server, select req_BE+_rootdisk
For a FE server, select req_FE+_rootdisk
For a FEMTA server, select req_FEMTA+_rootdisk
For the administration station, select all_AW+_rootdisk
For the backup server, select req_BACKUP+_rootdisk
Make sure the sysidbase value is set to DEFAULT.
Click on Verify Configuration, then on Commit Changes. Now click Go Back and Fetch Current Configuration.
Review the profile and if everything is correct, click on Verify Configuration. The next screen will reconfirm the information, with different models for different server types:
Hostname            serverName
Ethernet Address    0:3:cd:aa:d7:21
Boot Method         bootp
Puppet Platform     sparc-sunos5.9
Install Image       sparc-sunos5.9_u7
Picked Models       PSC_META
Expanded Models     server
                    server:std-server200501
                    std-server200501:psc-JES-model200501
                    std-server200501:psc-cluster-model200501
                    std-server200501:psc-model200501
                    std-server200501:psc-vts200501
                    std-server200501:san44-model200501
SunOS: Profile      req_BE+_rootdisk
SunOS: sysidcfg(s)  base: DEFAULT
Click Commit Changes one last time and verify that there are no errors in the configuration. The next screen summarizes the actions of the configurator:
Puppet: Begin Installation Configuration
  * Puppet: Client configuration written
  * SunOS: profile written.
  * SunOS: sysidcfg written.
  * SunOS: bootenv.rc written.
  * SunOS: rules written.
  * SunOS: SI_CONFIG_DIR/rules re-built.
  * SunOS: passed quickcheck.
  * Puppet: OS Driver passed
Puppet: End Installation Configuration
Connect to the console port of the server to be jump-started and enter the following command:
ok boot net - install
After the server has rebooted, follow the configuration procedure for the type of server:
Edit the prompt in the /.profile for root:
PS1="`tput bold``/usr/ucb/whoami`@`uname -n`:#`tput sgr0` "
Restore the contents of /var/bits with the following commands:
# cd /var
# gzcat bits/bits.tar.gz | tar xf -
Change ownership of home directories with the following commands:
# cd /home
# for I in `ls`
> do
> chown $I $I
> done
# cd /
Once the system is on the corporate network, edit /etc/passwd and /etc/user_attr to put root back as a role.
Add the cluster binaries to the path in the /.profile file for root:
PATH=/usr/bin:/usr/sbin:/usr/ucb:/usr/ccs/bin:/usr/cluster/bin:$PATH
Also, add the cluster to the path in the /.cshrc file for root:
set path=( /usr/bin /usr/sbin /usr/ccs/bin /usr/cluster/bin $path )
Create additional metadbs with the following command:
metadb -a -c 3 /dev/dsk/c1t1d0s7 /dev/dsk/c1t2d0s7 /dev/dsk/c1t3d0s7
Create mirrors of /globaldevices and /var/crash. This requires several commands, some of which depend on which node of the cluster the server belongs to.
Unmount the file systems and create the new submirrors and mirrors with the following commands:
On all servers regardless of cluster node:
umount /globaldevices
umount /var/crash
newfs -v -i 1024 /dev/rdsk/c1t0d0s3
metainit -f d106 1 1 c1t2d0s0
metainit -f d206 1 1 c1t3d0s0
metainit -f d6 -m d106
On servers in cluster node 1:
metainit -f d103 1 1 c1t0d0s3
metainit -f d203 1 1 c1t1d0s3
metainit -f d3 -m d103
On servers in cluster node 2:
metainit -f d113 1 1 c1t0d0s3
metainit -f d213 1 1 c1t1d0s3
metainit -f d13 -m d113
Then edit the /etc/vfstab file:
On all servers regardless of cluster node, comment out the entries for /globaldevices and /var/crash. Then add the following line:
/dev/md/dsk/d6 /dev/md/rdsk/d6 /var/crash ufs 2 yes logging
On servers in cluster node 1, add the following line:
/dev/md/dsk/d3 /dev/md/rdsk/d3 /globaldevices ufs 2 yes logging
On servers in cluster node 2, add the following line:
/dev/md/dsk/d13 /dev/md/rdsk/d13 /globaldevices ufs 2 yes logging
Attach the submirrors with the following commands (this sequence is inferred from the submirror pairs created above; verify it against your configuration before running it).

On all servers regardless of cluster node:

mount /var/crash
metattach d6 d206

On servers in cluster node 1:

mount /globaldevices
metattach d3 d203

On servers in cluster node 2:

mount /globaldevices
metattach d13 d213
The BE servers are V440 machines with 2 available disks of 73 GB each.
Partition Name | Size | Raid | Description
---|---|---|---
/ | 10 GB | Mirror | Root file system
/opt | 30 GB | Mirror | All Java ES installed component binaries (LDAP, messaging, portal, calendar, identity)
/var | 30 GB | | Log file system
/swap | 4 GB | Disk 3 | Memory tmp file system
Dump Device | | Disk 4 | Core dump file system
/shared/bedgeN/cal/backup | 20 GB | Mount point |
/shared/bedgeN/cal/db | 20 GB | Mount point |
/shared/bedgeN/msg/conf | 5 GB | Mount point (soft partition) |
/shared/bedgeN/msg/imta | 15 GB | Mount point (soft partition) |
/shared/bedgeN/msg/imta/db | | |
/shared/bedgeN/msg/imta/queue | | |
/shared/bedgeN/partition/store### | 180 or 230 GB | Mount point |
/shared/bedgeN/var | 5 GB | Mount point |
/shared/bedgeN/var/log | | |
/shared/bedgeN/var/backup | | |
/shared/bedgeN/dbbackup | 15 GB | Mount point |
Edit the prompt in the /.profile for root:
PS1="`tput bold``/usr/ucb/whoami`@`uname -n`:#`tput sgr0` "
Restore the contents of /var/bits with the following commands:
# cd /var
# gzcat bits/bits.tar.gz | tar xf -
Change ownership of home directories with the following commands:
# cd /home
# for I in `ls`
> do
> chown $I $I
> done
# cd /
Once the system is on the corporate network, edit /etc/passwd and /etc/user_attr to put root back as a role.
Create an additional metadb with the following command:
metadb -a -c 3 /dev/dsk/c1t1d0s7
Attach the roots mirror:
metattach d0 d200
Set up NTP using the appropriate NTP servers for the FE/Domain1 nodes.
Set up DNS using the appropriate DNS servers for the FE/Domain1 nodes.
Do a preliminary sendmail configuration with MODE="", then stop and restart sendmail. You will need to re-verify this once the FE/Domain1 nodes are fully configured, to make sure that root and cron mail is being delivered properly.
Remove the metacheck entry from the cron table, or else put a valid metacheck script in place.
Update the /etc/acct/holidays file on all nodes so that it is accurate for the current year.
The FE servers are V210 machines with 2 available disks of 73 GB each.
Partition Name | Size | Description
---|---|---
/ | 14 GB | Root file system
/opt | 20 GB | Where all Java ES component binaries (LDAP, messaging, portal, calendar, identity) are installed
/var | 20 GB | Log file system
/swap | 20 GB | Memory tmp file system
Edit the prompt in the /.profile for root:
PS1="`tput bold``/usr/ucb/whoami`@`uname -n`:#`tput sgr0` "
Restore the contents of /var/bits with the following commands:
# cd /var
# gzcat bits/bits.tar.gz | tar xf -
Change ownership of home directories with the following commands:
# cd /home
# for I in `ls`
> do
> chown $I $I
> done
# cd /
Once the system is on the corporate network, edit /etc/passwd and /etc/user_attr to put root back as a role.
Create additional metadbs with the following command:
metadb -a -c 3 /dev/dsk/c1t1d0s7 /dev/dsk/c1t2d0s7 /dev/dsk/c1t3d0s7
Re-create the mirror of /queue with striped disks:
Unmount it first with the following commands:
umount /queue
rm -r /queue
mkdir /imta
Set up slice 3 on disks 3 and 4 using format, with a start cylinder of 4226 and a size of 3678 cylinders. The result should look like the following:
Total disk cylinders available: 14087 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0                0          (0/0/0)           0
  1       swap    wu       0 -  4121       20.00GB    (4122/0/0)  41945472
  2     backup    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
  3 unassigned    wm    4226 -  7903       17.85GB    (3678/0/0)  37427328
  4 unassigned    wm       0                0          (0/0/0)           0
  5 unassigned    wm       0                0          (0/0/0)           0
  6 unassigned    wm       0                0          (0/0/0)           0
  7 unassigned    wm    4122 -  4225      516.75MB    (104/0/0)    1058304
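The slice sizes in this output can be sanity-checked from the disk geometry; the blocks-per-cylinder figure below is derived from the backup slice (slice 2), not taken from the procedure itself:

```python
# Slice 2 (backup) spans the whole disk: 14087 cylinders = 143349312 blocks.
blocks_per_cyl = 143349312 // 14087   # 10176 512-byte blocks per cylinder

# Slice 3: cylinders 4226 - 7903, i.e. 3678 cylinders.
s3_blocks = 3678 * blocks_per_cyl
s3_gb = s3_blocks * 512 / 2**30

print(s3_blocks)         # 37427328, matching the Blocks column
print(round(s3_gb, 2))   # 17.85, matching the Size column
```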
Run the following commands:
metainit -f d103 1 2 c1t0d0s3 c1t1d0s3 -i 32b
metainit -f d203 1 2 c1t2d0s3 c1t3d0s3 -i 32b
metainit -f d3 -m d103
newfs -m 2 -i 4096 -o time /dev/md/rdsk/d3
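Two of the parameters above are worth unpacking: -i 32b gives the stripe a 32-block interlace, and newfs -i 4096 allocates one inode per 4096 bytes, which suits a queue file system holding many small files. A quick check of what those numbers imply (derived arithmetic, not taken from the source):

```python
# Stripe interlace: 32 disk blocks of 512 bytes each.
interlace_bytes = 32 * 512
print(interlace_bytes)      # 16384 bytes, i.e. a 16 KB interlace

# Inode density on the 37427328-block slice created above (newfs -i 4096).
slice_bytes = 37427328 * 512
approx_inodes = slice_bytes // 4096
print(approx_inodes)        # roughly 4.68 million inodes for the queue fs
```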
Edit the /etc/vfstab file. Comment out the following line:
/dev/dsk/c1t0d0s3 /dev/rdsk/c1t0d0s3 /queue ufs 2 yes logging
Add the following line:
/dev/md/dsk/d3 /dev/md/rdsk/d3 /imta ufs 2 yes logging
Run the following commands:
mount /imta
metattach d0 d200
metattach d3 d203
Set up NTP using the appropriate NTP servers for the FE/Domain1 nodes.
Set up DNS using the appropriate DNS servers for the FE/Domain1 nodes.
Do a preliminary sendmail configuration with MODE="", then stop and restart sendmail. You will need to re-verify this once the FE/Domain1 nodes are fully configured, to make sure that root and cron mail is being delivered properly.
Remove the metacheck entry from the cron table, or else put a valid metacheck script in place.
Update the /etc/acct/holidays file on all nodes so that it is accurate for the current year.
The FE MTA servers are V240 machines with 4 available disks of 73 GB each.
Partition Name | Size | Description
---|---|---
/ | 10 GB | Root file system
/opt | 20 GB | Where all Java ES component binaries (LDAP, messaging, portal, calendar, identity) are installed
/var | 20 GB | Log file system
/swap | 20 GB | Disk 3: memory temp file system
/queue | 16 GB | Stripe of 4 disks (4x4) for MTA queues
Edit the prompt in the /.profile for root:
PS1="`tput bold``/usr/ucb/whoami`@`uname -n`:#`tput sgr0` "
Restore the contents of /var/bits with the following commands:
# cd /var
# gzcat bits/bits.tar.gz | tar xf -
Change ownership of home directories with the following commands:
# cd /home
# for I in `ls`
> do
> chown $I $I
> done
# cd /
Once the system is on the corporate network, edit /etc/passwd and /etc/user_attr to put root back as a role.
Create additional metadbs with the following command:
metadb -a -c 3 /dev/dsk/c1t1d0s7 /dev/dsk/c1t2d0s7 /dev/dsk/c1t3d0s7
Create a mirror of /export:
Unmount it first and create the submirrors with the following commands:

umount /export
metainit -f d106 1 1 c1t2d0s6
metainit -f d206 1 1 c1t3d0s6
metainit -f d6 -m d106
Edit the /etc/vfstab file. Comment out the entry for /export and add the following line:
/dev/md/dsk/d6 /dev/md/rdsk/d6 /export ufs 2 yes logging
Remount and attach the submirrors with the following commands:
mount /export
metattach d0 d200
metattach d6 d206
The following procedure adjusts some network and system settings to optimize performance.
Set up coreadm with the following commands (in the name patterns, %f expands to the executable file name, %n to the system node name, %p to the process ID, %u and %g to the effective user and group IDs, and %t to a decimal time stamp):

mkdir -p /var/crash/cores
coreadm -g /var/crash/cores/%f.%n.%p.core -e global -e global-setid -e log -i /var/crash/cores/core.%f.%p.%n.%u.%g.%t -e process -e proc-setid
Set up a cron job to regularly purge core files older than seven days:

0 5 * * * find /var/crash/cores -mtime +7 -type f -exec rm {} \;
Make sure the /etc/hosts file contains hostname resolutions for all BE and FE servers. Then enlarge the directory name lookup cache and the UFS inode cache by adding the following lines to the /etc/system file:

set ncsize=4194304
set ufs_ninode=8388608
Tune the network settings for LDAP access on the servers in the mail clusters:
ndd -set /dev/tcp tcp_conn_req_max_q 4096
ndd -set /dev/tcp tcp_keepalive_interval 600000
ndd -set /dev/tcp tcp_ip_abort_cinterval 10000
ndd -set /dev/tcp tcp_ip_abort_interval 60000
Make these changes permanent by adding the four ndd commands above to the bottom of the /etc/init.d/inetinit file.
If desired, run the Sun Security Weakness Attack Tool (SunSWAT) to audit network vulnerabilities:
/opt/NSGswat/bin/sunswat -s isss