A zone is a virtualized operating system environment that you can create on systems that run the Solaris 10 Operating System. Every Solaris 10 system contains a global zone, which is the default zone for the system. You can also create non-global zones. A non-global zone can be either a whole root zone or a sparse root zone.
Before You Begin
The following must be available:

- A whole root zone
- A host name and an IP address for the whole root zone
- Lockhart 2.2.3 or later in the global zone
- Apache Tomcat in the global zone
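The package checks implied by these prerequisites can be scripted. The sketch below assumes the package names SUNWmcon (Sun Java Web Console, also known as Lockhart) and SUNWtcatu (Apache Tomcat); these names are assumptions, so verify them against your installation media.

```shell
# Hedged sketch: report whether a Solaris package is installed.
# The package names passed in below are assumptions; confirm them
# against your installation media before relying on this check.
check_pkg() {
  if pkginfo -q "$1" 2>/dev/null; then
    echo "$1 installed"
  else
    echo "$1 MISSING"
  fi
}
check_pkg SUNWmcon    # Sun Java Web Console (Lockhart), assumed name
check_pkg SUNWtcatu   # Apache Tomcat, assumed name
```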
Task | Instructions
---|---
Install Sun Cluster 3.1 Update 4 on each cluster node. This task is required only if you want to configure Sun Management Center in a Sun Cluster environment. | Chapter 2, Installing and Configuring Sun Cluster Software, in Sun Cluster Software Installation Guide for Solaris OS
Install and configure the Sun Cluster HA agent for the Solaris Container data service. This task is required only if you want to configure Sun Management Center in a Sun Cluster environment. | Chapter 1, Installing and Configuring Sun Cluster HA for Solaris Containers, in Sun Cluster Data Service for Solaris Containers Guide
Enable a zone to run in a failover configuration |
Configure and install a whole root zone | To Configure a Whole Root Zone and To Install a Whole Root Zone
Install and set up Sun Management Center inside a whole root zone | To Install and Set Up Sun Management Center Server Inside a Whole Root Zone
Register the SUNW.HAStoragePlus resource type.
# scrgadm -a -t SUNW.HAStoragePlus
Create a failover resource group.
# scrgadm -a -g wholerootzone-resource-group
Create a resource for the zone disk storage.
# scrgadm -a -j solaris-zone-has-resource \
-g wholerootzone-resource-group \
-t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/global/zones/HA
Add an entry for the logical host to the /etc/hosts file on each cluster node, then create the logical host name resource in the resource group.
# scrgadm -a -L -g wholerootzone-resource-group -j sunmc-lh-rs -l logical-hostname
where logical-hostname is the logical host name.
Enable the failover resource group.
# scswitch -e -j solaris-zone-has-resource
# scswitch -Z -g wholerootzone-resource-group
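Collected in one place, the failover resource steps above look like the following sketch. The RUN=echo prefix makes this a dry run that only prints each command; set RUN= (empty) to execute the commands on an actual Sun Cluster node. The resource and group names follow the examples above.

```shell
# Dry-run sketch of the failover resource setup. RUN=echo only prints
# the commands; set RUN= to execute them on a real Sun Cluster node.
RUN=echo
$RUN scrgadm -a -t SUNW.HAStoragePlus
$RUN scrgadm -a -g wholerootzone-resource-group
$RUN scrgadm -a -j solaris-zone-has-resource \
      -g wholerootzone-resource-group \
      -t SUNW.HAStoragePlus \
      -x FilesystemMountPoints=/global/zones/HA
$RUN scswitch -e -j solaris-zone-has-resource
$RUN scswitch -Z -g wholerootzone-resource-group
```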
Start the zone configuration.
# zonecfg -z wholerootzone
where wholerootzone is the name of the new whole root zone.
Create a configuration for the specified zone.
zonecfg:wholerootzone> create -b
Set the zone path.
The zone path must specify a highly available local file system. The file system must be managed by the SUNW.HAStoragePlus resource.
zonecfg:wholerootzone> set zonepath=/global/zones/HA/wholerootzone
Set the autoboot value.
If the autoboot value is set to true, the zone is automatically booted when the global zone is booted. The default value is false.
zonecfg:wholerootzone> set autoboot=false
If resource pools are enabled on the system, associate a pool with the zone.
zonecfg:wholerootzone> set pool=pool_default
where pool_default is the name of the resource pool on the system.
Add a network virtual interface.
zonecfg:wholerootzone> add net
Set the IP address for the network interface.
zonecfg:wholerootzone> set address=10.255.255.255
Set the physical device type for the network interface.
zonecfg:wholerootzone> set physical=hme0
End the network interface specification.
zonecfg:wholerootzone> end
Verify and commit the zone configuration.
zonecfg:wholerootzone> verify
zonecfg:wholerootzone> commit
zonecfg:wholerootzone> exit
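The interactive session above can also be driven non-interactively: zonecfg accepts a command file through its -f option. A sketch of an equivalent command file, using the same values as the steps above (add the set pool line only if resource pools are enabled):

```
create -b
set zonepath=/global/zones/HA/wholerootzone
set autoboot=false
add net
set address=10.255.255.255
set physical=hme0
end
verify
commit
```

Saved as, for example, wholerootzone.cfg (a hypothetical file name), the file is applied with:

# zonecfg -z wholerootzone -f wholerootzone.cfg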
Install the whole root zone that is configured.
# zoneadm -z wholerootzone install
where wholerootzone is the name of the whole root zone that you configured.
Boot the whole root zone.
# zoneadm -z wholerootzone boot
Log in to the zone console.
# zlogin -C wholerootzone
Log in to the zone.
# zlogin wholerootzone
(Required for a Sun Cluster environment) Add the entry for the whole root zone to the /etc/zones/index file on each cluster node.
(Required for a Sun Cluster environment) Copy the wholerootzone.xml file to the /etc/zones directory on each cluster node.
# rcp zone-install-node:/etc/zones/wholerootzone.xml /etc/zones/wholerootzone.xml
Verify the zone installation and configuration.
# zoneadm -z wholerootzone boot
# zlogin wholerootzone
Ensure that you are inside the whole root zone that is configured and installed.
Follow the steps in the install wizard to install Sun Management Center.
Before you run setup, edit the /etc/project file to increase the shared memory. Otherwise, the database setup fails. For example:
default:3::::project.max-shm-memory=(privileged,2147483648,deny)
In this example, 2147483648 is the shared memory in bytes. The value to use depends on the amount of physical memory on the system.
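Instead of editing /etc/project by hand, the same resource control can be set with the Solaris 10 projmod command. A dry-run sketch follows; RUN=echo only prints the command, and the 2147483648-byte value is the example from above, not a recommendation for your system.

```shell
# Dry run: RUN=echo prints the command instead of executing it.
# Set RUN= to apply the change on the actual Solaris 10 system.
RUN=echo
$RUN projmod -s -K "project.max-shm-memory=(privileged,2147483648,deny)" default
```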
Follow the steps in the setup wizard to set up Sun Management Center.
Sun Management Center supports the server layer of all add-ons inside a non-global zone. However, Sun Management Center does not support the agent layer of add-ons such as ELP Config Reader, X86 Config Reader, and Solaris Container Manager inside a non-global zone.