This chapter contains information and tasks about using the Logical Domains software that are not described in the preceding chapters.
This chapter covers the following topics:
To use CPU Power Management (PM) software, you first need to set the power management policy in ILOM 3.0 firmware. This section summarizes the information that you need to be able to use power management with LDoms software. Refer to “Monitoring Power Consumption” in the Sun Integrated Lights Out Manager (ILOM) 3.0 CLI Procedures Guide for more details.
The power policy is the setting that governs system power usage at any point in time. The Logical Domains Manager, version 1.2, supports two power policies, assuming that the underlying platform has implemented Power Management features:
Performance – The system is allowed to use all the power that is available.
Elastic – System power usage adapts to the current utilization level. For example, the system powers components up or down just enough to keep utilization within thresholds at all times, even as the workload fluctuates.
For instructions on configuring the power policy using the ILOM 3.0 firmware CLI, refer to “Monitoring Power Consumption” in the Sun Integrated Lights Out Manager (ILOM) 3.0 CLI Procedures Guide.
To achieve maximum power savings, do not run the ldm bind-domain command and then leave the domain in the bound state for a long period of time. When a domain is in the bound state, all of its CPUs are powered on.
This section shows how to list power-managed strands and virtual CPUs.
List power-managed strands by doing one of the following.
Use the list -l subcommand.
A dash (---) in the UTIL column for a virtual CPU means that the strand is power-managed.
# ldm list -l primary
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-cv   SP      8     4G       4.3%  7d 19h 43m

SOFTSTATE
Solaris running

MAC
    00:14:4f:fa:ed:88

HOSTID
    0x84faed88

CONTROL
    failure-policy=ignore

DEPENDENCY
    master=

VCPU
    VID    PID    UTIL STRAND
    0      0      0.0%   100%
    1      1       ---   100%
    2      2       ---   100%
    3      3       ---   100%
    4      4       ---   100%
    5      5       ---   100%
    6      6       ---   100%
    7      7       ---   100%
....
Use the parseable option (-p) to the list -l subcommand.
A blank after util= means the strand is power-managed.
# ldm list -l -p
...
VCPU
|vid=0|pid=0|util=0.7%|strand=100
|vid=1|pid=1|util=|strand=100
|vid=2|pid=2|util=|strand=100
|vid=3|pid=3|util=|strand=100
|vid=4|pid=4|util=0.7%|strand=100
|vid=5|pid=5|util=|strand=100
|vid=6|pid=6|util=|strand=100
|vid=7|pid=7|util=|strand=100
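If you script against this parseable output, a blank util= field is the marker to test for. The following Python sketch is illustrative only (the helper is not part of the ldm toolset, and the sample text is abbreviated from the output above):

```python
# Sketch: find power-managed strands in `ldm list -l -p` output.
# A blank util= field marks a power-managed strand, as described above.

SAMPLE = """VCPU
|vid=0|pid=0|util=0.7%|strand=100
|vid=1|pid=1|util=|strand=100
|vid=2|pid=2|util=|strand=100
"""

def power_managed_vcpus(text):
    """Return the pid of every strand whose util= field is blank."""
    managed = []
    for line in text.splitlines():
        if not line.startswith("|vid="):
            continue
        # Split "|vid=1|pid=1|util=|strand=100" into a field dictionary.
        fields = dict(f.split("=", 1) for f in line.lstrip("|").split("|"))
        if fields["util"] == "":          # blank util= -> power-managed
            managed.append(int(fields["pid"]))
    return managed

print(power_managed_vcpus(SAMPLE))        # [1, 2]
```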
List power-managed CPUs by doing one of the following.
Use the list-devices -a cpu subcommand.
In the power management (PM) column, yes means that the CPU is power-managed, and no means that the CPU is not power-managed (it is powered on). CPUs that are 100 percent free are assumed to be power-managed by default, hence the dash (---) under PM.
# ldm list-devices -a cpu
VCPU
    PID     %FREE   PM
    0       0       no
    1       0       yes
    2       0       yes
    3       0       yes
    4       100     ---
    5       100     ---
    6       100     ---
    7       100     ---
Use the parseable option (-p) to the list-devices -a cpu subcommand.
In the power management (pm=) field, yes means that the CPU is power-managed, and no means that the CPU is not power-managed (it is powered on). CPUs that are 100 percent free are assumed to be power-managed by default, hence the blank value in that field.
# ldm list-devices -a -p cpu
VERSION 1.4
VCPU
|pid=0|free=0|pm=no
|pid=1|free=0|pm=yes
|pid=2|free=0|pm=yes
|pid=3|free=0|pm=yes
|pid=4|free=0|pm=no
|pid=5|free=0|pm=yes
|pid=6|free=0|pm=yes
|pid=7|free=0|pm=yes
|pid=8|free=100|pm=
|pid=9|free=100|pm=
|pid=10|free=100|pm=
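A script can summarize the pm= field the same way. This Python sketch is illustrative (not part of the ldm toolset); per the description above, it treats pm=yes as power-managed, pm=no as powered on, and a blank pm= (a 100 percent free CPU) as power-managed by default:

```python
# Sketch: summarize the pm= field of `ldm list-devices -a -p cpu` output.
# pm=no -> powered on; pm=yes or blank pm= -> power-managed (see text above).

SAMPLE = """VCPU
|pid=0|free=0|pm=no
|pid=1|free=0|pm=yes
|pid=8|free=100|pm=
"""

def pm_summary(text):
    counts = {"powered-on": 0, "power-managed": 0}
    for line in text.splitlines():
        if not line.startswith("|pid="):
            continue
        fields = dict(f.split("=", 1) for f in line.lstrip("|").split("|"))
        key = "powered-on" if fields["pm"] == "no" else "power-managed"
        counts[key] += 1
    return counts

print(pm_summary(SAMPLE))   # {'powered-on': 1, 'power-managed': 2}
```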
The following sections describe the restrictions on entering names in the Logical Domains Manager CLI.
First character must be a letter, a number, or a forward slash (/).
Subsequent characters must be letters, numbers, or punctuation.
The names must contain letters, numbers, or punctuation.
The logical domain configuration name (config-name) that you assign to a configuration stored on the service processor (SP) must have no more than 64 characters.
The remainder of the names, such as the logical domain name (ldom), service names (vswitch-name, service-name, vdpcs-service-name, and vcc-name), virtual network name (if-name), and virtual disk name (disk-name), must be in the following format:
First character must be a letter or number.
Subsequent characters must be letters, numbers, or any of the following characters -_+#.:;~().
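The naming rules above can be captured as a simple validation check. The following Python sketch is illustrative only; the regular expression and helper function are not part of the LDoms software, and the character class matches the listed set - _ + # . : ; ~ ( ):

```python
# Sketch: validate a name against the LDoms naming rules described above.
# First character: letter or number; subsequent characters: letters,
# numbers, or any of - _ + # . : ; ~ ( ).
import re

NAME_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9\-_+#.:;~()]*$")

def valid_ldm_name(name, max_len=None):
    """Return True if name follows the rules; max_len=64 for config-name."""
    if max_len is not None and len(name) > max_len:
        return False
    return bool(NAME_RE.match(name))

print(valid_ldm_name("ldg1"))                 # True
print(valid_ldm_name("my-domain_01"))         # True
print(valid_ldm_name("-bad"))                 # False: must start with letter/number
print(valid_ldm_name("a" * 65, max_len=64))   # False: config-name length limit
```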
This section shows the syntax usage for the ldm subcommands, defines some output terms, such as flags and utilization statistics, and provides examples that are similar to what you actually see as output.
If you are creating scripts that use ldm list command output, always use the -p option to produce the machine-readable form of the output. See Generate a Parseable, Machine-Readable List (-p) for more information.
Look at syntax usage for all ldm subcommands.
primary# ldm --help
For more information about the ldm subcommands, see the ldm(1M) man page.
The following flags can be shown in the output for a domain (ldm list). If you use the long, parseable options (-l -p) for the command, the flags are spelled out; for example, flags=normal,control,vio-service. If not, you see the letter abbreviation; for example -n-cv-. The list flag values are position dependent. Following are the values that can appear in each of the six columns from left to right.
Column 1
s starting or stopping
- placeholder
Column 2
n normal
t transition
Column 3
d delayed reconfiguration
- placeholder
Column 4
c control domain
- placeholder
Column 5
v virtual I/O service domain
- placeholder
Column 6
s source domain in a migration
t target domain in a migration
e error occurred during a migration
- placeholder
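The position-dependent decoding above can be expressed as a small lookup table. This Python sketch is illustrative: the column positions and letters come from the list above, but the descriptive names used for the migration column are paraphrases, not necessarily the exact spelled-out tokens that ldm prints:

```python
# Sketch: decode the six position-dependent flag columns of `ldm list`.
# Letters per column follow the list above; '-' is a placeholder.

FLAG_NAMES = [
    {"s": "starting/stopping"},
    {"n": "normal", "t": "transition"},
    {"d": "delayed reconfiguration"},
    {"c": "control"},
    {"v": "vio-service"},
    {"s": "migration source", "t": "migration target", "e": "migration error"},
]

def decode_flags(flags):
    """Translate a flags column such as '-n-cv-' into a list of names."""
    return [FLAG_NAMES[i][ch] for i, ch in enumerate(flags) if ch != "-"]

print(decode_flags("-n-cv-"))   # ['normal', 'control', 'vio-service']
```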
The per virtual CPU utilization statistic (UTIL) is shown on the long (-l) option of the ldm list command. The statistic is the percentage of time that the virtual CPU spent executing on behalf of the guest operating system. A virtual CPU is considered to be executing on behalf of the guest operating system except when it has been yielded to the hypervisor. If the guest operating system does not yield virtual CPUs to the hypervisor, the utilization of CPUs in the guest operating system will always show as 100%.
The utilization statistic reported for a logical domain is the average of the virtual CPU utilizations for the virtual CPUs in the domain. A dash (---) in the UTIL column means that the strand is power-managed.
The actual output might vary slightly from what is shown here.
primary# ldm -V

Logical Domain Manager (v 1.2)
        Hypervisor control protocol v 1.3
        Using Hypervisor MD v 0.1

System PROM:
        Hypervisor v. 1.7.0. @(#)Hypervisor 1.7.0. 2008/11/19 10:20
        OpenBoot   v. 4.30.0. @(#)OBP 4.30.0. 2008/11/18 13:44
primary# ldm list
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -t-cv           4     1G       0.5%  3d 21h 7m
ldg1             active     -t---   5000    8     1G        23%  2m
primary# ldm list -l
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -t-cv           1     768M     0.0%  0s

VCPU
    VID    PID    UTIL STRAND
    0      0      0.0%   100%

MEMORY
    RA               PA               SIZE
    0x4000000        0x4000000        768M

IO
    DEVICE           PSEUDONYM        OPTIONS
    pci@780          bus_a
    pci@7c0          bus_b            bypass=on

VCC
    NAME             PORT-RANGE
    vcc0             5000-5100

VSW
    NAME     MAC                NET-DEV   DEVICE     MODE
    vsw0     08:00:20:aa:bb:e0  e1000g0   switch@0   prog,promisc
    vsw1     08:00:20:aa:bb:e1                       routed

VDS
    NAME     VOLUME         OPTIONS        DEVICE
    vds0     myvol-a        slice          /disk/a
             myvol-b                       /disk/b
             myvol-c        ro,slice,excl  /disk/c
    vds1     myvol-d                       /disk/d

VDPCS
    NAME
    vdpcs0
    vdpcs1

------------------------------------------------------------------------------
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
ldg1             bound      -----   5000    1     512M

VCPU
    VID    PID    UTIL STRAND
    0      1             100%

MEMORY
    RA               PA               SIZE
    0x4000000        0x34000000       512M

NETWORK
    NAME     SERVICE          DEVICE      MAC
    mynet-b  vsw0@primary     network@0   08:00:20:ab:9a:12
    mynet-a  vsw0@primary     network@1   08:00:20:ab:9a:11

DISK
    NAME     VOLUME           DEVICE      SERVER
    mydisk-a myvol-a@vds0     disk@0      primary
    mydisk-b myvol-b@vds0     disk@1      primary

VDPCC
    NAME       SERVICE
    myvdpcc-a  vdpcs0@primary
    myvdpcc-b  vdpcs0@primary

VCONS
    NAME     SERVICE          PORT
    mygroup  vcc0@primary     5000
primary# ldm list -e
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -t-cv           1     768M     0.0%  0s

SOFTSTATE
Solaris running

MAC
    00:14:4f:fa:ed:88

HOSTID
    0x84faed88

CONTROL
    failure-policy=ignore

DEPENDENCY
    master=

VCPU
    VID    PID    UTIL STRAND
    0      0      0.0%   100%

MEMORY
    RA               PA               SIZE
    0x4000000        0x4000000        768M

IO
    DEVICE           PSEUDONYM        OPTIONS
    pci@780          bus_a
    pci@7c0          bus_b            bypass=on

VLDC
    NAME
    primary

VCC
    NAME             PORT-RANGE
    vcc0             5000-5100

VSW
    NAME     MAC                NET-DEV   DEVICE     MODE
    vsw0     08:00:20:aa:bb:e0  e1000g0   switch@0   prog,promisc
    vsw1     08:00:20:aa:bb:e1                       routed

VDS
    NAME     VOLUME         OPTIONS        DEVICE
    vds0     myvol-a        slice          /disk/a
             myvol-b                       /disk/b
             myvol-c        ro,slice,excl  /disk/c
    vds1     myvol-d                       /disk/d

VDPCS
    NAME
    vdpcs0
    vdpcs1

VLDCC
    NAME     SERVICE              DESC
    hvctl    primary@primary      hvctl
    vldcc0   primary@primary      ds

------------------------------------------------------------------------------
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
ldg1             bound      -----   5000    1     512M

VCPU
    VID    PID    UTIL STRAND
    0      1             100%

MEMORY
    RA               PA               SIZE
    0x4000000        0x34000000       512M

VLDCC
    NAME     SERVICE              DESC
    vldcc0   primary@primary      ds

NETWORK
    NAME     SERVICE          DEVICE      MAC
    mynet-b  vsw0@primary     network@0   08:00:20:ab:9a:12
    mynet-a  vsw0@primary     network@1   08:00:20:ab:9a:11

DISK
    NAME     VOLUME           DEVICE      SERVER
    mydisk-a myvol-a@vds0     disk@0      primary
    mydisk-b myvol-b@vds0     disk@1      primary

VDPCC
    NAME       SERVICE
    myvdpcc-a  vdpcs0@primary
    myvdpcc-b  vdpcs0@primary

VCONS
    NAME     SERVICE          PORT
    mygroup  vcc0@primary     5000
primary# ldm list -p
VERSION 1.0
DOMAIN|name=primary|state=active|flags=-t-cv|cons=|ncpu=1|mem=805306368|util=0.0|uptime=0
DOMAIN|name=ldg1|state=bound|flags=-----|cons=5000|ncpu=1|mem=536870912|util=|uptime=
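When scripting, parse the DOMAIN records of the parseable output into key-value pairs rather than scraping the human-readable columns. A minimal Python sketch (illustrative only, using the sample output above):

```python
# Sketch: parse `ldm list -p` DOMAIN records into dictionaries.

SAMPLE = """VERSION 1.0
DOMAIN|name=primary|state=active|flags=-t-cv|cons=|ncpu=1|mem=805306368|util=0.0|uptime=0
DOMAIN|name=ldg1|state=bound|flags=-----|cons=5000|ncpu=1|mem=536870912|util=|uptime=
"""

def parse_domains(text):
    """Return one dict of fields per DOMAIN line."""
    domains = []
    for line in text.splitlines():
        if line.startswith("DOMAIN|"):
            fields = dict(f.split("=", 1) for f in line.split("|")[1:])
            domains.append(fields)
    return domains

doms = parse_domains(SAMPLE)
print([d["name"] for d in doms])      # ['primary', 'ldg1']
print(doms[1]["state"])               # bound
```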
Generate output for a subset of resources by specifying one or more of the following format options. If you specify more than one format, separate the items with commas and no spaces.
console - output contains virtual console (vcons) and virtual console concentrator (vcc) service
cpu - output contains virtual CPU (vcpu) and physical CPU (pcpu)
crypto - cryptographic unit output contains Modular Arithmetic Unit (mau) and any other LDoms-supported cryptographic unit, such as the Control Word Queue (CWQ)
disk - output contains virtual disk (vdisk) and virtual disk server (vds)
domain - output contains variables (var), host ID (hostid), domain state, flags, and software state
memory - output contains memory
network - output contains media access control (mac) address, virtual network switch (vsw), and virtual network (vnet) device
physio - physical input/output contains peripheral component interconnect (pci) and network interface unit (niu)
serial - output contains virtual logical domain channel (vldc) service, virtual logical domain channel client (vldcc), virtual data plane channel client (vdpcc), and virtual data plane channel service (vdpcs)
status - output contains status about a domain migration in progress.
The following examples show various subsets of output that you can specify.
# ldm list -o cpu primary
NAME
primary

VCPU
    VID    PID    UTIL STRAND
    0      0      1.0%   100%
    1      1      0.6%   100%
    2      2      0.2%   100%
    3      3      0.5%   100%
# ldm list -o domain ldm2
NAME             STATE      FLAGS
ldm2             active     -t---

SOFTSTATE
Openboot initializing

VARIABLES
    auto-boot?=false
    boot-device=/virtual-devices@100/channel-devices@200/disk@0
# ldm list -o network,memory ldm1
NAME
ldm1

MAC
    00:14:4f:f9:dd:ae

MEMORY
    RA               PA               SIZE
    0x6800000        0x46800000       1500M

NETWORK
    NAME           SERVICE               DEVICE     MAC                MODE  PVID VID
    ldm1-network0  primary-vsw0@primary  network@0  00:14:4f:fb:21:0f        1
primary# ldm list-variable boot-device ldg1
boot-device=/virtual-devices@100/channel-devices@200/disk@0:a
primary# ldm list-bindings ldg1
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
ldg1             bound      -----   5000    1     512M

VCPU
    VID    PID    UTIL STRAND
    0      1             100%

MEMORY
    RA               PA               SIZE
    0x4000000        0x34000000       512M

NETWORK
    NAME     SERVICE          DEVICE      MAC
    mynet-b  vsw0@primary     network@0   08:00:20:ab:9a:12
        PEER                  MAC
        vsw0@primary          08:00:20:aa:bb:e0
        mynet-a@ldg1          08:00:20:ab:9a:11
        mynet-c@ldg2          08:00:20:ab:9a:22
    NAME     SERVICE          DEVICE      MAC
    mynet-a  vsw0@primary     network@1   08:00:20:ab:9a:11
        PEER                  MAC
        vsw0@primary          08:00:20:aa:bb:e0
        mynet-b@ldg1          08:00:20:ab:9a:12
        mynet-c@ldg2          08:00:20:ab:9a:22

DISK
    NAME     VOLUME           DEVICE      SERVER
    mydisk-a myvol-a@vds0     disk@0      primary
    mydisk-b myvol-b@vds0     disk@1      primary

VDPCC
    NAME       SERVICE
    myvdpcc-a  vdpcs0@primary
    myvdpcc-b  vdpcs0@primary

VCONS
    NAME     SERVICE          PORT
    mygroup  vcc0@primary     5000
The ldm list-config command lists the logical domain configurations that are stored on the service processor. When used with the -r option, this command lists those configurations for which autosave files exist on the control domain.
For more information about configurations, see Managing Logical Domains Configurations. For more examples, see the ldm(1M) man page.
primary# ldm list-config
factory-default
3guests
foo [next poweron]
primary
reconfig-primary
The labels to the right of the configuration name mean the following:
[current] - last booted configuration, only as long as it matches the currently running configuration; that is, until you initiate a reconfiguration. After the reconfiguration, the annotation changes to [next poweron].
[next poweron] - configuration to be used at the next powercycle.
primary# ldm list-devices -a
VCPU
    PID     %FREE   PM
    0       0       NO
    1       0       YES
    2       0       YES
    3       0       YES
    4       100     ---
    5       100     ---
    6       100     ---
    7       100     ---
    8       100     ---
    9       100     ---
    10      100     ---
    11      100     ---
    12      100     ---
    13      100     ---
    14      100     ---
    15      100     ---
    16      100     ---
    17      100     ---
    18      100     ---
    19      100     ---
    20      100     ---
    21      100     ---
    22      100     ---
    23      100     ---
    24      100     ---
    25      100     ---
    26      100     ---
    27      100     ---
    28      100     ---
    29      100     ---
    30      100     ---
    31      100     ---

MAU
    CPUSET                  BOUND
    (0, 1, 2, 3)            ldg2
    (4, 5, 6, 7)
    (8, 9, 10, 11)
    (12, 13, 14, 15)
    (16, 17, 18, 19)
    (20, 21, 22, 23)
    (24, 25, 26, 27)
    (28, 29, 30, 31)

MEMORY
    PA                   SIZE       BOUND
    0x0                  512K       _sys_
    0x80000              1536K      _sys_
    0x200000             62M        _sys_
    0x4000000            768M       primary
    0x34000000           512M       ldg1
    0x54000000           8M         _sys_
    0x54800000           2G         ldg2
    0xd4800000           29368M

IO
    DEVICE           PSEUDONYM        BOUND   OPTIONS
    pci@780          bus_a            yes
    pci@7c0          bus_b            yes     bypass=on
List the amount of memory available to be allocated.
primary# ldm list-devices mem
MEMORY
    PA                   SIZE
    0x14e000000          2848M
primary# ldm list-services
VDS
    NAME             VOLUME         OPTIONS        DEVICE
    primary-vds0

VCC
    NAME             PORT-RANGE
    primary-vcc0     5000-5100

VSW
    NAME          MAC                NET-DEV   DEVICE     MODE
    primary-vsw0  00:14:4f:f9:68:d0  e1000g0   switch@0   prog,promisc
To the Logical Domains Manager, constraints are one or more resources that you want to have assigned to a particular domain. Depending on the available resources, you either receive all the resources that you ask to be added to a domain, or you receive none of them. The list-constraints subcommand lists the resources that you requested to be assigned to the domain.
primary# ldm list-constraints ldg1
DOMAIN
ldg1

VCPU
    COUNT
    1

MEMORY
    SIZE
    512M

NETWORK
    NAME     SERVICE    DEVICE      MAC
    mynet-b  vsw0       network@0   08:00:20:ab:9a:12
    mynet-a  vsw0       network@1   08:00:20:ab:9a:11

DISK
    NAME     VOLUME
    mydisk-a myvol-a@vds0
    mydisk-b myvol-b@vds0

VDPCC
    NAME       SERVICE
    myvdpcc-a  vdpcs0@primary
    myvdpcc-b  vdpcs0@primary

VCONS
    NAME     SERVICE
    mygroup  vcc0
primary# ldm list-constraints -x ldg1
<?xml version="1.0"?>
<LDM_interface version="1.0">
  <data version="2.0">
    <ldom>
      <ldom_info>
        <ldom_name>ldg1</ldom_name>
      </ldom_info>
      <cpu>
        <number>8</number>
      </cpu>
      <memory>
        <size>1G</size>
      </memory>
      <network>
        <vnet_name>vnet0</vnet_name>
        <service_name>primary-vsw0</service_name>
        <mac_address>01:14:4f:fa:0f:55</mac_address>
      </network>
      <disk>
        <vdisk_name>vdisk0</vdisk_name>
        <service_name>primary-vds0</service_name>
        <vol_name>vol0</vol_name>
      </disk>
      <var>
        <name>boot-device</name>
        <value>/virtual-devices@100/channel-devices@200/disk@0:a</value>
      </var>
      <var>
        <name>nvramrc</name>
        <value>devalias vnet0 /virtual-devices@100/channel-devices@200/network@0</value>
      </var>
      <var>
        <name>use-nvramrc?</name>
        <value>true</value>
      </var>
    </ldom>
  </data>
</LDM_interface>
primary# ldm list-constraints -p
VERSION 1.0
DOMAIN|name=primary
MAC|mac-addr=00:03:ba:d8:b1:46
VCPU|count=4
MEMORY|size=805306368
IO
|dev=pci@780|alias=
|dev=pci@7c0|alias=
VDS|name=primary-vds0
|vol=disk-ldg2|opts=|dev=/ldoms/nv72-ldg2/disk
|vol=vol0|opts=|dev=/ldoms/nv72-ldg1/disk
VCC|name=primary-vcc0|port-range=5000-5100
VSW|name=primary-vsw0|mac-addr=|net-dev=e1000g0|dev=switch@0
DOMAIN|name=ldg1
VCPU|count=8
MEMORY|size=1073741824
VARIABLES
|boot-device=/virtual-devices@100/channel-devices@200/disk@0:a
|nvramrc=devalias vnet0 /virtual-devices@100/channel-devices@200/network@0
|use-nvramrc?=true
VNET|name=vnet0|dev=network@0|service=primary-vsw0|mac-addr=01:14:4f:fa:0f:55
VDISK|name=vdisk0|vol=vol0@primary-vds0
You can connect to a guest console over a network if the listen_addr property is set to the IP address of the control domain in the vntsd(1M) SMF manifest. For example:
$ telnet host-name 5001
Enabling network access to a console has security implications. Because any user can then connect to the console, network access is disabled by default.
A Service Management Facility manifest is an XML file that describes a service. For more information about creating an SMF manifest, refer to the Solaris 10 System Administrator Collection.
To access a non-English OS in a guest domain through the console, the terminal for the console must be in the locale required by the OS.
An ldm stop-domain command can time out before the domain completes shutting down. When this happens, an error similar to the following is returned by the Logical Domains Manager.
LDom ldg8 stop notification failed
However, the domain could still be processing the shutdown request. Use the ldm list-domain command to verify the status of the domain. For example:
# ldm list-domain ldg8
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
ldg8             active     s----   5000    22    3328M    0.3%  1d 14h 31m
The preceding list shows the domain as active, but the s flag indicates that the domain is in the process of stopping. This should be a transitory state.
The following example shows the domain has now stopped.
# ldm list-domain ldg8
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
ldg8             bound      -----   5000    22    3328M
This section describes how you can correlate the information that is reported by the Solaris Fault Management Architecture (FMA) with the logical domain resources that are marked as being faulty.
FMA reports CPU errors in terms of physical CPU numbers and memory errors in terms of physical memory addresses.
If you want to determine within which logical domain an error occurred and the corresponding virtual CPU number or real memory address within the domain, then you must perform a mapping.
The domain and the virtual CPU number within the domain, which correspond to a given physical CPU number, can be determined with the following procedures.
Generate a long parseable list for all domains.
primary# ldm list -l -p
Look for the entry in the list's VCPU sections that has a pid field equal to the physical CPU number.
The domain and the real memory address within the domain, which correspond to a given physical memory address (PA), can be determined as follows.
Generate a long parseable list for all domains.
primary# ldm list -l -p
Look for the line in the list's MEMORY sections where the PA falls within the inclusive range pa to (pa + size - 1); that is, pa <= PA <= (pa + size - 1).
Here pa and size refer to the values in the corresponding fields of the line.
Suppose you have a logical domain configuration as shown in Example 9–17, and you want to determine the domain and the virtual CPU corresponding to physical CPU number 5, and the domain and the real address corresponding to physical address 0x7e816000.
Looking through the VCPU entries in the list for the one with the pid field equal to 5, you can find the following entry under logical domain ldg1.
|vid=1|pid=5|util=29|strand=100
Hence, the physical CPU number 5 is in domain ldg1 and within the domain it has virtual CPU number 1.
Looking through the MEMORY entries in the list, you can find the following entry under domain ldg2.
ra=0x8000000|pa=0x78000000|size=1073741824
Here 0x78000000 <= 0x7e816000 <= (0x78000000 + 1073741824 - 1); that is, pa <= PA <= (pa + size - 1). Hence, the PA is in domain ldg2, and the corresponding real address is 0x8000000 + (0x7e816000 - 0x78000000) = 0xe816000.
primary# ldm list -l -p
VERSION 1.0
DOMAIN|name=primary|state=active|flags=normal,control,vio-service|cons=SP|ncpu=4|mem=1073741824|util=0.6|uptime=64801|softstate=Solaris running
VCPU
|vid=0|pid=0|util=0.9|strand=100
|vid=1|pid=1|util=0.5|strand=100
|vid=2|pid=2|util=0.6|strand=100
|vid=3|pid=3|util=0.6|strand=100
MEMORY
|ra=0x8000000|pa=0x8000000|size=1073741824
IO
|dev=pci@780|alias=bus_a
|dev=pci@7c0|alias=bus_b
VDS|name=primary-vds0|nclients=2
|vol=disk-ldg1|opts=|dev=/opt/ldoms/testdisk.1
|vol=disk-ldg2|opts=|dev=/opt/ldoms/testdisk.2
VCC|name=primary-vcc0|nclients=2|port-range=5000-5100
VSW|name=primary-vsw0|nclients=2|mac-addr=00:14:4f:fb:42:5c|net-dev=e1000g0|dev=switch@0|mode=prog,promisc
VCONS|type=SP
DOMAIN|name=ldg1|state=active|flags=normal|cons=5000|ncpu=2|mem=805306368|util=29|uptime=903|softstate=Solaris running
VCPU
|vid=0|pid=4|util=29|strand=100
|vid=1|pid=5|util=29|strand=100
MEMORY
|ra=0x8000000|pa=0x48000000|size=805306368
VARIABLES
|auto-boot?=true
|boot-device=/virtual-devices@100/channel-devices@200/disk@0
VNET|name=net|dev=network@0|service=primary-vsw0@primary|mac-addr=00:14:4f:f9:8f:e6
VDISK|name=vdisk-1|vol=disk-ldg1@primary-vds0|dev=disk@0|server=primary
VCONS|group=group1|service=primary-vcc0@primary|port=5000
DOMAIN|name=ldg2|state=active|flags=normal|cons=5001|ncpu=3|mem=1073741824|util=35|uptime=775|softstate=Solaris running
VCPU
|vid=0|pid=6|util=35|strand=100
|vid=1|pid=7|util=34|strand=100
|vid=2|pid=8|util=35|strand=100
MEMORY
|ra=0x8000000|pa=0x78000000|size=1073741824
VARIABLES
|auto-boot?=true
|boot-device=/virtual-devices@100/channel-devices@200/disk@0
VNET|name=net|dev=network@0|service=primary-vsw0@primary|mac-addr=00:14:4f:f9:8f:e7
VDISK|name=vdisk-2|vol=disk-ldg2@primary-vds0|dev=disk@0|server=primary
VCONS|group=group2|service=primary-vcc0@primary|port=5000
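The two lookup procedures above can be sketched in a few lines of Python. The tables below transcribe the VCPU and MEMORY entries from Example 9–17; the helper functions are illustrative only, and a real script would parse ldm list -l -p output instead of hard-coding the values:

```python
# Sketch of the FMA-to-domain mappings described above, using the
# (domain, vid, pid) and (domain, ra, pa, size) values from Example 9-17.

VCPUS = [("primary", 0, 0), ("primary", 1, 1), ("primary", 2, 2),
         ("primary", 3, 3), ("ldg1", 0, 4), ("ldg1", 1, 5),
         ("ldg2", 0, 6), ("ldg2", 1, 7), ("ldg2", 2, 8)]
MEMORY = [("primary", 0x8000000, 0x8000000, 1073741824),
          ("ldg1",    0x8000000, 0x48000000, 805306368),
          ("ldg2",    0x8000000, 0x78000000, 1073741824)]

def vcpu_for_pid(pid):
    """Return (domain, vid) for a given physical CPU number."""
    for domain, vid, p in VCPUS:
        if p == pid:
            return domain, vid
    return None

def real_address(pa_fault):
    """Return (domain, real address) for a given physical address."""
    for domain, ra, pa, size in MEMORY:
        if pa <= pa_fault <= pa + size - 1:     # inclusive range check
            return domain, ra + (pa_fault - pa)
    return None

print(vcpu_for_pid(5))                   # ('ldg1', 1)
print(hex(real_address(0x7e816000)[1]))  # 0xe816000
```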
The virtual network terminal server daemon, vntsd(1M), enables you to provide access for multiple domain consoles using a single TCP port. At the time of domain creation, the Logical Domains Manager assigns a unique TCP port to each console by creating a new default group for that domain's console. The TCP port is then assigned to the console group as opposed to the console itself. The console can be bound to an existing group using the set-vcons subcommand.
Bind the consoles for the domains into one group.
The following example shows binding the console for three different domains (ldg1, ldg2, and ldg3) to the same console group (group1).
primary# ldm set-vcons group=group1 service=primary-vcc0 ldg1
primary# ldm set-vcons group=group1 service=primary-vcc0 ldg2
primary# ldm set-vcons group=group1 service=primary-vcc0 ldg3
Connect to the associated TCP port (localhost at port 5000 in this example).
# telnet localhost 5000
primary-vnts-group1: h, l, c{id}, n{name}, q:
You are prompted to select one of the domain consoles.
List the domains within the group by selecting l (list).
primary-vnts-group1: h, l, c{id}, n{name}, q: l
DOMAIN ID           DOMAIN NAME         DOMAIN STATE
0                   ldg1                online
1                   ldg2                online
2                   ldg3                online
To reassign the console to a different group or vcc instance, the domain must be unbound; that is, it must be in the inactive state. For more information about configuring and using SMF to manage vntsd, and about using console groups, refer to the Solaris 10 OS vntsd(1M) man page.
This section describes the changes in behavior in using the Solaris OS that occur once a configuration created by the Logical Domains Manager is instantiated; that is, domaining is enabled.
Any discussion about whether domaining is enabled pertains only to Sun UltraSPARC T1–based platforms. Otherwise, domaining is always enabled.
Domaining is enabled once a logical domains configuration created by the Logical Domains Manager is instantiated. If domaining is enabled, the OpenBoot firmware is not available after the Solaris OS has started, because it is removed from memory.
To reach the ok prompt from the Solaris OS, you must halt the domain. You can use the Solaris OS halt command to halt the domain.
Whenever performing any maintenance on a system running LDoms software that requires powercycling the server, you must save your current logical domain configurations to the SP first.
Do not attempt to change an active CPU's operational status in a power-managed domain by using the psradm(1M) command. This only applies if your platform supports power management.
If domaining is not enabled, the Solaris OS normally goes to the OpenBoot prompt after a break is issued. The behavior described in this section is seen in two situations:
You press the L1-A key sequence when the input device is set to keyboard.
You enter the send break command when the virtual console is at the telnet prompt.
If domaining is enabled, you receive the following prompt after these types of breaks.
c)ontinue, s)ync, r)eset, h)alt?
Type the letter that represents what you want the system to do after these types of breaks.
The following table shows the expected behavior of halting or rebooting the control (primary) domain.
The question in Table 9–1 regarding whether domaining is enabled pertains only to the Sun UltraSPARC T1 processors. Otherwise, domaining is always enabled.
| Command | Domaining Enabled? | Other Domain Configured? | Behavior |
|---|---|---|---|
| halt | Disabled | N/A | For Sun UltraSPARC T1 Processors: Drops to the ok prompt. |
| | Enabled | Not Configured | For Sun UltraSPARC T1 Processors: System either resets and goes to the OpenBoot ok prompt or goes to the following prompt: r)eset, o)k prompt, or h)alt? For Sun UltraSPARC T2 Processors: Host powered off and stays off until powered on at the SP. |
| | Enabled | Configured | Soft resets and boots up if the variable auto-boot?=true. Soft resets and halts at ok prompt if the variable auto-boot?=false. |
| reboot | Disabled | N/A | For Sun UltraSPARC T1 Processors: Powers off and powers on the host. |
| | Enabled | Not Configured | For Sun UltraSPARC T1 Processors: Powers off and powers on the host. For Sun UltraSPARC T2 Processors: Reboots the host, no power off. |
| | Enabled | Configured | For Sun UltraSPARC T1 Processors: Powers off and powers on the host. For Sun UltraSPARC T2 Processors: Reboots the host, no power off. |
| shutdown -i 5 | Disabled | N/A | For Sun UltraSPARC T1 Processors: Powers off the host. |
| | Enabled | Not Configured | Host powered off, stays off until powered on at the SP. |
| | Enabled | Configured | Soft resets and reboots. |
This section describes information to be aware of when using Advanced Lights Out Manager (ALOM) chip multithreading (CMT) with the Logical Domains Manager. For more information about using the ALOM CMT software, refer to the Advanced Lights Out Management (ALOM) CMT v1.3 Guide.
The ALOM CMT documentation refers to only one domain, the primary domain, so be aware that the Logical Domains environment introduces multiple domains. For example, suppose the primary domain also serves as a service domain, providing virtual device services to other domains. If the primary domain is restarted, these client domains appear to freeze during the reboot process. After the primary domain has fully restarted, the client domains resume normal operation. You need to shut down all domains only when power is going to be removed from the entire server.
An additional option is available to the existing ALOM CMT bootmode command.
bootmode [normal|reset_nvram|bootscript="string"|config="config-name"]
The config="config-name" option enables you to set the configuration that is used at the next power on to another configuration, including the factory-default shipping configuration.
You can invoke the command whether the host is powered on or off. It takes effect on the next host reset or power on.
Reset the logical domain configuration on the next power on to the default shipping configuration by executing this command in ALOM CMT software.
sc> bootmode config="factory-default"
You also can select other configurations that have been created with the Logical Domains Manager using the ldm add-config command and stored on the service processor (SP). The name you specify in the Logical Domains Manager ldm add-config command can be used to select that configuration with the ALOM CMT bootmode command. For example, assume you stored the configuration with the name ldm-config1.
sc> bootmode config="ldm-config1"
Now, you must powercycle the system to load the new configuration.
Refer to the ldm(1M) man page for more information about the ldm add-config command.
A Logical Domains configuration is a complete description of all the domains and their resource allocations within a single system. You can save and store configurations on the service processor (SP) for later use.
When you power up a system, the SP boots the selected configuration. By booting a configuration, the system runs the same set of domains, and uses the same virtualization and partitioning resource allocations that are specified in the configuration. The default configuration is the one that is most recently saved.
Starting with the Logical Domains 1.2 release, a copy of the current configuration is automatically saved on the control domain whenever the Logical Domains configuration is changed.
The autosave operation occurs immediately, even in the following situations:
When the new configuration is not explicitly saved on the SP
When the actual configuration change is not made until after the affected domain reboots
This autosave operation enables you to recover a configuration when the configurations that are saved on the SP are lost. This operation also enables you to recover a configuration when the current configuration was not explicitly saved to the SP when the system powercycled. In these circumstances, the Logical Domains Manager can restore that configuration on restart if it is newer than the configuration marked for the next boot.
Power management, FMA, ASR, and PRI update events do not cause an update to the autosave files.
You can automatically or manually restore autosave files to new or existing configurations. By default, when an autosave configuration is newer than the corresponding running configuration, a message is written to the LDoms log. In that case, you must use the ldm add-spconfig -r command to manually update an existing configuration or create a new one based on the autosave data.
When a delayed reconfiguration is pending, the configuration changes are immediately autosaved. As a result, if you run the ldm list-config -r command, the autosave configuration is shown as being newer than the current configuration.
For information about how to use the ldm *-spconfig commands to manage configurations and to manually recover autosave files, see the ldm(1M) man page.
For information about how to use an ALOM CMT Version 1.3 command to select a configuration to boot, see Using LDoms With ALOM CMT.
The autorecovery policy specifies how to handle the recovery of a configuration when a configuration that is automatically saved on the control domain is newer than the corresponding running configuration. The autorecovery policy is specified by setting the autorecovery_policy property of the ldmd SMF service. The autorecovery_policy property can have the following values:
autorecovery_policy=1 – Logs warning messages when an autosave configuration is newer than the corresponding running configuration. These messages are logged in the ldmd SMF log file. The user must manually perform any configuration recovery. This is the default policy.
autorecovery_policy=2 – Displays a notification message if an autosave configuration is newer than the corresponding running configuration. This notification message is printed in the output of any ldm command the first time an ldm command is issued after each restart of the Logical Domains Manager. The user must manually perform any configuration recovery.
autorecovery_policy=3 – Automatically updates the configuration if an autosave configuration is newer than the corresponding running configuration. This action overwrites the SP configuration that will be used during the next powercycle. This configuration is updated with the newer configuration that is saved on the control domain. This action does not impact the currently running configuration. It only impacts the configuration that will be used during the next powercycle. A message is also logged, which states that a newer configuration has been saved on the SP and that it will be booted the next time the system is powercycled. These messages are logged in the ldmd SMF log file.
Become superuser on the control domain.
View the autorecovery_policy property value.
# svccfg -s ldmd listprop ldmd/autorecovery_policy
Stop the ldmd service.
# svcadm disable ldmd
Change the autorecovery_policy property value.
# svccfg -s ldmd setprop ldmd/autorecovery_policy=value
For example, to set the policy to perform autorecovery, set the property value to 3:
# svccfg -s ldmd setprop ldmd/autorecovery_policy=3
Refresh and restart the ldmd service.
# svcadm refresh ldmd
# svcadm enable ldmd
The following example shows how to view the current value of the autorecovery_policy property and change it to a new value. The original value of this property is 1, which means that autosave changes are logged. The svcadm command is used to stop and restart the ldmd service, and the svccfg command is used to view and set the property value.
# svccfg -s ldmd listprop ldmd/autorecovery_policy
ldmd/autorecovery_policy integer 1
# svcadm disable ldmd
# svccfg -s ldmd setprop ldmd/autorecovery_policy=3
# svcadm refresh ldmd
# svcadm enable ldmd