
Oracle® VM Server for SPARC 3.6 Reference Manual


Updated: August 2018

ldm(8)

Name

ldm - command-line interface for the Logical Domains Manager

Synopsis

ldm

ldm --help [subcommand]

ldm -V 

ldm subcommand [option]... [operand]...

Description

The ldm command interacts with the Logical Domains Manager and is used to create and manage logical domains. The Logical Domains Manager runs on the control domain, which is the initial domain created by the service processor. For those platforms that have physical domains, the Logical Domains Manager runs only in the control domain of each physical domain. The control domain is named primary.

A logical domain is a discrete logical grouping with its own operating system, resources, and identity within a single computer system. Each logical domain can be created, destroyed, reconfigured, and rebooted independently, without requiring a power cycle of the server. You can use logical domains to run a variety of applications in different domains and keep them independent for security purposes.

All logical domains are functionally the same; they are distinguished from one another only by the roles that you specify for them. The following are the roles that logical domains can perform:

Control domain

Creates and manages other logical domains and services by communicating with the hypervisor.

Service domain

Provides services to other logical domains, such as a virtual network switch or a virtual disk service.

I/O domain

Has direct access to a physical I/O device, such as a network card in a PCI Express (PCIe) controller or a single-root I/O virtualization (SR-IOV) virtual function. An I/O domain can own a PCIe root complex, or it can own a PCIe slot or on-board PCIe device by using the direct I/O feature, or an SR-IOV virtual function by using the SR-IOV feature.

An I/O domain can share physical I/O devices with other domains in the form of virtual devices when the I/O domain is also used as a service domain.

Root domain

Has a PCIe root complex assigned to it. This domain owns the PCIe fabric and all connected devices, and provides all fabric-related services, such as fabric error handling. A root domain owns all of the SR-IOV physical functions from which you can create virtual functions and assign them to I/O domains. A root domain is also an I/O domain, as it owns and has direct access to physical I/O devices.

The number of root domains that you can have depends on your platform architecture. See your platform documentation for more information.

The default root domain is the primary domain.

Guest domain

Uses services from the I/O and service domains and is managed by the control domain.

You can use the Logical Domains Manager to establish dependency relationships between domains.

Master domain

A domain that has one or more other domains that depend on it. When a master domain fails, each slave domain enacts its failure policy: the slave domain can be left as is, panicked, reset, or stopped.

Slave domain

A domain that depends on another domain. A domain can specify up to four master domains. When one or more of the master domains fail, the failure policy dictates the slave domain's behavior.
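The failure-policy behavior described above can be sketched as a simple mapping. The following Python fragment is an illustration only, not part of the ldm software; the action names are chosen here for clarity:

```python
# Illustrative sketch (not part of ldm): maps the documented
# failure-policy values to the action taken on a slave domain
# when its master domain fails.

def slave_action(failure_policy):
    """Return the action taken on a slave when its master domain fails."""
    actions = {
        "ignore": "none",            # slave domains are unaffected
        "panic": "panic",            # similar to running ldm panic
        "reset": "stop then start",  # similar to ldm stop -f, then ldm start
        "stop": "stop",              # similar to ldm stop -f
    }
    if failure_policy not in actions:
        raise ValueError("invalid failure-policy: " + failure_policy)
    return actions[failure_policy]
```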

Subcommand Summaries

Following are the supported subcommands along with a description and required authorization for each. For information about setting up authorization for user accounts, see Using Rights Profiles and Roles in Oracle VM Server for SPARC 3.6 Administration Guide.

In the following list, each subcommand is followed by its required authorization in parentheses and its description.

add-resource (solaris.ldoms.write)
    Adds a resource to an existing logical domain.
add-domain (solaris.ldoms.write)
    Creates a logical domain.
add-policy (solaris.ldoms.write)
    Adds a resource management policy to an existing logical domain.
add-spconfig (solaris.ldoms.write)
    Adds an SP configuration to the service processor (SP).
add-variable (solaris.ldoms.write)
    Adds one or more variables to a logical domain.
add-vsan-dev (solaris.ldoms.write)
    Adds a physical device to a virtual SAN.
bind-domain (solaris.ldoms.write)
    Binds resources to a created logical domain.
cancel-operation (solaris.ldoms.write)
    Cancels an operation, such as a delayed reconfiguration (reconf), memory dynamic reconfiguration removal (memdr), or domain migration (migration).
cancel-reconf (solaris.ldoms.write)
    Cancels a delayed reconfiguration operation on the primary domain.
create-vf (solaris.ldoms.write)
    Creates one or more virtual functions.
destroy-vf (solaris.ldoms.write)
    Destroys one or more virtual functions.
evict-cmi (solaris.ldoms.write)
    Removes virtual CPUs or virtual CPU cores that are associated with a specific CMI device from the logical domain that owns the device.
grow-cmi (solaris.ldoms.write)
    Adds virtual CPUs or virtual CPU cores that are associated with a specific CMI device to the logical domain that owns the device.
grow-socket (solaris.ldoms.write)
    Adds virtual CPUs, virtual CPU cores, or virtual memory that is associated with a specific CPU socket to an existing logical domain.
init-system (solaris.ldoms.write)
    Configures one or more guest domains, the control domain, or both, by using an existing configuration.
list-bindings (solaris.ldoms.read)
    Lists server bindings for logical domains.
list-cmi (solaris.ldoms.read)
    Lists CMI devices for logical domains.
list-constraints (solaris.ldoms.read)
    Lists resource constraints for logical domains.
list-dependencies (solaris.ldoms.read)
    Lists dependencies.
list-devices (solaris.ldoms.read)
    Lists devices for logical domains.
list-domain (solaris.ldoms.read)
    Lists logical domains and their states.
list-hba (solaris.ldoms.read)
    Lists SCSI host bus adapters (HBAs) for logical domains.
list-history (solaris.ldoms.read)
    Lists recently issued ldm commands.
list-hvdump (solaris.ldoms.read)
    Lists hypervisor data collection property values.
list-io (solaris.ldoms.read)
    Lists I/O devices for logical domains.
list-logctl (solaris.ldoms.read)
    Lists fine-grained logging characteristics.
list-netdev (solaris.ldoms.read)
    Lists network devices for logical domains.
list-netstat (solaris.ldoms.read)
    Lists network device statistics for logical domains.
list-permits (solaris.ldoms.read)
    Lists CPU core activation information.
list-rsrc-group (solaris.ldoms.read)
    Lists resource group information.
list-services (solaris.ldoms.read)
    Lists services for logical domains.
list-socket (solaris.ldoms.read)
    Lists CPU socket information.
list-spconfig (solaris.ldoms.read)
    Lists configurations for logical domains.
list-variable (solaris.ldoms.read)
    Lists variables for logical domains.
list-vsan (solaris.ldoms.read)
    Lists members of the specified virtual SAN.
migrate-domain (solaris.ldoms.write)
    Migrates a logical domain from one machine to another.
panic-domain (solaris.ldoms.write)
    Panics the Oracle Solaris OS on a specified logical domain.
remove-resource (solaris.ldoms.write)
    Removes a resource from an existing logical domain.
remove-domain (solaris.ldoms.write)
    Deletes a logical domain.
remove-policy (solaris.ldoms.write)
    Removes a resource management policy from an existing logical domain.
remove-spconfig (solaris.ldoms.write)
    Removes an SP configuration from the SP.
remove-variable (solaris.ldoms.write)
    Removes one or more variables from an existing logical domain.
remove-vsan-dev (solaris.ldoms.write)
    Removes a physical device from a virtual SAN.
rescan-vhba (solaris.ldoms.read)
    Synchronizes the set of SCSI devices that are seen by the virtual SCSI HBA and virtual SAN.
set-resource (solaris.ldoms.write)
    Specifies a resource for an existing logical domain, either as a property change or as a quantity change. The subcommand represents a quantity change when applied to the cmi, core, vcpu, or memory resources. For a quantity change, the subcommand becomes a dynamic or a delayed reconfiguration operation that assigns the specified quantity of the resource to the specified logical domain. If more resources are assigned to the logical domain than are specified in this subcommand, some are removed. If fewer resources are assigned than are specified, some are added. See RESOURCES for resource definitions.
set-domain (solaris.ldoms.write)
    Sets properties on a logical domain.
set-hvdump (solaris.ldoms.write)
    Sets property values for the hypervisor data collection process.
set-io (solaris.ldoms.write)
    Modifies a physical function or a virtual function.
set-logctl (solaris.ldoms.write)
    Specifies fine-grained logging characteristics.
set-policy (solaris.ldoms.write)
    Sets properties for a resource management policy on an existing logical domain.
set-socket (solaris.ldoms.write)
    Constrains an existing logical domain to use the virtual CPU, virtual CPU core, and virtual memory resources that are associated with the specified CPU sockets.
set-spconfig (solaris.ldoms.write)
    Specifies an SP configuration to use.
set-variable (solaris.ldoms.write)
    Sets one or more variables for an existing logical domain.
shrink-cmi (solaris.ldoms.write)
    Removes virtual CPUs or virtual CPU cores that are associated with a specific CMI device from the logical domain that owns the device.
shrink-socket (solaris.ldoms.write)
    Removes virtual CPUs, virtual CPU cores, or virtual memory that is associated with a specific CPU socket from an existing logical domain.
start-domain (solaris.ldoms.write)
    Starts one or more logical domains.
start-hvdump (solaris.ldoms.write)
    Manually starts the hypervisor data collection process.
start-reconf (solaris.ldoms.write)
    Enters delayed reconfiguration mode on a root domain.
stop-domain (solaris.ldoms.write)
    Stops one or more running domains.
unbind-domain (solaris.ldoms.write)
    Unbinds or releases resources from a logical domain.

Note - Not all subcommands are supported on all resource types.

Aliases

This section includes tables that show the short form and long form of the ldm subcommand actions (verbs), resource names (nouns), and full subcommands.

The following table shows the short form and long form of subcommand actions.

Short Form    Long Form
ls            list
rm            remove

The following table shows the short form and long form of resource names.

Short Form    Long Form
config        spconfig
dep           dependencies
dom           domain
group         rsrc-group
mem           memory
var           variable
vcc           vconscon
vcons         vconsole
vds           vdiskserver
vdsdev        vdiskserverdevice
vsw           vswitch

The following table shows the short form and long form of subcommands.

Short Form    Long Form
bind          bind-domain
cancel-op     cancel-operation
create        add-domain
destroy       remove-domain
history       list-history
list          list-domain
migrate       migrate-domain
modify        set-domain
panic         panic-domain
start         start-domain
stop          stop-domain
unbind        unbind-domain

Note - In the syntax and examples in the remainder of this man page, the short forms of the action and resource aliases are used.
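As an illustration of the subcommand aliases listed above, a wrapper might normalize short forms before dispatch. The following sketch is hypothetical and not part of ldm; the mapping is taken directly from the subcommand alias table:

```python
# Hypothetical helper (not part of ldm): expands the short-form
# subcommand aliases from the table above to their long forms.

SUBCOMMAND_ALIASES = {
    "bind": "bind-domain",
    "cancel-op": "cancel-operation",
    "create": "add-domain",
    "destroy": "remove-domain",
    "history": "list-history",
    "list": "list-domain",
    "migrate": "migrate-domain",
    "modify": "set-domain",
    "panic": "panic-domain",
    "start": "start-domain",
    "stop": "stop-domain",
    "unbind": "unbind-domain",
}

def expand(subcommand):
    """Return the long form of a subcommand; unknown input is unchanged."""
    return SUBCOMMAND_ALIASES.get(subcommand, subcommand)
```

For example, `expand("create")` yields `add-domain`, matching the alias table.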

The following table shows the short form and long form of generic command-line options.

Short Form    Long Form
-a            --all
-d            --domain
-e            --extended
-f            --force
-g            --group
--help        --help
-l            --long
-o            --oformat
-p            --parseable
-v            --version
-x            --xml

The ldm --help subcommand command shows the long option names for the specified subcommand.

Resources

The following resources are supported:

core

CPU cores.

io

I/O devices, such as PCIe root complexes and their attached adapters and devices, as well as direct I/O-assignable devices and PCIe SR-IOV virtual functions.

mem, memory

Virtualized memory of the server that can be allocated to guest domains. Memory sizes are specified in bytes by default, or in kilobytes (K), megabytes (M), or gigabytes (G).

vcc, vconscon

Virtual console concentrator service with a specific range of TCP ports to assign to each guest domain at the time it is created.

vcons, vconsole

Virtual console for accessing system-level messages. A connection is achieved by connecting to the vconscon service in the control domain at a specific port.

vcpu

Each virtual CPU represents one CPU thread of a server. See your platform documentation.

vdisk

Virtual disks are generic block devices backed by different types of physical devices, volumes, or files. A virtual disk is not synonymous with a SCSI disk and, therefore, excludes the target ID (tN) in the disk name. Virtual disks in a logical domain have the following format: cNdNsN, where cN is the virtual controller, dN is the virtual disk number, and sN is the slice.
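The cNdNsN naming convention described above can be parsed mechanically. The following sketch is illustrative only and not part of ldm:

```python
import re

# Illustrative sketch: parses the cNdNsN virtual disk name format
# into its controller, disk, and slice components. Names that carry
# a SCSI target ID (tN), which virtual disks exclude, do not match.

_VDISK_RE = re.compile(r"^c(\d+)d(\d+)s(\d+)$")

def parse_vdisk_name(name):
    """Return (controller, disk, slice) for a cNdNsN name, else None."""
    m = _VDISK_RE.match(name)
    if m is None:
        return None
    return tuple(int(x) for x in m.groups())
```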

vds, vdiskserver

Virtual disk server that allows you to export virtual disks to other logical domains.

vdsdev, vdiskserverdevice

Device exported by the virtual disk server. The device can be an entire disk, a slice on a disk, a file, or a disk volume.

vhba

Virtual SCSI host bus adapter (HBA) that supports the Sun Common SCSI Architecture (SCSA) interface.

vnet

Virtual network device that implements a virtual Ethernet device and communicates with other vnet devices in the system using the virtual network switch (vsw).

vsan

Virtual storage area network (SAN) service that exports a set of physical SCSI devices under a specified SCSI HBA initiator port.

vsw, vswitch

Virtual network switch that connects the virtual network devices to the external network and also switches packets between them.

Subcommand Usage

This section contains descriptions of every supported command-line interface (CLI) operation, that is, every subcommand and resource combination.

Add, Set, Remove, and Migrate Domains

Add Domains

The add-domain subcommand adds one or more logical domains by specifying one or more logical domain names or by using an XML configuration file. You can also specify property values to customize the domain, such as the MAC address, the host ID, a list of master domains, and a failure policy. If you do not specify these property values, the Logical Domains Manager automatically assigns default values.

Syntax:

ldm add-domain -i file

ldm add-domain [cpu-arch=generic|native|migration-class1|sparc64-class1] [hostid=num]
  [mac-addr=MAC-address] [failure-policy=ignore|panic|reset|stop] [extended-mapin-space=off]
  [boot-policy=enforce|none|warning] [master=master-ldom1,...,master-ldom4]
  [max-cores=[num|unlimited]] [uuid=uuid] [shutdown-group=num] [rc-add-policy=[iov]]
  [perf-counters=counter-set] [fj-software-limit-pagesize=page-size] domain-name

ldm add-domain domain-name...

    where:

  • -i file specifies the XML configuration file to use in creating the logical domain.

  • cpu-arch=generic|native|migration-class1|sparc64-class1 specifies one of the following values:

    • generic configures a guest domain for a CPU-type-independent migration.

    • native configures a guest domain to migrate only between platforms that have the same CPU type. native is the default value.

    • migration-class1 is a cross-CPU migration family for SPARC platforms starting with the SPARC T4, SPARC M5, and SPARC S7 series servers. These platforms support hardware cryptography during and after such migrations, which sets a lower bound on the supported CPU types.

      Starting with the Oracle VM Server for SPARC 3.6 software, the migration-class1 definition no longer includes support for a 2-Gbyte page size because this page size is not available on SPARC M8 and SPARC T8 series servers.

      So, any migration that uses migration-class1 on a source machine that runs software prior to Oracle VM Server for SPARC 3.6 is blocked if the target machine is a SPARC M8 or SPARC T8 series server that runs at least the Oracle VM Server for SPARC 3.6 software. If the target machine is not a SPARC M8 or SPARC T8 series server, the migration succeeds and the domain continues to have access to 2-Gbyte pages until any subsequent reboot. As part of this post-migration reboot, the domain inherits the new migration-class1 definition and loses access to 2-Gbyte pages.

      This value is not compatible with Fujitsu M10 servers and Fujitsu SPARC M12 servers.

    • sparc64-class1 is a cross-CPU migration family for SPARC64 platforms. The sparc64-class1 value is based on SPARC64 instructions, so it supports a greater number of instructions than the generic value. As a result, the sparc64-class1 value does not incur the performance impact that is associated with the generic value.

      This value is not compatible with Oracle SPARC T-series servers, Oracle SPARC M-series servers, or Oracle S-series servers.

  • boot-policy=enforce|none|warning specifies the verified boot policy. When the value is enforce, boot blocks and kernel modules are verified. Any incorrectly signed boot blocks and modules are not loaded and the guest domain might not be booted. However, when the value is none, no verification is performed and the guest domain boots. The default value is warning, which issues a warning message about any incorrectly signed boot blocks and kernel modules, but continues to load the modules and boot the guest domain.

  • mac-addr=MAC-address is the MAC address for this domain. The number must be in standard octet notation, for example, 80:00:33:55:22:66.

      You can allocate the following types of MAC addresses:

    • Auto-allocated MAC addresses – Uses MAC addresses in the range 00:14:4f:f8:00:00 - 00:14:4f:fb:ff:ff. When you automatically allocate MAC addresses, MAC address collision detection is enabled.

    • User-allocated MAC addresses – Uses MAC addresses outside the range of the auto-allocated MAC addresses. When you use user-allocated MAC addresses, no MAC address collision detection is performed.

  • hostid specifies the host ID for a particular domain. If you do not specify a host ID, the Logical Domains Manager assigns a unique host ID to each domain.

  • failure-policy specifies the failure policy, which controls how slave domains behave when the master domain fails. This property is set on a master domain. The default value is ignore. Following are the valid property values:

    • ignore ignores failures of the master domain (slave domains are unaffected).

    • panic panics any slave domains when the master domain fails (similar to running the ldm panic command).

    • reset stops and restarts any slave domains when the master domain fails (similar to running the ldm stop -f command and then the ldm start command).

    • stop immediately stops any slave domains when the master domain fails (similar to running the ldm stop -f command).

  • extended-mapin-space=off disables the extended mapin space for the specified domain. By default, the extended mapin space is enabled.

  • master specifies the name of up to four master domains for a slave domain. This property is set on a slave domain. By default, there are no masters for the domain. The master domain must exist prior to an ldm add-domain operation.


    Note - The Logical Domains Manager does not permit you to create domain relationships that result in a dependency cycle.
  • rc-add-policy specifies whether to enable or disable the direct I/O and SR-IOV I/O virtualization operations on any root complex that might be added to the specified domain. Valid values are iov and no value (rc-add-policy=). When rc-add-policy=iov, the direct I/O and SR-IOV features are enabled for a root complex that is being added. When rc-add-policy=, the iov property value is cleared to disable the I/O virtualization features for the root complex (unless you explicitly set iov=on by using the add-io command). The default value is no value.

  • perf-counters=counter-set specifies the types of access to grant to the performance counter. If no perf-counters value is specified, the value is htstrand. You can specify the following values for the perf-counters property:

    global

    Grants the domain access to the global performance counters that its allocated resources can access. Only one domain at a time can have access to the global performance counters. You can specify this value alone or with either the strand or htstrand value.

    strand

    Grants the domain access to the strand performance counters that exist on the CPUs that are allocated to the domain. You cannot specify this value and the htstrand value together.

    htstrand

    Behaves the same as the strand value and enables instrumentation of hyperprivilege mode events on the CPUs that are allocated to the domain. You cannot specify this value and the strand value together.

    To disable all access to any of the performance counters, specify perf-counters=.

  • uuid=uuid specifies the universally unique identifier (UUID) for the domain. uuid is a hexadecimal string, such as 12345678-1234-abcd-1234-123456789abc, which consists of five hexadecimal numbers separated by dashes. Each number must have the specified number of hexadecimal digits: 8, 4, 4, 4, and 12, as follows:

    xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
  • max-cores=[num|unlimited] specifies the maximum number of cores that are permitted to be assigned to a domain. If the value is unlimited, there is no constraint on the number of CPU cores that can be allocated.

  • shutdown-group=num specifies the shutdown group number for a domain. This value is used by the SP on a Fujitsu M10 server and a Fujitsu SPARC M12 server when an ordered shutdown is performed.

    When the SP initiates an ordered shutdown, domains are shut down in descending order of their shutdown group number. That is, the domain with the highest number is shut down first, and the domain with the lowest number is shut down last. When more than one domain shares a shutdown group number, the domains shut down concurrently. If a master domain and a slave domain share a shutdown group number, the domains shut down concurrently even though a master-slave relationship exists. Therefore, when establishing a dependency relationship between a master domain and a slave domain, assign a different shutdown group number to each domain.

    Valid values are from 1 to 15. The control domain's shutdown group number is zero (0) and cannot be changed. The default value for any other domain is 15.

    For the new shutdown-group property values to take effect, you must use the ldm add-spconfig command to save the configuration to the SP.

    This property pertains only to the Fujitsu M10 platform and the Fujitsu SPARC M12 platform.

  • fj-software-limit-pagesize specifies the largest page size for a domain. Valid settings for this property are 256 Mbytes (256MB), 2 Gbytes (2GB), or 16 Gbytes (16GB). The fj-software-limit-pagesize property also affects live migration alignment restrictions and the ldmd behavior when the SP invokes the deleteboard command. If no fj-software-limit-pagesize is specified, the platform-specific maximum page size is applied. You can clear the fj-software-limit-pagesize property by setting fj-software-limit-pagesize=.

    This property pertains only to the Fujitsu M10 platform and the Fujitsu SPARC M12 platform.

  • domain-name specifies the logical domain to be added.
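Two of the property value formats described above, the auto-allocated MAC address range and the UUID format, can be checked programmatically. The following sketch is illustrative only and not part of ldm:

```python
import re

# Illustrative sketches (not part of ldm) that check two property
# value formats from the add-domain description above.

# Auto-allocated MAC range: 00:14:4f:f8:00:00 - 00:14:4f:fb:ff:ff.
AUTO_LO = int("00144ff80000", 16)
AUTO_HI = int("00144ffbffff", 16)

def is_auto_allocated(mac):
    """True if a MAC in octet notation falls in the auto-allocated range."""
    value = int(mac.replace(":", ""), 16)
    return AUTO_LO <= value <= AUTO_HI

# UUID format: five hexadecimal groups of 8, 4, 4, 4, and 12 digits.
_UUID_RE = re.compile(r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-"
                      r"[0-9a-f]{4}-[0-9a-f]{12}$")

def is_valid_uuid(uuid):
    """True if the string matches the 8-4-4-4-12 hexadecimal layout."""
    return _UUID_RE.match(uuid.lower()) is not None
```

For example, 80:00:33:55:22:66 (the man page's sample mac-addr value) falls outside the auto-allocated range, so no collision detection would be performed for it.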

Set Options for Domains

The set-domain subcommand enables you to modify properties such as boot-policy, mac-addr, hostid, failure-policy, extended-mapin-space, master, and max-cores for a domain. You cannot use this command to update resources.


Note - If the slave domain is bound, all of its specified master domains must also be bound prior to invoking the ldm set-domain command.

Syntax:

ldm set-domain -i file

ldm set-domain [cpu-arch=generic|native|migration-class1|sparc64-class1] [hostid=num]
  [mac-addr=MAC-address] [failure-policy=ignore|panic|reset|stop]
  [extended-mapin-space=[on|off]] [boot-policy=enforce|none|warning]
  [master=[master-ldom1,...,master-ldom4]] [max-cores=[num|unlimited]] [shutdown-group=num]
  [rc-add-policy=[iov]] [perf-counters=[counter-set]] [fj-software-limit-pagesize=page-size] domain-name

    where:

  • -i file specifies the XML configuration file to use in setting the properties of the logical domain.

    Only the ldom_info nodes specified in the XML file are parsed. Resource nodes, such as vcpu, mau, and memory, are ignored.

    If the hostid property in the XML file is already in use, the ldm set-domain -i command fails with the following error:

    Hostid host-ID is already in use

    Before you re-run the ldm set-domain -i command, remove the hostid entry from the XML file.

  • cpu-arch=generic|native|migration-class1|sparc64-class1 specifies one of the following values:

    • generic configures a guest domain for a CPU-type-independent migration.

    • native configures a guest domain to migrate only between platforms that have the same CPU type. native is the default value.

    • migration-class1 is a cross-CPU migration family for SPARC platforms starting with the SPARC T4, SPARC M5, and SPARC S7 series servers. These platforms support hardware cryptography during and after such migrations, which sets a lower bound on the supported CPU types.

      Starting with the Oracle VM Server for SPARC 3.6 software, the migration-class1 definition no longer includes support for a 2-Gbyte page size because this page size is not available on SPARC M8 and SPARC T8 series servers.

      So, any migration that uses migration-class1 on a source machine that runs software prior to Oracle VM Server for SPARC 3.6 is blocked if the target machine is a SPARC M8 or SPARC T8 series server that runs at least the Oracle VM Server for SPARC 3.6 software. If the target machine is not a SPARC M8 or SPARC T8 series server, the migration succeeds and the domain continues to have access to 2-Gbyte pages until any subsequent reboot. As part of this post-migration reboot, the domain inherits the new migration-class1 definition and loses access to 2-Gbyte pages.

      This value is not compatible with Fujitsu M10 servers and Fujitsu SPARC M12 servers.

    • sparc64-class1 is a cross-CPU migration family for SPARC64 platforms. The sparc64-class1 value is based on SPARC64 instructions, so it supports a greater number of instructions than the generic value. As a result, the sparc64-class1 value does not incur the performance impact that is associated with the generic value.

      This value is not compatible with Oracle SPARC T-series servers, Oracle SPARC M-series servers, or Oracle S-series servers.

  • boot-policy=enforce|none|warning specifies the verified boot policy. When the value is enforce, boot blocks and kernel modules are verified. Any incorrectly signed boot blocks and modules are not loaded and the guest domain might not be booted. However, when the value is none, no verification is performed and the guest domain boots. The default value is warning, which issues a warning message about any incorrectly signed boot blocks and kernel modules, but continues to load the modules and boot the guest domain.

    If the domain is active when you change the boot-policy value, you must reboot the domain to make the change take effect.

  • mac-addr=MAC-address is the MAC address for this domain. The number must be in standard octet notation, for example, 80:00:33:55:22:66.

      You can allocate the following types of MAC addresses:

    • Auto-allocated MAC addresses – Uses MAC addresses in the range 00:14:4f:f8:00:00 - 00:14:4f:fb:ff:ff. When you automatically allocate MAC addresses, MAC address collision detection is enabled.

    • User-allocated MAC addresses – Uses MAC addresses outside the range of the auto-allocated MAC addresses. When you use user-allocated MAC addresses, no MAC address collision detection is performed.

  • hostid specifies the host ID for a particular domain. If you do not specify a host ID, the Logical Domains Manager assigns a unique host ID to each domain.

  • failure-policy specifies the failure policy, which controls how slave domains behave when the master domain fails. This property is set on a master domain. The default value is ignore. Following are the valid property values:

    • ignore ignores failures of the master domain (slave domains are unaffected).

    • panic panics any slave domains when the master domain fails.

    • reset stops and restarts any slave domains when the master domain fails.

    • stop stops any slave domains when the master domain fails.

  • extended-mapin-space enables or disables the extended mapin space for the specified domain. The default value is on; specifying extended-mapin-space= is equivalent to specifying extended-mapin-space=on.

  • master specifies the name of up to four master domains for a slave domain. This property is set on a slave domain. By default, there are no masters for the domain. The master domain must exist prior to this operation.


    Note - The Logical Domains Manager does not permit you to create domain relationships that result in a dependency cycle.
  • rc-add-policy specifies whether to enable or disable the direct I/O and SR-IOV I/O virtualization operations on any root complex that might be added to the specified domain. Valid values are iov and no value (rc-add-policy=). When rc-add-policy=iov, the direct I/O and SR-IOV features are enabled for a root complex that is being added. When rc-add-policy=, the iov property value is cleared to disable the I/O virtualization features for the root complex (unless you explicitly set iov=on by using the add-io command). The default value is no value.

  • perf-counters=counter-set specifies the types of access to grant to the performance counter. You can specify the following values for the perf-counters property:

    global

    Grants the domain access to the global performance counters that its allocated resources can access. Only one domain at a time can have access to the global performance counters. You can specify this value alone or with either the strand or htstrand value.

    strand

    Grants the domain access to the strand performance counters that exist on the CPUs that are allocated to the domain. You cannot specify this value and the htstrand value together.

    htstrand

    Behaves the same as the strand value and enables instrumentation of hyperprivilege mode events on the CPUs that are allocated to the domain. You cannot specify this value and the strand value together.

    To disable all access to any of the performance counters, specify perf-counters=.

  • max-cores=[num|unlimited] specifies the maximum number of cores that are permitted to be assigned to a domain. If the value is unlimited, there is no constraint on the number of CPU cores that can be allocated.

  • shutdown-group=num specifies the shutdown group number for a domain. This value is used by the SP on a Fujitsu M10 server and a Fujitsu SPARC M12 server when an ordered shutdown is performed.

    When the SP initiates an ordered shutdown, domains are shut down in descending order of their shutdown group number. That is, the domain with the highest number is shut down first, and the domain with the lowest number is shut down last. When more than one domain shares a shutdown group number, the domains shut down concurrently. If a master domain and a slave domain share a shutdown group number, the domains shut down concurrently even though a master-slave relationship exists. Therefore, when establishing a dependency relationship between a master domain and a slave domain, assign a different shutdown group number to each domain.

    Valid values are from 1 to 15. The control domain's shutdown group number is zero (0) and cannot be changed. The default value for any other domain is 15.

    For the new shutdown-group property values to take effect, you must use the ldm add-spconfig command to save the configuration to the SP.

    This property pertains only to the Fujitsu M10 platform and the Fujitsu SPARC M12 platform.

  • fj-software-limit-pagesize specifies the largest page size for a domain. Valid settings for this property are 256 Mbytes (256MB), 2 Gbytes (2GB), or 16 Gbytes (16GB). The fj-software-limit-pagesize property also affects live migration alignment restrictions and the ldmd behavior when the SP invokes the deleteboard command. If no fj-software-limit-pagesize is specified, the platform-specific maximum page size is applied. You can clear the fj-software-limit-pagesize property by setting fj-software-limit-pagesize=.

    This property pertains only to the Fujitsu M10 platform and the Fujitsu SPARC M12 platform.

  • domain-name specifies the name of the logical domain for which you want to set options.
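The descending shutdown ordering described for the shutdown-group property can be modeled with a simple sort; this sketch does not invoke ldm, and the domain names and group numbers (ldg-db, ldg-web, ldg-app) are hypothetical:

```shell
#!/bin/sh
# Hypothetical domain:shutdown-group pairs. The control domain is
# always group 0 and is therefore shut down last.
domains='primary:0
ldg-db:10
ldg-web:15
ldg-app:15'

# Domains shut down in descending order of shutdown group number;
# domains that share a number (ldg-web, ldg-app) shut down concurrently.
printf '%s\n' "$domains" | sort -t: -k2,2nr -k1,1
```

The first lines of the output are the group-15 domains, which the SP would shut down first and concurrently.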

Remove Domains

The remove-domain subcommand removes one or more logical domains.

Syntax:

ldm remove-domain -a

ldm remove-domain domain-name...

    where:

  • –a deletes all logical domains except the control domain.

  • domain-name specifies the logical domain to be deleted.

    If the domain to be destroyed is a master domain, references to it are removed from all slave domains.

Migrate Logical Domains

The migrate-domain subcommand migrates a domain from one location to another.

Syntax:

ldm migrate-domain [-f] [-n] [-p filename] [-s [spconfig-name]]
   [mblockmap=phys-addr:phys-addr[,phys-addr:phys-addr,...]] [cidmap=core-ID:core-ID[,core-ID:core-ID,...]]
  source-ldom [user@]target-host[:target-ldom]

ldm migrate-domain -c [-f] [-n] [-s [spconfig-name]]
  [mblockmap=phys-addr:phys-addr[,phys-addr:phys-addr,...]] [cidmap=core-ID:core-ID[,core-ID:core-ID,...]]
  source-ldom target-host[:target-ldom]

    where:

  • –f attempts to force the migration of the domain.

  • –n performs a dry run on the migration to determine whether it will succeed. It does not actually migrate the domain.

  • –p filename reads the password needed on the target machine from the first line of filename. This option enables you to perform non-interactive migrations that do not require you to provide the target machine password at a prompt.

    If you plan to store passwords in this manner, ensure that the file permissions are set so that only the root owner or a privileged user can read or write the file (400 or 600).

    The –p option is deprecated in favor of the –c option.

    This option cannot be used with the –c option.

  • –c uses SSL trusted certificates to perform a domain migration. This option cannot be used with the –p filename option. You cannot specify a user name if you use the –c option.

    To use this option, you must first ensure that certificates are installed and configured on the source and target machines. When the –c option is specified, the source machine does not prompt for a password. The migration request is rejected if the target certificate cannot be verified.

    When the SSL trusted certificates are accessed successfully, they are cached for the lifetime of the ldmd instance. When changing or removing the certificates, you must restart the ldmd daemon to make the changes take effect.

  • –s spconfig-name specifies that you save a new SP configuration on the source machine and target machine following a migration. If you specify this option, it overrides the value of the migration_save_spconfig SMF property.

    Using the –s option with no argument uses the default reserved name to save the configuration on both the source machine and the target machine.

    If you specify the –s option with spconfig-name, a new user SP configuration is created on both the source machine and the target machine with the specified name.

    If the spconfig-name argument matches the name of an existing SP configuration on either the source machine or the target machine, the ldm migrate-domain command rejects the migration request.

    If the target machine runs an older version of the Logical Domains Manager, and you specify the –s option, the ldm migrate-domain command rejects the migration request.

  • mblockmap=phys-addr:phys-addr specifies a mapping of how named memory block resources should be bound on the target machine. You must specify a complete mapping for every memory block in the source domain. This property is valid only for domains that are configured with named memory resources.

    The mblockmap property value is a list of comma-separated physical-address pairs. Each value is separated by the colon character. The first value denotes one of the guest domain's memory blocks on the source machine. The second value denotes the physical address on the target machine to which the memory block should be migrated. If a target physical range is specified more than once, or is already bound to a domain, the migration request is rejected.

    Cross-CPU migrations are blocked if your domain uses named memory resources.

    If you do not use named memory resources, specifying the mblockmap property causes the ldm migrate-domain command to issue an error.

    If you migrate a domain that uses named memory blocks and the mblockmap property is not specified, the physical address on the source machine is implicitly mapped to the same physical address on the target machine.

  • cidmap=core-ID:core-ID specifies a mapping of how named core resources on the source machine should be bound on the target machine. You must specify a complete mapping for every core in the source domain. This property is valid only for domains that are configured with named core resources.

    The cidmap property value is a comma-separated list of core ID pairs. Each core ID pair is separated by the colon character. The first core-ID value denotes one of the guest domain's cores on the source machine. The second core-ID value denotes the target machine core to which the strands from the source machine core should be migrated. If a target core is specified more than once, or is already bound to a domain, the migration request is rejected.

    Cross-CPU migrations are blocked if your domain uses named core resources.

    If you do not use named core resources, specifying the cidmap property causes the ldm migrate-domain command to issue an error.

    If you migrate a domain that uses named cores and the cidmap property is not specified, the core ID on the source machine is implicitly mapped to the same core ID on the target machine.

  • source-ldom is the logical domain that you want to migrate.

  • user is the user name that is authorized to run the Logical Domains Manager on the target host. If no user name is specified, the name of the user running the command is used by default.

  • target-host is the host where you want to place the target-ldom.

  • target-ldom is the logical domain name to be used on the target machine. The default is to keep the domain name used on the source domain (source-ldom).
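The mblockmap and cidmap property values share the same comma-separated, colon-delimited pair syntax. The following sketch decomposes a cidmap value into its source-to-target pairs; the core IDs are hypothetical:

```shell
#!/bin/sh
# Hypothetical cidmap value: source core 4 maps to target core 12,
# and source core 5 maps to target core 13.
cidmap='4:12,5:13'

# Split the comma-separated list into one pair per line, then split
# each pair on the colon into its source and target core IDs.
printf '%s\n' "$cidmap" | tr ',' '\n' |
while IFS=: read -r src dst; do
    echo "source core $src -> target core $dst"
done
```

An mblockmap value decomposes the same way, with physical addresses in place of core IDs.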

Reconfiguration Operations

    Logical Domains supports the following types of reconfiguration operations:

  • Dynamic reconfiguration operations. Dynamic reconfiguration is the ability to add, set, or remove resources to or from an active domain. The ability to perform dynamic reconfiguration of a particular resource type depends on support in the version of the OS running in the logical domain. If a dynamic reconfiguration cannot be performed on the control domain, initiate a delayed reconfiguration operation. Sometimes, the delayed reconfiguration is automatically initiated.

  • Delayed reconfiguration operations. In contrast to dynamic reconfiguration operations, which take place immediately, delayed reconfiguration operations take effect after the next reboot of the OS, or after a stop and start of the logical domain if no OS is running. You manually enter delayed reconfiguration mode on a root domain by running the ldm start-reconf command, for example, ldm start-reconf primary. When you initiate a delayed reconfiguration on a non-primary root domain, you can perform only a limited set of I/O operations (add-io, set-io, remove-io, create-vf, and destroy-vf). Other domains must be stopped before you modify resources that cannot be dynamically configured.

See Resource Reconfiguration in Oracle VM Server for SPARC 3.6 Administration Guide for more information about dynamic reconfiguration and delayed reconfiguration.

CPU Operations

You can allocate either CPU threads or CPU cores to a domain. To allocate CPU threads, use the add-vcpu, set-vcpu, and remove-vcpu subcommands. To allocate CPU cores, use the add-core, set-core, and remove-core subcommands.

Add CPU Threads

The add-vcpu subcommand adds the specified number of CPU threads or CPU cores to a logical domain. Note that a domain cannot be configured simultaneously with CPU cores and CPU threads. CPU core configurations and CPU thread configurations are mutually exclusive.

Syntax:

ldm add-vcpu CPU-count domain-name

    where:

  • CPU-count is the number of CPU threads to be added to the logical domain.

  • domain-name specifies the logical domain where the CPU threads are to be added.

Set CPU Threads

The set-vcpu subcommand specifies the number of CPU threads or CPU cores to be set in a logical domain. Note that a domain cannot be configured simultaneously with CPU cores and CPU threads. CPU core configurations and CPU thread configurations are mutually exclusive.

Syntax:

ldm set-vcpu CPU-count domain-name

    where:

  • CPU-count is the number of CPU threads to be set in the logical domain.

  • domain-name is the logical domain in which the number of CPU threads is to be set.

Remove CPU Threads

The remove-vcpu subcommand removes the specified number of CPU threads or CPU cores from a logical domain. Note that a domain cannot be configured simultaneously with CPU cores and CPU threads. CPU core configurations and CPU thread configurations are mutually exclusive.

Syntax:

ldm remove-vcpu [-f] CPU-count domain-name

    where:

  • –f attempts to force the removal of one or more virtual CPU threads from an active domain.

  • CPU-count is the number of CPU threads to be removed from the logical domain.

  • domain-name specifies the logical domain from which the CPU threads are to be removed.

Add CPU Cores

The add-core subcommand adds the specified number of CPU cores to a domain. When you specify the number of CPU cores, the cores to be assigned are automatically selected. However, when you specify a core-ID value to the cid property, the specified cores are explicitly assigned.

The cid property should only be used by an administrator who is knowledgeable about the topology of the system to be configured. This advanced configuration feature enforces specific allocation rules and might affect the overall performance of the system.


Note - The ldm add-core command fails with the following error message if the domain does not have the whole-core constraint applied:
Must use set-core to enable the whole-core constraint

Note that you cannot use the ldm add-core command to add named core resources to a domain that already uses automatically assigned (anonymous) core resources.

Syntax:

ldm add-core num domain-name

ldm add-core cid=core-ID[,core-ID[,...]] domain-name

    where:

  • num specifies the number of CPU cores to assign to a domain.

  • cid=core-ID[,...] specifies one or more physical CPU cores to assign to a domain.

  • domain-name specifies the domain to which the CPU cores are assigned.

Set CPU Cores

The set-core subcommand specifies the number of CPU cores to assign to a domain. When you specify the number of CPU cores, the cores to be assigned are automatically selected. However, when you specify a core-ID value to the cid property, the specified cores are explicitly assigned.

Syntax:

ldm set-core num domain-name

ldm set-core cid=[core-ID[,core-ID[,...]]] domain-name

    where:

  • num specifies the number of CPU cores to assign to a domain.

  • cid=core-ID[,...] specifies one or more physical CPU cores to assign to a domain. cid= removes all named CPU cores.

  • domain-name specifies the domain to which the CPU cores are assigned.

Remove CPU Cores

The remove-core subcommand specifies the number of CPU cores to remove from a domain. When you specify the number of CPU cores, the cores to be removed are automatically selected. However, when you specify a core-ID value to the cid property, the specified cores are explicitly removed.

When you specify a resource group by using the –g option, the cores that are selected for removal all come from that resource group.

Syntax:

ldm remove-core [-f] num domain-name

ldm remove-core cid=[core-ID[,core-ID[,...]]] domain-name

ldm remove-core -g resource-group [-n number-of-cores] domain-name

    where:

  • –f attempts to force the removal of one or more cores from an active domain.

  • –g resource-group specifies that the operation is performed on the resources in the specified resource group.

  • –n number-of-cores specifies the number of cores to remove. If this option is not specified, all cores are removed from the specified resource group that belongs to the specified domain. This option can be used only when the –g option is specified.

  • num specifies the number of CPU cores to remove from a domain.

  • cid=core-ID[,...] specifies one or more physical CPU cores to remove from a domain.

  • domain-name specifies the domain from which the CPU cores are removed.

Memory Operations

Add Memory

The add-memory subcommand adds the specified amount of memory to a domain. When you specify a memory block size, the memory block to be assigned is automatically selected. However, when you specify a PA-start:size value to the mblock property, the specified memory blocks are explicitly assigned.

The mblock property should only be used by an administrator who is knowledgeable about the topology of the system to be configured. This advanced configuration feature enforces specific allocation rules and might affect the overall performance of the system.

Syntax:

ldm add-memory [--auto-adj] size[unit] domain-name

ldm add-memory mblock=PA-start:size[,PA-start:size[,...]] domain-name

    where:

  • –-auto-adj specifies that the amount of memory to be added to an active domain is automatically 256-Mbyte aligned, which might increase the requested memory size. If the domain is inactive, bound, or in a delayed reconfiguration, this option automatically aligns the resulting size of the domain by rounding up to the next 256-Mbyte boundary.

  • size is the size of memory in bytes to be set in the logical domain.

    If you want a different unit of measurement, specify unit as one of the following values using either uppercase or lowercase:

    • G for gigabytes

    • K for kilobytes

    • M for megabytes

  • mblock=PA-start:size specifies one or more physical memory blocks to assign to a domain. PA-start specifies the starting physical address of the memory block in hexadecimal format. size is the size of the memory block, including a unit, to be assigned to the domain. Note that you cannot use this property to specify the physical addresses of DIMMs.

  • domain-name specifies the logical domain where the memory is to be added.
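The round-up behavior of the –-auto-adj option can be illustrated with shell arithmetic; the 1000-Mbyte request below is a hypothetical example, not a recommended value:

```shell
#!/bin/sh
# Round a requested memory size up to the next 256-Mbyte boundary,
# as --auto-adj does for an inactive or bound domain.
align=256
request=1000   # hypothetical request, in Mbytes

# Integer round-up: add (align - 1), then truncate to a multiple.
aligned=$(( (request + align - 1) / align * align ))
echo "${request}M requested -> ${aligned}M added"
# -> 1000M requested -> 1024M added
```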

Set Memory

The set-memory subcommand sets a specific amount of memory in a domain. Depending on the amount of memory specified, this subcommand is treated as an add-memory or remove-memory operation.

When you specify a memory block size, the memory block to be assigned is automatically selected. However, when you specify a PA-start:size value to the mblock property, the specified memory blocks are explicitly assigned.

Syntax:

ldm set-memory [--auto-adj] size[unit] domain-name

ldm set-memory mblock=PA-start:size[,PA-start:size[,...]] domain-name

    where:

  • –-auto-adj specifies that the amount of memory to be added to or removed from an active domain is automatically 256-Mbyte aligned, which might increase the requested memory size. If the domain is inactive, bound, or in a delayed reconfiguration, this option automatically aligns the resulting size of the domain by rounding up to the next 256-Mbyte boundary.

  • size is the size of memory in bytes to be set in the logical domain.

    If you want a different unit of measurement, specify unit as one of the following values using either uppercase or lowercase:

    • G for gigabytes

    • K for kilobytes

    • M for megabytes

  • mblock=PA-start:size specifies one or more physical memory blocks to assign to a domain. PA-start specifies the starting physical address of the memory block in hexadecimal format. size is the size of the memory block, including a unit, to be assigned to the domain. Note that you cannot use this property to specify the physical addresses of DIMMs.

  • domain-name specifies the logical domain where the memory is to be modified.

Remove Memory

The remove-memory subcommand removes the specified amount of memory from a logical domain. When you specify a memory block size, the memory block to be removed is automatically selected. However, when you specify a PA-start:size value to the mblock property, the specified memory blocks are explicitly removed.

When you specify a resource group by using the –g option, the memory that is selected for removal all comes from that resource group.

Syntax:

ldm remove-memory [--auto-adj] size[unit] domain-name

ldm remove-memory mblock=PA-start:size[,PA-start:size[,...]] domain-name

ldm remove-memory -g resource-group [-s size[unit]] domain-name

    where:

  • –-auto-adj specifies that the amount of memory to be removed from an active domain is automatically 256-Mbyte aligned, which might decrease the requested memory size. If the domain is inactive, bound, or in a delayed reconfiguration, this option automatically aligns the resulting size of the domain by rounding up to the next 256-Mbyte boundary.

  • size is the size of memory in bytes to be set in the logical domain.

    If you want a different unit of measurement, specify unit as one of the following values using either uppercase or lowercase:

    • G for gigabytes

    • K for kilobytes

    • M for megabytes

  • mblock=PA-start:size specifies one or more physical memory blocks to remove from a domain. PA-start specifies the starting physical address of the memory block in hexadecimal format. size is the size of the memory block, including a unit, to be removed from the domain. Note that you cannot use this property to specify the physical addresses of DIMMs.

  • –g resource-group specifies that the operation is performed on the resources in the specified resource group.

  • –s size[unit] specifies the amount of memory to remove. If this option is not specified, the command attempts to remove all memory from the specified resource group that is bound to the specified domain. This option can be used only when the –g option is specified.

  • domain-name specifies the logical domain where the memory is to be removed.
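The PA-start:size syntax used by the mblock property splits cleanly on the colon. A minimal parsing sketch follows; the address and size are hypothetical:

```shell
#!/bin/sh
# Hypothetical mblock value: a 4-Gbyte block starting at
# physical address 0x200000000.
mblock='0x200000000:4G'

# Strip everything after the first colon to get the start address,
# and everything up to it to get the size.
pa_start=${mblock%%:*}
size=${mblock#*:}
echo "start=$pa_start size=$size"
# -> start=0x200000000 size=4G
```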

Enter Delayed Reconfiguration Mode

The start-reconf subcommand enables the domain to enter delayed reconfiguration mode. Only root domains support delayed reconfiguration.


Note - When a non-primary root domain is in a delayed reconfiguration, you can perform only the add-io, set-io, remove-io, create-vf, and destroy-vf operations.

Syntax:

ldm start-reconf domain-name

Cancel a Delayed Reconfiguration Operation

The cancel-reconf subcommand cancels a delayed reconfiguration. Only root domains support delayed reconfiguration.

Syntax:

ldm cancel-reconf domain-name

Cancel Operations

The cancel-operation subcommand cancels a delayed reconfiguration (reconf), memory dynamic reconfiguration removal (memdr), or domain migration (migration) for a logical domain. Only root domains support the reconf operation.

Syntax:

ldm cancel-operation migration domain-name

ldm cancel-operation reconf domain-name

ldm cancel-operation memdr domain-name

Input/Output Devices

Add Input/Output Device

The add-io subcommand attempts to dynamically add a PCIe bus, device, or virtual function to the specified logical domain. If the domain does not support dynamic configuration, the command fails, and you must initiate a delayed reconfiguration or stop the domain before you can add the device.

If you add a root complex to the root domain when iov=off, you cannot successfully use the create-vf, destroy-vf, add-io, or remove-io subcommand to assign direct I/O and SR-IOV devices.

Syntax:

ldm add-io [iov=on|off] bus domain-name

ldm add-io (device | vf-name) domain-name

    where:

  • iov=on|off enables or disables I/O virtualization (direct I/O and SR-IOV) operations on the specified PCIe bus (root complex). When enabled, I/O virtualization is supported for devices in that bus. The ldm add-io command rebinds the specified PCIe bus to the root domain. The default value is off.

    Note that this command fails if the PCIe bus that you want to add is already bound to a domain.

  • bus, device, and vf-name are a PCIe bus, a direct I/O-assignable device, and a PCIe SR-IOV virtual function, respectively. Although the operand can be specified as a device path or as a pseudonym, using the device pseudonym is recommended. The pseudonym is based on the ASCII label that is printed on the chassis to identify the corresponding I/O card slot and is platform specific.

      The following are examples of the pseudonyms that are associated with the device names:

    • PCIe bus. The pci_0 pseudonym matches the pci@400 device path.

    • Direct I/O-assignable device. The /SYS/MB/PCIE1 pseudonym matches the pci@400/pci@0/pci@c device path.

    • PCIe SR-IOV virtual function. The /SYS/MB/NET0/IOVNET.PF0.VF0 pseudonym matches the pci@400/pci@2/pci@0/pci@6/network@0 device path.

  • domain-name specifies the logical domain where the bus or device is to be added.
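The pseudonym-to-device-path correspondence shown in the examples above can be modeled as a simple lookup. The table entries are the example pairs from this section; the lookup function itself is illustrative, and real pseudonyms are platform specific:

```shell
#!/bin/sh
# Map a device pseudonym to its device path, using the example
# pairs from this section.
lookup() {
    case $1 in
        pci_0)                       echo 'pci@400' ;;
        /SYS/MB/PCIE1)               echo 'pci@400/pci@0/pci@c' ;;
        /SYS/MB/NET0/IOVNET.PF0.VF0) echo 'pci@400/pci@2/pci@0/pci@6/network@0' ;;
        *)                           echo 'unknown' ; return 1 ;;
    esac
}

lookup pci_0           # -> pci@400
lookup /SYS/MB/PCIE1   # -> pci@400/pci@0/pci@c
```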

Set a Property for a Virtual Function

The set-io subcommand modifies the current configuration of a virtual function by changing the property values or by passing new properties. This command can modify both the class-specific properties and the device-specific properties.

    You can change most network class-specific properties without requiring a reboot of the root domain. However, to change the mtu and mac-addresses properties of a virtual function that is bound to a domain, you must first stop the domain or initiate a delayed reconfiguration on the root domain.

  • Setting any device-specific property initiates a delayed reconfiguration so that the property can be updated during the attach operation of the physical function device driver. As a result, the root domain must be rebooted.

  • This command only succeeds when the physical function driver can successfully validate the resulting configuration.

    Fibre Channel virtual functions are permitted to have the following types of WWNs:

  • Auto-allocated WWNs: Use values in the range of 00:14:4F:F8:00:00-00:14:4F:FB:FF:FF. The prefix for a port-wwn is 10:00:00:14:4F:FC:00:01. The prefix for a node-wwn is 20:00:00:14:4F:FC:00:01.

    When auto-allocated WWNs are used, collision detection is enabled and the information is not saved in the constraints database or in XML, so no recovery is possible.

  • User-allocated WWNs: Use user-allocated port-wwn and node-wwn values. These values must not fall in the auto-allocated WWN range.

    When user-allocated WWNs are used, no collision detection is performed, but the value is saved for recovery.

Syntax:

ldm set-io property-name=value [property-name=value...] pf-name

ldm set-io [property-name=value...] [name=user-assigned-name] vf-name

ldm set-io iov=on|off bus

ldm set-io [mac-addr=MAC-address] [alt-mac-addrs=[auto|MAC-address,[auto|MAC-address,...]]] 
  [pvid=[pvid]] [vid=[vid1,vid2,...]] [mtu=size] [property-name=value...] net-vf-name

ldm set-io property-name=[value...] ib-pf-name

ldm set-io [bw-percent=[value]] [port-wwn=value node-wwn=value] fc-vf-name

    where:

  • name=user-assigned-name specifies a name that you assign to the virtual function.

  • property-name=value enables you to set a class-specific or device-specific property for the target device. property-name is the name of the class-specific or device-specific property.

  • mac-addr=MAC-address is the MAC address for this device. The number must be in standard octet notation, for example, 80:00:33:55:22:66.

      You can allocate the following types of MAC addresses:

    • Auto-allocated MAC addresses – Uses MAC addresses in the range of 0x00:14:4f:f8:00:00 - 0x00:14:4f:fb:ff:ff. When you automatically allocate MAC addresses, MAC address collision detection is enabled.

    • User-allocated MAC addresses – Uses MAC addresses outside the range of the auto-allocated MAC addresses. When you use user-allocated MAC addresses, no MAC address collision detection is performed.

  • alt-mac-addrs=auto|MAC-address,[auto|MAC-address,...] is a comma-separated list of alternate MAC addresses. Valid values are numeric MAC addresses and the auto keyword, which can be used one or more times to request that the system generate an alternate MAC address. The auto keyword can be mixed with numeric MAC addresses. The numeric MAC address must be in standard octet notation, for example, 80:00:33:55:22:66.

    You cannot change this property value on a virtual network device in a bound domain. You must first stop the domain or initiate a delayed reconfiguration on the root domain.

    You can assign one or more alternate MAC addresses to create one or more virtual NICs (VNICs) on this device. Each VNIC uses one alternate MAC address, so the number of MAC addresses assigned determines the number of VNICs that can be created on this device. If no alternate MAC addresses are specified, attempts to create VNICs on this device fail. For more information, see the Oracle Solaris 11 networking documentation and Chapter 13, Using Virtual Networks in Oracle VM Server for SPARC 3.6 Administration Guide.

  • iov=on|off enables or disables I/O virtualization (direct I/O and SR-IOV) operations on the specified PCIe bus (root complex). When enabled, I/O virtualization is supported for devices in that bus. The default value is off.

    To modify the iov property value, the root complex must be bound to the domain and the domain must be in a delayed reconfiguration.

  • bw-percent=[value] specifies the percentage of the bandwidth to be allocated to the Fibre Channel virtual function. Valid values are from 0 to 100. The total bandwidth value assigned to a Fibre Channel physical function's virtual functions cannot exceed 100. The default value is 0 so that the virtual function gets a fair share of the bandwidth that is not already reserved by other virtual functions that share the same physical function.

  • node-wwn=value specifies the node world-wide name for the Fibre Channel virtual function. Valid values are non-zero. By default, this value is allocated automatically. If you manually specify this value, you must also specify a value for the port-wwn property.

    The IEEE format is a two-byte header followed by an embedded MAC-48 or EUI-48 address that contains the OUI. The first two bytes are either hexadecimal 10:00 or 2x:xx (where x is vendor-specified), followed by the three-byte OUI and a three-byte vendor-specified serial number.

  • port-wwn=value specifies the port world-wide name for the Fibre Channel virtual function. Valid values are non-zero. By default, this value is allocated automatically. If you manually specify this value, you must also specify a value for the node-wwn property.

    The IEEE format is a two-byte header followed by an embedded MAC-48 or EUI-48 address that contains the OUI. The first two bytes are either hexadecimal 10:00 or 2x:xx (where x is vendor-specified), followed by the three-byte OUI and a three-byte vendor-specified serial number.

  • pf-name is the name of the physical function.

  • bus is the name of the PCIe bus.

  • net-vf-name is the name of the network virtual function.

  • ib-pf-name is the name of the InfiniBand physical function.

  • fc-vf-name is the name of the Fibre Channel virtual function.
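Whether a given MAC address falls inside the auto-allocated range (00:14:4f:f8:00:00 - 00:14:4f:fb:ff:ff) can be checked by comparing 48-bit integer values. The sample addresses below are hypothetical:

```shell
#!/bin/sh
# Convert a colon-separated MAC address to a 48-bit integer so it
# can be compared against the auto-allocated range boundaries.
mac_to_int() {
    echo $(( 0x$(echo "$1" | tr -d ':') ))
}

lo=$(mac_to_int 00:14:4f:f8:00:00)
hi=$(mac_to_int 00:14:4f:fb:ff:ff)

for mac in 00:14:4f:fa:12:34 80:00:33:55:22:66; do
    v=$(mac_to_int "$mac")
    if [ "$v" -ge "$lo" ] && [ "$v" -le "$hi" ]; then
        echo "$mac: auto-allocated range"
    else
        echo "$mac: user-allocated range"
    fi
done
```

The first sample address falls inside the auto-allocated range (so collision detection would apply); the second falls outside it.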

Set a Property for a Physical Function

The set-io subcommand modifies the physical function configuration. Only the physical function device-specific properties are supported. Any change to the properties causes a delayed reconfiguration because the properties are applied during the attach operation of the physical function device driver.

The property values must be an integer or a string. Run the ldm list-io -d command to determine the property value type and whether a particular property can be set.

Note that the ldm set-io command succeeds only when the physical function driver successfully validates the resulting configuration.

Syntax:

ldm set-io property-name=value [property-name=value...] pf-name

    where:

  • property-name=value enables you to set a class-specific or device-specific property for the target device. property-name is the name of the class-specific or device-specific property.

  • pf-name is the name of the physical function.

Remove Input/Output Device

The remove-io subcommand removes a PCIe bus, device, or virtual function from a specified domain.

Syntax:

ldm remove-io [-n] (bus | device | vf-name) domain-name

    where:

  • –n performs a dry run of the command to determine whether it will succeed. It does not actually remove the I/O device.

  • bus, device, and vf-name are a PCIe bus, a direct I/O-assignable device, and a PCIe SR-IOV virtual function, respectively. Although the operand can be specified as a device path or as a pseudonym, using the device pseudonym is recommended. The pseudonym is based on the ASCII label that is printed on the chassis to identify the corresponding I/O card slot and is platform specific.

      The following are examples of the pseudonyms that are associated with the device names:

    • PCIe bus. The pci_0 pseudonym matches the pci@400 device path.

    • Direct I/O-assignable device. The /SYS/MB/PCIE1 pseudonym matches the pci@400/pci@0/pci@c device path.

    • PCIe SR-IOV virtual function. The /SYS/MB/NET0/IOVNET.PF0.VF0 pseudonym matches the pci@400/pci@2/pci@0/pci@6/network@0 device path.

    The specified guest domain must be in the inactive or bound state. If you specify the primary domain, this command initiates a delayed reconfiguration.

  • domain-name specifies the logical domain where the bus or device is to be removed.

Virtual Network Server

Add a Virtual Switch

The add-vsw subcommand adds a virtual switch to a specified logical domain.

Syntax:

ldm add-vsw [-q] [default-vlan-id=VLAN-ID] [pvid=port-VLAN-ID] [vid=VLAN-ID1,VLAN-ID2,...] 
  [linkprop=phys-state] [mac-addr=MAC-address] [net-dev=device] [mode=sc] [mtu=size]
  [id=switch-ID] [inter-vnet-link=auto|on|off] [vsw-relay-mode=local|remote] vswitch-name domain-name

    where:

  • –q disables the validation of the path to the network device that is specified by the net-dev property. This option enables the command to run more quickly, especially if the logical domain is not fully configured.

  • default-vlan-id=VLAN-ID specifies the default VLAN to which a virtual switch and its associated virtual network devices implicitly belong, in untagged mode. It serves as the default port VLAN ID (pvid) of the virtual switch and virtual network devices. Without this option, the default value of this property is 1. Normally, you do not need to use this option; it is provided only as a way to change the default value of 1.

  • pvid=port-VLAN-ID specifies the VLAN to which the virtual switch device needs to be a member, in untagged mode. This property also applies to the set-vsw subcommand. See Using VLAN Tagging in Oracle VM Server for SPARC 3.6 Administration Guide.

  • linkprop=phys-state specifies whether the virtual device reports its link status (status, speed, and duplex) based on the underlying physical network device. When linkprop=phys-state is specified on the command line, the virtual device link properties reflect physical link properties. By default, the value is phys-state, which takes effect only if the underlying physical device reports its link status.

  • vid=VLAN-ID specifies one or more VLANs to which a virtual network device or virtual switch needs to be a member, in tagged mode. This property also applies to the set-vsw subcommand. See Using VLAN Tagging in Oracle VM Server for SPARC 3.6 Administration Guide for more information.

  • mac-addr=MAC-address is the MAC address to be used by this switch. The number must be in standard octet notation, for example, 80:00:33:55:22:66. If you do not specify a MAC address, the switch is automatically assigned an address from the range of auto-allocated MAC addresses managed by the Logical Domains Manager.

      You can allocate the following types of MAC addresses:

    • Auto-allocated MAC addresses – Uses MAC addresses in the range 00:14:4f:f8:00:00 through 00:14:4f:fb:ff:ff. When you automatically allocate MAC addresses, MAC address collision detection is enabled.

    • User-allocated MAC addresses – Uses MAC addresses outside the range of the auto-allocated MAC addresses. When you use user-allocated MAC addresses, no MAC address collision detection is performed.

  • net-dev=device is the path to the network device or aggregation over which this switch operates. The system validates that the path references an actual network device unless the –q option is specified.

    When setting this property on a path that includes VLANs, do not use the path name that has any VLAN tags.

  • mode=sc enables virtual networking support for prioritized processing of Oracle Solaris Cluster heartbeat packets in a logical domains environment. Applications like Oracle Solaris Cluster need to ensure that high priority heartbeat packets are not dropped by congested virtual network and switch devices. This option prioritizes Oracle Solaris Cluster heartbeat frames and ensures that they are transferred in a reliable manner.

    You must set this option when running Oracle Solaris Cluster in a logical domains environment and using guest domains as Oracle Solaris Cluster nodes. Do not set this option when you are not running Oracle Solaris Cluster software in guest domains because you could impact virtual network performance.

  • mtu=size specifies the maximum transmission unit (MTU) of a virtual switch device. Valid values are up to 16000 bytes. Ensure that the specified MTU value is within the range supported by the backend device.

  • id=switch-ID is the ID of a new virtual switch device. By default, ID values are generated automatically, so set this property if you need to match an existing device name in the OS.

  • inter-vnet-link=auto|on|off specifies whether to assign a channel between each pair of virtual network devices that are connected to the same virtual switch. This behavior improves guest-to-guest performance.

    When the value is on, inter-vnet LDC channels are assigned. When the value is off, no inter-vnet LDC channels are assigned. When the value is auto, inter-vnet LDC channels are assigned unless the number of virtual networks in a virtual switch has grown beyond eight. When the default number of eight virtual networks is exceeded, the inter-vnet LDC channels are disabled. You can change the default number of virtual networks by modifying the ldmd/auto_inter_vnet_link_limit SMF property value. The default value is auto.

  • vsw-relay-mode=local|remote specifies how to exchange the network traffic between domains. This property value aids in enforcing network policy features, such as access control lists and packet monitoring, that you can configure on the external switch. The external switch must support this capability and must be configured manually.

      The value is one of the following:

    • local enables the network traffic between domains on the same physical NIC to be exchanged internally. This is the default mode.

    • remote enables the network traffic between domains on the same physical NIC to be exchanged through the external switch. You must manually disable the inter-vnet-link property for all the virtual networks that are connected on this switch. If you reset the value of the vsw-relay-mode property to local, you should re-enable the inter-vnet-link property or set the value to auto.

      You can specify vsw-relay-mode=remote only on a physical network device. Other devices such as a virtual NIC, an IOV-enabled SR-IOV device, an aggregation, InfiniBand ports, and IPMP are not supported.

  • vswitch-name is the unique name of the switch that is to be exported as a service. Clients (network) can attach to this service.

  • domain-name specifies the logical domain in which to add a virtual switch.
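
For illustration, the following sketch combines several of these options; the domain, switch, and device names (primary, primary-vsw0, net0) and the property value are assumptions, not values prescribed by this manual:

```shell
# Create a virtual switch in the control domain, backed by the physical
# device net0, as an untagged member of VLAN 21 and a tagged member of
# VLANs 100 and 101.
ldm add-vsw net-dev=net0 pvid=21 vid=100,101 primary-vsw0 primary

# Raising the automatic inter-vnet-link limit above eight is done through
# the ldmd SMF service property mentioned above; this sketch assumes the
# usual svccfg/svcadm workflow for changing an SMF property value.
svccfg -s ldmd setprop ldmd/auto_inter_vnet_link_limit=16
svcadm refresh ldmd
svcadm restart ldmd
```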

Set Options for a Virtual Switch

The set-vsw subcommand modifies the properties of a virtual switch that has already been added.

Syntax:

ldm set-vsw [-q] [pvid=[port-VLAN-ID]] [vid=[[+|-]VLAN-ID1,VLAN-ID2,...]] [mac-addr=MAC-address]
  [net-dev=[device]] [linkprop=[phys-state]] [mode=[sc]] [mtu=[size]]
  [inter-vnet-link=auto|on|off] [vsw-relay-mode=local|remote] vswitch-name

    where:

  • –q disables the validation of the path to the network device that is specified by the net-dev property. This option enables the command to run more quickly, especially if the logical domain is not fully configured.

  • pvid=port-VLAN-ID specifies the VLAN of which the virtual switch device is to be a member, in untagged mode. See Using VLAN Tagging in Oracle VM Server for SPARC 3.6 Administration Guide.

  • [vid=[[+|-]VLAN-ID1,VLAN-ID2,...]] specifies one or more VLANs of which a virtual network device or virtual switch is to be a member, in tagged mode. Use the optional + character to add one or more VLAN IDs to the list. Use the optional - character to remove one or more VLAN IDs from the list. See Using VLAN Tagging in Oracle VM Server for SPARC 3.6 Administration Guide.

  • mac-addr=MAC-address is the MAC address used by the switch. The number must be in standard octet notation, for example, 80:00:33:55:22:66.

      You can allocate the following types of MAC addresses:

    • Auto-allocated MAC addresses – Uses MAC addresses in the range 00:14:4f:f8:00:00 through 00:14:4f:fb:ff:ff. When you automatically allocate MAC addresses, MAC address collision detection is enabled.

    • User-allocated MAC addresses – Uses MAC addresses outside the range of the auto-allocated MAC addresses. When you use user-allocated MAC addresses, no MAC address collision detection is performed.

  • net-dev=device is the path to the network device or aggregation over which this switch operates. The system validates that the path references an actual network device unless the –q option is specified.

    When setting this property on a path that includes VLANs, do not use the path name that has any VLAN tags.

    Note that using the ldm set-vsw command to specify or update the net-dev property value causes the primary domain to enter a delayed reconfiguration.

  • linkprop=phys-state specifies whether the virtual device reports its link status (status, speed, and duplex) based on the underlying physical network device. When linkprop=phys-state is specified on the command line, the virtual device link properties reflect physical link properties. By default, the value is phys-state, which takes effect only if the underlying physical device reports its link status. You can clear the linkprop property value by setting linkprop=.

  • mode=sc enables virtual networking support for prioritized processing of Oracle Solaris Cluster heartbeat packets in a logical domains environment. Applications like Oracle Solaris Cluster need to ensure that high priority heartbeat packets are not dropped by congested virtual network and switch devices. This option prioritizes Oracle Solaris Cluster heartbeat frames and ensures that they are transferred in a reliable manner.

    mode= (left blank) stops special processing of heartbeat packets.

    You must set this option when running Oracle Solaris Cluster in a logical domains environment and using guest domains as Oracle Solaris Cluster nodes. Do not set this option when you are not running Oracle Solaris Cluster software in guest domains because you could impact virtual network performance.

  • mtu=size specifies the maximum transmission unit (MTU) of a virtual switch device. Valid values are up to 16000 bytes. Ensure that the specified MTU value is within the range supported by the backend device.

  • inter-vnet-link=auto|on|off specifies whether to assign a channel between each pair of virtual network devices that are connected to the same virtual switch. This behavior improves guest-to-guest performance.

    When the value is on, inter-vnet LDC channels are assigned. When the value is off, no inter-vnet LDC channels are assigned. When the value is auto, inter-vnet LDC channels are assigned unless the number of virtual networks in a virtual switch has grown beyond eight. When the number of virtual networks exceeds eight, the inter-vnet LDC channels are disabled. The default value is auto.

  • vsw-relay-mode=local|remote specifies how to exchange the network traffic between domains. This property value aids in enforcing network policy features, such as access control lists and packet monitoring, that you can configure on the external switch. The external switch must support this capability and must be configured manually.

      The value is one of the following:

    • local enables the network traffic between domains on the same physical NIC to be exchanged internally. This is the default mode.

    • remote enables the network traffic between domains on the same physical NIC to be exchanged through the external switch. You must manually disable the inter-vnet-link property for all the virtual networks that are connected on this switch. If you reset the value of the vsw-relay-mode property to local, you should re-enable the inter-vnet-link property or set the value to auto.

      You can specify vsw-relay-mode=remote only on a physical network device. Other devices such as a virtual NIC, an IOV-enabled SR-IOV device, an aggregation, InfiniBand ports, and IPMP are not supported.

  • vswitch-name is the unique name of the switch that is to be exported as a service. Clients (network) can attach to this service.
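
As a sketch of the set-vsw options above; the switch name primary-vsw0 and the aggregation name aggr0 are illustrative, not values from this manual:

```shell
# Repoint an existing virtual switch at a link aggregation. Note that
# changing the net-dev property value causes the primary domain to
# enter a delayed reconfiguration, as described above.
ldm set-vsw net-dev=aggr0 primary-vsw0

# Add VLAN 300 to, then remove VLAN 101 from, the tagged VLAN list.
ldm set-vsw vid=+300 primary-vsw0
ldm set-vsw vid=-101 primary-vsw0
```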

Remove a Virtual Switch

The remove-vsw subcommand removes a virtual switch.

Syntax:

ldm remove-vsw [-f] vswitch-name

    where:

  • –f attempts to force the removal of a virtual switch. The removal might fail.

  • vswitch-name is the name of the switch that is to be removed as a service.

Virtual Network – Client

Add a Virtual Network Device

The add-vnet subcommand adds a virtual network device to the specified logical domain.


Note - The ldm add-vnet command fails if the number of virtual networks in the domain exceeds the limit of 999.

Syntax:

ldm add-vnet [mac-addr=MAC-address] [pvid=port-VLAN-ID] [pvlan=secondary-vid,pvlan-type]
  [protection=protection-type[,protection-type],...] [auto-alt-mac-addrs=num]
  [custom=enable|disable] [custom/max-vlans=num] [custom/max-mac-addrs=num]
  [allowed-ips=IP-address[,IP-address]...] [priority=high|medium|low] [cos=0-7]
  [allowed-dhcp-cids=[MAC-address|hostname,MAC-address|hostname,...]]
  [alt-mac-addrs=auto|MAC-address[,auto|MAC-address,...]] [vid=VLAN-ID1,VLAN-ID2,...]
  [linkprop=phys-state] [id=network-ID] [mtu=size] [maxbw=value] if-name vswitch-name
  domain-name

    where:

  • custom=[enable|disable] enables or disables custom settings for the maximum number of VLANs and MAC addresses that can be assigned to a virtual network device from a trusted host. When custom=enable is set, you cannot specify alternate MAC addresses by using the alt-mac-addrs property or VIDs by using the vid property. You cannot set custom=enable if either VLAN IDs or alternate MAC addresses are configured, so to enable custom settings, clear the alt-mac-addrs and vid property values first. You cannot enable custom settings and PVLANs simultaneously. Valid values are enable and disable. The default value is disable.

    Note that you can set custom=enable dynamically, but you must stop the domain to set custom=disable.

  • custom/max-vlans=num specifies the maximum number of VLANs that can be assigned to a virtual network device from a trusted host. The default value is 4096.

    Note that you cannot reduce the value of the custom/max-vlans property dynamically.

  • custom/max-mac-addrs=num specifies the maximum number of MAC addresses that can be assigned to a virtual network device from a trusted host. The default value is 4096.

  • mac-addr=MAC-address is the MAC address for this network device. The number must be in standard octet notation, for example, 80:00:33:55:22:66.

      You can allocate the following types of MAC addresses:

    • Auto-allocated MAC addresses – Uses MAC addresses in the range 00:14:4f:f8:00:00 through 00:14:4f:fb:ff:ff. When you automatically allocate MAC addresses, MAC address collision detection is enabled.

    • User-allocated MAC addresses – Uses MAC addresses outside the range of the auto-allocated MAC addresses. When you use user-allocated MAC addresses, no MAC address collision detection is performed.

  • alt-mac-addrs=auto|MAC-address,[auto|MAC-address,...] is a comma-separated list of alternate MAC addresses. Valid values are numeric MAC addresses and the auto keyword, which can be used one or more times to request that the system generate an alternate MAC address. The auto keyword can be mixed with numeric MAC addresses. The numeric MAC address must be in standard octet notation, for example, 80:00:33:55:22:66.

    You can assign one or more alternate MAC addresses to create one or more virtual NICs (VNICs) on this device. Each VNIC uses one alternate MAC address, so the number of MAC addresses assigned determines the number of VNICs that can be created on this device. If no alternate MAC addresses are specified, attempts to create VNICs on this device fail. For more information, see the Oracle Solaris 11 networking documentation and Chapter 13, Using Virtual Networks in Oracle VM Server for SPARC 3.6 Administration Guide.

  • auto-alt-mac-addrs=num specifies the number of automatic alternate MAC addresses to be configured for a virtual network.

  • pvid=port-VLAN-ID specifies the VLAN of which the virtual network device is to be a member, in untagged mode. See Using VLAN Tagging in Oracle VM Server for SPARC 3.6 Administration Guide.

  • pvlan=secondary-vid,pvlan-type configures a private VLAN (PVLAN). A primary VLAN forwards traffic downstream to its secondary VLANs, which can be either isolated or community. You must also specify the pvid property. The pvlan property specifies a PVLAN's secondary-vid, which is a value from 1–4094, and a pvlan-type, which is one of the following values:

    • isolated – The ports that are associated with an isolated PVLAN are isolated from all of the peer virtual networks and Oracle Solaris virtual NICs on the back-end network device. The packets reach only the external network based on the values you specified for the PVLAN.

    • community – The ports that are associated with a community PVLAN can communicate with other ports that are in the same community PVLAN but are isolated from all other ports. The packets reach the external network based on the values you specified for the PVLAN.

  • [vid=[VLAN-ID1,VLAN-ID2,...]] specifies one or more VLANs of which a virtual network device is to be a member, in tagged mode. See Using VLAN Tagging in Oracle VM Server for SPARC 3.6 Administration Guide.

  • mtu=size specifies the maximum transmission unit (MTU) of a virtual network device. Valid values are up to 16000 bytes. Ensure that the specified MTU value is within the range supported by the backend device.

  • linkprop=phys-state specifies whether the virtual network device reports its link status (status, speed, and duplex) based on the underlying physical network device. When linkprop=phys-state is specified on the command line, the virtual network device link properties reflect physical link properties. By default, the value is phys-state, which takes effect only if the underlying physical device reports its link status.

  • maxbw=value specifies the maximum bandwidth limit for the specified port in megabits per second. This limit ensures that the bandwidth from the external network (specifically the traffic that is directed through the virtual switch) does not exceed the specified value. This bandwidth limit does not apply to the traffic on the inter-vnet links. You can set the bandwidth limit to any high value. The value is ignored when it is higher than the bandwidth supported by the network back-end device.

  • id=network-ID is the ID of a new virtual network device. By default, ID values are generated automatically, so set this property if you need to match an existing device name in the OS.

  • allowed-dhcp-cids=MAC-address|hostname,MAC-address|hostname,...

    Specifies a comma-separated list of MAC addresses or host names. hostname can be a host name or a fully qualified host name with a domain name. This name must begin with an alphabetic character. MAC-address is the numeric MAC address in standard octet notation, for example, 80:00:33:55:22:66. For more information, see dhcp_nospoof.

  • allowed-ips=IP-address[,IP-address,...]

    Specifies a comma-separated list of IP addresses. For more information, see ip_nospoof.

  • cos=0-7

    Specifies the class of service (802.1p) priority that is associated with outbound packets on the link. When this property is set, all outbound packets on the link have a VLAN tag with its priority field set to this property value. Valid values are 0-7, where 7 is the highest class of service and 0 is the lowest class of service. The default value is 0.

  • priority=value

    Specifies the relative priority of the link, which is used for packet processing scheduling within the system. Valid values are high, medium, and low. The default value is medium.

  • protection=protection-type[,protection-type,...]

    Specifies the types of protection (protection-type) in the form of a bit-wise OR of the protection types. By default, no protection types are used. The following values are separated by commas:

    • mac_nospoof enables MAC address anti-spoofing. An outbound packet's source MAC address must match the link's configured MAC address. Non-matching packets are dropped. This value includes datalink MAC configuration protection.

    • ip_nospoof enables IP address anti-spoofing. This protection type works in conjunction with the allowed-ips link property, which specifies one or more IP addresses (IPv4 or IPv6). An outbound IP packet can pass if its source address is specified in the allowed-ips list. An outbound ARP packet can pass if its sender protocol address is in the allowed-ips list. This value includes IP address configuration protection.

    • dhcp_nospoof enables DHCP client ID (CID) and hardware address anti-spoofing. By default, this value enables anti-spoofing for the configured MAC address of the device port node. If the allowed-dhcp-cids property is specified, DHCP anti-spoofing is enabled for the DHCP client IDs for that node.

    • restricted enables packet restriction, which restricts outgoing packet types to only IPv4, IPv6, and ARP packets.

  • if-name is an interface name, unique within the logical domain, that is assigned to this virtual network device instance for reference on subsequent set-vnet or remove-vnet subcommands.

  • vswitch-name is the name of an existing network service (virtual switch) to which to connect.

  • domain-name specifies the logical domain to which to add the virtual network device.
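
A hedged example of the add-vnet options above; the names vnet1, primary-vsw0, and ldg1 are illustrative assumptions:

```shell
# Add a virtual network device to the guest domain ldg1, attached to the
# virtual switch primary-vsw0, untagged on VLAN 21, with two
# system-generated alternate MAC addresses (for creating VNICs in the
# guest) and basic anti-spoofing protection.
ldm add-vnet alt-mac-addrs=auto,auto protection=mac_nospoof,restricted \
    pvid=21 vnet1 primary-vsw0 ldg1
```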

Set Options for a Virtual Network Device

The set-vnet subcommand sets options for a virtual network device in the specified logical domain.

Syntax:

ldm set-vnet [mac-addr=MAC-address] [vswitch=vswitch-name] [mode=] [pvid=[port-VLAN-ID]]
  [pvlan=[secondary-vid,pvlan-type]] [protection=[[+|-]protection-type[,protection-type],...]]
  [allowed-ips=[[+|-]IP-address[,IP-address]...]] [priority=high|medium|low] [cos=0-7]
  [allowed-dhcp-cids=[[+|-]MAC-address|hostname,MAC-address|hostname,...]]
  [alt-mac-addrs=[[+|-]auto|MAC-address[,auto|MAC-address,...]]] [linkprop=[phys-state]] 
  [mtu=[size]] [vid=[[+|-]VLAN-ID1,VLAN-ID2,...]] [auto-alt-mac-addrs=[+]num]
  [custom=enable|disable] [custom/max-vlans=[num]] [custom/max-mac-addrs=[num]]
  [maxbw=[value]] if-name domain-name

    where:

  • custom=[enable|disable] enables or disables custom settings for the maximum number of VLANs and MAC addresses that can be assigned to a virtual network device from a trusted host. When custom=enable is set, you cannot specify alternate MAC addresses by using the alt-mac-addrs property or VIDs by using the vid property. You cannot set custom=enable if either VLAN IDs or alternate MAC addresses are configured, so to enable custom settings, clear the alt-mac-addrs and vid property values first. You cannot enable custom settings and PVLANs simultaneously. Valid values are enable and disable. The default value is disable.

    Note that you can set custom=enable dynamically, but you must stop the domain to set custom=disable.

  • custom/max-vlans=num specifies the maximum number of VLANs that can be assigned to a virtual network device from a trusted host. The default value is 4096.

    Note that you cannot reduce the value of the custom/max-vlans property dynamically.

  • custom/max-mac-addrs=num specifies the maximum number of MAC addresses that can be assigned to a virtual network device from a trusted host. The default value is 4096.

  • mac-addr=MAC-address is the MAC address for this network device. The number must be in standard octet notation, for example, 80:00:33:55:22:66.

      You can allocate the following types of MAC addresses:

    • Auto-allocated MAC addresses – Uses MAC addresses in the range 00:14:4f:f8:00:00 through 00:14:4f:fb:ff:ff. When you automatically allocate MAC addresses, MAC address collision detection is enabled.

    • User-allocated MAC addresses – Uses MAC addresses outside the range of the auto-allocated MAC addresses. When you use user-allocated MAC addresses, no MAC address collision detection is performed.

  • auto-alt-mac-addrs=[+]num specifies the number of automatic alternate MAC addresses to be configured for a virtual network. Use the optional + character to add one or more to the maximum number of alternate MAC addresses.

  • alt-mac-addrs=[[+|-]auto|MAC-address,[auto|MAC-address,...]] is a comma-separated list of alternate MAC addresses. Valid values are numeric MAC addresses and the auto keyword, which can be used one or more times to request that the system generate an alternate MAC address. The auto keyword can be mixed with numeric MAC addresses. The numeric MAC address must be in standard octet notation, for example, 80:00:33:55:22:66.

    You can assign one or more alternate MAC addresses to create one or more virtual NICs (VNICs) on this device. Each VNIC uses one alternate MAC address, so the number of MAC addresses assigned determines the number of VNICs that can be created on this device. If no alternate MAC addresses are specified, attempts to create VNICs on this device fail. Use the optional + character to add one or more alternate MAC addresses to the list. Use the optional - character to remove one or more alternate MAC addresses from the list. For more information, see the Oracle Solaris 11 networking documentation and Chapter 13, Using Virtual Networks in Oracle VM Server for SPARC 3.6 Administration Guide.

  • vswitch=vswitch-name is the name of an existing network service (virtual switch) to which to connect.

  • mode= is the property that was used to enable the deprecated hybrid I/O mode. You can no longer create a hybrid I/O configuration; you can only clear this property's value.

  • pvid=port-VLAN-ID specifies the VLAN of which the virtual network device is to be a member, in untagged mode. See Using VLAN Tagging in Oracle VM Server for SPARC 3.6 Administration Guide.

  • pvlan=secondary-vid,pvlan-type configures a PVLAN. A PVLAN forwards traffic downstream to its secondary VLANs, which can be either isolated or community. You must have at least one pvid specified. The pvlan property specifies a PVLAN's secondary-vid, which is a value from 1–4094, and a pvlan-type, which is one of the following values:

    • isolated – The ports that are associated with an isolated PVLAN are isolated from all of the peer virtual networks and Oracle Solaris virtual NICs on the back-end network device. The packets reach only the external network based on the values you specified for the PVLAN.

    • community – The ports that are associated with a community PVLAN can communicate with other ports that are in the same community PVLAN but are isolated from all other ports. The packets reach the external network based on the values you specified for the PVLAN.

  • linkprop=phys-state specifies whether the virtual device reports its link status (status, speed, and duplex) based on the underlying physical network device. When linkprop=phys-state is specified on the command line, the virtual device link properties reflect physical link properties. By default, the value is phys-state, which takes effect only if the underlying physical device reports its link status. You can clear the linkprop property value by setting linkprop=.

  • [vid=[[+|-]VLAN-ID1,VLAN-ID2,...]] specifies one or more VLANs of which a virtual network device is to be a member, in tagged mode. Use the optional + character to add one or more VLAN IDs to the list. Use the optional - character to remove one or more VLAN IDs from the list. See Using VLAN Tagging in Oracle VM Server for SPARC 3.6 Administration Guide.

  • mtu=size specifies the maximum transmission unit (MTU) of a virtual network device. Valid values are up to 16000 bytes. Ensure that the specified MTU value is within the range supported by the backend device.

  • maxbw=value specifies the maximum bandwidth limit for the specified port in megabits per second. This limit ensures that the bandwidth from the external network (specifically the traffic that is directed through the virtual switch) does not exceed the specified value. This bandwidth limit does not apply to the traffic on the inter-vnet links. You can set the bandwidth limit to any high value. The value is ignored when it is higher than the bandwidth supported by the network back-end device.

  • allowed-dhcp-cids=[[+|-]MAC-address|hostname,MAC-address|hostname,...]

    Specifies a comma-separated list of MAC addresses or host names. hostname can be a host name or a fully qualified host name with a domain name. This name must begin with an alphabetic character. MAC-address is the numeric MAC address in standard octet notation, for example, 80:00:33:55:22:66. Use the optional + character to add one or more MAC addresses or host names to the list. Use the optional - character to remove one or more MAC addresses or host names from the list. For more information, see dhcp_nospoof.

  • allowed-ips=[[+|-]IP-address[,IP-address,...]]

    Specifies a comma-separated list of IP addresses. Use the optional + character to add one or more IP addresses to the list. Use the optional - character to remove one or more IP addresses from the list. For more information, see ip_nospoof.

  • cos=0-7

    Specifies the class of service (802.1p) priority that is associated with outbound packets on the link. When this property is set, all outbound packets on the link have a VLAN tag with its priority field set to this property value. Valid values are 0-7, where 7 is the highest class of service and 0 is the lowest class of service. The default value is 0.

  • priority=value

    Specifies the relative priority of the link, which is used for packet processing scheduling within the system. Valid values are high, medium, and low. The default value is medium.

  • protection=[[+|-]protection-type[,protection-type,...]]

    Specifies the types of protection (protection-type) in the form of a bit-wise OR of the protection types. By default, no protection types are used. Use the optional + character to add one or more protection types to the list. Use the optional - character to remove one or more protection types from the list. The following values are separated by commas:

    • mac_nospoof enables MAC address anti-spoofing. An outbound packet's source MAC address must match the link's configured MAC address. Non-matching packets are dropped. This value includes datalink MAC configuration protection.

    • ip_nospoof enables IP address anti-spoofing. This protection type works in conjunction with the allowed-ips link property, which specifies one or more IP addresses (IPv4 or IPv6). An outbound IP packet can pass if its source address is specified in the allowed-ips list. An outbound ARP packet can pass if its sender protocol address is in the allowed-ips list. This value includes IP address configuration protection.

    • dhcp_nospoof enables DHCP client ID (CID) and hardware address anti-spoofing. By default, this value enables anti-spoofing for the configured MAC address of the device port node. If the allowed-dhcp-cids property is specified, DHCP anti-spoofing is enabled for the DHCP client IDs for that node.

    • restricted enables packet restriction, which restricts outgoing packet types to only IPv4, IPv6, and ARP packets.

  • if-name is the unique interface name assigned to the virtual network device that you want to set.

  • domain-name specifies the logical domain in which to modify the virtual network device.
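
For example (the interface name vnet1 and the domain name ldg1 are assumed, not taken from this manual):

```shell
# Append VLAN 42 to vnet1's tagged VLAN list and cap its bandwidth at
# 2000 Mbps.
ldm set-vnet vid=+42 maxbw=2000 vnet1 ldg1

# Clear the deprecated hybrid mode property value.
ldm set-vnet mode= vnet1 ldg1
```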

Remove a Virtual Network Device

The remove-vnet subcommand removes a virtual network device from the specified logical domain.

Syntax:

ldm remove-vnet [-f] if-name domain-name

    where:

  • –f attempts to force the removal of a virtual network device from a logical domain. The removal might fail.

  • if-name is the unique interface name assigned to the virtual network device that you want to remove.

  • domain-name specifies the logical domain from which to remove the virtual network device.

Virtual Disk – Service

Add a Virtual Disk Server

The add-vds subcommand adds a virtual disk server to the specified logical domain.

Syntax:

ldm add-vds service-name domain-name

    where:

  • service-name is the service name for this instance of the virtual disk server. The service-name must be unique among all virtual disk server instances on the server.

  • domain-name specifies the logical domain in which to add the virtual disk server.

Remove a Virtual Disk Server

The remove-vds subcommand removes a virtual disk server.

Syntax:

ldm remove-vds [-f] service-name

    where:

  • –f attempts to force the removal of a virtual disk server. The removal might fail.

  • service-name is the unique service name for this instance of the virtual disk server.


Caution  - The –f option attempts to unbind all clients before removal, which might cause loss of disk data if writes are in progress.


Add a Device to a Virtual Disk Server

The add-vdsdev subcommand adds a device to a virtual disk server. The device can be an entire disk, a slice on a disk, a file, or a disk volume. See Chapter 11, Using Virtual Disks in Oracle VM Server for SPARC 3.6 Administration Guide.

Syntax:

ldm add-vdsdev [-f] [-q] [options={ro,slice,excl}] [mpgroup=mpgroup] backend
  volume-name@service-name

    where:

  • –f attempts to force the creation of an additional virtual disk server when specifying a block device path that is already part of another virtual disk server. If specified, the –f option must be the first in the argument list.

  • –q disables the validation of the virtual disk back end that is specified by the backend operand. This option enables the command to run more quickly, especially if the logical domain or the back end is not fully configured.

  • options= are as follows:

    • ro – Specifies read-only access

    • slice – Exports a back end as a single slice disk

    • excl – Specifies exclusive disk access

    Omit the options= argument to use the default values: full disk, non-exclusive, and read/write access. If you specify the options= argument, you must specify one or more of the options for a specific virtual disk server device. Separate two or more options with commas and no spaces, such as ro,slice,excl.

  • mpgroup=mpgroup is the disk multipath group name used for virtual disk failover support. You can assign the virtual disk several redundant paths in case the link to the virtual disk server device currently in use fails. To do this, you would group multiple virtual disk server devices (vdsdev) into one multipath group (mpgroup), all having the same mpgroup name. When a virtual disk is bound to any virtual disk server device in a multipath group, the virtual disk is bound to all the virtual disk server devices that belong to the mpgroup.

  • backend is the location where the data of a virtual disk is stored. The back end can be a disk, a disk slice, a file, a volume (including ZFS, Solaris Volume Manager, or VxVM), or any disk pseudo device. The disk label can be SMI VTOC, EFI, or no label at all. A back end appears in a guest domain either as a full disk or as a single slice disk, depending on whether the slice option is set when the back end is exported from the service domain. When adding a device, the volume-name must be paired with the backend. The system validates that the location specified by backend exists and can be used as a virtual disk back end unless the –q option is specified.

  • volume-name is a unique name that you must specify for the device being added to the virtual disk server. The volume-name must be unique for this virtual disk server instance because this name is exported by the virtual disk server to its clients. When adding a device, the volume-name must be paired with the backend.

  • service-name is the name of the virtual disk server to which to add this device.
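
For example, the following commands (with hypothetical device paths and names) export a physical disk as a read/write volume and a disk image file as a read-only volume through a virtual disk server named primary-vds0:

ldm add-vdsdev /dev/dsk/c0t1d0s2 vol1@primary-vds0
ldm add-vdsdev options=ro /export/images/sol.img img1@primary-vds0

To configure multipath failover, the same back end could be exported from two service domains under a common mpgroup name:

ldm add-vdsdev mpgroup=mpgrp1 /dev/dsk/c0t1d0s2 vol1@primary-vds0
ldm add-vdsdev mpgroup=mpgrp1 /dev/dsk/c0t1d0s2 vol1@alternate-vds0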

Set Options for a Virtual Disk Server Device

The set-vdsdev subcommand sets options for a virtual disk server device. See the Oracle VM Server for SPARC 3.6 Administration Guide.

Syntax:

ldm set-vdsdev [-f] options=[{ro,slice,excl}] [mpgroup=mpgroup]
  volume-name@service-name

    where:

  • –f removes the read-only restriction when multiple volumes in the same logical domain are sharing an identical block device path in read-only mode (options=ro). If specified, the –f option must be the first in the argument list.

  • options= are as follows:

    • ro – Specifies read-only access

    • slice – Exports a back end as a single slice disk

    • excl – Specifies exclusive disk access

    Leave the options= argument blank to turn off any previously specified options. You can specify all or a subset of the options for a specific virtual disk server device. Separate two or more options with commas and no spaces, such as ro,slice,excl.

  • mpgroup=mpgroup is the disk multipath group name used for virtual disk failover support. You can assign the virtual disk several redundant paths in case the link to the virtual disk server device currently in use fails. To do this, you would group multiple virtual disk server devices (vdsdev) into one multipath group (mpgroup), all having the same mpgroup name. When a virtual disk is bound to any virtual disk server device in a multipath group, the virtual disk is bound to all the virtual disk server devices that belong to the mpgroup.

  • volume-name is the name of an existing volume exported by the service named by service-name.

  • service-name is the name of the virtual disk server being modified.
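
For example, the following commands (using a hypothetical volume vol1 on the virtual disk server primary-vds0) first restrict the volume to read-only access and then clear all previously set options:

ldm set-vdsdev options=ro vol1@primary-vds0
ldm set-vdsdev options= vol1@primary-vds0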

Remove a Device From a Virtual Disk Server

The remove-vdsdev subcommand removes a device from a virtual disk server.

Syntax:

ldm remove-vdsdev [-f] volume-name@service-name

    where:

  • –f attempts to force the removal of the virtual disk server device. The removal might fail.

  • volume-name is the unique name for the device being removed from the virtual disk server.

  • service-name is the name of the virtual disk server from which to remove this device.



Caution  - Without the –f option, the remove-vdsdev subcommand does not allow a virtual disk server device to be removed if the device is busy. Using the –f option can cause data loss for open files.


Virtual Disk – Client

Add a Virtual Disk

The add-vdisk subcommand adds a virtual disk to the specified logical domain. An optional timeout property specifies how long a virtual disk waits when it cannot establish a connection with the virtual disk server.

    When disk-name is an mpgroup disk, the ldm add-vdisk command does the following:

  • Adds the virtual disk to the specified domain

  • Selects volume-name@service-name as the first path to access the virtual disk

Syntax:

ldm add-vdisk [timeout=seconds] [id=disk-ID] disk-name volume-name@service-name domain-name

    where:

  • timeout=seconds is the number of seconds for establishing a connection between a virtual disk client (vdc) and a virtual disk server (vds). If there are multiple virtual disk (vdisk) paths, then the vdc can try to connect to a different vds, and the timeout ensures that a connection to any vds is established within the specified amount of time.

    Omit the timeout= argument or set timeout=0 to have the virtual disk wait indefinitely.

  • id=disk-ID is the ID of a new virtual disk device. By default, ID values are generated automatically, so set this property only if you need to match an existing device name in the OS.

  • disk-name is the name of the virtual disk.

  • volume-name is the name of the existing virtual disk server device to which to connect.

  • service-name is the name of the existing virtual disk server to which to connect.

  • domain-name specifies the logical domain in which to add the virtual disk.
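
For example, the following command (all names are hypothetical) adds a virtual disk named vdisk1, backed by the volume vol1 on the virtual disk server primary-vds0, to the domain ldg1 with a 30-second connection timeout:

ldm add-vdisk timeout=30 vdisk1 vol1@primary-vds0 ldg1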

Set Options for a Virtual Disk

The set-vdisk subcommand sets options for a virtual disk in the specified logical domain. An optional timeout property specifies how long a virtual disk waits when it cannot establish a connection with the virtual disk server.

Except when used for mpgroup disks, this command can be used only when the domain is bound or inactive.

When disk-name is an mpgroup disk, you can use the ldm set-vdisk command to specify the first path to the virtual disk as the value of the volume property. The path that you specify as the selected path must already belong to the mpgroup.

Dynamic path selection is available when updated virtual disk drivers are running. To determine the version of the Oracle Solaris OS that contains these updated drivers, see Oracle VM Server for SPARC 3.6 Administration Guide.

Dynamic path selection occurs when the first path in an mpgroup disk is changed by using the ldm set-vdisk command to set the volume property to a value in the form volume-name@service-name. Only an active domain that supports dynamic path selection can switch to the selected path. If the updated drivers are not running, this path is selected when the Oracle Solaris OS reloads the disk instance or at the next domain reboot.

Syntax:

ldm set-vdisk [timeout=seconds] [volume=volume-name@service-name] disk-name domain-name

    where:

  • timeout=seconds is the number of seconds for establishing a connection between a virtual disk client (vdc) and a virtual disk server (vds). If there are multiple virtual disk (vdisk) paths, then the vdc can try to connect to a different vds, and the timeout ensures that a connection to any vds is established within the specified amount of time.

    Set timeout=0 to disable the timeout.

    Do not specify a timeout= argument to have the virtual disk wait indefinitely.

  • volume=volume-name@service-name specifies the virtual disk server device (volume-name) and the virtual disk server (service-name) to which to connect.

  • disk-name is the name of the existing virtual disk.

  • domain-name specifies the existing logical domain where the virtual disk was previously added.
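
For example, the following command (with hypothetical names) switches the first path of the mpgroup disk vdisk1 in the domain ldg1 to the volume exported by an alternate service domain:

ldm set-vdisk volume=vol1@alternate-vds0 vdisk1 ldg1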

Remove a Virtual Disk

The remove-vdisk subcommand removes a virtual disk from the specified logical domain.

Syntax:

ldm remove-vdisk [-f] disk-name domain-name

    where:

  • –f attempts to force the removal of the virtual disk. The removal might fail.

  • disk-name is the name of the virtual disk to be removed.

  • domain-name specifies the logical domain from which to remove the virtual disk.

Virtual SCSI Host Bus Adapter – Client

The subcommands described in the following sections affect the vHBA resource, which is a virtual SCSI host bus adapter (HBA) that supports the Sun Common SCSI Architecture (SCSA) interface. A vHBA sends SCSI commands from a SCSI target driver in the client to a SCSI device that is managed by a virtual storage area network (vSAN) service.

Create a Virtual SCSI Host Bus Adapter

The add-vhba subcommand creates a virtual SCSI HBA on the specified logical domain.

Syntax:

ldm add-vhba [id=vHBA-ID] [timeout=seconds] vHBA-name vSAN-name domain-name

    where:

  • id=vHBA-ID specifies the ID of a new virtual SCSI HBA device. By default, ID values are generated automatically, so set this property only if you need to match an existing device name in the OS.

  • timeout=seconds specifies the number of seconds to wait to establish a connection between a virtual SCSI HBA instance and a virtual SAN server. If the timeout expires, outstanding I/O requests are terminated gracefully. Setting timeout=0 or timeout= causes the virtual SCSI HBA instance to wait indefinitely for the virtual SAN service to be usable.

  • vHBA-name specifies the name of the virtual SCSI HBA.

  • vSAN-name specifies the name of the existing virtual SAN server device with which to connect.

  • domain-name specifies the name of the logical domain.
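
For example, the following command (all names are hypothetical) creates a virtual SCSI HBA named vhba1 in the domain ldg1, connected to the virtual SAN vsan1, with a 60-second connection timeout:

ldm add-vhba timeout=60 vhba1 vsan1 ldg1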

Modify a Virtual SCSI Host Bus Adapter

The set-vhba subcommand enables you to specify a timeout value for the virtual SCSI HBA on the specified logical domain.

Syntax:

ldm set-vhba [timeout=seconds] vHBA-name domain-name

    where:

  • timeout=seconds specifies the number of seconds that the specified virtual SCSI HBA instance waits before timing out and failing SCSA commands.

  • vHBA-name specifies the name of the virtual SCSI HBA.

  • domain-name specifies the name of the logical domain.

Rescan a Virtual SCSI Host Bus Adapter

The rescan-vhba subcommand causes the specified virtual SCSI HBA to query the associated virtual SAN for the current set of SCSI devices that are known to the virtual SAN.

Use this subcommand when a SCSI device is created on or removed from the physical SCSI HBA that is managed by the vSAN. The ldm rescan-vhba command synchronizes the set of SCSI devices that are seen by the virtual SCSI HBA and virtual SAN.

Syntax:

ldm rescan-vhba vHBA-name domain-name

Remove a Virtual SCSI Host Bus Adapter

The remove-vhba subcommand removes a virtual SCSI HBA from a logical domain.

Syntax:

ldm remove-vhba vHBA-name domain-name

Virtual SCSI Host Bus Adapter – Service

The subcommands described in the following sections affect the vSAN resource, which is a virtual storage area network (SAN) service that exports the set of physical SCSI devices under a specified SCSI HBA initiator port. The physical devices of the vSAN are accessed from the client through a vHBA device.

    The mask property controls how vSAN instances are created. You can create vSAN instances in the following ways:

  • mask=off. Create a vSAN instance that represents the set of all physical devices that are reachable by the specified initiator port. This method is used by default to create vSAN instances.

  • mask=on. Create a vSAN instance that represents a subset (zero or more) of the physical devices that are reachable by the specified initiator port.

You can use the ldm add-vsan command or the ldm set-vsan command to specify the value of the mask property.

Create a Virtual Storage Area Network

The add-vsan subcommand creates a virtual SAN server on a logical domain. The virtual SAN manages all SCSI devices that are children of the specified SCSI HBA initiator port.

Syntax:

ldm add-vsan [-q] [mask=on|off] iport-path vSAN-name domain-name

    where:

  • –q disables validation of the I/O device in the service domain.

  • mask=on|off specifies how to create a vSAN instance.

      You can create vSAN instances in the following ways:

    • mask=off. Create a vSAN instance that represents the set of all physical devices that are reachable by the specified initiator port. This method is used by default to create vSAN instances.

    • mask=on. Create a vSAN instance that represents a subset (zero or more) of the physical devices that are reachable by the specified initiator port.

  • iport-path specifies the SCSI HBA initiator port to associate with the virtual SAN.

    Use the ldm list-hba command to obtain the iport-path value.

  • vSAN-name specifies the name of the virtual SAN.

  • domain-name specifies the logical domain in which the specified SCSI HBA initiator port resides.
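
For example, the following command (using a hypothetical initiator port path, which you would obtain from the ldm list-hba output) creates a masked virtual SAN named vsan1 in the primary domain:

ldm add-vsan mask=on /SYS/MB/SASHBA0/HBA0/PORT1 vsan1 primary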

Add a Physical Device to a Virtual Storage Area Network

The add-vsan-dev subcommand adds a physical device with the specified WWN to the specified virtual SAN instance.

Run this command for each physical device that you want to add to the virtual SAN instance. Run this command only when the specified virtual SAN instance has its mask property set to on.

Syntax:

ldm add-vsan-dev vSAN-name WWN

    where:

  • vSAN-name specifies the name of the virtual SAN instance.

  • WWN specifies the WWN of the physical device to add to the specified virtual SAN instance.
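
For example, the following command (the vSAN name and WWN are hypothetical) adds a physical device to the masked virtual SAN instance vsan1:

ldm add-vsan-dev vsan1 w50000f3952c0a9c8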

Dynamically Specify the Virtual Storage Area Network Mask

The set-vsan subcommand enables you to specify the value of the mask property. The virtual SAN automatically notifies the virtual SCSI HBA instance when you change the property value.

If you change the mask property value to off, all devices that are reachable by the vSAN's initiator port become members of the virtual SAN.

If you change the mask property value to on, the content of the vSAN's mask property value is reset to remove all physical device identification data. To populate the mask property with identification data, run the ldm add-vsan-dev command for each physical device you want to add to the vSAN.


Note - When you issue the ldm set-vsan command, any I/O commands that are running are terminated gracefully. Subsequent I/O requests to a previously known vSAN member return an error stating that the device is no longer reachable.

Syntax:

ldm set-vsan [mask=on|off] vSAN-name

    where:

  • mask=on|off specifies the device membership mode of the vSAN instance:

    • mask=off – All physical devices that are reachable by the vSAN's initiator port are members of the vSAN.

    • mask=on – Only the physical devices that you explicitly add by using the ldm add-vsan-dev command are members of the vSAN.

  • vSAN-name specifies the name of the virtual SAN instance.

Remove a Virtual Storage Area Network

The remove-vsan subcommand removes a virtual SAN from a logical domain.

Syntax:

ldm remove-vsan vSAN-name

where vSAN-name specifies the name of the virtual SAN.

Remove a Physical Device From a Virtual Storage Area Network

The remove-vsan-dev subcommand removes a physical device with the specified WWN from the virtual SAN.

You can run this command for each physical device that you want to remove from the virtual SAN instance.

Run this command only when the specified virtual SAN instance has its mask property set to on.

Syntax:

ldm remove-vsan-dev vSAN-name WWN

    where:

  • vSAN-name specifies the name of the virtual SAN instance.

  • WWN specifies the WWN of the physical device to remove from the specified virtual SAN instance.

Virtual Console

Add a Virtual Console Concentrator

The add-vcc subcommand adds a virtual console concentrator to the specified logical domain.

Syntax:

ldm add-vcc port-range=x-y vcc-name domain-name

    where:

  • port-range=x-y is the range of TCP ports to be used by the virtual console concentrator for console connections.

  • vcc-name is the name of the virtual console concentrator that is to be added.

  • domain-name specifies the logical domain to which to add the virtual console concentrator.
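
For example, the following command creates a virtual console concentrator named primary-vcc0 (a hypothetical name) in the primary domain, using TCP ports 5000 through 5100 for console connections:

ldm add-vcc port-range=5000-5100 primary-vcc0 primary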

Set Options for a Virtual Console Concentrator

The set-vcc subcommand sets options for a specific virtual console concentrator.

Syntax:

ldm set-vcc port-range=x-y vcc-name

    where:

  • port-range=x-y is the range of TCP ports to be used by the virtual console concentrator for console connections. Any modified port range must encompass all the ports assigned to clients of the concentrator.

  • vcc-name is the name of the virtual console concentrator that is to be set.

Remove a Virtual Console Concentrator

The remove-vcc subcommand removes a virtual console concentrator from the specified logical domain.

Syntax:

ldm remove-vcc [-f] vcc-name

    where:

  • –f attempts to force the removal of the virtual console concentrator. The removal might fail.

  • vcc-name is the name of the virtual console concentrator that is to be removed.



Caution  - The –f option attempts to unbind all clients before removal.


Set Options for a Virtual Console

The set-vcons subcommand assigns a specific port number and group to the virtual console of the specified logical domain. You can also set the virtual console concentrator service that handles the console connection. This subcommand can be used only when a domain is inactive.

Syntax:

ldm set-vcons [port=[port-num]] [group=group] [service=vcc-server] [log=[on|off]] domain-name

    where:

  • port=port-num is the specific port to use for this console. Leave the port-num blank to have the Logical Domains Manager automatically assign the port number.

  • group=group is the new group to which to attach this console. The group argument allows multiple consoles to be multiplexed onto the same TCP connection. Refer to the Oracle Solaris OS vntsd(8) man page for more information about this concept. When a group is specified, a service must also be specified.

  • service=vcc-server is the name for the existing virtual console concentrator that should handle the console connection. A service must be specified when a group is specified.

  • log=[on|off] enables or disables virtual console logging. Valid values are on to enable logging, off to disable logging, and a null value (log=) to reset to the default value. The default value is on.

    Log data is saved to a file called /var/log/vntsd/domain-name/console-log on the service domain that provides the virtual console concentrator service. Console log files are rotated by using the logadm command. See the logadm(8) and logadm.conf(5) man pages.

  • domain-name specifies the logical domain in which to set the virtual console.

    You can enable virtual console logging for any guest domain that runs the Oracle Solaris 10 OS or Oracle Solaris 11 OS. The service domain must run the Oracle Solaris 11.1 OS.
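
For example, the following command (with hypothetical names) assigns port 5001 on the concentrator primary-vcc0 to the console of the inactive domain ldg1 and enables console logging:

ldm set-vcons port=5001 service=primary-vcc0 log=on ldg1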

Physical Functions and Virtual Functions

Virtual Functions

The PCIe single-root I/O virtualization (SR-IOV) standard enables the efficient sharing of PCIe devices among I/O domains. This standard is implemented in the hardware to achieve near-native I/O performance. SR-IOV creates a number of virtual functions that are virtualized instances of the physical device or function. The virtual functions are directly assigned to I/O domains so that they can share the associated physical device and perform I/O without CPU and hypervisor overhead.

PCIe physical functions have complete access to the hardware and provide the SR-IOV capability to create, configure, and manage virtual functions. A PCIe component on the system board or a PCIe plug-in card can provide one or more physical functions. An Oracle Solaris driver interacts with the physical functions that provide access to the SR-IOV features.

PCIe virtual functions contain the resources that are necessary for data movement. An I/O domain that has a virtual function can access the hardware and perform I/O directly by means of an Oracle Solaris virtual function driver. This behavior avoids the overhead and latency that are inherent in the virtual I/O feature by removing any bottlenecks in the communication path between the applications that run in the I/O domain and the physical I/O device in the root domain.

Some of these commands require that you specify an identifier for a physical function or virtual function as follows:

pf-name ::= pf-pseudonym | pf-path
vf-name ::= vf-pseudonym | vf-path

Use the pseudonym form when referring to a corresponding device. This is the form of the name that is shown in the NAME column of the ldm list-io output. When you run the ldm list-io -l command, the path form of the name appears in the output.

The ldm list-io -p output shows the pseudonym form as the value of the alias= token and the path form as the value of the dev= token.

Create a Virtual Function

The create-vf subcommand creates a virtual function from a specified physical function by incrementing the number of virtual functions in the specified physical function by one. The new virtual function is assigned the highest number in the sequence of virtual function numbers.

To dynamically create virtual functions, ensure that you set the iov property for the parent root complex.

Network class virtual functions must have a MAC address; one is assigned by default. To override the default MAC address value, specify another value for the mac-addr property.

You can also set class-specific properties and device-specific properties when you create a virtual function. This command succeeds only when the physical function driver successfully validates the resulting configuration. By default, a new virtual function is not assigned to any domain. After you assign a virtual function to an I/O domain, you cannot create more virtual functions from that physical function. So, plan ahead by determining whether you want to create multiple virtual functions. If you do, create all of the virtual functions that the parent physical function requires before you assign any of them to an I/O domain.

The device-specific properties depend on the properties that are exported by the physical function driver. For more information, use the ldm list-io -d command.

Syntax:

ldm create-vf [-n number | max] [name=user-assigned-name] pf-name

ldm create-vf [property-name=value ...] [name=user-assigned-name] pf-name

ldm create-vf [alt-mac-addrs=[auto|MAC-address,[auto|MAC-address,...]]] [pvid=pvid]
  [mac-addr=MAC-address] [vid=vid1,vid2,...] [mtu=size] [property-name=value...] net-pf-name

ldm create-vf [property-name=value...] ib-pf-name

ldm create-vf [port-wwn=value node-wwn=value] [bw-percent=[value]] fc-pf-name

    where:

  • –n creates number virtual functions. If you specify max instead of number, the maximum number of virtual functions is created for the specified physical function.

  • name=user-assigned-name specifies a name that you assign to the virtual function.

    If you use the –n option and name property together, user-assigned-name is used as a base name for the generated virtual function names. A period and a sequence number are appended to the base name to indicate the virtual function instance.

  • mac-addr=MAC-address is the primary MAC address of the Ethernet virtual function.

  • alt-mac-addrs=auto|MAC-address,[auto|MAC-address,...] is a comma-separated list of alternate MAC addresses for the Ethernet virtual function. Valid values are numeric MAC addresses and the auto keyword, which can be used one or more times to request that the system generate an alternate MAC address. The auto keyword can be mixed with numeric MAC addresses. The numeric MAC address must be in standard octet notation, for example, 80:00:33:55:22:66.

    You can assign one or more alternate MAC addresses to create one or more virtual NIC (VNICs) on this device. Each VNIC uses one alternate MAC address, so the number of MAC addresses assigned determines the number of VNICs that can be created on this device. If no alternate MAC addresses are specified, attempts to create VNICs on this device fail. For more information, see the Oracle Solaris 11 networking documentation and Chapter 13, Using Virtual Networks in Oracle VM Server for SPARC 3.6 Administration Guide.

  • pvid=port-VLAN-ID is the port VLAN ID (no default value) for the Ethernet virtual function.

  • vid=VLAN-ID1,VLAN-ID2... is a comma-separated list of integer VLAN IDs for the Ethernet virtual function.

  • mtu=size is the maximum transmission unit (in bytes) for the Ethernet virtual function.

  • property-name=value enables you to set a class-specific or device-specific property for the target device. property-name is the name of the class-specific or device-specific property.

  • bw-percent=[value] specifies the percentage of the bandwidth to be allocated to the Fibre Channel virtual function. Valid values are from 0 to 100. The total bandwidth value assigned to a Fibre Channel physical function's virtual functions cannot exceed 100. The default value is 0 so that the virtual function gets a fair share of the bandwidth that is not already reserved by other virtual functions that share the same physical function.

  • node-wwn=value specifies the node world-wide name for the Fibre Channel virtual function. Valid values are non-zero. By default, this value is allocated automatically. If you manually specify this value, you must also specify a value for the port-wwn property.

    The IEEE format is a two-byte header followed by an embedded MAC-48 or EUI-48 address that contains the OUI. The first two bytes are either hexadecimal 10:00 or 2x:xx (where x is vendor-specified), followed by the three-byte OUI and a three-byte vendor-specified serial number.

  • port-wwn=value specifies the port world-wide name for the Fibre Channel virtual function. Valid values are non-zero. By default, this value is allocated automatically. If you manually specify this value, you must also specify a value for the node-wwn property.

    The IEEE format is a two-byte header followed by an embedded MAC-48 or EUI-48 address that contains the OUI. The first two bytes are either hexadecimal 10:00 or 2x:xx (where x is vendor-specified), followed by the three-byte OUI and a three-byte vendor-specified serial number.

  • pf-name is the name of the physical function.

  • net-pf-name is the name of the network physical function.

  • ib-pf-name is the name of the InfiniBand physical function.

  • fc-pf-name is the name of the Fibre Channel physical function.
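
For example, the following commands (using hypothetical physical function pseudonyms of the form shown by ldm list-io) create the maximum number of virtual functions on one network physical function, and a single Ethernet virtual function with a specific MAC address on another:

ldm create-vf -n max /SYS/MB/NET0/IOVNET.PF0
ldm create-vf mac-addr=00:14:4f:fb:44:01 /SYS/MB/NET1/IOVNET.PF0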

Destroy a Virtual Function

    The destroy-vf subcommand destroys a virtual function from the specified physical function. This command succeeds only if the following are true:

  • The specified virtual function is not currently assigned to any domain.

  • The specified virtual function is the last virtual function in the corresponding physical function.

  • The resulting configuration is successfully validated by the physical function driver.

A successful operation triggers a delayed reconfiguration because the change to the number of virtual functions can be completed only as part of a reboot. See the create-vf subcommand for more information.

Syntax:

ldm destroy-vf vf-name

ldm destroy-vf -n number | max pf-name

    where:

  • –n destroys number virtual functions. If you specify max instead of number, the maximum number of virtual functions is destroyed for the specified physical function.

  • pf-name is the name of the physical function.

  • vf-name is the name of the virtual function.
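
For example, the following independent commands (with hypothetical function names) destroy a single virtual function, or all virtual functions of a physical function:

ldm destroy-vf /SYS/MB/NET0/IOVNET.PF0.VF0
ldm destroy-vf -n max /SYS/MB/NET0/IOVNET.PF0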

Variables

Add Variable

The add-variable subcommand adds one or more variables for a logical domain.

Syntax:

ldm add-variable var-name=[value]... domain-name

    where:

  • var-name=value is the name-value pair of a variable to add. The value is optional.

  • domain-name specifies the logical domain in which to add the variable.
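
For example, the following command sets the OpenBoot boot-device variable for the domain ldg1 (the variable value and domain name are illustrative):

ldm add-variable boot-device=vdisk1 ldg1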

Set Variable

The set-variable subcommand sets variables for a logical domain.


Note - The ldm add-variable command sets the value if the variable already exists.

Syntax:

ldm set-variable var-name=[value]... domain-name

    where:

  • var-name=value is the name-value pair of a variable to set. The value is optional.

  • domain-name specifies the logical domain in which to set the variable.


Note - Leaving value blank sets var-name to no value.

Remove Variable

The remove-variable subcommand removes a variable for a logical domain.

Syntax:

ldm remove-variable var-name... domain-name

    where:

  • var-name is the name of a variable to remove.

  • domain-name specifies the logical domain from which to remove the variable.

Other Operations

Start Domains

The start-domain subcommand starts one or more logical domains.

Syntax:

ldm start-domain -a

ldm start-domain -i file

ldm start-domain [-f] [-m] domain-name...

    where:

  • –a starts all bound logical domains.

  • –i file specifies an XML configuration file to use in starting the logical domain.

  • –f starts a guest domain even when its root service domain or root service domains are not running.

    Note that the guest domain might not boot successfully if the missing I/O services are required to boot the OS. Also, the guest domain's applications might not function properly if the missing I/O services are required for the applications' operation.

  • –m performs a MAC address collision check on all MAC addresses, including alternate MAC addresses of the virtual networks and virtual functions, that are assigned to the domain. Performing this validation might delay the start of the domain. The ldm start-domain command fails if a MAC address collision is found.

  • domain-name specifies one or more logical domains to start.

Stop Domains

    The stop-domain subcommand stops one or more running domains by doing one of the following:

  • Sending a shutdown request to a domain if it runs the appropriate Logical Domains agent

  • Sending a uadmin request to a domain if the Oracle Solaris OS is booted

By default, the command first attempts to use shutdown to stop the domain. However, if the appropriate Logical Domains agent is not available, the command uses uadmin to stop the domain. See the shutdown(8) and uadmin(8) man pages.

You can change this default behavior by setting the ldmd/default_quick_stop SMF property. You can also specify the amount of time the ldm stop-domain command waits for shutdown to finish by setting the ldmd/shutdown_timeout SMF property. See the ldmd(8) man page.

Syntax:

ldm stop-domain [[-f | -q] | [[-h | -r | -t sec] [-m msg]]] (-a | domain-name...)

    where:

  • –a stops all running logical domains except the control domain.

  • –f attempts to force a running logical domain to stop. Use only if the domain cannot be stopped by any other means. This option is mutually exclusive with the –h, –q, –r, and –t options.

  • –h uses the shutdown command to halt the operating system. This option does not fall back to using the uadmin command. This option is mutually exclusive with the –f, –q, –r, and –t options.

  • –m msg specifies the message to send to the domains to be shut down or rebooted.

    Enclose msg within single or double quotation marks if the string contains white space. If you do not specify the –m msg option, the command issues the Graceful shutdown requested by the domain manager message. This option is incompatible with the –f and –q options.

  • –q issues a quick stop of the specified domain. This option is mutually exclusive with the –f, –h, –r, and –t options.

  • –r uses the shutdown command to stop and reboot the operating system. This option is mutually exclusive with the –f, –h, –q, and –t options.

  • –t sec waits at least sec seconds at the end of the domain shutdown sequence before reissuing the command with the –q option to shut down any specified domains that are still running. If the domain does not stop after the timeout expires, the command issues the Graceful shutdown timeout exhausted message for domain-name and then retries with the –q option. The command is reissued only if the domain shutdown request does not complete in time. sec must be a value greater than 0. This option is mutually exclusive with the –f, –h, –q, and –r options.

    Note that if the shutdown request cannot be performed for a particular domain, the command immediately falls back to the –q option for that domain.

  • domain-name specifies one or more running logical domains to stop.

To perform a graceful Oracle Solaris shutdown on a domain that is not running the supporting Logical Domains agent version, perform a shutdown or init operation in the domain itself. See the init(8) man page. The ldm stop-domain -h command executes a graceful shutdown and reports an error for any domain on which a graceful shutdown is not available.
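
For example, the following commands (domain names are hypothetical) gracefully reboot one domain with a custom message, and stop another with a 120-second grace period before falling back to a quick stop:

ldm stop-domain -r -m "Scheduled maintenance reboot" ldg1
ldm stop-domain -t 120 ldg2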

Panic Oracle Solaris OS

The panic-domain subcommand panics the Oracle Solaris OS on a specified logical domain, which provides a back trace and crash dump if you configure the Oracle Solaris OS to do that. The dumpadm(8) command provides the means to configure the crash dump.

Syntax:

ldm panic-domain domain-name

Where domain-name specifies the logical domain to panic.
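
For example, to panic a hypothetical guest domain named ldg1 that has stopped responding, you might run the following command. The resulting crash dump is written in the guest domain according to its dumpadm(8) configuration.

primary# ldm panic-domain ldg1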

Provide Help Information

The ldm --help command provides usage for all subcommands or the subcommand that you specify. You can also use the ldm command alone to provide usage for all subcommands.

Syntax:

ldm --help [subcommand]

subcommand specifies the ldm subcommand about which you want usage information.

Provide Version Information

The ldm --version command provides version information.

Syntax:

ldm --version

ldm -V

Bind Resources to a Domain

The bind-domain subcommand binds, or attaches, configured resources to a logical domain.

Syntax:

ldm bind-domain [-f] [-q] -i file

ldm bind-domain [-f] [-q] domain-name

    where:

  • –f attempts to force the binding of the domain even if invalid network or disk back-end devices are detected.

  • –q disables the validation of network or disk back-end devices so that the command runs more quickly.

  • –i file specifies an XML configuration file to use in binding the logical domain.

  • domain-name specifies the logical domain to which to bind resources.
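
For example, the following hypothetical commands bind a domain by name, bind a second domain while skipping back-end validation, and bind a third domain from an XML file (the domain names and file path are placeholders):

primary# ldm bind-domain ldg1
primary# ldm bind-domain -q ldg2
primary# ldm bind-domain -i /var/tmp/ldg3.xml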

Unbind Resources From a Domain

The unbind-domain subcommand releases resources bound to configured logical domains.

Syntax:

ldm unbind-domain domain-name

domain-name specifies the logical domain from which to unbind resources.

SP Configuration Operations

Add an SP Configuration

The add-spconfig subcommand adds an SP configuration, either based on the currently active configuration or on a previously autosaved configuration. The configuration is stored on the SP.

Syntax:

ldm add-spconfig config-name

ldm add-spconfig -r autosave-name [new-config-name]

    where:

  • config-name is the name of the SP configuration to add.

  • –r autosave-name applies the autosave configuration data to one of the following:

    • Configuration on the SP that has the same name

    • Newly created configuration, new-config-name, which does not exist on the SP

    If the target configuration does not exist on the SP, a configuration of that name is created and saved to the SP based on the contents of the corresponding autosave configuration. After the autosave configuration data is applied, those autosave files are deleted from the control domain. If autosave-name does not represent the currently selected configuration, or if new-config-name is specified, the state of the current configuration on the SP and any autosave files for it on the control domain are unaffected.

    This option updates the specified configuration based on the autosave information. Note that you must perform a power cycle after using this command to instantiate the updated configuration.

  • new-config-name is the name of the new SP configuration to create from the corresponding autosave configuration data.
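
For example, the following hypothetical sequence (the configuration names are placeholders) saves the currently active configuration as daily-config, later reapplies the autosave data to that same configuration, and finally creates a new configuration named recovered-config from the autosave data:

primary# ldm add-spconfig daily-config
primary# ldm add-spconfig -r daily-config
primary# ldm add-spconfig -r daily-config recovered-config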

Set an SP Configuration

The set-spconfig subcommand enables you to specify the SP configuration to use at the next system power cycle. The configuration is stored on the SP.

Syntax:

ldm set-spconfig config-name

config-name is the name of the SP configuration to use.

The default configuration name is factory-default. To specify the default configuration, use the following:

primary# ldm set-spconfig factory-default

Remove an SP Configuration

The remove-spconfig subcommand removes an SP configuration that is stored on the SP, as well as any corresponding autosave configuration from the control domain.

Syntax:

ldm remove-spconfig [-r] config-name

    where:

  • –r removes only the corresponding autosave configuration from the control domain; the configuration stored on the SP is not removed.

  • config-name is the name of the SP configuration to remove.

List SP Configurations

The ldm list-spconfig command lists the SP configurations that are stored on the SP, as well as the SP configurations that have autosave files on the control domain.

Syntax:

ldm list-spconfig [-r] [config-name]

    where:

  • –r lists those SP configurations that have autosave files on the control domain.

    Specifying the –r option with config-name tests whether the specified SP configuration with autosave files exists. If you do not specify config-name, the ldm list-spconfig command lists any SP configurations that have autosave files on the control domain.


    Note - When a delayed reconfiguration is pending, the SP configuration changes are immediately autosaved to the control domain. As a result, if you run the ldm list-spconfig -r command, the SP configuration with autosave files is shown as being newer than the current SP configuration.
  • config-name specifies the name of an SP configuration. This optional operand enables you to test for the existence of an SP configuration or of an SP configuration that has autosave files. The command exits with 0 if the configuration or the configuration with autosave files exists. The command exits with 1 if the configuration or the configuration with autosave files does not exist.
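
Because the exit status indicates existence, you can test for a configuration from a shell script. The following sketch uses a placeholder configuration name:

primary# ldm list-spconfig daily-config && echo "exists" || echo "does not exist"
primary# ldm list-spconfig -r daily-config && echo "has autosave files"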

Command History

Use the ldm list-history command to view the Oracle VM Server for SPARC command history log. This log captures ldm commands as well as commands that are issued through the XMPP interface. By default, the ldm list-history command shows the last ten commands.

To change the number of commands output by the ldm list-history command, use the ldm set-logctl command to set the history property value. If you set history=0, the saving of command history is disabled. You can re-enable this feature by setting the history property to a non-zero value.

Syntax:

ldm list-history

Logging Operations

Oracle VM Server for SPARC logs messages to its standard log, /var/svc/log/ldoms-ldmd:default.log.

Control Logging Operations

The set-logctl subcommand specifies the fine-grained logging characteristics that control the messages written to the log. Note that you cannot disable the logging of fatal or warning messages.

Syntax:

ldm set-logctl [cmd=[on|off|resp]] [debug=[on|off]] [history=num] [info=[on|off]] [notice=[on|off]] [defaults]

    where:

  • cmd=[on|off|resp] specifies how to treat command messages. Specify the on value to log commands, the off value to disable command logging, or the resp value to log each command together with its response.

  • debug=[on|off] specifies how to treat debug messages. Specify the on value to enable or the off value to disable these messages.

  • info=[on|off] specifies how to treat informational messages. Specify the on value to enable or the off value to disable these messages.

  • notice=[on|off] specifies how to treat messages that indicate that an event requires user attention. Specify the on value to enable or the off value to disable these messages.

  • history=num specifies the number of commands output by the ldm list-history command. Setting the value to 0 disables the saving of command history.

  • defaults resets the logging capabilities to the default values.
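
For example, the following commands log each command together with its response, extend the history shown by the ldm list-history command to 25 commands, and then restore the default logging behavior:

primary# ldm set-logctl cmd=resp history=25
primary# ldm set-logctl defaults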

View Logging Capabilities

The list-logctl subcommand shows you the current behavior of the logging types. When no option is specified, the output shows the logging capability values for all logging types.

Syntax:

ldm list-logctl [-a] [-d] [logging-type...]

    where:

  • –a shows the logging capability values for all logging types and the number of commands output by the ldm list-history command.

  • –d shows the default logging capability values for the logging types.

  • logging-type specifies one or more of the following logging types:

    fatal

    Fatal error condition (always logged)

    warning

    Event requiring user attention (always logged)

    notice

    Event that might require user attention

    info

    Informational message only

    cmd

    CLI/XML command line logging

    debug

    Debug messages

List Operations

Flags in list Subcommand Output

The following flags can be shown in the output for a domain (ldm list). If you use the long, parseable options (–l –p) for the command, the flags are spelled out; for example, flags=normal,control,vio-service. Otherwise, you see the letter abbreviation; for example, -n-cv-. The list flag values are position-dependent. Following are the values that can appear in each of the six columns from left to right.

    Column 1 – Starting or stopping domains

  • s starting or stopping

    Column 2 – Domain status

  • n normal

  • t transition

  • d degraded domain that cannot be started due to missing resources

    Column 3 – Reconfiguration status

  • d delayed reconfiguration

  • r memory dynamic reconfiguration

    Column 4 – Control domain

  • c control domain

    Column 5 – Service domain

  • v virtual I/O service domain

    Column 6 – Migration status

  • s source domain in a migration

  • t target domain in a migration

  • e error occurred during a migration
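
As a sketch of how these positions combine, the following shell function (decode_flags is a hypothetical helper, not part of ldm) decodes the second, fourth, and fifth columns of a short-form flags string such as -n-cv- into the spelled-out names used by the parseable output:

```shell
# Hypothetical helper (not part of ldm): decode the position-dependent
# columns of a short-form flags string into the spelled-out names that
# the parseable output uses.
decode_flags() {
  f="$1"
  out=""
  case "$(echo "$f" | cut -c2)" in      # column 2: domain status
    n) out="$out normal" ;;
    t) out="$out transition" ;;
    d) out="$out degraded" ;;
  esac
  case "$(echo "$f" | cut -c4)" in      # column 4: control domain
    c) out="$out control" ;;
  esac
  case "$(echo "$f" | cut -c5)" in      # column 5: service domain
    v) out="$out vio-service" ;;
  esac
  echo "$out" | sed 's/^ //; s/ /,/g'
}

decode_flags "-n-cv-"   # prints normal,control,vio-service
```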

List Domains and States

The list-domain subcommand lists logical domains and their states. If you do not specify a logical domain, all logical domains are listed.

Syntax:

ldm list-domain [-e] [-l] [-o format] [-p] [-S] [domain-name...]

    where:

  • –e generates an extended listing containing services and devices that are automatically set up, that is, not under your control.

  • –l generates a long listing.

  • –o limits the output format to one or more of the following subsets. If you specify more than one format, delimit each format by a comma with no spaces.

    • cmi – Output shows information about CMI devices, which includes the associated virtual CPUs and cores that are bound to the domain.

    • console – Output shows the virtual console (vcons) and virtual console concentrator (vcc) service.

    • core – Output shows information about cores: the core ID and the physical CPU set.

    • cpu – Output shows information about the CPU thread (vcpu), physical CPU (pcpu), and core ID (cid).

    • disk – Output shows the virtual disk (vdisk) and virtual disk server (vds).

    • domain – Output shows the variables (var), host ID (hostid), domain state, flags, universally unique identifier (UUID), software state, utilization percentage, normalized utilization percentage, a slave's master domains, and the master domain's failure policy.

    • hba – Output shows the virtual SCSI HBA, the virtual SAN (vSAN), and the domain of the virtual SAN.

    • memory – Output shows memory.

    • network – Output shows the media access control (mac) address, virtual network switch (vsw), and virtual network (vnet) device.

    • physio – Physical I/O output shows the peripheral component interconnect (pci) and network interface unit (niu).

    • resmgmt – Output shows resource management policy information, indicates which policy is currently running, and indicates whether the whole-core and max-core constraints are enabled.

    • san – Output shows the name of the virtual SAN and the device path of the SCSI HBA initiator port with which the virtual SAN is associated.

    • serial – Output shows the virtual logical domain channel (vldc) service and the virtual logical domain channel client (vldcc).

    • status – Output shows the status of a migrating domain and a memory dynamic reconfiguration operation.

      You can use the –o status option to show the status of any migration operations or DR operations that are in progress. This information is derived from the flags in the FLAGS field. The –o status option does not relate to the STATE field.

  • –p generates the list in a parseable, machine-readable format.

  • –S generates status information about CPU-related and memory-related resources. Status values are ok to indicate that the resource is operating normally and fail to indicate that the resource is faulty.

    This status is only determined for CPU and memory resources on the Fujitsu M10 platform and the Fujitsu SPARC M12 platform. On all other platforms, the status field is only shown in parseable output when the –p option is used. The status on these platforms is always shown as status=NA.

  • domain-name is the name of the logical domain for which to list state information.
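
For example, the following hypothetical commands (ldg1 is a placeholder domain name) limit the output to the domain and status subsets for one domain, and produce a long listing of all domains in parseable form:

primary# ldm list-domain -o domain,status ldg1
primary# ldm list-domain -l -p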

List Bindings for Domains

The list-bindings subcommand lists bindings for logical domains. If no logical domains are specified, all logical domains are listed.

If you specify the name of a domain, any alternate MAC addresses for a virtual network device are shown below the MAC address of that virtual network device. The following command shows the three alternate MAC addresses for vnet1 on the ldg1 domain:

primary# ldm list-bindings ldg1
...
NETWORK
NAME  SERVICE              ID DEVICE    MAC               MODE PVID VID MTU LINKPROP
vnet1 primary-vsw0@primary 0  network@0 00:14:4f:f8:0c:80      1        1500
				00:14:4f:fa:3a:f9
				00:14:4f:f9:06:ab
				00:14:4f:fb:3d:af

PEER                 MAC               MODE PVID VID MTU LINKPROP
primary-vsw0@primary 00:14:4f:fa:94:60      1        1500
vnet2@ldg2           00:14:4f:f9:38:d1      1        1500
vnet3@ldg3           00:14:4f:fa:60:27      1        1500
vnet4@ldg4           00:14:4f:f8:0f:41      1        1500
...

The following command shows the three alternate MAC addresses for vnet1 on the ldg1 domain in parseable output:

primary# ldm list-bindings -p ldg1
...
VNET|name=vnet1|dev=network@0|service=primary-vsw0@primary|mac-addr=00:14:4f:f8:0c:80
|mode=|pvid=1|vid=|mtu=1500|linkprop=|id=0
|alt-mac-addr=00:14:4f:fa:3a:f9,00:14:4f:f9:06:ab,00:14:4f:fb:3d:af
|peer=primary-vsw0@primary|mac-addr=00:14:4f:fa:94:60|mode=|pvid=1|vid=|mtu=1500
|peer=vnet2@ldg2|mac-addr=00:14:4f:f9:38:d1|mode=|pvid=1|vid=|mtu=1500|linkprop=
|peer=vnet3@ldg3|mac-addr=00:14:4f:fa:60:27|mode=|pvid=1|vid=|mtu=1500|linkprop=
|peer=vnet4@ldg4|mac-addr=00:14:4f:f8:0f:41|mode=|pvid=1|vid=|mtu=1500|linkprop=
...
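
The parseable form is convenient for scripting. The following sketch extracts the alt-mac-addr field from one such record with awk; the sample line is abbreviated from the output above:

```shell
# Pull the alternate MAC addresses out of a pipe-delimited
# "ldm list-bindings -p" record.
line='VNET|name=vnet1|dev=network@0|alt-mac-addr=00:14:4f:fa:3a:f9,00:14:4f:f9:06:ab'
echo "$line" | awk -F'|' '{
  for (i = 1; i <= NF; i++)
    if (index($i, "alt-mac-addr=") == 1)
      print substr($i, 14)   # skip the 13-character "alt-mac-addr=" prefix
}'
# prints 00:14:4f:fa:3a:f9,00:14:4f:f9:06:ab
```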

    The ldm list-bindings command shows the following information about mpgroup disks:

  • The STATE column shows one of the following states for each mpgroup path:

    • active indicates the current active path of the mpgroup

    • standby indicates that the path is not currently used

    • unknown indicates that the disk is unattached or in the midst of changing state, or that the specified domain does not run an OS that supports dynamic path selection

  • The paths are listed in the order in which the driver chooses the active path (the first path listed is chosen first)

  • The volume that is associated with the disk is the selected mpgroup path and is listed first

In the following example, the selected path is vol-ldg2@opath-ldg2. The ldm list-bindings output shows that the active path is vol-ldg1@opath-vds instead of the selected path. This situation might occur if the selected path failed for some reason and the driver chose the second path from the list to be active. Even if your selected path becomes available in the meantime, the driver-chosen path continues as the active path. To make the first path active again, reissue the ldm set-vdisk command to set the volume property to the name of the path you want, vol-ldg1@opath-vds.

primary# ldm list-bindings
DISK
NAME       VOLUME                 TOUT ID DEVICE SERVER  MPGROUP
disk       disk-ldg4@primary-vds0      0  disk@0 primary
tdiskgroup vol-ldg2@opath-ldg2         1  disk@1 ldg2    testdiskgroup
PORT MPGROUP VOLUME        MPGROUP SERVER STATE
2    vol-ldg2@opath-ldg2   ldg2           standby
0    vol-ldg1@opath-vds    ldg1           active
1    vol-prim@primary-vds0 primary        standby

Syntax:

ldm list-bindings [-e] [-o [network|net]] [-p] [domain-name...]

    where:

  • –e generates an extended listing containing services and devices that are automatically set up, that is, not under your control.

  • –o [network|net] generates output for the virtual network configuration, including virtual switches and virtual networks.

  • –p generates the list in a parseable, machine-readable format.

  • domain-name is the name of the logical domain for which you want binding information.

List Services for Domains

The list-services subcommand lists all the services exported by logical domains. If no logical domains are specified, all logical domains are listed.

Syntax:

ldm list-services [-e] [-p] [domain-name...]

    where:

  • –e generates an extended listing containing services and devices that are automatically set up, that is, not under your control.

  • –p generates the list in a parseable, machine-readable format.

  • domain-name is the name of the logical domain for which you want services information.

List Constraints for Domains

The list-constraints subcommand lists the constraints for the creation of one or more logical domains. If no logical domains are specified, all logical domains are listed.

Any resource that has been evacuated from the physical domain by a recovery mode operation has an asterisk (*) in front of its resource identifier.

Syntax:

ldm list-constraints [-x] [domain-name...]

ldm list-constraints [-e] [-p] [domain-name...]

    where:

  • –x writes the constraint output in XML format to the standard output (stdout). This output can be used as a backup.

  • domain-name is the name of the logical domain for which you want to list constraints.

  • –e generates an extended listing containing services and devices that are automatically set up, that is, not under your control.

  • –p writes the constraint output in a parseable, machine-readable form.
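
For example, the XML output can serve as a domain backup. The following hypothetical commands (the domain name and file path are placeholders) save a domain's constraints to a file and later re-create the domain from that file by using the add-domain subcommand:

primary# ldm list-constraints -x ldg1 > /var/tmp/ldg1.xml
primary# ldm add-domain -i /var/tmp/ldg1.xml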

List CPU Core Activation Information

The list-permits subcommand lists CPU core activation information on the Fujitsu M10 platform and the Fujitsu SPARC M12 platform. The PERMITS column shows the total number of CPU core activations that have been issued. This total includes all permanent CPU core activations and pay-per-use CPU core activations. A permanent CPU core activation is a permit for a resource that can be used for an unlimited amount of time. A pay-per-use CPU core activation is a permit for a resource that can be used for a limited amount of time. The number of issued permanent CPU core activations is shown in the PERMANENT column. The IN USE column shows the number of issued CPU core activations that are in use. The REST column shows the number of CPU core activations that are available for use.

Syntax:

ldm list-permits

List Devices

The list-devices subcommand lists either free (unbound) resources or all server resources. The default is to list all free resources.

Syntax:

ldm list-devices [-a] [-B] [-p] [-S] [cmi] [core] [cpu] [io] [memory]

    where:

  • –a lists all server resources, bound and unbound.

  • –B generates blacklisted and evacuation-pending core and memory resource information.

  • –p writes the output in a parseable, machine-readable form.

  • –S generates status information about CPU-related and memory-related resources. Status values are ok to indicate that the resource is operating normally and fail to indicate that the resource is faulty.

    This status is only determined for CPU and memory resources on the Fujitsu M10 platform and the Fujitsu SPARC M12 platform. On all other platforms, the status field is only shown in parseable output when the –p option is used. The status on these platforms is always shown as status=NA.

  • cmi lists information about CMI devices, which includes any unallocated virtual CPUs and cores that are associated with those devices.

  • core lists information about cores, the core ID and physical CPU set, and specifies which CPUs in the core are still unallocated.

  • cpu lists CPU thread and physical CPU resources.

  • memory lists only memory resources.

  • io lists only input/output resources, such as a PCI bus, a network, or direct I/O-assignable devices.

Note that resource IDs might have gaps in their numbering. The following example indicates that core 2 is unavailable or might have been disabled:

primary# ldm list-devices -a core
CORE
ID      %FREE   CPUSET
0       0      (0, 1, 2, 3, 4, 5, 6, 7)
1       100    (8, 9, 10, 11, 12, 13, 14, 15)
3       100    (24, 25, 26, 27, 28, 29, 30, 31)
4       100    (32, 33, 34, 35, 36, 37, 38, 39)
5       100    (40, 41, 42, 43, 44, 45, 46, 47)
6       100    (48, 49, 50, 51, 52, 53, 54, 55)

List I/O Devices

The list-io subcommand lists the I/O devices that are configured on the system. The list of devices includes I/O buses (including NIUs) and direct I/O-assignable devices.

    The output is divided into the following sections:

  • I/O bus information. The IO column lists the device path of the bus or network device, and the PSEUDONYM column shows the associated pseudonym for the bus or network device. The DOMAIN column indicates the domain to which the device is currently bound.

  • Direct I/O-assignable devices. The PCIE column lists the device path of the device, and the PSEUDONYM column shows the associated pseudonym for the device.

      The STATUS column applies to slots that accept plug-in cards as well as to devices on a motherboard and can have one of the following values:

    • UNK The device in the slot has been detected by the firmware, but not by the OS.

    • OCC The device has been detected on the motherboard or is a PCIe card in a slot.

    • IOV The bus has been initialized to share its IOV resources.

    • INV The slot, virtual function, or physical function is in an invalid state and cannot be used.

    • EMP The slot is empty.

    Slots that represent on-board devices always have the status of OCC. If the root domain does not support direct I/O, the slot status is UNK.

Syntax:

ldm list-io [-l] [-p] [bus | device | pf-name]

ldm list-io -d pf-name

Any resource that has been evacuated from the physical domain by a recovery mode operation has an asterisk (*) in front of its resource identifier.

    where:

  • –l lists information about subdevices that are hosted by direct I/O-assignable devices. Note that this output indicates which devices will be loaned with the direct I/O-assignable device to the receiving domain. The subdevice names cannot be used for command input.

  • –p writes the output in a parseable, machine-readable form.

  • –d pf-name lists information about the specified physical function.

  • bus, device, and pf-name are the names of a PCIe bus, a direct I/O-assignable device, and a PCIe SR-IOV physical function, respectively.

List Variables

The list-variable subcommand lists one or more variables for a logical domain. To list all variables for a domain, omit var-name.

Syntax:

ldm list-variable [var-name...] domain-name

    where:

  • var-name is the name of the variable to list. If you do not specify any name, all variables will be listed for the domain.

  • domain-name is the name of the logical domain for which to list one or more variables.
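
For example, the following hypothetical commands (ldg1 is a placeholder domain name) list a single OpenBoot variable and then all variables for the domain. The backslash prevents the shell from treating the ? character as a glob pattern.

primary# ldm list-variable auto-boot\? ldg1
primary# ldm list-variable ldg1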

List Physical SCSI Host Bus Adapters

The list-hba subcommand lists the physical SCSI HBA initiator ports available in all domains or in the specified domain. After identifying a logical domain's SCSI HBA initiator ports, you can use the ldm add-vsan command to create a virtual SAN by specifying the name of the initiator port.

Syntax:

ldm list-hba [-d | -u] [-l] [-p] [-t] [domain-name]

    where:

  • –d shows the SCSI devices under each initiator port. This option is mutually exclusive with the –u option.

  • –l shows more detailed output.

  • –p shows output in a parseable form.

  • –t shows the SCSI transport medium type, such as Fibre Channel.

  • –u shows device-specific attributes such as the worldwide number (WWN). This option is mutually exclusive with the –d option.

  • domain-name shows information only about the specified domain.

List Virtual Storage Area Network Devices

The list-vsan subcommand lists the members of the specified virtual SAN.

If mask=on, the output shows the WWN of each virtual SAN member. If mask=off, the output states that the mask property value is off.

Syntax:

ldm list-vsan [-p] [vSAN-name]

    where:

  • –p generates the list in a parseable, machine-readable format.

  • vSAN-name specifies the name of the virtual SAN instance.

List Network Devices

The list-netdev subcommand lists the network devices that are configured on the system. The information provided about the device includes the following:

  • CLASS One of the following network device types:

    • AGGR Network aggregation

    • EOIB Ethernet over InfiniBand

    • ESTUB Ethernet stub

    • IPMP IP network multipathing group

    • PART InfiniBand partition

    • PHYS Physical network device

    • VLAN Virtual local area network

    • VNET Virtual network device

    • VNIC Virtual network interface card

    • VSW Virtual switch device

    • VXLAN Virtual extended LAN

  • MEDIA Network media type, which can be ETHER for Ethernet or IB for InfiniBand

  • STATE State of the network device, which can be up, down, or unknown

  • SPEED Speed of the network device in megabits per second

  • OVER Physical device over which the network device is mapped

  • LOC Location of the network device

Syntax:

ldm list-netdev [-b] [-l] [-p] [-o net-device] [domain-name]

    where:

  • –b lists only the valid virtual switch back-end devices.

  • –l lists information about network devices, virtual switch devices, and aggregations.

  • –p writes the output in a parseable, machine-readable form.

  • –o net-device lists information about the specified network device.

  • domain-name specifies the logical domain for which to list network device information.

List Network Device Statistics

The list-netstat subcommand lists statistics for the network devices that are configured on the system. Statistical information is shown in the following fields:

  • IPACKETS shows the inbound packets

  • OPACKETS shows the outbound packets

  • RBYTES shows the received (inbound) byte count

  • OBYTES shows the transmitted (outbound) byte count

Syntax:

ldm list-netstat [-c count] [-o net-device] 
  [-p] [-u unit] [-t interval] [domain-name...]

    where:

  • –c count specifies the number of times to report statistics. A value of 0 reports statistics indefinitely.

  • –o net-device lists information about the specified network device.

  • –p writes the output in a parseable, machine-readable form.

  • –t interval specifies an interval in seconds at which statistics are refreshed. The default value is one second.

  • –u unit specifies the unit in which to show output. Valid values are:

    • R specifies raw bytes

    • K specifies kilobytes

    • M specifies megabytes

    • G specifies gigabytes

  • domain-name specifies one or more domains for which to list network device information.
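
For example, the following hypothetical command (ldg1 is a placeholder domain name) reports statistics for one domain in megabytes, refreshing every 5 seconds for a total of 12 reports:

primary# ldm list-netstat -u M -t 5 -c 12 ldg1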

List Dependencies

The list-dependencies subcommand lists the dependencies within domains. When you specify no options, this command outputs a list of domains and the domains on which they depend.

Syntax:

ldm list-dependencies [-l] [-p] [-r] [domain-name]

    where:

  • –l lists detailed information about dependencies.

  • –p writes the output in a parseable, machine-readable form.

  • –r shows dependents grouped by their dependencies.

  • domain-name specifies the logical domain for which to list dependency information. If domain-name is not specified, dependency information is listed for all domains.

List Resource Groups

The list-rsrc-group subcommand shows information about a resource group. When you specify no options, this command produces a short listing for all resource groups in the system.

Syntax:

ldm list-rsrc-group [-a] [-d domain-name] [-l] [-o core|memory|io] [-p] [resource-group]

    where:

  • –a lists information about all resources for each resource group. The output includes resources that are not bound to any domain.

  • –d domain-name shows information only about the specified domain.

  • –l lists detailed information about each resource group.

  • –o core|memory|io lists information only about the specified resource type: core, memory, or I/O.

  • –p writes the output in a parseable, machine-readable form.

  • resource-group specifies the resource group.

List CMI Devices

The list-cmi subcommand shows information about the CMI devices that are configured on a Fujitsu M10 server and a Fujitsu SPARC M12 server. If no CMI devices are configured on the system, no output is shown.

The output is divided into the following sections:

  • Allocation of CMI devices and the associated virtual CPUs and cores.

    • Bound. Shows domains that have CMI devices and the associated virtual CPUs and cores bound to those domains. These domains are targets for the grow-cmi and shrink-cmi subcommands.

    • Tenant. Shows domains that do not own any CMI devices and the associated virtual CPUs and cores bound to those domains. These domains are targets for the evict-cmi subcommand.

    • Free. Shows unallocated CMI devices and any unallocated virtual CPUs or cores that are associated with those devices.

  • Message queue information.

When you specify the –l option, additional information is shown for the virtual CPUs and cores that are associated with CMI devices. Furthermore, information about all virtual CPUs and cores that are associated with each CMI device is prepended to the output, unless domain-name is specified.

Syntax:

ldm list-cmi [-l] [-p] [cmi_id=ID[,ID[,...]]] [domain-name...]

    where:

  • –l lists physical CPU sets and core IDs.

  • –p writes the output in a parseable, machine-readable form.

  • cmi_id=ID[,...] specifies one or more CMI devices for which to list information.

  • domain-name specifies one or more logical domains for which to list information.

List CPU Sockets

The list-socket subcommand shows information about CPU sockets on a Fujitsu M10 server and a Fujitsu SPARC M12 server.

    The output is divided into the following sections:

  • CPU socket constraints specified for logical domains.

  • Allocation of each CPU socket's virtual CPUs and cores.

    • Tenant. Shows logical domains and the virtual CPUs and cores bound to those domains.

    • Free. Shows any unallocated virtual CPUs or cores.

  • Allocation of each CPU socket's memory.

  • Allocation of each CPU socket's I/O buses.

When you specify the –l option, additional information is shown for virtual CPUs and cores. Furthermore, information about all virtual CPUs and cores in each CPU socket is prepended to the output, unless domain-name is specified.

Syntax:

ldm list-socket [--free] [-l] [-o format] [-p] [socket_id=ID[,ID[,...]]] [domain-name...]

    where:

  • --free lists free resources only.

  • –l lists physical CPU sets and core IDs.

  • –o limits the output format to one or more of the following subsets. If you specify more than one format, delimit each format by a comma with no spaces.

    • raw – Output shows information about the physical CPU set and core IDs for all virtual CPUs and cores in a CPU socket. No output is shown if domain-name is specified.

    • constraint – Output shows CPU socket constraints. No output is shown if the --free option is specified.

    • cpu – Output shows information about virtual CPUs and cores.

    • memory – Output shows information about memory.

    • physio – Output shows information about I/O buses.

  • –p writes the output in a parseable, machine-readable form.

  • socket_id=ID[,...] specifies one or more CPU sockets for which to list information.

  • domain-name specifies one or more logical domains for which to list information.

Add, Set, and Remove Resource Management Policies

Add a Resource Management Policy

The add-policy subcommand enables you to add a resource management policy for one or more logical domains. A resource management policy consists of optional properties and their values.

Syntax:

ldm add-policy [enable=yes|no] [priority=value] [attack=value] [decay=value]
  [elastic-margin=value] [sample-rate=value] [tod-begin=hh:mm[:ss]] [tod-end=hh:mm[:ss]]
  [util-lower=percent] [util-upper=percent] [vcpu-min=value] [vcpu-max=value]
  name=policy-name domain-name...

    where:

  • attack=value specifies the maximum number of resources to be added during any one resource control cycle. If the number of available resources is less than the specified value, all of the available resources are added. By default, the attack is unlimited so that you can add as many CPU threads as are available. If the domain is whole-core constrained, the value must be a multiple of whole cores (typically 8). Valid values are from 1 to the number of free CPU threads on the system minus 1.

  • decay=value specifies the maximum number of resources to be removed during any one resource control cycle. At most, the number of currently bound CPU threads minus the value of vcpu-min can be removed, even if this property specifies a larger value. By default, the value is unlimited. If the domain is whole-core constrained, the value must be a multiple of whole cores (typically 8). Valid values are from 1 to the total number of CPU threads on the system minus 1.

  • elastic-margin=value specifies the amount of buffer space between util-lower and the number of free CPU threads to avoid oscillations at low CPU thread counts. This value cannot be greater than util-upper. Valid values are from 0 to 100. If the domain is whole-core constrained, the default value is 15. If the domain is not whole-core constrained, the default value is 5.

  • enable=yes|no enables or disables resource management for an individual domain. By default, enable=yes.

  • name=policy-name specifies the resource management policy name.

  • priority=value specifies a priority for dynamic resource management (DRM) policies. Priority values are used to determine the relationship between DRM policies in a single domain and between DRM-enabled domains in a single system. Lower numerical values represent higher (better) priorities. Valid values are between 1 and 9999. The default value is 99.

      The behavior of the priority property depends on whether a pool of free CPU resources is available, as follows:

    • Free CPU resources are available in the pool. In this case, the priority property determines which DRM policy will be in effect when more than one overlapping policy is defined for a single domain.

    • No free CPU resources are available in the pool. In this case, the priority property specifies whether a resource can be dynamically moved from a lower-priority domain to a higher-priority domain in the same system. The priority of a domain is the priority specified by the DRM policy that is in effect for that domain.

      For example, a higher-priority domain can acquire CPU resources from another domain that has a DRM policy with a lower priority. This resource-acquisition capability pertains only to domains that have DRM policies enabled. Domains that have equal priority values are unaffected by this capability. So, if the default priority is used for all policies, domains cannot obtain resources from lower-priority domains. To take advantage of this capability, adjust the priority property values so that they have unequal values.

  • sample-rate=value specifies the cycle time, in seconds, which is the sample rate for DRM. Valid values are from 1 to 9999. The default and recommended value is 10.

  • tod-begin=hh:mm[:ss] specifies the effective start time of a policy in terms of hour, minute, and optional second. This time must be earlier than the time specified by tod-end in a period that begins at midnight and ends at 23:59:59. The default value is 00:00:00.

  • tod-end=hh:mm[:ss] specifies the effective stop time of a policy in terms of hour, minute, and optional second. This time must be later than the time specified by tod-begin in a period that begins at midnight and ends at 23:59:59. The default value is 23:59:59.

  • util-lower=percent specifies the lower utilization level at which policy analysis is triggered. Valid values are from 1 to util-upper minus 1. The default value is 30.

  • util-upper=percent specifies the upper utilization level at which policy analysis is triggered. Valid values are from util-lower plus 1 to 99. The default value is 70.

  • vcpu-max=value specifies the maximum number of CPU thread resources for a domain. By default, the maximum number of CPU threads is unlimited. If the domain is whole-core constrained, the value must be a multiple of whole cores (typically 8). Valid values are from vcpu-min plus 1 to the total number of free CPU threads on the system.

  • vcpu-min=value specifies the minimum number of CPU thread resources for a domain. If the domain is whole-core constrained, the value must be a multiple of whole cores (typically 8). Valid values are from 2 to vcpu-max minus 1. The default value is 2.

  • domain-name specifies the logical domain for which to add a resource management policy.
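
For example, the following hypothetical command creates a DRM policy named high-usage for the ldg1 domain; the policy name, domain name, and property values are placeholders.

```shell
# Enable DRM for ldg1 during business hours: scale between 2 and 16
# virtual CPUs, one CPU thread at a time, when utilization leaves
# the 25%-75% band
primary# ldm add-policy tod-begin=09:00 tod-end=18:00 \
  util-lower=25 util-upper=75 vcpu-min=2 vcpu-max=16 \
  attack=1 decay=1 priority=1 name=high-usage ldg1
```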

Modify a Resource Management Policy

The set-policy subcommand enables you to modify a resource management policy for one or more logical domains by specifying values for optional properties.

Syntax:

ldm set-policy [enable=[yes|no]] [priority=[value]] [attack=[value]] [decay=[value]]
  [elastic-margin=[value]] [sample-rate=[value]] [tod-begin=[hh:mm:ss]]
  [tod-end=[hh:mm:ss]] [util-lower=[percent]] [util-upper=[percent]] [vcpu-min=[value]]
  [vcpu-max=[value]] name=policy-name domain-name...

    where:

  • The properties are described in the Add a Resource Management Policy section.

  • domain-name specifies the logical domain for which to modify the resource management policy.
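
For example, the following hypothetical command adjusts the utilization thresholds of an existing policy named high-usage on the ldg1 domain.

```shell
# Widen the utilization band of the high-usage policy
primary# ldm set-policy util-lower=40 util-upper=90 name=high-usage ldg1
```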

Remove a Resource Management Policy

The remove-policy subcommand enables you to remove a resource management policy from a logical domain by specifying one or more policy names.

Syntax:

ldm remove-policy [name=]policy-name... domain-name

    where:

  • The name property specifies the name of the resource management policy, policy-name.

  • domain-name specifies the logical domain from which to remove the resource management policy.
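
For example, the following hypothetical command removes a policy named high-usage from the ldg1 domain.

```shell
primary# ldm remove-policy name=high-usage ldg1
```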

Configure or Reconfigure a Domain From an XML File

The init-system subcommand enables you to use an existing configuration to configure one or more guest domains, the control domain, or both types of domains. The ldm init-system command takes an XML file (such as the output of ldm list-constraints -x) as input, configures the specified domains, and reboots the control domain. Run this command while the system is in the factory-default configuration.

Syntax:

ldm init-system [-frs] -i file

    where:

  • –i file specifies the XML configuration file to use to create the logical domain.

  • –f skips the factory-default configuration check and continues irrespective of what was already configured on the system.

    Use the –f option with caution. ldm init-system assumes that the system is in the factory-default configuration, and so directly applies the changes that are specified by the XML file. Using –f when the system is in a configuration other than the factory default will likely result in a system that is not configured as specified by the XML file. One or more changes might fail to be applied to the system depending on the combination of changes in the XML file and the initial configuration.

  • –r reboots the system after configuration.

  • –s restores only the virtual services configuration (vds, vcc, and vsw).
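
For example, the following hypothetical sequence saves the current domain constraints and later reapplies them from the factory-default configuration; the file name config.xml is a placeholder.

```shell
# Save the constraints of all domains to an XML file
primary# ldm list-constraints -x > config.xml

# Later, from the factory-default configuration, reconfigure the
# domains from that file and reboot the control domain afterward
primary# ldm init-system -r -i config.xml
```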

Collect Hypervisor Dump Data

The hypervisor dump data collection subcommands apply to the process that collects data from a hypervisor dump on the Fujitsu M10 platform and the Fujitsu SPARC M12 platform only.

When a hypervisor abort event occurs, the contents of the hypervisor memory are preserved by the firmware, and the system is rebooted with the factory-default configuration. The ldmd daemon copies the preserved contents of hypervisor memory to a file on the control domain that is called /var/opt/SUNWldm/hvdump.N.gz. N is a number in the range 0-7, inclusive. This file is a binary dump of the contents of hypervisor memory at the time the hypervisor abort occurred.

List Hypervisor Dump Data

The list-hvdump subcommand shows the values of the hvdump and hvdump-reboot properties, which govern the hypervisor data collection process on the Fujitsu M10 platform and the Fujitsu SPARC M12 platform.

Syntax:

ldm list-hvdump

Set Property Values for the Hypervisor Data Collection Process

The set-hvdump subcommand modifies the Fujitsu M10 and Fujitsu SPARC M12 hypervisor data collection properties. You can set properties that enable or disable the automatic hypervisor data collection process. You can also set properties that enable or disable an automatic reboot to restore the original configuration after collecting the data.

Syntax:

ldm set-hvdump [hvdump=on|off] [hvdump-reboot=on|off]

    where:

  • hvdump=on|off enables or disables the hypervisor data collection process. The default value is on.

  • hvdump-reboot=on|off enables or disables an automatic system reboot after the hypervisor data collection process completes. The default value is off.
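
For example, the following hypothetical commands enable both data collection and the automatic reboot, and then verify the property values.

```shell
primary# ldm set-hvdump hvdump=on hvdump-reboot=on
primary# ldm list-hvdump
```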

Manually Start the Hypervisor Data Collection Process

The start-hvdump subcommand manually starts the Fujitsu M10 and Fujitsu SPARC M12 hypervisor data collection process if the automatic collection fails.

Syntax:

ldm start-hvdump

Perform CMI Operations

The CMI-related subcommands pertain only to the Fujitsu M10 platform and the Fujitsu SPARC M12 platform.

In all of the CMI-related subcommands, specifying the number of CMI devices automatically selects the CMI resources to be assigned or removed. To explicitly assign or remove CMI resources, provide CMI ID values with the cmi_ID property.

When you perform CMI-related operations, the domain must be inactive or, if it is the primary domain, the domain must be in a delayed reconfiguration.

Add CMI Devices

The add-cmi subcommand adds the specified number of CMI devices to a domain.

If you bind an inactive domain with a CMI constraint, unspecified virtual CPU, core, and memory constraints are automatically generated from the CMI constraint and available resources.

Syntax:

ldm add-cmi num domain-name

ldm add-cmi cmi_id=ID[,ID[,...]] domain-name

    where:

  • num specifies the number of CMI resources to assign to the domain.

  • cmi_id=ID[,...] specifies one or more CMI devices to add to a domain.

  • domain-name specifies one or more logical domains to which to assign the CMI devices.
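
For example, the following hypothetical commands show both forms of the subcommand for an inactive domain named ldg1.

```shell
# Let the Logical Domains Manager choose two CMI devices for ldg1
primary# ldm add-cmi 2 ldg1

# Or assign CMI devices 0 and 1 explicitly
primary# ldm add-cmi cmi_id=0,1 ldg1
```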

Modify CMI Devices

The set-cmi subcommand specifies the number of CMI devices to assign to a domain.

You can use the –f option to clear all existing virtual CPU, core, and memory constraints. When you bind an inactive domain with a CMI constraint, these constraints are then automatically generated from the CMI constraint and available resources.

Syntax:

ldm set-cmi [-f] num domain-name

ldm set-cmi [-f] cmi_id=[ID[,ID[,...]]] domain-name

    where:

  • –f clears all existing virtual CPU, core, and memory constraints.

  • num specifies the number of CMI resources to assign to the domain.

  • cmi_id=ID[,...] specifies one or more CMI devices to assign to a domain. Specifying cmi_id= removes all CMI devices from the domain.

  • domain-name specifies the domain to which the CMI devices are assigned.

Remove CMI Devices

The remove-cmi subcommand removes the specified number of CMI devices from a domain.

Syntax:

ldm remove-cmi [-f] num domain-name

ldm remove-cmi [-f] cmi_id=ID[,ID[,...]] domain-name

    where:

  • num specifies the number of CMI resources to remove from the domain.

  • cmi_id=ID[,...] specifies one or more CMI devices to remove from a domain.

  • domain-name specifies the domain from which the CMI devices are removed.

Add CMI Device CPU Threads or CPU Cores

The grow-cmi subcommand enables you to add virtual CPUs or cores associated with a particular CMI device to a domain with one or more CMI resources. The specified CMI device must be assigned to the domain, and the domain must be bound or active.

Syntax:

ldm grow-cmi vcpus=num cmi_id=ID domain-name

ldm grow-cmi cores=num cmi_id=ID domain-name

    where:

  • vcpus=num specifies the number of virtual CPUs to add to a domain.

  • cores=num specifies the number of cores to add to a domain.

  • cmi_id=ID specifies a CMI device that is owned by the domain.

  • domain-name specifies the domain to which the virtual CPUs or cores are added.
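
For example, the following hypothetical commands add CPU resources from CMI device 0 to a bound domain named ldg1, which owns that device.

```shell
# Add eight virtual CPUs that are associated with CMI device 0
primary# ldm grow-cmi vcpus=8 cmi_id=0 ldg1

# Or add one whole core from the same CMI device
primary# ldm grow-cmi cores=1 cmi_id=0 ldg1
```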

Remove CMI Device CPU Threads or CPU Cores

The shrink-cmi subcommand enables you to remove virtual CPUs or cores associated with a particular CMI device from a domain with one or more CMI resources. The specified CMI device must be assigned to the domain, and the domain must be bound or active.

Syntax:

ldm shrink-cmi vcpus=num cmi_id=ID domain-name

ldm shrink-cmi cores=num cmi_id=ID domain-name

    where:

  • vcpus=num specifies the number of virtual CPUs to remove from a domain.

  • cores=num specifies the number of cores to remove from a domain.

  • cmi_id=ID specifies a CMI device that is owned by the domain.

  • domain-name specifies the domain from which the virtual CPUs or cores are removed.

Evict CMI Device CPU Threads or CPU Cores

The evict-cmi subcommand enables you to remove virtual CPUs or cores associated with a particular CMI device from a bound or active domain that has no CMI devices assigned to it.

Run the list-cmi subcommand to determine the allocation of CMI devices and their associated virtual CPUs and cores.

Syntax:

ldm evict-cmi vcpus=num cmi_id=ID domain-name

ldm evict-cmi cores=num cmi_id=ID domain-name

    where:

  • vcpus=num specifies the number of virtual CPUs to remove from a domain.

  • cores=num specifies the number of cores to remove from a domain.

  • cmi_id=ID specifies a CMI device.

  • domain-name specifies the domain from which the virtual CPUs or cores are removed.

Perform CPU Socket Operations

The CPU-socket-related commands pertain only to the Fujitsu M10 platform and the Fujitsu SPARC M12 platform.

Add CPU Socket Threads, Cores, or Memory

The grow-socket subcommand enables you to add virtual CPUs, cores, or memory associated with a particular CPU socket to a domain. The domain must be bound or active.

If the domain has any CMI devices assigned to it, use the grow-cmi subcommand to add virtual CPUs or cores to the domain.

Syntax:

ldm grow-socket vcpus=num socket_id=ID domain-name

ldm grow-socket cores=num socket_id=ID domain-name

ldm grow-socket memory=size[unit] socket_id=ID domain-name

    where:

  • vcpus=num specifies the number of virtual CPUs to add to a domain.

  • cores=num specifies the number of cores to add to a domain.

  • memory=size[unit] specifies the amount of memory to add to a domain. The default unit for size is bytes. To use a different unit of measurement, specify unit as one of the following values, in either uppercase or lowercase:

    • G for gigabytes

    • K for kilobytes

    • M for megabytes

  • socket_id=ID specifies the CPU socket.

  • domain-name specifies the domain to which the virtual CPUs, cores, or memory are added.
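
For example, the following hypothetical commands add cores and memory from CPU socket 0 to a bound domain named ldg1.

```shell
# Add two cores that are associated with CPU socket 0
primary# ldm grow-socket cores=2 socket_id=0 ldg1

# Add 4 gigabytes of memory from the same socket
primary# ldm grow-socket memory=4G socket_id=0 ldg1
```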

Specify CPU Socket Constraints

The set-socket subcommand specifies the CPU sockets from which a domain can allocate virtual CPUs, cores, and memory.

When you bind an inactive domain with CPU socket constraints, virtual CPUs, cores, and memory are selected only from the specified CPU sockets. If no virtual CPU, core, or memory constraint is specified for the domain, the resource constraint is automatically generated from the CPU socket constraint and available resources. Use the –f option to clear all existing virtual CPU, core, and memory constraints.

When you specify CPU socket constraints for a bound domain, existing virtual CPU, core, and memory bindings are updated to be consistent with the specified CPU socket constraints.

When you specify CPU socket constraints for an active domain, active virtual CPU resources and real memory ranges are remapped so that the underlying physical resources are consistent with the specified CPU socket constraints.

If physical resources are removed from the system, CPU socket constraints might be degraded. After the physical resources are restored, you can restore the original CPU socket constraints.

Syntax:

ldm set-socket [-f] [--remap] socket_id=[ID[,ID[,...]]] domain-name

ldm set-socket [-f] [--remap] --restore-degraded domain-name

    where:

  • –f clears all existing virtual CPU, core, and memory constraints for an inactive domain. If the domain is bound or active, the resource constraints, the degraded CPU socket constraints, or both might be modified when the specified CPU sockets have fewer resources than the constraints require.

  • --remap moves active virtual resources that are bound to one physical resource to another physical resource while the domain that owns the virtual resource is running.

  • --restore-degraded restores a domain to its original CPU socket constraints after the addition of physical resources.

  • socket_id=ID[,...] specifies one or more CPU sockets to which the domain is constrained. Specifying socket_id= removes all CPU socket constraints from the domain.

  • domain-name specifies the domain to which the CPU socket constraints are added.
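
For example, the following hypothetical commands show typical uses of the subcommand for a domain named ldg1.

```shell
# Constrain ldg1 to allocate resources only from CPU sockets 0 and 1
primary# ldm set-socket socket_id=0,1 ldg1

# Remove all CPU socket constraints from ldg1
primary# ldm set-socket socket_id= ldg1

# Restore the original constraints after physical resources return
primary# ldm set-socket --restore-degraded ldg1
```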

Remove CPU Socket Threads, Cores, or Memory

The shrink-socket subcommand enables you to remove virtual CPUs, cores, or memory associated with a particular CPU socket from a domain. The domain must be bound or active.

If the domain has any CMI devices assigned to it, use the shrink-cmi subcommand to remove virtual CPUs or cores from the domain.

Syntax:

ldm shrink-socket vcpus=num socket_id=ID domain-name

ldm shrink-socket cores=num socket_id=ID domain-name

ldm shrink-socket memory=size[unit] socket_id=ID domain-name

    where:

  • vcpus=num specifies the number of virtual CPUs to remove from a domain.

  • cores=num specifies the number of cores to remove from a domain.

  • memory=size[unit] specifies the amount of memory to remove from a domain. The default unit for size is bytes. To use a different unit of measurement, specify unit as one of the following values, in either uppercase or lowercase:

    • G for gigabytes

    • K for kilobytes

    • M for megabytes

  • socket_id=ID specifies the CPU socket.

  • domain-name specifies the domain from which the virtual CPUs, cores, or memory are removed.
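
For example, the following hypothetical command removes virtual CPUs that are associated with CPU socket 0 from a bound domain named ldg1.

```shell
primary# ldm shrink-socket vcpus=4 socket_id=0 ldg1
```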

Examples

Example 1 Create Default Services

Set up the three default services (virtual disk server, virtual switch, and virtual console concentrator) so that you can export those services to the guest domains.

primary# ldm add-vds primary-vds0 primary
primary# ldm add-vsw net-dev=net0 primary-vsw0 primary
primary# ldm add-vcc port-range=5000-5100 primary-vcc0 primary
Example 2 List Services

You can list services to ensure they have been created correctly or to see what services you have available.

primary# ldm list-services primary
VCC
    NAME         LDOM    PORT-RANGE
    primary-vcc0 primary 5000-5100
VSW
    NAME         LDOM    MAC             NET-DEV   DEVICE     DEFAULT-VLAN-ID PVID VID MODE
    primary-vsw0 primary 00:14:4f:f9:68:d0 net0  switch@0 1               1
VDS
    NAME         LDOM    VOLUME         OPTIONS      MPGROUP   DEVICE
    primary-vds0 primary
Example 3 Set Up the Control Domain Initially

The control domain, named primary, is the initial domain that is present when you install the Logical Domains Manager. The control domain has a full complement of resources, and those resources depend on what server you have. Set only those resources you want the control domain to keep so that you can allocate the remaining resources to the guest domains. Then save the configuration on the service processor. You must reboot the control domain for the changes to take effect.

You must enable the virtual network terminal server daemon, vntsd(8), to use consoles on the guest domains.

primary# ldm start-reconf primary
primary# ldm set-vcpu 8 primary
primary# ldm set-memory 8G primary
primary# ldm add-spconfig initial
primary# shutdown -y -g0 -i6
primary# svcadm enable vntsd
Example 4 List Bindings

You can list bindings to see if the control domain has the resources you specified, or what resources are bound to any domain.

primary# ldm list-bindings primary
NAME     STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  NORM UPTIME
primary  active     -n-cv-  UART    8     16G      0.2%  0.2% 1d 18h 5m

UUID
d8d2db22-21b9-e5e6-d635-92036c711e65

MAC
00:21:28:c1:3f:3c

HOSTID
0x84c13f3c

CONTROL
failure-policy=ignore
extended-mapin-space=on
cpu-arch=native
rc-add-policy=
shutdown-group=0
perf-counters=global,htstrand 

DEPENDENCY
master=

CORE
CID    CPUSET
0      (0, 1, 2, 3, 4, 5, 6, 7)

VCPU
VID    PID    CID    UTIL NORM STRAND
0      0      0      0.4% 0.4%   100%
1      1      0      0.2% 0.2%   100%
2      2      0      0.1% 0.1%   100%
3      3      0      0.1% 0.1%   100%
4      4      0      0.2% 0.2%   100%
5      5      0      0.5% 0.5%   100%
6      6      0      0.2% 0.2%   100%
7      7      0      1.2% 1.2%   100%

MEMORY
RA               PA               SIZE
0x20000000       0x20000000       8G
0x400000000      0x400000000      8G

VARIABLES
pm_boot_policy=disabled=1;ttfc=0;ttmr=0;

IO
DEVICE                           PSEUDONYM        OPTIONS
pci@400                          pci_0
niu@480                          niu_0
pci@400/pci@1/pci@0/pci@8        /SYS/MB/RISER0/PCIE0
pci@400/pci@2/pci@0/pci@8        /SYS/MB/RISER1/PCIE1
pci@400/pci@1/pci@0/pci@6        /SYS/MB/RISER2/PCIE2
pci@400/pci@2/pci@0/pci@c        /SYS/MB/RISER0/PCIE3
pci@400/pci@1/pci@0/pci@0        /SYS/MB/RISER1/PCIE4
pci@400/pci@2/pci@0/pci@a        /SYS/MB/RISER2/PCIE5
pci@400/pci@1/pci@0/pci@4        /SYS/MB/SASHBA0
pci@400/pci@2/pci@0/pci@4        /SYS/MB/SASHBA1
pci@400/pci@2/pci@0/pci@6        /SYS/MB/NET0
pci@400/pci@2/pci@0/pci@7        /SYS/MB/NET2

VCC
NAME             PORT-RANGE
primary-vcc0     5000-5100

VSW
NAME             MAC               NET-DEV   ID   DEVICE   LINKPROP   
primary-vsw0     00:14:4f:fa:0b:57 net0      0    switch@0

DEFAULT-VLAN-ID PVID VID                  MTU   MODE INTER-VNET-LINK
1               1                         1500       on

VDS
NAME             VOLUME         OPTIONS          MPGROUP        DEVICE
primary-vds0

VCONS
NAME             SERVICE                     PORT   LOGGING
UART
Example 5 List Network-Related Bindings

You can list bindings to obtain information about a domain's virtual network configuration.

primary# ldm list-bindings -o network ldg3
NAME
ldg3

MAC
00:14:4f:fb:7d:03

VSW
NAME             MAC               NET-DEV   DVID|PVID|VIDs
----             ---               -------   --------------
vsw-ldg3         00:14:4f:fa:0b:57 -         1|1|--

NETWORK
    NAME     SERVICE               MACADDRESS          PVID|PVLAN|VIDs
    ----     -------               ----------          ---------------
    vnet3    primary-vsw0@primary  00:14:4f:fa:e2:a1   1|--|--


        PEER                       MACADDRESS          PVID|PVLAN|VIDs
        ----                       ----------          ---------------
        primary-vsw0@primary       00:14:4f:fb:e8:d8   1|--|--     

    NAME     SERVICE               MACADDRESS          PVID|PVLAN|VIDs
    ----     -------               ----------          ---------------
    vnet1    primary-vsw1@primary  00:14:4f:f8:48:e5   1|--|--     

        PEER                       MACADDRESS          PVID|PVLAN|VIDs
        ----                       ----------          ---------------
        primary-vsw1@primary       00:14:4f:fa:10:db   1|--|--     
        vnet2@ldg3                 00:14:4f:fb:fe:ec   1|--|--     
        vnet4@ldg3                 00:14:4f:f9:91:d0   1|--|--     
        vnet50@ldg3                00:14:4f:f8:71:10   1|--|--
Example 6 Create a Logical Domain

Ensure that you have the resources to create the desired guest domain configuration. Add the guest domain, add the resources and devices that you want the domain to have, and set boot parameters to tell the system how to behave on startup. Bind the resources to the domain, and save the guest domain configuration in an XML file for backup. You also might want to save the primary and guest domain configurations on the SP. Then you can start the domain, find the TCP port of the domain, and connect to it through the default virtual console service.

primary# ldm list-devices
primary# ldm add-domain ldg1
primary# ldm add-vcpu 8 ldg1
primary# ldm add-memory 8g ldg1
primary# ldm add-vnet vnet1 primary-vsw0 ldg1
primary# ldm add-vdsdev /dev/dsk/c0t1d0s2 vol1@primary-vds0
primary# ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1
primary# ldm set-variable auto-boot\?=false ldg1
primary# ldm set-variable boot-device=vdisk1 ldg1
primary# ldm bind-domain ldg1
primary# ldm list-constraints -x ldg1 > ldg1.xml
primary# ldm add-spconfig ldg1_8cpu_1G
primary# ldm start-domain ldg1
primary# ldm list -l ldg1
primary# telnet localhost 5000
Example 7 Use One Terminal for Many Guest Domains

Normally, each guest domain that you create has its own TCP port and console. After you have created the first guest domain (ldg1 in this example), you can use the ldm set-vcons command to attach all the other domains (the second domain is ldg2 in this example) to the same console port. Note that the set-vcons subcommand works only on an inactive domain.

primary# ldm set-vcons group=ldg1 service=primary-vcc0 ldg2

If you use the ldm list -l command after performing the set-vcons commands on all guest domains except the first, you can see that all domains are connected to the same port. See the vntsd(8) man page for more information about using consoles.

Example 8 Add a Virtual PCI Bus to a Logical Domain

An I/O domain has direct ownership of and direct access to physical I/O devices. When an I/O domain also acts as a service domain, it provides those devices to guest domains in the form of virtual I/O devices. This example shows how to add a virtual PCI bus to a logical domain when IOV is enabled.

primary# ldm add-io pci@7c0 ldg1
Example 9 Cancel Delayed Reconfiguration Operations for a Control Domain

A delayed reconfiguration operation blocks configuration operations on all other domains. You might want to cancel a pending delayed reconfiguration on a control domain, for example, so that you can perform other configuration commands on that domain or on other domains.

primary# ldm cancel-operation reconf primary
Example 10 Migrate a Domain

You can migrate a logical domain to another machine. This example shows a successful migration.

primary# ldm migrate-domain ldg1 root@dt90-187:ldg
Target password:
Example 11 List Configurations

The following examples show how to view the configurations. The first command shows the configurations that are stored on the SP. The second command shows the configurations on the SP as well as information about the autosave configurations on the control domain.

primary# ldm list-spconfig
factory-default
3guests [current]
data1
reconfig_primary
split1
primary# ldm list-spconfig -r
3guests [newer]
data1 [newer]
reconfig_primary
split1
unit

Both the current 3guests configuration and the data1 configuration have autosaved changes that have not been saved to the SP. If the system performed a power cycle while in this state, the Logical Domains Manager would perform the 3guests autosave based on the specified policy. The autosave action is taken for 3guests because it is marked as current.

The reconfig_primary and split1 autosave configurations are identical to the versions stored on the SP, so they are not marked as newer.

The unit configuration exists only as an autosave configuration on the control domain. There is no corresponding configuration for unit on the SP. This situation might occur if the configuration was lost from the SP. A configuration can be lost if the SP is replaced or if a problem occurred with the persistent version of the configuration on the SP. Note that using the remove-spconfig command to explicitly remove a configuration also removes the autosave version on the control domain. As a result, no remnants of the configuration remain on either the control domain or the SP.

Example 12 List I/O Devices

The following example lists the I/O devices on the system.

primary# ldm list-io
NAME                                      TYPE   BUS      DOMAIN   STATUS
----                                      ----   ---      ------   ------
pci_0                                     BUS    pci_0    primary  IOV
niu_0                                     NIU    niu_0    primary
/SYS/MB/RISER0/PCIE0                      PCIE   pci_0    primary  EMP
/SYS/MB/RISER1/PCIE1                      PCIE   pci_0    primary  EMP
/SYS/MB/RISER2/PCIE2                      PCIE   pci_0    primary  EMP
/SYS/MB/RISER0/PCIE3                      PCIE   pci_0    primary  OCC
/SYS/MB/RISER1/PCIE4                      PCIE   pci_0    primary  OCC
/SYS/MB/RISER2/PCIE5                      PCIE   pci_0    primary  EMP
/SYS/MB/SASHBA0                           PCIE   pci_0    primary  OCC
/SYS/MB/SASHBA1                           PCIE   pci_0    primary  OCC
/SYS/MB/NET0                              PCIE   pci_0    primary  OCC
/SYS/MB/NET2                              PCIE   pci_0    primary  OCC
/SYS/MB/RISER0/PCIE3/IOVIB.PF0            PF     pci_0    primary
/SYS/MB/RISER1/PCIE4/IOVIB.PF0            PF     pci_0    primary
/SYS/MB/NET0/IOVNET.PF0                   PF     pci_0    primary
/SYS/MB/NET0/IOVNET.PF1                   PF     pci_0    primary
/SYS/MB/NET2/IOVNET.PF0                   PF     pci_0    primary
/SYS/MB/NET2/IOVNET.PF1                   PF     pci_0    primary
/SYS/MB/RISER0/PCIE3/IOVIB.PF0.VF0        VF     pci_0    primary
/SYS/MB/RISER0/PCIE3/IOVIB.PF0.VF1        VF     pci_0    primary
/SYS/MB/RISER0/PCIE3/IOVIB.PF0.VF2        VF     pci_0    iodom1
/SYS/MB/RISER0/PCIE3/IOVIB.PF0.VF3        VF     pci_0    iodom1
/SYS/MB/RISER1/PCIE4/IOVIB.PF0.VF0        VF     pci_0    primary
/SYS/MB/RISER1/PCIE4/IOVIB.PF0.VF1        VF     pci_0    primary
/SYS/MB/RISER1/PCIE4/IOVIB.PF0.VF2        VF     pci_0    iodom1
/SYS/MB/RISER1/PCIE4/IOVIB.PF0.VF3        VF     pci_0    iodom1
Example 13 List CPU Core Activation Information

The following example shows information about the CPU core activations on a Fujitsu M10 server and a Fujitsu SPARC M12 server. The PERMITS column shows that 10 CPU core activations have been issued. This total includes all permanent and pay-per-use CPU core activations. The PERMANENT column shows that there are 10 permanent CPU core activations, which means that there are no issued pay-per-use CPU core activations. The IN USE column shows that only two of the CPU core activations are currently in use. The REST column shows that eight CPU core activations are available for use.

primary# ldm list-permits
CPU CORE
PERMITS (PERMANENT)   IN USE      REST
10      (10)          2           8
Example 14 Adding a Virtual SCSI HBA and a Virtual SAN

The following example shows how to create a virtual SAN for a specific SCSI HBA initiator port and how to associate that virtual SAN with a virtual SCSI HBA.

Identify the physical SCSI HBA initiator ports in the ldg1 domain.

primary# ldm list-hba -l ldg1
NAME                                                 VSAN
----                                                 ----
/SYS/MB/SASHBA0/HBA0/PORT1
[/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@1]
/SYS/MB/SASHBA0/HBA0/PORT2
[/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@2]
/SYS/MB/SASHBA0/HBA0/PORT4
[/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@4]
/SYS/MB/SASHBA0/HBA0/PORT8
[/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@8]
/SYS/MB/PCIE1/HBA0/PORT0,0
[/pci@300/pci@1/pci@0/pci@4/SUNW,emlxs@0/fp@0,0]
/SYS/MB/PCIE1/HBA0,1/PORT0,0
[/pci@300/pci@1/pci@0/pci@4/SUNW,emlxs@0,1/fp@0,0]

Create a virtual SAN in the ldg1 logical domain to manage all SCSI devices associated with the last initiator port in the list.

primary# ldm add-vsan /SYS/MB/PCIE1/HBA0,1/PORT0,0 port0 ldg1
/SYS/MB/PCIE1/HBA0,1/PORT0,0 resolved to device: /pci@300/pci@1/pci@0/pci@4/SUNW,emlxs@0,1/fp@0,0

Create a virtual SCSI HBA in the ldg2 logical domain that will cooperate with the virtual SAN to send I/O requests to the physical SCSI devices.

primary# ldm add-vhba port0_vhba port0 ldg2

Verify the presence of the newly created virtual SCSI HBA and virtual SAN devices.

primary# ldm list -o san,hba ldg1 ldg2
NAME
ldg1

VSAN
NAME             TYPE   DEVICE IPORT
port0            VSAN [/pci@300/pci@1/pci@0/pci@4/SUNW,emlxs@0,1/fp@0,0]

------------------------------------------------------------------------------
NAME
ldg2

VHBA
NAME             VSAN                        DEVICE TOUT SERVER
port0_vhba       port0                              0    ldg1
Example 15 List Network Devices

The following example shows network device information for the ldg1 domain.

primary# ldm list-netdev ldg1
DOMAIN
ldg1

NAME               CLASS    MEDIA    STATE    SPEED    OVER     LOC
----               -----    -----    -----    -----    ----     ---
net0               VNET     ETHER    up       1000     --       primary-vsw0/vnet0_ldg1
net3               PHYS     ETHER    up       10000    --       /SYS/MB/RISER1/PCIE4
net4               VSW      ETHER    up       10000    --       ldg1-vsw1
net1               PHYS     ETHER    up       10000    --       /SYS/MB/RISER1/PCIE4
net5               VNET     ETHER    up       10000    --       ldg1-vsw1/vnet1_ldg1
net6               VNET     ETHER    up       10000    --       ldg1-vsw1/vnet2_ldg1
aggr2              AGGR     ETHER    unknown  0        net1,net3 --
ldoms-vsw0.vport3  VNIC     ETHER    unknown  0        --       ldg1-vsw1/vnet2_ldg1
ldoms-vsw0.vport2  VNIC     ETHER    unknown  0        --       ldg1-vsw1/vnet1_ldg1
ldoms-vsw0.vport1  VNIC     ETHER    unknown  0        --       ldg1-vsw1/vnet2_ldg3
ldoms-vsw0.vport0  VNIC     ETHER    unknown  0        --       ldg1-vsw1/vnet2_ldg2

The following example shows a detailed listing of network devices on the ldg1 domain by specifying the –l option.

primary# ldm list-netdev -l ldg1
DOMAIN
ldg1

NAME               CLASS    MEDIA    STATE    SPEED    OVER     LOC
----               -----    -----    -----    -----    ----     ---
net0               VNET     ETHER    up       0        --       primary-vsw0/vnet0_ldg1
[/virtual-devices@100/channel-devices@200/network@0]
MTU       : 1500 [1500-1500]
IPADDR    : 10.129.241.200/255.255.255.0
MAC_ADDRS : 00:14:4f:fb:9c:df

net3               PHYS     ETHER    up       10000    --       /SYS/MB/RISER1/PCIE4
[/pci@400/pci@1/pci@0/pci@0/network@0]
MTU       : 1500 [576-15500]
MAC_ADDRS : a0:36:9f:0a:c5:d2

net4               VSW      ETHER    up       10000    --       ldg1-vsw1
[/virtual-devices@100/channel-devices@200/virtual-network-switch@0]
MTU       : 1500 [1500-1500]
IPADDR    : 192.168.1.2/255.255.255.0
MAC_ADDRS : 00:14:4f:fb:61:6e

net1               PHYS     ETHER    up       10000    --       /SYS/MB/RISER1/PCIE4
[/pci@400/pci@1/pci@0/pci@0/network@0,1]
MTU       : 1500 [576-15500]
MAC_ADDRS : a0:36:9f:0a:c5:d2

net5               VNET     ETHER    up       0        --       ldg1-vsw1/vnet1_ldg1
[/virtual-devices@100/channel-devices@200/network@1]
MTU       : 1500 [1500-1500]
IPADDR    : 0.0.0.0  /255.0.0.0
      : fe80::214:4fff:fef8:5062/ffc0::
MAC_ADDRS : 00:14:4f:f8:50:62

net6               VNET     ETHER    up       0        --       ldg1-vsw1/vnet2_ldg1
[/virtual-devices@100/channel-devices@200/network@2]
MTU       : 1500 [1500-1500]
IPADDR    : 0.0.0.0  /255.0.0.0
      : fe80::214:4fff:fef8:af92/ffc0::
MAC_ADDRS : 00:14:4f:f8:af:92

aggr2              AGGR     ETHER    unknown  0        net1,net3 --
MODE      : TRUNK
POLICY    : L2,L3
LACP_MODE : ACTIVE
MEMBER    : net1 [PORTSTATE = attached]
MEMBER    : net3 [PORTSTATE = attached]
MAC_ADDRS : a0:36:9f:0a:c5:d2

ldoms-vsw0.vport3  VNIC     ETHER    unknown  0        --       ldg1-vsw1/vnet2_ldg1
MTU       : 1500 [576-1500]
MAC_ADDRS : 00:14:4f:f8:af:92

ldoms-vsw0.vport2  VNIC     ETHER    unknown  0        --       ldg1-vsw1/vnet1_ldg1
MTU       : 1500 [576-1500]
MAC_ADDRS : 00:14:4f:f8:50:62

ldoms-vsw0.vport1  VNIC     ETHER    unknown  0        --       ldg1-vsw1/vnet2_ldg3
MTU       : 1500 [576-1500]
MAC_ADDRS : 00:14:4f:f9:d3:88

ldoms-vsw0.vport0  VNIC     ETHER    unknown  0        --       ldg1-vsw1/vnet2_ldg2
MTU       : 1500 [576-1500]
MAC_ADDRS : 00:14:4f:fa:47:f4
      : 00:14:4f:f9:65:b5
      : 00:14:4f:f9:60:3f
Example 16 List Network Device Statistics

The following example shows the default network statistics for all the domains in the system.

primary# ldm list-netstat
DOMAIN
primary

NAME               IPACKETS     RBYTES       OPACKETS     OBYTES
----               --------     ------       --------     ------
net3               0            0            0            0
net0               2.72M        778.27M      76.32K       6.01M
net4               2.72M        778.27M      76.32K       6.01M
net6               2            140          1.30K        18.17K
net7               0            0            0            0
net2               0            0            0            0
net1               0            0            0            0
aggr1              0            0            0            0
ldoms-vsw0.vport0  935.40K      74.59M       13.15K       984.43K
ldoms-vsw0.vport1  933.26K      74.37M       11.42K       745.15K
ldoms-vsw0.vport2  933.24K      74.37M       11.46K       747.66K
ldoms-vsw1.vport1  202.26K      17.99M       179.75K      15.69M
ldoms-vsw1.vport0  202.37K      18.00M       189.00K      16.24M
------------------------------------------------------------------------------
DOMAIN
ldg1

NAME               IPACKETS     RBYTES       OPACKETS     OBYTES
----               --------     ------       --------     ------
net0               5.19K        421.57K      68           4.70K
net3               0            0            2.07K        256.93K
net4               0            0            4.37K        560.17K
net1               0            0            2.29K        303.24K
net5               149          31.19K       78           17.00K
net6               147          30.51K       78           17.29K
aggr2              0            0            0            0
ldoms-vsw0.vport3  162          31.69K       52           14.11K
ldoms-vsw0.vport2  163          31.74K       51           13.76K
ldoms-vsw0.vport1  176          42.99K       25           1.50K
ldoms-vsw0.vport0  158          40.19K       45           4.42K
------------------------------------------------------------------------------
DOMAIN
ldg2

NAME               IPACKETS     RBYTES       OPACKETS     OBYTES
----               --------     ------       --------     ------
net0               5.17K        418.90K      71           4.88K
net1               2.70K        201.67K      2.63K        187.01K
net2               132          36.40K       1.51K        95.07K
------------------------------------------------------------------------------
DOMAIN
ldg3

NAME               IPACKETS     RBYTES       OPACKETS     OBYTES
----               --------     ------       --------     ------
net0               5.16K        417.43K      72           4.90K
net1               2.80K        206.12K      2.67K        190.36K
net2               118          35.00K       1.46K        87.78K
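When scripting against captured list-netstat output, the suffixed counters must be converted back to plain numbers. The following is a minimal, illustrative helper, not an ldm interface; the K, M, and G suffixes are assumed here to be decimal multipliers:

```python
# Illustrative conversion of the human-readable counters printed by
# `ldm list-netstat` (for example 778.27M or 76.32K) to approximate raw
# counts. The K/M/G suffixes are assumed to be decimal multipliers.
def to_count(value):
    multipliers = {"K": 10**3, "M": 10**6, "G": 10**9}
    if value and value[-1] in multipliers:
        return float(value[:-1]) * multipliers[value[-1]]
    return float(value)
```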
Example 17 List Dependencies

The following example shows detailed domain dependency information by specifying the –l option.

primary# ldm list-dependencies -l
DOMAIN         DEPENDENCY      TYPE      DEVICE
primary
svcdom
ldg0           primary         VDISK     primary-vds0/vdisk0
                               VNET      primary-vsw0/vnet0
               svcdom          VDISK     svcdom-vds0/vdisk1
                               VNET      svcdom-vsw0/vnet1
ldg1           primary         VDISK     primary-vds0/vdisk0
                               VNET      primary-vsw0/vnet0
                               IOV       /SYS/MB/NET0/IOVNET.PF0.VF0
               svcdom          VDISK     svcdom-vds0/vdisk1
                               VNET      svcdom-vsw0/vnet1
                               IOV       /SYS/MB/NET2/IOVNET.PF0.VF0

The following example shows detailed information about dependents grouped by their dependencies by specifying both the –l and –r options.

primary# ldm list-dependencies -r -l
DOMAIN         DEPENDENT       TYPE      DEVICE
primary        ldg0            VDISK     primary-vds0/vdisk0
                               VNET      primary-vsw0/vnet0
               ldg1            VDISK     primary-vds0/vdisk0
                               VNET      primary-vsw0/vnet0
                               IOV       /SYS/MB/NET0/IOVNET.PF0.VF0
svcdom         ldg0            VDISK     svcdom-vds0/vdisk1
                               VNET      svcdom-vsw0/vnet1
               ldg1            VDISK     svcdom-vds0/vdisk1
                               VNET      svcdom-vsw0/vnet1
                               IOV       /SYS/MB/NET2/IOVNET.PF0.VF0
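In both listings, a blank DOMAIN or DEPENDENCY field means "same as the previous row". The following illustrative Python sketch (assuming the fixed column offsets shown above; not an official parser) folds such a listing into a per-domain set of service-domain dependencies:

```python
# Illustrative sketch: fold a captured `ldm list-dependencies -l` listing
# into a map of domain -> set of domains it depends on. Blank DOMAIN and
# DEPENDENCY fields inherit the value from the previous row.
listing = (
    "ldg0           primary         VDISK     primary-vds0/vdisk0\n"
    "                               VNET      primary-vsw0/vnet0\n"
    "               svcdom          VDISK     svcdom-vds0/vdisk1\n"
    "                               VNET      svcdom-vsw0/vnet1\n"
)

deps = {}
domain = dependency = None
for line in listing.splitlines():
    # Assumed fixed-width columns: DOMAIN at offset 0, DEPENDENCY at 15.
    dom_field = line[:15].strip()
    dep_field = line[15:31].strip()
    if dom_field:
        domain = dom_field
    if dep_field:
        dependency = dep_field
    if domain and dependency:
        deps.setdefault(domain, set()).add(dependency)
```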
Example 18 List Resource Groups

The following example lists information about the contents of each resource group.

primary# ldm list-rsrc-group
NAME                                    CORE  MEMORY   IO
/SYS/CMU1                               12    512G     4
/SYS/CMU2                               12    512G     4
/SYS/CMU3                               12    512G     4

The following example lists detailed information about the contents of the /SYS/CMU1 resource group.

primary# ldm list-rsrc-group -l /SYS/CMU1
NAME                                    CORE  MEMORY   IO
/SYS/CMU1                               12    512G     4
CORE
BOUND             CID
primary           (64,66,68,70,72,74,80,82,84,86,88,90)
MEMORY
PA               SIZE             BOUND
0x201ff0000000   256M             _sys_
0x20000e400000   412M             _sys_
0x200000000000   102M             _sys_
0x200006600000   32M              _sys_
0x200030000000   129792M          primary
0x280000000000   128G             primary
IO
DEVICE           PSEUDONYM        BOUND
pci@500          pci_8            primary
pci@540          pci_9            primary
pci@580          pci_10           primary
pci@5c0          pci_11           primary
Example 19 Obtaining Inter-Vnet Link Status

The following examples show information about whether inter-vnet links are enabled or disabled when inter-vnet-link=auto.

  • This ldm list -o network output shows that inter-vnet-link=auto is set for primary-vsw1 and that the number of virtual networks is less than or equal to the maximum specified by the ldmd/auto_inter_vnet_link_limit SMF property. As a result, inter-vnet links are enabled and the INTER-VNET-LINK field shows on/auto.

    # ldm list -o network primary
    NAME            
    primary         
    
    MAC
        00:21:28:c1:40:5e
    
    VSW
        NAME         MACADDRESS          NET-DEV   DVID|PVID|VIDs
        ----         ----------          -------   --------------
        primary-vsw0 00:14:4f:fb:e8:d8   net0      1|1|--      
                DEVICE          :switch@0        ID   :0             
                LINKPROP        :--              MTU  :1500          
                INTER-VNET-LINK :on              MODE :--            
    
        primary-vsw1 00:14:4f:f9:b6:21   --        1|1|--      
                DEVICE          :switch@1        ID   :1             
                LINKPROP        :--              MTU  :1500          
                INTER-VNET-LINK :on/auto         MODE :--
  • This ldm list -o network output shows that inter-vnet-link=auto is set for primary-vsw1 and that the number of virtual networks exceeds the maximum specified by the ldmd/auto_inter_vnet_link_limit SMF property. As a result, inter-vnet links are disabled and the INTER-VNET-LINK field shows off/auto.

    # ldm list -o network primary
    NAME            
    primary         
    
    MAC
        00:21:28:c1:40:5e
    
    VSW
        NAME         MACADDRESS          NET-DEV   DVID|PVID|VIDs
        ----         ----------          -------   --------------
        primary-vsw0 00:14:4f:fb:e8:d8   net0      1|1|--      
                DEVICE          :switch@0        ID   :0             
                LINKPROP        :--              MTU  :1500          
                INTER-VNET-LINK :on              MODE :--            
    
        primary-vsw1 00:14:4f:f9:b6:21   --        1|1|--      
                DEVICE          :switch@1        ID   :1             
                LINKPROP        :--              MTU  :1500          
                INTER-VNET-LINK :off/auto        MODE :--
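The auto behavior shown above amounts to a simple threshold test against the ldmd/auto_inter_vnet_link_limit SMF property. This tiny model is illustrative only (the function name is hypothetical; only the property name is real):

```python
# Illustrative model of the inter-vnet-link=auto decision described above:
# links stay enabled only while the number of virtual networks does not
# exceed the ldmd/auto_inter_vnet_link_limit SMF property.
def inter_vnet_link_state(num_vnets, auto_inter_vnet_link_limit):
    if num_vnets <= auto_inter_vnet_link_limit:
        return "on/auto"
    return "off/auto"
```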
Example 20 Creating Virtual Switches and Virtual Networks that Report Link States
  • The following example adds the ldg1-vsw0 virtual switch to the ldg1 guest domain. The virtual switch has a net-dev value of net1 and specifies linkprop=phys-state, so it reports the state of the physical link.

    primary# ldm add-vsw net-dev=net1 linkprop=phys-state ldg1-vsw0 ldg1
    primary# ldm list -o network ldg1
    ...
    VSW
        NAME         MACADDRESS          NET-DEV   DVID|PVID|VIDs
        ----         ----------          -------   --------------
        ldg1-vsw0    00:14:4f:f8:3e:af   net1      1|1|--
                DEVICE          :switch@0        ID   :0
                LINKPROP        :phys-state      MTU  :1500
                INTER-VNET-LINK :on/auto         MODE :--
    ...
  • The following example adds the ldg1-vnet1 virtual network with linkprop=phys-state to the ldg1-vsw0 virtual switch on the ldg1 guest domain.

    primary# ldm add-vnet linkprop=phys-state ldg1-vnet1 ldg1-vsw0 ldg1
    primary# ldm list -o network ldg1
    ...
    NETWORK
     
        NAME         SERVICE                MACADDRESS          PVID|PVLAN|VIDs
        ----         -------                ----------          ---------------
        ldg1-vnet1   ldg1-vsw0@ldg1         00:14:4f:fb:86:00   1|--|--
                DEVICE     :network@1       ID   :1
                LINKPROP   :phys-state      MTU  :1500
                MAXBW      :--              MODE :--
                CUSTOM     :disable
                PRIORITY   :--              COS  :--
                PROTECTION :--
    ...
  • The following example adds the ldg1-vnet2 virtual network with linkprop=phys-state to the primary-vsw0 virtual switch on the ldg1 guest domain. The virtual network has linkprop=phys-state even though the primary-vsw0 virtual switch does not.

    primary# ldm add-vnet linkprop=phys-state ldg1-vnet2 primary-vsw0 ldg1
    primary# ldm list -o network ldg1
    ...
    VSW
        NAME         MACADDRESS          NET-DEV   DVID|PVID|VIDs
        ----         ----------          -------   --------------
        primary-vsw0 00:14:4f:f8:3c:a0   net0      1|1|--
                DEVICE          :switch@0        ID   :0
                LINKPROP        :--              MTU  :1500
                INTER-VNET-LINK :on/auto         MODE :--
    
    NETWORK
        NAME         SERVICE                MACADDRESS          PVID|PVLAN|VIDs
        ----         -------                ----------          ---------------
        ldg1-vnet2   primary-vsw0@primary   00:14:4f:fa:b0:bd   1|--|--
                DEVICE     :network@2       ID   :2
                LINKPROP   :phys-state      MTU  :1500
                MAXBW      :--              MODE :--
                CUSTOM     :disable
                PRIORITY   :--              COS  :--
                PROTECTION :--
    ...
Example 21 Listing Custom Property Values

The following ldm list command shows a configuration that has the custom property set to enable, max-custom-macs=3, and max-custom-vlans=3.

primary# ldm list ldg1
...
    NAME         SERVICE                MACADDRESS          PVID|PVLAN|VIDs
    ----         -------                ----------          ---------------
    temp         primary-vsw0@primary   00:14:4f:fb:03:fd   1|--|--     
            DEVICE     :network@4       ID   :4             
            LINKPROP   :phys-state      MTU  :1500          
            MAXBW      :--              MODE :--            
            CUSTOM     :enable
            MAX-CUSTOM-MACS:3           MAX-CUSTOM-VLANS:3
            PRIORITY   :--              COS  :--            
            PROTECTION :--
...

Exit Status

The following exit values are returned:

0

Successful completion.

>0

An error occurred.

Attributes

See the attributes(7) man page for a description of the following attributes.

Attribute Type         Attribute Value
Availability           pkg:/system/ldoms/ldomsmanager
Interface Stability    Uncommitted

See Also

attributes(7), dumpadm(8), ifconfig(8), shutdown(8), vntsd(8)

Oracle VM Server for SPARC 3.6 Administration Guide