The root domain is the owner of the PCIe bus and is responsible for initializing and managing the bus. The root domain must be active and running a version of the Oracle Solaris OS that supports the DIO or SR-IOV feature. Shutting down, halting, or rebooting the root domain interrupts access to the PCIe bus. When the PCIe bus is unavailable, the PCIe devices on that bus are affected and might become unavailable.
The behavior of I/O domains with PCIe endpoint devices is unpredictable when the root domain is rebooted while those I/O domains are running. For instance, I/O domains with PCIe endpoint devices might panic during or after the reboot. After the root domain reboots, you must manually stop and start each affected domain.
Note that if the I/O domain is resilient, it can continue to operate even if the root domain that is the owner of the PCIe bus becomes unavailable. See I/O Domain Resiliency.
To work around these issues, perform one of the following steps:
Manually shut down any domains on the system that have PCIe endpoint devices assigned to them before you shut down the root domain.
This step ensures that these domains are cleanly shut down before you shut down, halt, or reboot the root domain.
To find all the domains that have PCIe endpoint devices assigned to them, run the ldm list-io command. This command enables you to list the PCIe endpoint devices that have been assigned to domains on the system. For a detailed description of this command output, see the ldm(1M) man page.
For each domain found, stop the domain by running the ldm stop command.
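The two steps above can be combined into a small script. The following is only a sketch: it assumes that the parseable output of ldm list-io -p includes a d= field naming the domain that owns each assigned device, which may differ across Logical Domains Manager releases, so verify the field names on your system before relying on it.

```shell
#!/bin/sh
# Stop every domain that has a PCIe endpoint device assigned to it.
# ASSUMPTION: "ldm list-io -p" emits records with a "d=<domain>" field
# for each assigned device; confirm against your ldm release.
for dom in $(ldm list-io -p | sed -n 's/.*d=\([^|]*\).*/\1/p' | sort -u); do
    # Never stop the root domain itself.
    [ "$dom" = "primary" ] && continue
    ldm stop "$dom"
done
```

Running this from the control domain before shutting down, halting, or rebooting the root domain ensures that the dependent domains are stopped cleanly rather than being reset.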
Configure a domain dependency relationship between the root domain and the domains that have PCIe endpoint devices assigned to them.
This dependency relationship ensures that domains with PCIe endpoint devices are automatically restarted when the root domain reboots for any reason.
Note that this dependency relationship forcibly resets those domains, so they are not shut down cleanly. However, the dependency relationship does not affect any domains that were manually shut down.
primary# ldm set-domain failure-policy=reset primary
primary# ldm set-domain master=primary domain-name
The following example describes how you can configure failure policy dependencies in a configuration that has a non-primary root domain and I/O domains.
In this example, ldg1 is a non-primary root domain. ldg2 is an I/O domain that has either PCIe SR-IOV virtual functions or PCIe endpoint devices assigned from a root complex that is owned by the ldg1 domain.
primary# ldm set-domain failure-policy=stop ldg1
primary# ldm set-domain master=ldg1 ldg2
This dependency relationship ensures that the I/O domain is stopped when the ldg1 root domain reboots.
If the non-primary root domain reboots, this dependency relationship ensures that the I/O domain is stopped. Start the I/O domain manually after the non-primary root domain boots.
primary# ldm start ldg2
If the primary domain reboots, this policy setting stops both the non-primary root domain and the dependent I/O domains. When the primary domain boots, you must start the non-primary root domain first. After the non-primary root domain boots, start the I/O domain.
primary# ldm start ldg1
Wait for the ldg1 domain to become active and then start the I/O domain.
primary# ldm start ldg2
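The restart sequence above can be scripted so that the I/O domain is not started until the root domain is active. The following sketch assumes that the parseable output of ldm list -p reports a state=active field for a running domain; verify the exact field name for your Logical Domains Manager release before using it:

```shell
#!/bin/sh
# Start the non-primary root domain, wait for it to become active,
# then start the dependent I/O domain.
ldm start ldg1
# ASSUMPTION: "ldm list -p ldg1" includes "state=active" once the
# domain is up; adjust the pattern to match your release's output.
while ! ldm list -p ldg1 | grep -q 'state=active'; do
    sleep 5
done
ldm start ldg2
```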