This section describes virtualization features that are new in this release. These features provide efficient cloud virtualization without sacrificing performance and enable you to run large-scale applications in the cloud while making optimized use of resources.
The immutable file system feature that was first introduced in Oracle Solaris 11 11/11 (read-only root for non-global zones) has been significantly extended so that immutable zones are now much easier to adopt and use.
Previously, to make certain configuration changes in immutable zones, you had to make the zone temporarily mutable. In Oracle Solaris 11.4, you can run in the Trusted Path Domain (TPD) while the zone is still immutable for other users.
To run in the TPD, do one of the following:
Add the user to the /etc/security/tpdusers file and set start/trusted_path to true on the console-login service.
For remote RAD access to the Trusted Path, set method_context/trusted_path to true on the rad:remote service and add tpd=yes to the user_attr entry for each user that is allowed remote access to the TPD.
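The two configurations described above might look like the following; the user name admin1 is a placeholder, and you should verify the service FMRIs and property names against your release before applying them:

```shell
# Allow user admin1 to enter the TPD at the zone console.
echo "admin1" >> /etc/security/tpdusers
svccfg -s svc:/system/console-login:default \
    setprop start/trusted_path = boolean: true
svcadm refresh svc:/system/console-login:default

# Allow the same user remote RAD access to the Trusted Path.
svccfg -s svc:/system/rad:remote \
    setprop method_context/trusted_path = boolean: true
svcadm refresh svc:/system/rad:remote
usermod -K tpd=yes admin1
```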
These procedures are described in detail in Administering an Immutable Zone by Using the Trusted Path Domain in Creating and Using Oracle Solaris Zones.
In addition to administrators running in the TPD, you can configure some services to run in the TPD as described in SMF Services in Immutable Zones in Creating and Using Oracle Solaris Zones.
Software in Silicon Support in Oracle Solaris kernel zones is enhanced to include Silicon Secured Memory (SSM). SSM adds real-time checking of access to data in memory to help protect against malicious intrusion and flawed program code in production for greater security and reliability.
SSM protection is utilized by Oracle Database 12c by default, and is easy to turn on for other applications. See Software in Silicon Features on Kernel Zones in Creating and Using Oracle Solaris Kernel Zones.
See also Silicon Secured Memory Security Exploit Mitigations.
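For applications that do not enable SSM themselves, one way to opt in on SPARC M7 and M8 systems is to preload the ADI-aware memory allocator before launching the program; the application name myapp is hypothetical:

```shell
# Run a 64-bit application with SSM memory checking enabled by
# preloading the ADI-aware allocator (SPARC M7/M8 hardware required).
LD_PRELOAD_64=/usr/lib/64/libadimalloc.so.1 ./myapp
```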
This feature enables Oracle Solaris software to make use of the SPARC M7 and M8 Data Analytics Accelerator (DAX) query functionality by using the High Performance Kernels (HPK) library of the Oracle in-memory database product, when running on an Oracle Solaris kernel zone.
The HPK library in the RDBMS product provides hardware-optimized operations on vector or columnar data for In-Memory Columnar (IMC) databases. The library uses hardware-specific capabilities to efficiently perform operations, and can make use of the DAX query capabilities when available in an Oracle Solaris kernel zone.
For more information, see Oracle Solaris Zones Configuration Resources, Software in Silicon Features on Kernel Zones in Creating and Using Oracle Solaris Kernel Zones, and the zonecfg(8) man page.
VLANs split a single Layer 2 (L2) network into multiple logical networks, each of which is its own broadcast domain. As a result, all devices connected to the same VLAN see one another's broadcast frames irrespective of their physical location, while remaining isolated from other VLANs.
Previously, Oracle Solaris kernel zones could assert only one VLAN ID. In the Oracle Solaris 11.4 release, you can assert additional VLAN IDs per anet by specifying the new vlan resource type in a zone configuration.
For more information, see Configuring Virtual LANs in Kernel Zones in Creating and Using Oracle Solaris Kernel Zones.
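A zone configuration along the following lines adds an extra VLAN ID to a kernel zone anet; the zone name, anet id, and VLAN ID are illustrative, so check the zonecfg(8) man page for the exact property names on your release:

```shell
# Assert an additional VLAN ID on an existing anet of kernel zone kzone1.
zonecfg -z kzone1
zonecfg:kzone1> select anet id=0
zonecfg:kzone1:anet> add vlan
zonecfg:kzone1:anet:vlan> set vlan-id=21
zonecfg:kzone1:anet:vlan> end
zonecfg:kzone1:anet> end
zonecfg:kzone1> commit
zonecfg:kzone1> exit
```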
The zones delegated restarter (system/zones:default) enables control of boot order using dependencies and priorities. In previous Oracle Solaris releases, this service did not provide a way to prioritize and manage zone boot order. If applications in different zones on the same system depend on each other, for example, you might want to control the order in which those zones boot rather than booting them all in parallel.
In addition to providing milestones for zone booting, the zones delegated restarter also provides the ability to add dependencies for other zones or other services. For example, Zone-C can be configured to start after Zone-A and Zone-B have finished booting, or after a firewall service has started.
See the svc.zones(8) and zonecfg(8) man pages for information about the new boot-priority and smf-dependency properties.
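As a sketch of the Zone-C example above, the new properties might be set as follows; the property value and the zone service FMRI shown here are assumptions, so confirm them in the svc.zones(8) and zonecfg(8) man pages:

```shell
# Give Zone-C a boot dependency on Zone-A and a higher boot priority.
zonecfg -z Zone-C
zonecfg:Zone-C> set boot-priority=high
zonecfg:Zone-C> add smf-dependency
zonecfg:Zone-C:smf-dependency> set fmri=svc:/system/zones/zone:Zone-A
zonecfg:Zone-C:smf-dependency> end
zonecfg:Zone-C> commit
zonecfg:Zone-C> exit
```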
The ability to change the configuration of an Oracle Solaris zone without causing an outage to the end user or service is key to meeting present-day service level agreements (SLAs). With Live Zone Reconfiguration (LZR), you can make changes to an Oracle Solaris zone configuration and apply them to a running zone, as either a permanent or a temporary change, without rebooting the zone.
In the Oracle Solaris 11.4 release, you can add ZFS datasets to, and remove them from, an Oracle Solaris native zone by using the LZR methodology.
For more information, see Live Zone Reconfiguration of Kernel Zones in Creating and Using Oracle Solaris Kernel Zones.
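For example, a dataset can be added to the persistent configuration and then applied to the running zone without a reboot; the zone and dataset names here are placeholders:

```shell
# Add a ZFS dataset to the zone configuration, then apply it live.
zonecfg -z myzone "add dataset; set name=rpool/export/appdata; end"
zoneadm -z myzone apply
```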
Oracle Solaris 11.4 enables you to use the zoneadm command with the move subcommand to move an installed Oracle Solaris zone across different storage URIs. You can perform the following actions:
Move an Oracle Solaris zone from a local file system (default) to shared storage
Move an Oracle Solaris zone from shared storage to a local file system
Move an Oracle Solaris zone from one shared storage location to another, while also changing the zonepath
Change the zonepath without moving the Oracle Solaris zone installation
For more information, see the solaris(7), zones(7), and zoneadm(8) man pages. You can also see Creating and Using Oracle Solaris Zones.
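The simplest of these actions, changing the zonepath without changing storage, looks like the following; the zone name and paths are placeholders, and the forms that take shared-storage URIs are described in the zoneadm(8) man page:

```shell
# Change the zonepath of an installed zone without moving it to
# different storage.
zoneadm -z myzone move /system/zones/myzone-new
```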
In Oracle Solaris 11.4, Oracle Solaris zones in the installed state that use shared storage can be migrated to another system by using the zoneadm command. Oracle Solaris kernel zones in the installed or suspended state that use shared storage can also be migrated. Non-running Oracle Solaris zones and Oracle Solaris kernel zones can be evacuated by using the sysadm command, which enhances zone availability during scheduled global zone downtime.
For more information, see the zoneadm(8), solaris(7), solaris-kz(7), and sysadm(8) man pages.
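For example (host and zone names are placeholders, and the exact option sets are described in the man pages cited above):

```shell
# Cold-migrate an installed zone that uses shared storage to another host.
zoneadm -z zone1 migrate ssh://root@target-host

# Before scheduled downtime, place the global zone in maintenance mode
# and evacuate its zones to their configured evacuation targets.
sysadm maintain -s
sysadm evacuate
```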
In Oracle Solaris 11.4, the Virtual HBA subsystem supports physical HBA drivers that have multipathing enabled in their respective driver.conf files. This feature allows Oracle Solaris I/O Multipathing to be supported in a sun4v Service Domain on a per-HBA port basis, as described in Managing SAN Devices and I/O Multipathing in Oracle Solaris 11.4.
The vhba module has supported multipathing in the Guest Domain since the initial release of Oracle Solaris 11.3. Allowing multipathing in both domains of a sun4v system improves fault tolerance and I/O throughput to SCSI devices.
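Multipathing is typically controlled through an mpxio-disable entry in the relevant driver.conf file; a sketch for the fp Fibre Channel port driver follows, where the device path is a placeholder for an actual parent path on your system:

```shell
# In /etc/driver/drv/fp.conf, enable multipathing globally:
#   mpxio-disable="no";
# or for a single HBA port only:
#   name="fp" parent="<device-path>" port=0 mpxio-disable="no";
# Then enable and activate multipathing for the boot configuration:
stmsboot -e
```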
Oracle Solaris 11.4 enables you to configure a virtual SAN (Storage Area Network) device so that it represents an explicit set of physical SCSI devices. The original and default behavior of a vSAN device is to represent all physical SCSI devices that are reachable from the user-specified SCSI HBA initiator port.
In Oracle Solaris 11.4, you can enter commands to dynamically add explicit physical SCSI devices to, and remove them from, a specified vSAN device. By associating a specific vSAN device with a specific Guest Domain, you have complete control over which physical SCSI devices that Guest Domain can access.