SunVTS 6.4 Patch Set 1 Release Notes

The SunVTS™ 6.4 Patch Set 1 (PS1) software is designed for the Solaris™ 10 6/06 Operating System (OS) and is compatible with the Solaris 10 3/05 OS or later.



Note - All new features, tests, and test enhancements released in
SunVTS 6.4 are documented in the SunVTS 6.4 Test Reference manuals and the
SunVTS 6.4 User's Guide. These documents are included in the Solaris on Sun Hardware collection on the Solaris Documentation DVD, in the extra value (EV) directory. They are also available at: http://docs.sun.com


For the latest version of this document (820-3215-10), go to: http://www.sun.com/documentation

SunVTS 6.4 Patch Set 1 supports Sun Dual 10GbE XFP PCIe ExpressModule and Sun Quad GbE UTP x8 PCIe ExpressModule driver version 1.100.

SunVTS 6.4 Patch Set 1 supports AtlasQGC (nxge). You can find the latest updates for the nxge driver software at this web site:

http://www.sun.com/download

Select Networking.

For AtlasQGC, you can find the latest Sun x8 Express Quad Gigabit Ethernet UTP Low Profile adapter documentation at this web site:

http://www.sun.com/products-n-solutions/hardware/docs/Network_Connectivity


SunVTS Support for the Solaris OS on x86-Based Systems



Note - In this document these x86 related terms mean the following:
"x86" refers to the larger family of 64-bit and 32-bit x86 compatible products.
"x64" points out specific 64-bit information about AMD64 or EM64T systems.


Starting with the Solaris 10 OS, the SunVTS infrastructure and core diagnostics are available for x86 platforms. Starting with Solaris 10 3/05 HW1, SunVTS diagnostics for x86 platforms are supported in the AMD 64-bit environment for the SunVTS kernel (vtsk). All diagnostics except the System Test (systest) have been ported to 64-bit.

SunVTS is supported and tested on the following Sun x86 platforms:



Note - If you run SunVTS on an unsupported platform, a warning message appears and SunVTS stops.


You must install the x86 version of the SunVTS packages to run SunVTS on x86 platforms. The software packages use the same names as in the SPARC® environment. The SunVTS packages, delivered separately for x86 and SPARC Solaris platforms, are as follows:

The SunVTS components available for x86 Solaris platforms are as follows:

Infrastructure:

SunVTS tests:

Displaying SunVTS Package and Version Information

Use the following command to display SunVTS package information:


# pkginfo -l SUNWvts SUNWvtsr SUNWvtsts SUNWvtsmn

You can also use either of the following commands to display additional SunVTS package information:

Use either of the following commands to display SunVTS version information:


SunVTS on LDoms Enabled Systems

SunVTS 6.4 functionality is available in the control domain and guest domains on LDoms 1.0 enabled systems, that is, Sun Fire and SPARC Enterprise T1000 servers and Sun Fire and SPARC Enterprise T2000 servers.

SunVTS 6.4 functionality is also available in the control domain and guest domains on Sun SPARC Enterprise T5120 and T5220 servers with LDoms 1.0.1 software enabled.

SunVTS 6.3 functionality is available for all hardware configured in the control domain on Sun Fire and SPARC Enterprise T1000 and T2000 servers with LDoms 1.0 software enabled. If you attempt to run SunVTS 6.3 in a guest domain, the software exits after printing a message.

Performance Issues

This section describes performance issues that may be seen in a logical domain environment if the strands from one core are split across multiple domains.

Running High Stress VTS Tests Concurrently on Multiple Domains

When high-stress VTS tests are run concurrently on multiple domains, there is a high chance of moderate to serious performance degradation of the tests. The degree of degradation depends on the number of domains configured and on the number of tests run concurrently from those domains. This is because the logic inside the tests that selects only certain strands of CMT processors for testing shared hardware resources may not function properly in a virtualized environment.

When the CPUs are virtualized, tests running in multiple domains can be scheduled on strands of the same core, which would not otherwise occur. The resulting contention for the same hardware resources reduces performance. This has been addressed for the non-LDoms case, but that solution may not work in a logical domain environment because the physical CPU IDs are not available in a guest domain.

The performance impact is felt by tests that access hardware resources shared among multiple strands, such as fputest, dtlbtest, and l2sramtest. This issue may not occur if the domains are configured so that the virtual CPUs from any one core belong to a single logical domain only.
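The contention scenario described above can be sketched in code. The following toy example (not part of SunVTS; the domain names and strand counts are hypothetical) checks a domain configuration for physical cores whose strands are split across multiple logical domains, assuming an UltraSPARC T1-style layout of four strands per core:

```python
# Toy sketch: detect physical cores whose strands are split across
# more than one logical domain. Assumes 4 strands per core (as on
# UltraSPARC T1); the vcpu-to-physical mapping below is example data.

STRANDS_PER_CORE = 4

def core_of(physical_cpu_id):
    """Physical core that a given physical CPU (strand) belongs to."""
    return physical_cpu_id // STRANDS_PER_CORE

def shared_cores(domain_to_pcpus):
    """Return {core: domains} for cores split across multiple domains."""
    owners = {}  # core id -> set of domain names using that core
    for domain, pcpus in domain_to_pcpus.items():
        for pcpu in pcpus:
            owners.setdefault(core_of(pcpu), set()).add(domain)
    return {core: doms for core, doms in owners.items() if len(doms) > 1}

# Hypothetical configuration: ldg1 and ldg2 each hold two strands of
# core 1, so high-stress tests in both domains contend for core 1.
config = {
    "primary": [0, 1, 2, 3],   # all of core 0
    "ldg1":    [4, 5],         # half of core 1
    "ldg2":    [6, 7],         # other half of core 1
}
print(shared_cores(config))    # core 1 is split between ldg1 and ldg2
```

With this configuration, `shared_cores` flags core 1 as shared between ldg1 and ldg2; assigning all four strands of a core to one domain keeps the result empty and avoids the contention.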

CPU-ID Mapping

Another issue is that the CPU IDs reported in test messages are virtual CPU IDs. When a test reports a faulty CPU, you must refer to the physical-to-virtual CPU ID mapping information from the LDoms Manager to identify the actual faulty strand. The mapping can also change in some cases. Work is currently underway to print the chip ID and core ID along with the CPU ID in error messages, which will help isolate the faulty CPU to some extent. This feature will be available in a future release of SunVTS.

IO Tests

In a guest domain, disktest and nettest are the only supported IO tests; usbtest registers only if a virtual keyboard is present.