This section describes system administration bugs in Solaris 10 OS.
A system that runs the Sun Patch Manager Tool 2.0 can manage remote systems that run Patch Manager Tool, including Sun Patch Manager Tool 1.0.
However, a system with an earlier version of Patch Manager Tool cannot manage remote systems that run Patch Manager Tool 2.0. Earlier versions include the following:
Sun Patch Manager Base Software 1.x
Sun Patch Manager Tool 1.0
Note - Common Information Model/Web Based Enterprise Management (CIM/WBEM) support for Patch Manager Tool does not exist in the Solaris 8 OS. Consequently, remote management with Patch Manager does not apply to Solaris 8 systems.
Sun Remote Services (SRS) Net Connect is supported only in the global zone. Error messages are displayed in either of the following situations:
You install SRS Net Connect in a local zone.
You create a local zone while SRS Net Connect is installed in the global zone.
The error messages are as follows:
*** package SUNWcstu failed to install - interactive administration required:
    Interactive request script supplied by package
pkgadd: ERROR: request script did not complete successfully
Installation of SUNWcstu was suspended (interaction required).
No changes were made to the system.
*** package SUNWfrunc failed to install - interactive administration required:
    Interactive request script supplied by package
pkgadd: ERROR: request script did not complete successfully
Installation of SUNWfrunc was suspended (interaction required).
No changes were made to the system.
Workaround: Ignore the error messages.
While installing a non-global zone by using the zoneadm command, error or warning messages might be displayed during package installation. The messages are similar to the following example:
Preparing to install zone zone1.
Creating list of files to copy from the global zone.
Copying 2348 files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize 790 packages on the zone.
Initialized 790 packages on zone.
Zone zone1 is initialized.
Installation of the following packages generated errors:
  SUNWjhrt SUNWmcc SUNWjhdev SUNWnsb SUNWmcon SUNWmpatchmgr
Installation of the following packages generated warnings:
  SUNWj3rt SUNWmc SUNWwbmc SUNWmga SUNWdclnt SUNWlvma SUNWlvmg SUNWrmui SUNWdoc SUNWpl5m SUNWpmgr
Package installation problems are also recorded in /export/zone1/root/var/sadm/system/logs/install_log, which contains a log of the zone installation.
Note - The non-global zone can still be used even though these messages have been reported. Issues with package installation existed in earlier Solaris Express and Solaris 10 Beta releases. However, no notification about these problems was being generated. Beginning with this Solaris release, these errors are now properly reported and logged.
If you attempt to launch the Solaris Product Registry administration utility in a zone, the attempt fails. During zone installation, the Solaris Product Registry database, productregistry, is not duplicated in the zone. Consequently, the utility cannot run in a zone.
Workaround: As superuser, copy the productregistry database to the zone.
# cp /var/sadm/install/productregistry zone_path/var/sadm/install/
In the previous command, zone_path is the path to the root directory of the zone that you created.
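As a concrete illustration, assuming a zone whose root directory is /export/zones/zone1 (a hypothetical path; substitute the actual root directory of your zone), the copy would look like this:

```shell
# Copy the global zone's Product Registry database into the zone.
# /export/zones/zone1 is a hypothetical zone root path.
cp /var/sadm/install/productregistry /export/zones/zone1/root/var/sadm/install/
```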
The patchadd command fails to reapply a patch under the following circumstances:
You patch a system that does not contain all the packages that are affected by the patch.
You later install the packages that were not installed when you applied the patch.
You reapply the patch to patch the newly installed packages.
The portion of the patch that applies to the package that you added later is not installed. A message that is similar to the following output is displayed.
patchadd ~tsk/patches/111111-01
Validating patches...
Loading patches installed on the system...
Done!
Loading patches requested to install.
Done!
The following requested patches are already installed on the system
Requested to install patch 111111-01 is already installed on the system.
No patches to check dependency.
Workaround: Choose one of the following workarounds.
Workaround 1: If you have not created zones on your system, use the patchadd command with the -t option to patch the system.
# patchadd -t patch-ID
In the previous command, patch-ID is the ID of the patch you want to apply.
Workaround 2: If you have created zones on your system, follow these steps.
Back out the patch.
# patchrm patch-ID
Install the additional packages that are not on the system but are affected by the patch.
# pkgadd -d device pkgabbrev
In the previous command, device specifies the absolute path to the package or packages that you want to install, and pkgabbrev specifies the abbreviated name of the package to install. You can specify multiple package names.
Reinstall the patch.
# patchadd patch-ID
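Taken together, the steps of Workaround 2 might look like the following transcript. The patch ID 111111-01 is the placeholder used in this section's examples, and the device path and package name SUNWfoo are hypothetical; substitute your actual values:

```shell
# Back out the partially applied patch (111111-01 is a placeholder ID).
patchrm 111111-01
# Install the package that was missing when the patch was first applied.
# /var/spool/pkg and SUNWfoo are placeholders.
pkgadd -d /var/spool/pkg SUNWfoo
# Reapply the patch so that it also covers the newly installed package.
patchadd 111111-01
```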
If you patch a global zone and then create a non-global zone, remote login services, such as rlogin and telnet, are not enabled in the new non-global zone. Consequently, you cannot log in remotely to that zone. This issue affects systems that have been patched with patches that deliver or modify the SUNWcsr package.
Workaround: Choose one of the following workarounds.
Workaround 1: If you have not yet booted the non-global zone, follow these steps.
In the global zone, change to the /var/svc/profile directory in the non-global zone.
global# cd zone_path/root/var/svc/profile
In the previous example, zone_path is the path to the non-global zone. You can determine the path to the non-global zone by typing the following command in a global zone.
global# zonecfg -z zonename info zonepath
Remove the inetd_services.xml profile.
global# rm inetd_services.xml
Create a symbolic link for inetd_services.xml that points to the inetd_generic.xml profile.
global# ln -s inetd_generic.xml inetd_services.xml
Boot the non-global zone.
For more information about how to boot a zone, see System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones.
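As a sketch, the whole of Workaround 1 for a hypothetical zone named myzone with zone path /export/zones/myzone might look like the following (the zone name and path are illustrative only):

```shell
# Find the zone path (myzone is a hypothetical zone name).
zonecfg -z myzone info zonepath
# Replace the zone's inetd_services.xml profile with a link
# to the inetd_generic.xml profile.
cd /export/zones/myzone/root/var/svc/profile
rm inetd_services.xml
ln -s inetd_generic.xml inetd_services.xml
# Boot the non-global zone.
zoneadm -z myzone boot
```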
Workaround 2: If you have already booted the non-global zone, follow these steps.
Perform the steps that are listed in the previous workaround.
In the non-global zone, enable the services that are listed in the /var/svc/profile/inetd_services.xml profile.
my-zone# svccfg apply /var/svc/profile/inetd_services.xml
Reboot the non-global zone.
Workaround 3: Before you create zones on the system, apply the appropriate patch for your platform.
For SPARC based systems, apply patch ID 119015-01, or a later version.
For x86 based systems, apply patch ID 119016-01, or a later version.
If you use the smdiskless command to delete a diskless client, the command fails. The diskless client is not removed from the system databases. The following error message is displayed:
Failing with error EXM_BMS.
Workaround: Unshare the /export partition before adding the client.
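For example, assuming the diskless client data resides under /export (the default location), the sequence might look like this sketch; your share name and options may differ:

```shell
# Temporarily stop sharing /export so that the smdiskless
# operation can complete.
unshare /export
# ... run the smdiskless command here ...
# Share /export again afterward.
share -F nfs /export
```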
Installation of Net Connect 3.1.1 fails if you select the product at the beginning of a full Solaris 10 installation. This failure occurs when you are installing by using the Solaris 10 Operating System DVD. At the completion of the OS installation, the following error message is recorded in the Net Connect install log in /var/sadm/install/logs/:
Installation of SUNWSRSPX failed.
Error: pkgadd failed for SUNWsrspx
Install complete. Package: SUNWsrspx
Workaround: After the OS installation is completed, follow these steps:
Insert the Solaris 10 Operating System DVD or the Solaris 10 Software - CD 4.
Change to the directory of the Net Connect product.
Run the Net Connect installer.
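For example, if the DVD automounts at /cdrom/cdrom0, the sequence might look like the following. The product directory and installer names are hypothetical; check the media for the actual locations:

```shell
# Mount point and product path are illustrative only.
cd /cdrom/cdrom0/Solaris_10/Product/netconnect
# Run the Net Connect installer found in that directory
# (the installer's actual name may differ).
./installer
```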
Note - To download the latest version of the Sun Net Connect software and release notes, go to the Sun Net Connect portal at https://srsnetconnect.sun.com.
A boot failure that involves the Solaris Flash archive might occur under the following circumstances:
You create a Solaris Flash archive on a system that is using a libc C library with certain hardware-support capabilities.
You install the archive on a clone system that has different hardware-support capabilities.
When you attempt to boot the clone system, the following error message is displayed:
WARNING: init exited with fatal signal 9; restarting.
Workaround: Follow these steps.
Before you create the archive, unmount the /lib/libc.so.1 library on the master system.
# umount /lib/libc.so.1
This command enables the master system to use the basic version of the libc C library.
Create the Solaris Flash archive on the master system.
For more information about how to create Solaris Flash archives, see the Solaris 10 Installation Guide: Solaris Flash Archives (Creation and Installation).
Mount the hardware capability libc library on /lib/libc.so.1 on the master system.
# mount -O -F lofs /usr/lib/libc/libc_hwcap2.so.1 /lib/libc.so.1
Install the Solaris Flash archive on the clone system.
For more information about how to install Solaris Flash archives, see the Solaris 10 Installation Guide: Solaris Flash Archives (Creation and Installation).
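The full sequence on the master system might look like the following sketch. The archive name and destination are placeholders, and libc_hwcap2.so.1 assumes that particular hardware capability library was mounted on the master; check which libc variant your system uses:

```shell
# Expose the basic libc by removing the hwcap lofs mount.
umount /lib/libc.so.1
# Create the flash archive (name and destination are placeholders).
flarcreate -n myarchive /export/archives/myarchive.flar
# Restore the hardware capability libc mount on the master.
mount -O -F lofs /usr/lib/libc/libc_hwcap2.so.1 /lib/libc.so.1
```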
If you use the smosservice delete command to remove a diskless client service, the command does not successfully remove all the service directories.
Workaround: Follow these steps.
Make sure that no clients exist that use the service.
# unshare /export/exec/Solaris_10_sparc.all
# rm -rf /export/exec/Solaris_10_sparc.all
# rm -rf /export/exec/.copyofSolaris_10_sparc.all
# rm -rf /export/.copyofSolaris_10
# rm -rf /export/Solaris_10
# rm -rf /export/share
# rm -rf /export/root/templates/Solaris_10
# rm -rf /export/root/clone/Solaris_10
# rm -rf /tftpboot/inetboot.sun4u.Solaris_10
Remove the following entry from the /etc/bootparams file.
Note - Remove this entry only if this file server does not provide functions or resources for any other services.
Remove the following entry from the /etc/dfs/dfstab file.
share -F nfs -o ro /export/exec/Solaris_10_sparc.all/usr
Modify the /var/sadm/system/admin/services/Solaris_10 file.
If the file server is not running the Solaris 10 OS, delete the file.
If the file server is running the Solaris 10 OS, remove all entries after the first three lines. The removed lines indicate the service USR_PATH and SPOOLED ROOT packages in /export/root/templates/Solaris_10 and the supported platforms.
If you use the patchadd command to install patches over NFS from another system, the command fails. The following example shows a failed patchadd operation and the error message that is displayed:
Validating patches...
Loading patches installed on the system...
[...]
Loading patches requested to install.
[...]
Checking patches that you specified for installation.
[...]
Approved patches will be installed in this order:
[...]
Checking local zones...
[...]
Summary for zones:
[...]
Patches that passed the dependency check:
[...]
Patching global zone
Adding patches...
Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...
Patch Patch_ID has been successfully installed.
See /var/sadm/patch/Patch_ID/log for details
Patch packages installed:
  SUNWroute
[...]
Adding patches...
The patch directory /dev/.SUNW_patches_0111105334-1230284-00004de14dcb29c7
cannot be found on this system.
[...]
Patchadd is terminating.
Workaround: Manually copy all of the patches to be installed from the NFS server to the local system first. Then use the patchadd command to install the patches from the directory on the local system where the patches were copied.
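For example, if the patches reside on an NFS server reachable at /net/patchserver/patches (a hypothetical path, with 111111-01 as a placeholder patch ID), copy them to local storage first:

```shell
# Copy the patch from the NFS server to a local staging directory.
mkdir -p /var/tmp/patches
cp -r /net/patchserver/patches/111111-01 /var/tmp/patches/
# Install from the local copy.
patchadd /var/tmp/patches/111111-01
```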
If you use the lucreate command to create RAID-1 volumes (mirrors) that do not have device entries in the /dev/md directory, the command fails. You cannot mirror file systems with the lucreate command unless you first create the mirrors with Solaris Volume Manager software.
Workaround: Create the mirrored file systems with Solaris Volume Manager software, then create the new boot environment with the lucreate command.
For more information about the lucreate command, see the lucreate(1M) man page or the Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
For more information about how to create mirrored file systems with Solaris Volume Manager software, see Solaris Volume Manager Administration Guide.
A system panic that occurs while you are performing a suspend-and-resume (cpr) cycle might cause the system to hang. More typically, this problem is observed in Sun Blade 2000 workstations that have the XVR-1000 graphics accelerator installed. Rarely, other SPARC based systems might similarly hang during a panic. When the panic occurs, the core dump is not saved, and no prompt appears on the console. The problem might be more prevalent if the kernel debugger (kadb) is active.
Workaround: To restore the system to a usable state, manually reboot the system.
If you attempt to stop the system by pressing keyboard sequences such as Stop-A or L1-A, the system might panic. An error message similar to the following example is displayed:
panic[cpu2]/thread=2a100337d40: pcisch2 (pci@9,700000): consistent dma sync timeout
Workaround: Do not use keyboard sequences to force the system to enter OpenBoot PROM.
The ipfs command saves and restores information about the state of the Network Address Translation (NAT) and packet-filtering state tables. This utility prevents network connections from being disrupted if the system reboots. If you issue the command with the -W option, ipfs fails to save the kernel state tables. The following error message is displayed:
state:SIOCSTGET: Bad address
When you create a new boot environment by using lucreate, the permissions on the file-system mount points are not preserved. Consequently, some user processes fail. If the failure occurs in a clustering environment, the cluster brings down the nodes, and you must then boot from the CD-ROM to repair the permissions on the mount points.
Workaround: Follow these steps.
Create the new boot environment.
# lucreate -n newbe -m /:c0t0d0s0:ufs -m /var:c1t0d0s0:ufs -m /usr:c2t0d0s0:ufs
In the previous example, the lucreate command creates the newbe boot environment. This example defines the following file systems and mount points.
The root (/) file system is mounted on c0t0d0s0.
The var file system is mounted on c1t0d0s0.
The usr file system is mounted on c2t0d0s0.
Mount the root file system of the new boot environment.
# mount /dev/dsk/c0t0d0s0 /mnt
For each mount point that is defined for the boot environment, change the permissions to 755.
# chmod 755 /mnt/var
# chmod 755 /mnt/usr
Unmount the root file system.
# umount /dev/dsk/c0t0d0s0
After modifying the contents of snmpd.conf, you can issue the command kill -HUP snmpd-pid, where snmpd-pid is the process ID of the snmpd process. This command stops the snmpd process and then sends a signal to the System Management Agent's master agent (snmpd) to reread snmpd.conf and apply your modifications. However, the command might not always cause the master agent to reread the configuration file. Consequently, using the command might not always activate modifications in the configuration file.
Workaround: Instead of using kill -HUP, restart the System Management Agent after you modify snmpd.conf. Perform the following steps:
Type the following command:
# /etc/init.d/init.sma restart
If you boot a Sun LX50 that has a Service partition and the Solaris 10 OS for x86 installed, pressing the F4 function key to boot the Service partition, when given the option, causes the screen to go blank. The system then fails to boot the Service partition.
Workaround: Do not press the F4 key when the BIOS Bootup Screen is displayed. After a time-out period, the Current Disk Partition Information screen is displayed. Select the number in the Part# column that corresponds to type=DIAGNOSTIC. Press the Return key. The system boots the Service partition.
The Solaris WBEM Services 2.5 daemon cannot locate providers that are written to the com.sun.wbem.provider interface or to the com.sun.wbem.provider20 interface. Even if you create a Solaris_ProviderPath instance for a provider that is written to these interfaces, the Solaris WBEM Services 2.5 daemon does not locate the provider.
Workaround: To enable the daemon to locate such a provider, stop and restart the Solaris WBEM Services 2.5 daemon.
# /etc/init.d/init.wbem stop
# /etc/init.d/init.wbem start
Note - If you use the javax API to develop your provider, you do not need to stop and restart the Solaris WBEM Services 2.5 daemon. The Solaris WBEM Services 2.5 daemon dynamically recognizes javax providers.
If you choose to use the com.sun application programming interface rather than the javax application programming interface to develop your WBEM software, only Common Information Model (CIM) remote method invocation (RMI) is fully supported. Other protocols, such as XML/HTTP, are not guaranteed to work completely with the com.sun application programming interface.
The following table lists examples of invocations that execute successfully under RMI but fail under XML/HTTP:
The Solaris Management Console Mounts and Shares tool cannot modify mount options on system-critical file systems such as root (/), /usr, and /var.
Workaround: Choose one of the following workarounds:
Use the remount option with the mount command.
# mount -F file-system-type -o remount,additional-mount-options \
device-to-mount mount-point
Note - Mount property modifications that are made by using the remount option with the mount command are not persistent. In addition, all mount options that are not specified in the additional-mount-options portion of the previous command inherit the default values that are specified by the system. See the mount_ufs(1M) man page for more information.
Edit the appropriate entry in the /etc/vfstab file to modify the file-system mount properties, then reboot the system.
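For example, to add the logging option to a UFS file system, you would edit its /etc/vfstab entry as in the following sketch (the device names and mount point are hypothetical):

```shell
# /etc/vfstab entry before the change:
#   /dev/dsk/c0t0d0s7  /dev/rdsk/c0t0d0s7  /export  ufs  2  yes  -
# After adding the logging mount option:
#   /dev/dsk/c0t0d0s7  /dev/rdsk/c0t0d0s7  /export  ufs  2  yes  logging
```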