After you have set up your AutoClient system network using Host Manager, you will need to perform certain maintenance tasks.
This is a list of the step-by-step instructions in this chapter.
"How to Copy Patches to an OS Server's Patch Spool Directory"
"How to Back Out a Patch from the OS Server's Patch Spool Directory"
"How to Synchronize Patches Installed on AutoClient Systems with Patches Spooled on the OS Server"
"How to Update All AutoClient Systems With Their Back File Systems"
"How to Update a Single AutoClient System With Its Back File System"
"How to Update a Specific File System on an AutoClient System"
"How to Update More Than One AutoClient System With Its Back File System"
In its simplest form, you can think of a patch as a collection of files and directories that replace or update existing files and directories that are preventing proper execution of the software. The existing software is derived from a specified package format, which conforms to the Application Binary Interface. (For details about packages, see the System Administration Guide, Volume I.)
On diskless clients and AutoClient systems, all software resides on the server. For example, when you add a software patch to an AutoClient system, you don't actually install the patch on the client, because its local disk space is reserved for caching. Instead, you add the patch either to the server or to the client's root file system (which resides on the server), or both. An AutoClient system's root file system is typically in /export/root/hostname on the server.
Applying patches to clients is typically complicated because the patch may place software partially on the client's root file system and partially on the OS service used by that client.
To reduce the complexity of installing patches on diskless clients and AutoClient systems, the Solstice AutoClient product includes the admclientpatch command. Table 8-1 summarizes its options and use.
Table 8-1 admclientpatch Options and Use

| Option | Use |
|---|---|
| -a patch_dir/patch_id | Add a patch to a spool directory on the server. |
| -c | List all diskless clients, AutoClient systems, and OS services served by this server, along with the patches installed on each. |
| -p | List all currently spooled patches. |
| -r patch_id | Remove the specified patch from the spool directory. |
| -s | Synchronize all clients so that the patches they are running match the patches in the spool directory. |
The general procedure for maintaining patches on AutoClient systems is as follows:
1. Use admclientpatch -a or -r to create or update a spool directory of all appropriate patches on the local machine.
2. On any client server, use admclientpatch -s to synchronize the patches installed on clients with the patches in the spool directory.
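On a single OS server, this general procedure amounts to a short session like the following sketch; the patch source path and patch ID are illustrative, not from a real spool:

```shell
# Spool a patch on the OS server (path and patch ID are examples).
admclientpatch -a /net/patchserver/Patches/102209-01

# Confirm the patch is now in the spool directory.
admclientpatch -p

# Synchronize all clients and OS services with the spool directory.
admclientpatch -s
```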
This general procedure for maintaining patches assumes the OS server (that is, the server providing OS services to clients) is the same system with the patch spool directory. If, however, your site has several OS servers for your AutoClient systems, you may want to use a single file server for the patch spool directory, and then mount that directory on the OS servers.
If this is the way you choose to configure your site, you will have to do all updates to the patch spool directory directly on the file server. (You can't successfully run admclientpatch -a or -r from one of the OS servers if the patch spool directory is shared read-only.) When mounting the patch spool directory from a single file server, the general procedure for maintaining patches on AutoClient systems is as follows:
1. On the file server, use admclientpatch -a or -r to update the spool directory of all appropriate patches.
2. On each OS server that mounts the patch directory from the file server, use admclientpatch -s.
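Sketched as a session, assuming a file server named fileserver that shares the spool directory /opt/SUNWadmd/2.3/Patches; the hostname, patch source path, and patch ID are illustrative:

```shell
# On the file server: add or remove patches in the shared spool
# (source path and patch ID are examples).
admclientpatch -a /var/tmp/Patches/102209-01

# On each OS server: mount the shared spool read-only, then synchronize.
mount -r fileserver:/opt/SUNWadmd/2.3/Patches /opt/SUNWadmd/2.3/Patches
admclientpatch -s
```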
Do not manually add patches to, or remove them from, the spool directory. Instead, use the admclientpatch command for all of your patch administration tasks.
The admclientpatch -a command copies patch files from the patch directory to a spool directory on the local system. The spool directory is /opt/SUNWadmd/2.3/Patches. If the patch being added to the spool directory makes any existing patches obsolete, admclientpatch archives the old patches in case they need to be restored.
The admclientpatch -r command removes an existing patch from the spool directory and restores the archived obsoleted patches, if any exist. (Patches made obsolete by a new patch in the spool area are archived so that they can be restored.)
The admclientpatch command is a front-end to the standard patch utilities, installpatch and backoutpatch. Using these utilities, installing a patch and backing out a patch are distinct tasks. However, by using admclientpatch -s, you do not need to be concerned whether you are installing or backing out a patch. The -s option ensures that admclientpatch will take the appropriate actions. It either installs the patch on the server and in the client's own file systems on the server, or it backs out the patch from the clients and server and re-installs the previous version of that patch. This is what is meant by synchronizing patches installed on the clients with patches in the patch spool directory.
When you use Host Manager to add new diskless clients and AutoClient systems to a network's configuration files, it automatically sets up those new clients with the patches in the patch spool directory. Host Manager may detect that installing a patch in an OS service area has made the other clients of that service out of sync with the patch spool directory. If so, Host Manager issues a warning telling you to run admclientpatch -s to synchronize the patches installed on existing diskless clients or AutoClient systems with the patches in the patch spool directory.
For details about what happens when you add or remove a patch and how patches are distributed, see the System Administration Guide. For more details about how to use admclientpatch, refer to the admclientpatch(1m) man page.
Make sure you have your PATH environment variable updated to include /opt/SUNWadm/2.3/bin. For details, refer to the Solstice AutoClient 2.1 Installation and Release Notes.
Log in to the OS server and become root.
Copy patches to the default spool directory with this command:
# admclientpatch -a patch_dir/patch_id
In this command,

| Argument | Description |
|---|---|
| patch_dir | The source directory where patches reside on a patch server. The patch server can be the local machine or a remotely available machine. |
| patch_id | A specific patch ID number, such as 102209-01. |
This completes the procedure for copying a patch to the default spool directory on the OS server.
To verify the selected patches have been added to the default patch spool directory for the Solstice AutoClient product, use the admclientpatch -p command to see the list of currently spooled patches.
The following example copies the patch ID 100974-02 from a patch server named cable to the spool directory on the local (OS server) system, using the automounter:
# admclientpatch -a /net/cable/install/sparc/Patches/100974-02
Copying the following patch into spool area:
        100974-02
.
done
The following example copies the patch ID 102113-03 from a patch server named cable to the spool directory on the local (OS server) system, by mounting the patch server's patch directory on the local system:
# mount cable:/install/sparc/Patches /mnt
# admclientpatch -a /mnt/102113-03
Copying the following patch into spool area:
        102113-03
.
done
Make sure you have your PATH environment variable updated to include /opt/SUNWadm/2.3/bin. For details, refer to the Solstice AutoClient 2.1 Installation and Release Notes.
Log in to the OS server and become root.
Back out patches from the default spool directory with this command:
# admclientpatch -r patch_id
In this command,

| Argument | Description |
|---|---|
| patch_id | A specific patch ID number, such as 102209-01. |
This completes the procedure for backing out a patch from the default spool directory on the OS server.
To verify the selected patches have been backed out from the default patch spool directory for the Solstice AutoClient product, use the admclientpatch -p command to see the list of currently spooled patches.
The following example backs out the patch ID 102209-01 from the default Solstice AutoClient spool directory.
# admclientpatch -r 102209-01
Unspooling the following patch:
        102209-01
Removing the following patch from the spool area:
        102209-01
.
Make sure you have your PATH environment variable updated to include /opt/SUNWadm/2.3/bin. For details, refer to the Solstice AutoClient 2.1 Installation and Release Notes.
Log in to the OS server and become root.
Synchronize patches on clients with patches in the spool directory on the OS server:
# admclientpatch -s
Using the -s option either installs or backs out patches running on clients, whichever is appropriate.
It may be necessary to reboot your AutoClient systems after installing patches. If so, you can use the remote booting command, admreboot, to reboot the systems. For more information on this command, see the admreboot(1M) man page.
This completes the procedure to synchronize patches on all clients.
To verify that the patches in the Solstice AutoClient patch spool directory are running on diskless clients and AutoClient systems, use the admclientpatch command with the -c option.
# admclientpatch -c
Clients currently installed are:
rogue Solaris, 2.5, sparc
  Patches installed : 102906-01
OS Services available are:
Solaris_2.5
  Patches installed : 102906-01
The following command synchronizes all clients with the patches in the OS server's patch spool directory. The -v option reports whether admclientpatch is adding new patches or backing out unwanted patches.
# admclientpatch -s -v
Synchronizing service: Solaris_2.5
  Installing patches spooled but not installed
    102939-01 .....skipping; not applicable
Synchronizing client: rogue
All done synchronizing patches to existing clients and OS services.
With the AutoClient technology, a new cache consistency mode has been added to the CacheFS consistency model. This consistency mode is called demandconst, which is a new option to the cfsadmin(1M) command. This mode assumes that files are generally not changed on the server, and that if they ever are changed, the system administrator will explicitly request a consistency check. So no consistency checking is performed unless a check is requested. There is an implied consistency check when a CacheFS file system is mounted (when the AutoClient system boots), and an AutoClient system is configured by default to request a consistency check every 24 hours. This model improves AutoClient performance because performing fewer checks imposes less network load.
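Between the scheduled 24-hour checks, a consistency check can be requested explicitly with cfsadmin -s, which operates on file systems mounted with the demandconst option. A sketch, in which the mount point is illustrative:

```shell
# Request an immediate consistency check on one demandconst-mounted
# cache file system (the mount point is an example).
cfsadmin -s /usr

# Or request a check on all cache file systems mounted with demandconst.
cfsadmin -s all
```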
The risk of inconsistent data is minimal since the system's root area is exported only to that system. There is no cache inconsistency when the system modifies its own data since modifications are made through the cache. The only other way a system's root data can be modified is by root on the server.
The /usr file system is similar in that the server exports it as read-only, so the only way it could be modified is by the system administrator on the server. Use the autosync(1M) command to synchronize a system's cached file systems with their corresponding back file systems.
You can update individual AutoClient systems, all local AutoClient systems in your network, or all AutoClient systems in a designated file, to match their corresponding back file systems. You should do this update when you add a new package in the shared /usr directory or in one or more system / (root) directories, or when you add a patch. The following procedures show how to use the autosync(1M) command. The command is issued from the server.
To use the autosync command, you need to be a member of the UNIX group sysadmin (group 14).
If you need to create the sysadmin group, see "Setting Up User Permissions to Use the Solstice AutoClient Software".
Use the autosync command with no options to update all cached file systems on all the AutoClient systems in your network that are local to the server you are running the autosync command on.
% autosync
The system responds with the names of any systems that failed to be updated. No system response means the updates were all successful.
The following example shows an update that failed on systems pluto, genesis, and saturn.
% autosync
pluto:: failed:
genesis:: failed:
saturn:: failed:
Use the autosync command with the -h option to update all cached file systems on a specified AutoClient system in your network:
% autosync -h hostname
In this command,

| Argument | Description |
|---|---|
| -h | Specifies one system. |
| hostname | The name of the system whose cache you want to update. |
The following example shows how to update all cached file systems on the AutoClient system pluto:
% autosync -h pluto
If the system failed to be updated, you would get the following system response:
% autosync -h pluto
pluto:: failed:
If there is no system response, all updates are successful.
Use the autosync command as follows to synchronize a specific file system on an AutoClient system with its back file system:
% autosync -h hostname cached-filesystem
In this command,

| Argument | Description |
|---|---|
| -h | Specifies one system. |
| hostname | The name of the system whose cache you want to update. |
| cached-filesystem | The name of the cached file system you want to update. |
The following example shows how to update the cached file system /usr on the AutoClient system foo:
% autosync -h foo /usr
Create a file containing the names of the systems you want to synchronize with their back file systems.
The file can be located anywhere. For example, you could put the file in /tmp or /home. If you run the autosync command without arguments and several systems fail to update, put the names of the failed systems in this file, one name per line.
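One way to build such a file is to capture the failure output of a previous run and strip the `:: failed:` suffix from each line. Because autosync itself is Solaris-specific, the sketch below simulates its output; the hostnames and file paths are illustrative:

```shell
# Simulate the failure output of a previous 'autosync' run
# (hostnames are examples).
printf 'pluto:: failed:\ngenesis:: failed:\nsaturn:: failed:\n' > /tmp/autosync.out

# Keep only the hostname from each "host:: failed:" line, one per line,
# producing a file suitable for 'autosync -H'.
sed 's/:: failed:.*$//' /tmp/autosync.out > /tmp/retry_hosts

cat /tmp/retry_hosts
```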
Use the autosync command as follows to update all AutoClient systems in the host_file file.
% autosync -H host_file
In this command,

| Argument | Description |
|---|---|
| -H | Specifies a file containing the names of all AutoClient systems to update. |
| host_file | The name of the file containing the names of all AutoClient systems in the network you want to update. |
The following example shows how to update all AutoClient systems in the host file net_hosts:
% autosync -H net_hosts
For example, the contents of net_hosts might be:
mars
jupiter
saturn
Use the autosync command as follows to update all cached file systems on an AutoClient system. This command is run on the system itself, not on the server:
% autosync -l
You can also specify a particular file system on the system that requires updating.
The following example shows how a client requests update of its own /usr file system:
% autosync -l /usr
Since an AutoClient system contains no permanent data, it is a field replaceable unit (FRU). An FRU can be physically replaced by another compatible system without loss of permanent data. So, if an AutoClient system fails, you can use the following procedure to replace it without the user losing data or wasting a lot of time.
If you replace only the disks or another part of the system, and the Ethernet address stays the same, you must use the boot -f command to reboot the system so that the cache is reconstructed.
You cannot switch kernel architectures or OS releases from the original configuration.
If the system is currently running, use the halt command to bring it to the PROM monitor environment, and turn it off.
Disconnect the faulty AutoClient system from the network.
Connect the replacement AutoClient system onto the network.
The replacement AutoClient system must have the same kernel architecture as the faulty AutoClient system.
Start Host Manager from the Solstice Launcher on the AutoClient system's server, and select the name service, if not done already.
See "How to Start Host Manager" for more information.
Select the faulty AutoClient system you wish to modify from the main window.
Choose Modify from the Edit menu.
The Modify window appears with fields filled in specific to the AutoClient system you selected.
Modify the Ethernet address and the disk configuration to be that of the new AutoClient system.
Click on OK.
Choose Save Changes from the File menu.
Turn on the new system.
If the screen displays the > prompt instead of the ok prompt, type n and press Return.
The screen should now display the ok prompt.
This step is not required for Sun-4 systems, because they do not have the ok prompt.
Boot the AutoClient system with the following command:
| If the AutoClient System Is a ... | Then Enter ... |
|---|---|
| Sun4/3nn | b le() |
| Sun4/1nn, Sun4/2nn, Sun4/4nn | b ie() |
| i386 | |
| All other Sun systems | boot net |
After the AutoClient system boots, log in as root.
Set the AutoClient system's default boot device to the network by referring to "SPARC: How to Set Up a System to Automatically Boot From the Network".
This step is necessary for an AutoClient system, because it must always boot from the network. For example, an AutoClient system should automatically boot from the network after a power failure.
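On many SPARC systems, the default boot device can also be set from a root shell with the eeprom command. This is a sketch only; the exact PROM variable name varies by hardware, so follow the referenced procedure for your system:

```shell
# Make the network the default boot device on a typical SPARC system
# (the variable name varies by PROM revision; this is an assumption).
eeprom boot-device=net
```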
The following command is equivalent to using Host Manager to modify the Ethernet address for an AutoClient system.
% admhostmod -e ethernet_address host_name
The following command is equivalent to using Host Manager to modify the disk configuration for an AutoClient system.
% admhostmod -x diskconf=disk_config host_name
For more information on disk configuration options, see Table 6-3.
You can use the cachefspack command to pack an AutoClient system's cache with specific cached files and directories, which means that they will always be in the system's cache and not removed when the cache becomes full. The files and/or directories that you pack in your cache must be from a cached file system, which means they must be under the root (/) or /usr file systems for AutoClient systems.
If you set up your AutoClient system with the disconnectable option, you will have the added benefit of continued access to your cache and the packed files if the server becomes unavailable. For more information on the disconnectable option, see Table 6-2.
Pack files in the cache using the cachefspack command.
$ cachefspack -p filename
In this command,

| Argument | Description |
|---|---|
| -p | Specifies that you want the file or files packed. This is also the default. |
| filename | The name of the cached file or directory you want packed in the cache. When you specify a directory to be packed, all of its subdirectories are also packed. |

For more information about the cachefspack command, see the man page.
The following example specifies the file cm (Calendar Manager) to be packed in the cache.
$ cachefspack -p /usr/openwin/bin/cm
The following example shows several files specified to be packed in the cache.
$ cachefspack -p /usr/openwin/bin/xcolor /usr/openwin/bin/xview
The following example shows a directory specified to be packed in the cache.
$ cachefspack -p /usr/openwin/bin
You may need to unpack a file from the cache. For example, if some files or directories have a higher priority than others, you can unpack the less critical files.
Unpack individual files in the cache using the -u option of the cachefspack command.
$ cachefspack -u filename
In this command,

| Argument | Description |
|---|---|
| -u | Specifies that you want the file or files unpacked. |
| filename | The name of the file or files you want unpacked from the cache. |

For more information about the cachefspack command, see the man page.
Unpack all the files in a cache directory using the -U option of the cachefspack command.
$ cachefspack -U cache_directory
In this command,

| Argument | Description |
|---|---|
| -U | Specifies that you want to unpack all packed files in the specified cached directory. |
| cache_directory | The name of the cache directory that you want unpacked from the cache. |

For more information about the cachefspack command, see the man page.
The following example shows the file /usr/openwin/bin/xlogo specified to be unpacked from the cache.
$ cachefspack -u /usr/openwin/bin/xlogo
The following example shows several files specified to be unpacked from the cache.
$ cachefspack -u /usr/openwin/bin/xview /usr/openwin/bin/xcolor
The following example uses the -U option to specify all files in a cache directory to be unpacked.
$ cachefspack -U /usr/openwin/bin
You cannot unpack a cache that does not have at least one file system mounted. With the -U option, if you specify a cache that does not contain mounted file systems, you will see output similar to the following:
$ cachefspack -U /local/mycache
cachefspack: Could not unpack cache /local/mycache, no mounted filesystems in the cache.
You may want to view information about the files that you've specified to be packed, and what their packing status is.
To display information about packed files and directories, use the -i option of the cachefspack command, as follows:
$ cachefspack -i cached-filename-or-directory
In this command,

| Argument | Description |
|---|---|
| -i | Specifies that you want to view information about your packed files. |
| cached-filename-or-directory | The name of the file or directory for which to display information. |
The following example shows that a file called ttce2xdr.1m is marked to be packed, and it is in the cache.
# cachefspack -i /usr/openwin/man/man1m/ttce2xdr.1m
cachefspack: file /usr/openwin/man/man1m/ttce2xdr.1m marked packed YES, packed YES
.
.
.
The following example shows a directory called /usr/openwin, which contains a subdirectory bin. Three of the files in the bin subdirectory are xterm, textedit, and resize. The file xterm is specified to be packed, but it is not in the cache. The file textedit is specified to be packed, and it is in the cache. The file resize is specified to be packed, but it is not in the cache.
$ cachefspack -i /usr/openwin/bin
.
.
.
cachefspack: file /bin/xterm marked packed YES, packed NO
cachefspack: file /bin/textedit marked packed YES, packed YES
cachefspack: file /bin/resize marked packed YES, packed NO
.
.
.