This part provides an overview of the Solstice AutoClient software and contains these chapters.
Chapter 1, About the AutoClient Technology
"About the Technology" provides information on the AutoClient technology: AutoClient system characteristics, the advantages over other system types, and how the AutoClient technology works.
Chapter 2, About the AutoClient Product
"About the Product" provides information on what is new with the current product, disk space requirements, configuration issues, limitations, and other product information.
Chapter 3, Using Solstice AutoClient in a Name Service Environment
"Using Solstice in a Name Service" provides information on how to use the Solstice AutoClient software in a name service environment.
"Security" describes security issues and provides suggestions on how to use the Solstice AutoClient software in a manner that conforms to your site security policies.
Chapter 5, Host Manager Reference Information
"Host Manager Reference Information" provides information on various features of the Host Manager application.
The Solstice AutoClient product allows you to set up AutoClient systems and provide centralized administration for these systems. An AutoClient system is a system type that caches (locally stores copies of data as it is referenced) all of its needed system software from a server. AutoClient systems use Solaris™ diskless and cache file system (CacheFS™) technologies.
CacheFS is a general purpose file system caching mechanism that improves NFS™ server performance and scalability by reducing server and network load. (You can also use CacheFS with HSFS file systems.) The AutoClient technology improves ease of administration, enabling system administrators to maintain many AutoClient systems from a server. Changes do not have to be made on each individual system. Users may notice improved performance as well, on both AutoClient systems and servers.
For more information about CacheFS, see System Administration Guide, Volume I.
Throughout this guide, the term "AutoClient system" refers to any system that uses the AutoClient technology.
System types are defined primarily by how they access the root (/) and /usr file systems, including the swap area. For example, standalone and server systems mount these file systems from a local disk, while diskless and dataless clients mount the file systems remotely, relying on servers to provide these services. Table 1-1 lists these and other differences for each system type.
Table 1-1 System Type Overview

| System Type | Local File Systems | Local Swap? | Remote File Systems |
|---|---|---|---|
| Server | root (/), /usr, /home, /opt, /export, /export/home, /export/root | Yes | optional |
| Standalone System | root (/), /usr, /export/home | Yes | optional |
| Dataless Client | root (/) | Yes | /usr, /home |
| Diskless Client | none | No | root (/), swap, /usr, /home |
| AutoClient System | cached root (/), cached /usr | Yes | root (/), /usr, /home |
Table 1-2 describes how the other clients compare to a standalone system.
Table 1-2 Comparison of Clients Relative to a Standalone System

| System Type | Centralized Administration | Performance | System Disk Usage | Network Use |
|---|---|---|---|---|
| AutoClient System | better | similar | better | similar |
| Diskless Client | better | worse | better | worse |
| Dataless Client | similar | worse | better | worse |
A server system has the following file systems:
The root (/) and /usr file systems, plus swap space
The /export, /export/swap, and /export/home file systems, which support client systems and provide home directories for users
The /opt directory or file system for storing application software
Servers can also contain the following software to support other systems:
OS services for diskless clients and AutoClient systems
Solaris CD image and boot software for networked systems to perform remote installations
JumpStart™ directory for networked systems to perform custom JumpStart installations
A networked standalone system can share information with other systems in the network, but it can function autonomously because it has its own hard disk with enough space to contain the root (/), /usr, and /export/home file systems and swap space. The standalone system thus has local access to operating system software, executables, virtual memory space, and user-created files.
A non-networked standalone system is a standalone system with all the characteristics listed above except that it is not connected to a network.
A dataless client has local storage for its root (/) file system and swap space. The dataless client cannot function if detached from the network, because its executables (/usr) and user files (/home) are located across the network on the disk of a server.
SunSoft plans to remove support for dataless clients after Solaris 2.5. You can add this system type now using Host Manager, but in future releases of the Solaris operating environment you will need to choose a different type. It is recommended that you use AutoClient systems instead of dataless clients.
A dataless client places far less demand on the server and the network than a diskless client does. Because dataless clients require less network access, a server can accommodate many more dataless clients than it can diskless clients. Also, since all the user files of all the dataless clients are stored centrally (on a server), they can be backed up and administered centrally.
A diskless client has no disk and depends on a server for all its software and storage area. A diskless client remotely mounts its root (/), /usr, and /home file systems from a server.
A diskless client generates significant network traffic due to its continual need to procure operating system software and virtual memory space from across the network. A diskless client cannot operate if it is detached from the network or if its server malfunctions.
An AutoClient system is nearly identical to a diskless client in terms of installation and administration. It has the following characteristics:
Requires a 100-Mbyte or larger local disk for swapping and for caching its individual root (/) file system and the /usr file system from a server
Can be set up so that it can continue to access its cache when the server is unavailable
Relies on servers to provide other file systems and software applications
Contains no permanent data, making it a field replaceable unit (FRU)
The following figure shows how a server and an AutoClient system work together.
You must obtain a license for each AutoClient system you want to add to your network. See the Solstice AutoClient 2.1 Installation and Product Notes for licensing information.
AutoClient technology provides many system administration advantages over existing system types.
AutoClient systems:
Provide better overall scalability in a network environment, which could result in less network load
Use less disk space on a server than a diskless system (an AutoClient system does not require any swap space on a server)
Use significantly less network and server bandwidth than a diskless system
Require less system administration overhead. The AutoClient system's data is on a server, which enables centralized administration. For example, with AutoClient systems you only need to back up the servers that support the AutoClient systems. To back up dataless systems, you have to perform a backup on each system. Also, you can manipulate AutoClient root file systems from the server, without accessing each system individually.
Are FRUs, which makes them easy to replace if they fail.
Are set up with Host Manager. You do not have to use the Solaris installation program to install the Solaris environment on an AutoClient system.
The CacheFS technology is the key component of AutoClient systems. A cache is a local storage area for data. A cached file system is a local file system that stores files in the cache as they are referenced; subsequent references to the same files are satisfied from the cache rather than by retrieving them from the server again. This functionality reduces the load on the network and the server, and generally results in faster access for the AutoClient system. Note that when the cache becomes full, space is reclaimed on a least recently used (LRU) basis: files that have been unreferenced for the longest time are discarded from the cache to free space for the files that are currently being referenced.
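If you want to see what a cache currently holds, the cfsadmin command can list the file systems stored in a cache directory. The cache directory path shown below is only a placeholder; use the cache path actually configured on your system.

# cfsadmin -l /var/cache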
An AutoClient system uses its local disk for swap space and to cache its individual root (/) file system and the /usr file system from a server's back file systems. Figure 1-2 shows how an AutoClient system works.
An AutoClient system uses consistency checking to keep a cached file system synchronized with its back file system. The following descriptions show how consistency checking is done for an AutoClient system:
By default, files that are updated in the server's back file systems are updated on the AutoClient system's cached file systems within 24 hours. However, if the update needs to occur sooner, you can use the autosync command. The autosync(1M) command initiates consistency checking that updates (synchronizes) an AutoClient system's cached file systems with its server's back file systems.
For more information about the autosync command, see Chapter 8, AutoClient Environment Maintenance. You can also refer to the autosync(1M) man page.
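For example, you might run autosync on the AutoClient system's server to force an immediate update. The invocation below is only a sketch; the client name is a placeholder, and the exact arguments and options are described in the autosync(1M) man page.

# autosync cable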
Each time an AutoClient system is booted, the AutoClient system's cached file systems are checked for consistency and updated with its server's back file systems.
Consistency checking for an AutoClient system is different from consistency checking for a general system running CacheFS. AutoClient files (/ and /usr) are not likely to change very often, so consistency checking does not need to occur as frequently on an AutoClient system as it does on a system running CacheFS. This reduces traffic on your AutoClient network. See System Administration Guide for detailed information about CacheFS consistency checking.
Also, if you add new files to an AutoClient system, its server's back file systems are updated immediately, because an AutoClient system uses a write-through cache. A write-through cache is one that immediately updates its back file system as data is changed or added to the cache.
The Solstice AutoClient product allows you to set up AutoClient systems and administer changes to them. This chapter provides information regarding the AutoClient product so that you can successfully complete the tasks discussed in the subsequent chapters.
This is a list of the overview information in this chapter.
"Disk Space Requirements for AutoClient Servers and AutoClient Systems"
"The Relationship Between AutoClient Systems and Host Manager"
The Solstice AutoClient 2.1 product provides the following new features:
Script feature for Host Manager
The script feature enables you to run customized scripts when adding, modifying, or deleting an AutoClient system. When adding an AutoClient system, you can specify the scripts to run before and after the AutoClient is added and before and after the AutoClient is booted. When modifying an AutoClient system, you can specify the scripts to run before and after the AutoClient is modified.
For more information about this feature, refer to the online help in Host Manager or Chapter 5, Host Manager Reference Information.
System root password functionality for Host Manager
Using Host Manager, you can now set the system's root password when adding or modifying an AutoClient system. For more information about this feature, refer to the online help in Host Manager or Chapter 5, Host Manager Reference Information.
JavaStation™ support with Host Manager
Host Manager now has the capability to add JavaStation clients. In order to use this feature, you must have JavaOS services loaded on your server. Refer to the online help or the Solstice AdminSuite 2.3 Administration Guide for more information about this feature in Host Manager.
Multihomed host alias support
Host Manager now enables you to add additional IP addresses for hosts that have multiple network interfaces.
Updated root user handling
Previous versions of Host Manager had limited root capabilities; that is, when running Host Manager as root, very few functions could be performed. Host Manager has been updated to allow root more flexibility in running Host Manager applications.
Removal of OS services support
Using Host Manager, you can now remove OS services from an OS server.
Table 2-1 describes the server-client configurations that are supported by the Solstice AutoClient 2.1 software.
Table 2-1 Supported Server-Client Configurations

| If You Have A ... | You Can Add OS Services and Support For ... | For the Following Releases ... |
|---|---|---|
| SPARC server running Solaris 2.3 or later | SPARC clients | Solaris 2.4 or later |
| SPARC server running Solaris 2.3 or later | i386 clients | Solaris 2.4 or later |
| i386 server running Solaris 2.4 or later | SPARC clients | Solaris 2.4 or later |
| i386 server running Solaris 2.4 or later | i386 clients | Solaris 2.4 or later |
Table 2-2 lists the disk space requirements for AutoClient servers and AutoClient systems.
Table 2-2 Disk Space Requirements for AutoClient Servers and Systems

| System Type | File System | Minimum Disk Space Requirements |
|---|---|---|
| AutoClient servers | root (/) | 1 Mbyte |
| | /usr | 4 Mbytes |
| | /var | 7.5 Mbytes |
| | /export | 17 Mbytes per OS service (this is the minimum space required for the OS; depending upon the OS that you wish to install, the space required could be much greater) |
| | /export | 20 Mbytes for each AutoClient system (typically in /export) |
| AutoClient systems | cache for root (/) and shared /usr | Minimum of 70 Mbytes |

Note: When you add an AutoClient system to a server, the /export/root directory is specified by default to store the 20 Mbytes for each system. However, you can specify any directory that has available disk space. See "Adding AutoClient Systems" for detailed information.
The AutoClient configuration uses the entire disk(s) on the system. (For more information on AutoClient disk configurations, see Table 6-3.) If data already exists on the disk(s), it will be overwritten. You should preserve the data elsewhere by backing it up before you add and boot a system. (See "Adding AutoClient Systems".)
With the Solaris 2.5 release and later, you can add new AutoClient systems to your network, or you can make the AutoClient system conversions shown in Table 2-3.
Table 2-3 AutoClient System Conversions

| You Can Convert A ... | To A ... |
|---|---|
| Generic System | AutoClient System |
| Standalone System | AutoClient System |
| Dataless System | AutoClient System |
| AutoClient System | Standalone System |
If you plan to convert existing generic, dataless, or standalone systems to AutoClient systems, you should consider this process a re-installation. Any existing system data will be overwritten when the AutoClient system is booted for the first time.
Supported configurations for AutoClient systems are systems with one or two disks only. Other disk configurations are not recommended for the AutoClient system type. Depending on the disk configuration you choose, all of one disk or all of two disks could be overwritten by the AutoClient product. (Disk configuration options are described in Table 6-3.)
If a standalone system that is being converted to an AutoClient system contains local mail (in /var/mail), copy this directory from the local disk before using the local disk as a cache. In your AutoClient configuration, set up a central mail spool directory on your server for ease of administration.
If your network has local file systems (other than the Solaris distribution file systems) on your standalone systems, you need to save these files before converting these systems to AutoClient systems. AutoClient systems that maintain local file systems lose the significant advantages of being FRUs, and of not requiring system backup.
When an AutoClient system is set up using Host Manager, the /opt directory will be empty. On the server, you should establish a uniquely-named /opt file system for each platform that it will support (for example, sparc_opt or x86_opt), so that the AutoClient systems can mount the appropriate file system.
You should use Storage Manager to create and maintain your file systems. See Solstice AdminSuite 2.3 Administration Guide for more information on Storage Manager.
When you set up your network with AutoClient systems, you need to consider the following limitations:
The /usr file system is read-only for AutoClient systems; systems cannot make any modifications to the /usr file system. AutoClient systems make use of the /usr file system in the same way as diskless and dataless systems (mounted read-only).
The pkginfo(1) command will not reflect all the software that is available to an AutoClient system. In particular, the package database for an AutoClient system will contain only the packages that were installed in the system's root directory. The pkginfo(1) command will not reflect all of the software that is available in /usr.
Normally, booting an AutoClient system as an NIS system will not work if the network has an NIS+ server running that already knows about the AutoClient system; the AutoClient system will be automatically set up as an NIS+ system. However, you can override this by modifying your bootparams file and adding the ns key for your AutoClient system, as shown in the example entry after this list. For more information on the ns key, see bootparams(4).
If an AutoClient system is running the Solaris 2.4 software and the AutoClient server is unavailable, the AutoClient system displays the message "NFS server servername not responding" on its console. Only AutoClient systems running the Solaris 2.5 or later software can be set up to use the file systems in the cache when the server is unavailable. For more information on the disconnectable feature, see Table 6-2 or online help.
The AutoClient product does not support Power Management™ software, which conserves the amount of power that a system consumes. For more information on Power Management software, see Using Power Management.
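Referring to the NIS limitation above, a bootparams entry that forces an AutoClient system to use NIS might look like the following sketch. The client and server names are placeholders, and the exact key syntax is documented in bootparams(4).

mars ns=nisserver1:nis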
AutoClient systems are installed, configured, and maintained with the command-line interface or with Host Manager. Host Manager is a graphical user interface that allows for greater efficiency and ease of use in administering your AutoClient systems in a network environment. Host Manager enables system administrators to perform the following tasks:
Add, modify, display, or remove AutoClient system information in a network environment
Convert existing generic, standalone, and dataless systems to the AutoClient system type
Change information about multiple AutoClient systems in one operation
Host Manager does not set up an AutoClient system's /opt directory. For more information, see "Configuration and Transition Issues".
Easy conversion to the AutoClient system type - You can easily add AutoClient systems to your network, and convert some existing system types to AutoClient systems.
Easy modification - You can modify an AutoClient system by using the Modify screen. For a newly added AutoClient system, or for a standalone system converted to an AutoClient system, you can modify all attributes before saving changes. After saving changes, you can modify only a subset of the attributes.
Global browsing - You can look at the systems in your local network on one screen.
Batching - You can add, delete, and modify many AutoClient systems in one work session.
Progress/status indication - At the bottom of the main menu is a display area that shows you how many systems have been added, deleted, or modified within a work session.
Viewing and scrolling capabilities - Scroll bars enable easy viewing of system information. Host Manager also provides a search mechanism.
Viewing error messages - If an error occurs during an operation, a pop-up window appears. You can also open the window manually from the View menu.
You can find more information on these features in Chapter 5, Host Manager Reference Information, and in Chapter 6, Managing AutoClient Systems, as these features pertain to individual tasks.
This book focuses on using Host Manager to maintain AutoClient systems. For more information on other Host Manager functionality, use online help or see the Solstice AdminSuite 2.3 Administration Guide.
Table 2-4 lists the commands that provide the same functionality as Host Manager and can be used without running an X Window System™, such as the OpenWindows™ environment. Many of the tasks in Chapter 6, Managing AutoClient Systems, provide corresponding examples using the command-line equivalents.
Table 2-4 Command-Line Equivalents of Host Manager

| Command | Description |
|---|---|
| admhostadd | Adds support for a new system or OS server. |
| admhostmod | Modifies an existing system or OS server. You can also add OS services to an existing OS server. |
| admhostdel | Deletes an existing system or OS server. |
| admhostls | Lists one or more system entries in the selected name service. |
| admhostls -h | Lists hardware information of one or more system entries in the selected name service. |
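For example, adding an AutoClient system from the command line follows the general form sketched below. The option names and keyword=value pairs shown here are assumptions based on typical AdminSuite usage, and the host name, addresses, and OS service are placeholders; see the admhostadd(1M) man page for the exact syntax.

# admhostadd -i 129.152.225.2 -e 8:0:20:b:40:e9 -x type=AUTOCLIENT \
  -x os=sparc.sun4c.Solaris_2.5 -x root=/export/root cable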
Table 2-5 describes the system files that may be modified by Host Manager when adding and maintaining your AutoClient systems.
Table 2-5 Files Modified by Host Manager

| System File | Where Modified | Description |
|---|---|---|
| bootparams | /etc files, NIS, or NIS+ | A database listing the servers that provide the paths to a client's boot and installation software and a client's root and swap areas |
| /etc/dfs/dfstab | Server providing the file services | A file containing a series of share commands that make file resources available to the client system |
| ethers | /etc files, NIS, or NIS+ | A database containing the client's Ethernet address |
| hosts | /etc files, NIS, or NIS+ | A database containing the client's host name and associated IP address |
| timezone | /etc files, NIS, or NIS+ | A database containing the client's time zone |
| /export/root | Server providing the file services | A default directory that contains root files for a diskless client or AutoClient system |
| /export/swap | Server providing the file services | A default directory that contains the swap file for a diskless client |
| /var/sadm/softinfo | Solaris 2.3 and 2.4 servers providing OS services | A directory containing a list of OS services available on Solaris 2.3 and 2.4 servers |
| /var/sadm/system/admin/services | Solaris 2.5 or later servers providing OS services | A directory containing a list of OS services available on a Solaris 2.5 or later server |
| /tftpboot | Server providing the boot services | A directory containing SPARC client booting information |
| /rplboot | Server providing the boot services | A directory containing i386 client booting information |
| /etc/inetd.conf | Server providing the boot services | A system file that starts the tftp and rpl boot daemons |
| cred.org_dir | NIS+ | An NIS+ table used to store the host's DES and LOCAL credentials |
The Solstice AutoClient software can be used in different name service environments. When you use each application or command-line equivalent, you must specify the name service environment data you wish to modify.
This is a list of the overview information in this chapter.
"The /etc/nsswitch.conf File and the Solstice AutoClient Product"
"Setting Up User Permissions to Use the Solstice AutoClient Software"
The Solstice AutoClient software can be used to manage information on the local system or across the network using a name service. The sources of information that can be managed by the Solstice AutoClient software are described in Table 3-1.
Table 3-1 Available Name Service Environments

| Name Service | Select This Name Service To Manage ... |
|---|---|
| NIS+ | NIS+ table information. This requires sysadmin group (group 14) membership and the appropriate ownership or permissions on the NIS+ tables to be modified. |
| NIS | NIS map information. You must be a member of the sysadmin group. If the NIS master server is running the Solaris 1.x OS Release, you must have explicit permissions on the NIS master server to update the maps. This means an entry for your host name and user name must reside in root's .rhosts file on the NIS master server. This entry is not required if the NIS master server is running the Solaris 2.x OS Release and the Name Services Transition Kit 1.2 software. |
| None | The /etc files on the local system. You must be a member of the sysadmin group on the local system. |
See "Setting Up User Permissions to Use the Solstice AutoClient Software" for information on using the Solstice AutoClient software with or without a name service environment.
The Solstice AutoClient software allows you to select which name service databases will be updated (written to) when you make modifications with Host Manager. However, the /etc/nsswitch.conf file on each system specifies the policy for name service lookups (where data will be read from) on that system.
You must make sure that the name service you select from Host Manager is consistent with the specifications in the /etc/nsswitch.conf file. If the selections are not consistent, Host Manager may behave in unexpected ways, resulting in errors or warnings. See "Selecting a Name Service Environment" for an example of the window from which you select a name service.
The /etc/nsswitch.conf file has no effect on how the system configuration files get updated. In the /etc/nsswitch.conf file, more than one source can be specified for the databases, and complex rules can be used to specify how a lookup can be performed from multiple sources. There is no defined syntax for using the rules in the /etc/nsswitch.conf file to perform updates.
Because of this, updates are controlled by the name service selection that is made when the Host Manager is started. The administrator must decide where the update is to take place.
When using Host Manager, administrative operations can take place on multiple systems with a single operation. It is possible that each of these systems could have a different /etc/nsswitch.conf configuration. This situation can make it very difficult to administer your network. It is recommended that all of the systems have a consistent set of /etc/nsswitch.conf files and that the Solstice AutoClient software is used to administer the primary name service specified in the standard /etc/nsswitch.conf file.
With this release of the Solstice AutoClient product, you can define a more complex update policy for Host Manager by using the admtblloc command. For more information on this command, refer to the admtblloc(1M) man page and see "The admtblloc Command".
After you start the Solstice Launcher and click on an application icon, a window is displayed prompting you to select a name service. Select the name service that is appropriate for your environment.
This example is from Host Manager's Load window.
The Name Services Transition Kit 1.2 is designed to allow you to support an NIS server running Solaris 2.x. Installing the software and setting up the Solaris 2.x NIS servers is described in the Naming Services Transition Kit 1.2 Administrator's Guide. The Solstice AutoClient software can manage information using the NIS name service supported by Solaris 2.x NIS servers installed with the Name Services Transition Kit 1.2 software.
On NIS servers installed with the Solaris 2.x OS Release, the Name Service Transition Kit 1.2, and the Solstice AutoClient software, the configuration files stored in the /etc directory are modified by the Solstice AutoClient applications (these files are in turn automatically converted to NIS maps). If the NIS server is not installed with the Solstice AutoClient software, then the directory location specified by the $DIR variable in /var/yp/Makefile is used.
To use the Solstice AutoClient software, membership in the sysadmin group (group 14) is required. See "Adding Users to the sysadmin Group" for more information.
Following are additional requirements to use the Solstice AutoClient software for each name service.
The requirements for using the Solstice AutoClient software are:
Membership in the NIS+ admin group.
Modify permissions on the NIS+ tables to be managed. These permissions are usually given to the NIS+ group members.
See Solaris Naming Administration Guide for information on adding users to a NIS+ group and granting permissions on NIS+ tables.
The requirements for using the Solstice AutoClient software are:
An entry for your host name and user name in root's .rhosts file on the NIS master server if the server is running the Solaris 1.x OS Release. If the NIS master server is running the Solaris 2.x OS Release and Name Services Transition Kit 1.2 software, this entry is not required as long as Solstice AdminSuite is also installed.
Running ypbind with the -broadcast option, which is the default form, if you want to manage NIS map information in domains other than your own.
To manage NIS map information in domains other than your own, the NIS masters for those domains must be on directly attached networks.
The following procedures describe how to add users to the sysadmin group for each name service. If you have access to the Solstice AdminSuite software, you should use Group Manager instead of these procedures to add users to the sysadmin group.
Log in to a system in your NIS+ domain as an authorized user with read and write access rights to the group table.
Save the group table to a temporary file.
$ niscat group.org_dir > /var/tmp/group-file
Edit the file, adding the users you want to authorize to use the Solstice AutoClient software.
The following sample shows users added to the sysadmin entry in the group file.
. . .
sysadmin::14:user1,user2,user3
nobody::60001:
noaccess::60002:
In this example, user1,user2,user3 represent the user IDs you are adding to the sysadmin group.
Merge the file with the NIS+ group table.
$ /usr/lib/nis/nisaddent -mv -f /var/tmp/group-file group
The results of the merge are displayed.
Remove the temporary file.
$ rm /var/tmp/group-file
Verify that the user is a member of the sysadmin group by entering the following commands. Perform this step for each user you added to the file.
# su - user1
$ groups
staff sysadmin
$ exit
Log in as root on the NIS master server.
Edit the group file (the default directory location is /etc).
Add a comma-separated list of members to the sysadmin group.
. . .
sysadmin::14:user1,user2,user3
The directory location of the group file is specified in the NIS makefile using the $DIR variable. Consult this file if you are uncertain of the location of the group file.
Change directory to the location of the NIS makefile (the default is /var/yp) and remake the NIS map.
# cd /var/yp
# make group
Depending on the size of the NIS map, it may take several minutes or several hours to update the maps and propagate the changes throughout the network.
(Optional) If the NIS master server is running the Solaris 1.x OS Release, create a .rhosts entry in the root (/) directory on the NIS master server for users authorized to modify NIS maps. Use the following format:
host-name user-name
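For example, to authorize a hypothetical user juser who works on a host named lorna, the .rhosts entry would be:

lorna juser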
Use this procedure if you will use the Solstice AutoClient software on the local system only.
Become root on your system.
Add a comma-separated list of members to the sysadmin group.
. . .
sysadmin::14:user1,user2,user3
A name service policy is used to specify the location of system and network information managed by the Solstice AutoClient software. This information can be located in the /etc directory for a local system, or in the NIS+ or NIS name service.
The Solstice AutoClient software supports a mixed-mode name service policy. A mixed-mode name service policy enables you to specify different name services for configuration information.
You can use the admtblloc(1M) command to choose a mixture of name services for the Solstice AutoClient tools to populate. For example, you can set up Host Manager to populate local /etc files for bootparams information and to populate the NIS+ tables for the other host configuration information, as shown in Figure 3-1.
If you choose to implement a mixed-mode name service policy, you must run the Solstice AutoClient software from the system containing information in the /etc directory.
The admtblloc command is used to implement a mixed-mode name service policy in the Solstice AutoClient software. To use this command, you must have permission to use the software for each name service as described in "Setting Up User Permissions to Use the Solstice AutoClient Software".
The admtblloc command has no relation to the /etc/nsswitch.conf file used to set the system-wide name service selection policy in the Solaris 2.x operating environment. The admtblloc command is used to set the policy for all users of the Solstice AutoClient software graphical user interface tools or command line interfaces.
This example shows how to specify the name service policy specified in Figure 3-1 using the admtblloc command:
$ admtblloc -c NIS+ -d solar.com bootparams NONE
In this example,

| -c NIS+ -d solar.com | The NIS+ domain solar.com is the name service context (the name service and domain name specified in the Load window). |
| bootparams | bootparams is the configuration file to set the name service policy for. |
| NONE | NONE specifies that the host running the Solstice AutoClient tool or command-line interface must use the bootparams file found in the local /etc directory. |
After setting the mixed-mode name service policy specified in Figure 3-1, the Solstice AutoClient software will use the bootparams information stored in the /etc directory on the current host running the Solstice AutoClient tool whenever the name service (specified in the Load window) is NIS+. The name service policy for the other configuration files (hosts, ethers, timezone and credential) is NIS+, unless you specify otherwise using admtblloc again. The mixed-mode name service policy remains in effect for all users of the Solstice AutoClient software in the name service until you change it using the admtblloc command once again.
If you specify that the name service location of a configuration file is NONE using the admtblloc command, the /etc file on the current host running the Solstice AutoClient application or command-line interface is modified. You should log in to the host where you want to use the local /etc file and perform operations using the Solstice AutoClient software on that system.
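To change the policy back later, run admtblloc again with the desired name service as the location argument. For example, the following command (a sketch based on the syntax shown above) returns bootparams to NIS+ management in the solar.com domain:

$ admtblloc -c NIS+ -d solar.com bootparams NIS+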
This example shows how to display the name service policy using the admtblloc command:
$ admtblloc
Name         Name Service    Path
Aliases      NIS+
Hosts        NIS+
Group        NIS+
Netgroup     NIS+
Protocols    NIS+
Bootparams   NONE
Auto.home    NIS+
RPC          NIS+
Timezone     NIS+
Netmasks     NIS+
Ethers       NIS+
Passwd       NIS+
Services     NIS+
Networks     NIS+
Locale       NIS+
In this example output,

| Name | Is the name of the configuration file. |
| Name Service | Specifies the name service used to access the configuration file. |
| Path | (Optional) Specifies the path to the ASCII source file on NIS servers in the NIS name service. The default is the /etc directory. |
By default, the admtblloc command displays the policy for the name service to which the current host belongs. To display the name service policy for a different name service, specify the name service context.
This example shows how to display the name service policy for the NONE or local /etc files name service context domain using the admtblloc command:
$ admtblloc -c NONE
Name         Name Service    Path
Aliases      NONE
Hosts        NONE
Group        NONE
Auto_home    NONE
Netgroup     NONE
Protocols    NONE
Bootparams   NONE
RPC          NONE
Timezone     NONE
Netmasks     NONE
Ethers       NONE
Passwd       NONE
Services     NONE
Networks     NONE
Locale       NONE
In this example,

| -c | Specifies the name service context. |
| NONE | Is the local /etc files name service. |
You can also use the admtblloc command to display the name service policy for a specified configuration file. This example shows how to display the name service policy for the hosts file in the default name service:
$ admtblloc Hosts
Hosts  NIS+
The configuration file names are case-sensitive.
Following is a list of the configuration files the Solstice AutoClient software can use in a mixed-mode name service environment.
Aliases
Hosts
Group
Auto_home
Credentials
Netgroup
Protocols
Bootparams
Rpc
Timezone
Netmasks
Ethers
Passwd
Services
Networks
Locale
The admtblloc command can be used to set the name service policy for only the configuration files present in this list.
Refer to the admtblloc(1M) man page for more information about how to use this command.
An important part of using the Solstice AutoClient software is understanding its security features and setting up security policies to protect your administrative data.
The Solstice AutoClient software uses the distributed system administration daemon (sadmind) to carry out security tasks when you perform administrative tasks across the network. The sadmind daemon executes the request on the server on behalf of the client process and controls who can access the Solstice AutoClient software.
Administering security involves authentication of the user and authorization of permissions.
Authentication means that the sadmind daemon must verify the identity of the user making the request.
Authorization means that sadmind verifies that the authenticated user has permission to execute the Solstice AutoClient software on the server. After the user identity is verified, sadmind uses the user identity to perform authorization checks.
Even if you have permission to use the Solstice AutoClient software, you also need create, delete, or modify permission before you can change an NIS+ map. See NIS+ and DNS Setup and Configuration Guide for a description of NIS+ security.
User and group identities are used for authorization checking as follows:
Root identity - The root identity has privileges (to access and update data) only on the local system. If the server is the local system (in other words, if the user has logged in as root on the server), the user will be allowed to perform Solstice AutoClient functions on the server under the root identity.
User who is a member of sysadmin group (group 14) - Solstice AutoClient permissions are granted to users who are members of the sysadmin group (group 14). This means that a user modifying administration data must be a member of the sysadmin group on the system where the task is being executed.
Each request to change administration data contains a set of credentials with a UID and a set of GIDs to which the user belongs. The server uses these credentials to perform identity and permission checks. Three levels of authentication security are available.
The security levels are described in Table 4-1.
Table 4-1 Solstice AdminSuite Security Levels

| Level | Level Name | Description |
|---|---|---|
| 0 | NONE | No identity checking is done by the server. All UIDs are set to the nobody identity. This level is used mostly for testing. |
| 1 | SYS | The server accepts the original user and group identities from the client system and uses them as the identities for the authorization checks. There is no checking to be sure that the UID of the user represents the same user on the server system. That is, it is assumed the administrator has made the UIDs and GIDs consistent on all systems in the network. Checks are made to see if the user has permission to execute the request. |
| 2 | DES | Credentials are validated using DES authentication, and checks are made to be sure that the user has permission to execute the request. The user and group identities are obtained from files on the server system by mapping the user's DES network identity to a local UID and set of GIDs. The file used depends on which name service is selected on the server system. This level provides the most secure environment for performing administrative tasks and requires that a publickey entry exists for all server systems where the sadmind daemon is running, and for all users accessing the tools. |
Level 1 is the default security used by sadmind.
You can change the security level from Level 1 to Level 2 by editing the /etc/inetd.conf file on each system, and adding the -S 2 option to the sadmind entry. If you do this, make sure that the servers in the domain are set up to use DES security.
You do not need to maintain the same level of security on all systems in the network. You can run some systems, such as file servers requiring strict security, at security Level 2, while running other systems at the default Level 1 security.
See the description of how to set up security for NIS+ in NIS+ and FNS Administration Guide.
The sadmind daemon uses information held by the name service. The three sources of this information are described in the following paragraphs.
On each system, the /etc/nsswitch.conf file lists several administrative files, followed by a list of one or more keywords that represent the name services to be searched for information. If more than one keyword is listed, they are searched in the order given. For example, the entry
group: files nisplus
indicates that the security mechanism looks first in the local /etc/group file for an entry. If the entry exists, the security mechanism uses the information in this entry. If the entry doesn't exist, the NIS+ group file is searched.
By default, systems running the Solaris 2.4 and higher OS release have an entry for group 14 in the local /etc/group file. If you want to set up your system to use network-wide information, do not add members to the sysadmin group on the local system. Instead, update the group 14 entry found in the group table stored in the name service.
When running under Level 2 security, the security mechanisms use the public/private key information. Make sure that the entry for publickey is followed by either nis or nisplus (depending on which name service you are using), and remove the files designation. See NIS+ and FNS Administration Guide for more information about the nsswitch.conf file.
Consider the following when creating a security policy for using the Solstice AutoClient software in a name service environment.
Determine how much trust is needed.
If your network is secure and you do not need to use authentication security, you can use the Solstice AutoClient software with the default Level 1 security.
If you need to enforce a higher level of security, you can set the security level of sadmind to Level 2.
Determine which name service will be used.
The name service determines where the security methods get information about user and group identities. The name services are designated in the /etc/nsswitch.conf file (see "Name Service Information").
Decide which users have access to the Solstice AutoClient software.
Decide which users will perform administrative functions over the network with the Solstice AutoClient software. List these users as members of group 14 accessed by the server system. Group 14 must be accessible from each system where administration data will be updated by the Solstice AutoClient software. Group 14 can be established locally on each system or can be used globally within a name service domain, depending upon the policy established by the administrator.
Determine global and local policies.
The global policy affects all hosts in the network. For example, you can add members to group 14 in the NIS or NIS+ group file. Members of this group will have permission to perform administrative tasks on all server systems that list the network name service as the primary source of information. The name services are listed in the /etc/nsswitch.conf file. For more information about the nsswitch.conf file, see "Name Service Information".
A user can establish a local policy that is different from the global policy by creating a group 14 in the local /etc/group file and listing the users who have access to the local system. The members of this group will have permission to manipulate or run the Solstice AutoClient software methods on the user's local system.
Setting up a local policy does not disable a global policy. Name service access is determined by the nsswitch.conf file.
Set up permissions for NIS+ management.
You need the proper permissions when using the Solstice AutoClient software to modify or update the NIS+ files. In addition to the permissions required by the Solstice AutoClient software, the NIS+ security mechanisms impose their own set of access permissions. The NIS+ security mechanisms are described in NIS+ and FNS Administration Guide.
Set up access for NIS management.
If the NIS master server is running the Solaris 1.x operating system, a user must have a .rhosts entry on the NIS master server to modify the NIS files. If the NIS master server is running the Solaris 2.x operating system and the Name Services Transition Kit 1.2, then no entry is required if AdminSuite has already been installed. The NIS updates will be authorized using the standard group 14 mechanism.
Creating a Level 2 DES security system requires a number of steps that depend upon your system configuration. The following sections describe how to set up your system to have Level 2 DES security for systems using /etc, NIS, and NIS+ name services.
On each system that runs the sadmind daemon, edit the /etc/inetd.conf file.
Change this line (or one similar to this):
100232/10 tli rpc/udp wait root /usr/sbin/sadmind sadmind

to:

100232/10 tli rpc/udp wait root /usr/sbin/sadmind sadmind -S 2
On each system that runs the sadmind daemon, set the /etc/nsswitch.conf entry for publickey to files.
Change this entry (or one similar to this):
publickey: nis [NOTFOUND=return] files

to:

publickey: files
Create credentials for all group 14 users and all of the systems that will run sadmind -S 2.
Log in as root to one of the systems that will run sadmind -S 2.
Run the following command for each user that will run AdminSuite.
# newkey -u username
You must run this command even for users who are not in group 14. If you are not in group 14 and do not have credentials, you are not a user according to sadmind; you will not be able to run any methods, even those that do not require root. You will have to supply the user's password to the newkey program.
Run the following command for every host that you have configured to run secure sadmind.
# newkey -h hostname
You will have to provide the root password for each of these hosts to the newkey program.
Copy the /etc/publickey file on this system to each of the hosts (put this file in /etc/publickey).
This file contains all the credentials for each user and each host.
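One way to copy the file is with rcp from the system where you ran newkey; the remote host name below is a placeholder, and you can use whatever copy method your site prefers.

# rcp /etc/publickey remotehost:/etc/publickey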
Do not run newkey separately on each of the systems. Doing so creates a different public/private key pair on each system, and the public keys will not be valid across the network. You must create this file on one machine and then copy it to all the others.
As root, enter the following command on each system to put root's private key in /etc/.rootkey.
# keylogin -r
By doing this, you will not have to keylogin as root on every system every time you want to run admintool; this creates an automatic root keylogin at boot time.
Create an /etc/netid file for each user and each system; put this file on all of the systems.
For each user in the publickey file, create an entry in /etc/netid that looks like the following:
unix.uid@domainname uid:gid,gid, ...
List every group that this user is a member of; sadmind -S 2 and files check netid rather than /etc/group to determine group 14 membership.
For each host in the publickey file, create an entry in /etc/netid that looks like the following:
unix.hostname@domainname 0:hostname
Copy this file to every system in /etc/netid.
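As an illustration, an /etc/netid fragment for a hypothetical user with UID 1001 (a member of groups 10 and 14) and for a host named lorna in the domain solar.com might look like the following; all of the names and numbers are placeholders.

unix.1001@solar.com 1001:10,14
unix.lorna@solar.com 0:lorna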
Reboot all of the machines.
On each system that you want to run the application on, log in and then keylogin. (You must be a member of group 14.)
After the keylogin, you can safely log out; your key is stored in the keyserv daemon until you explicitly keylogout or the system reboots.
On each system that runs the sadmind daemon, edit the /etc/inetd.conf file.
Change this line (or one similar to this):
100232/10 tli rpc/udp wait root /usr/sbin/sadmind sadmind

to:

100232/10 tli rpc/udp wait root /usr/sbin/sadmind sadmind -S 2
On each system that runs the sadmind daemon, set the /etc/nsswitch.conf entry for publickey to nis.
Change this entry (or one similar to this):
publickey: nis [NOTFOUND=return] files

to:

publickey: nis
Create credentials for all group 14 users and all of the systems that will run sadmind -S 2.
Log in as root on the NIS server.
Run the following command for each user that will run AdminSuite.
# newkey -u username -s files
You must run this command even for users who are not in group 14. If you are not in group 14 and do not have credentials, you are not a user according to sadmind; you will not be able to run any methods, even those that do not require root. You will have to supply the user's password to the newkey program.
Run the following command for every host that you have configured to run secure sadmind.
# newkey -h hostname
You will have to provide the root password for each of these hosts to the newkey program.
Copy the /etc/publickey file on this system to the source file that is specified in /var/yp/Makefile; remake and push the NIS maps.
# cd /var/yp; make
Verify that you are a member of group 14 in the NIS group map.
Log in as root.
Change directories to the source file specified in /var/yp/Makefile.
Manually edit the group file and add yourself to group 14, just as you did in the /etc/group file.
Change directories to /var/yp and run make.
# cd /var/yp; make
You should see the group map pushed; a message appears indicating that this action has occurred.
The security system looks in the NIS maps for your group 14 membership and will fail if you do not have group 14 specified there, regardless of whether your /etc/nsswitch.conf file has group: files nis.
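To confirm that your entry made it into the NIS map, you can query the map directly. This is only a quick check; the map nickname group is assumed to resolve to group.byname on your system, and the output should look similar to the entry you added.

$ ypmatch sysadmin group
sysadmin::14:user1,user2,user3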
When sadmind is running in -S 2 mode, it uses the publickey entry to determine which name service to look at for user credentials. When the entry in /etc/nsswitch.conf is nis, it looks in the nis group map to ensure that the user is a member of group 14.
As root, enter the following command on each system to put root's private key in /etc/.rootkey.
# keylogin -r
By doing this, you will not have to keylogin as root on every system every time you want to run AdminSuite; this creates an automatic root keylogin at boot time.
To ensure that the nscd gets flushed, reboot all of the workstations.
On each system on which you want to run the application, log in and then keylogin. (You must be a member of group 14.)
After the keylogin, you can safely log out; your key is stored in the keyserv daemon until you explicitly keylogout or the system reboots.
On each system that runs the sadmind daemon, edit the /etc/inetd.conf file.
Change this line:
100232/10 tli rpc/udp wait root /usr/sbin/sadmind sadmind

to:

100232/10 tli rpc/udp wait root /usr/sbin/sadmind sadmind -S 2
On each system that runs the sadmind daemon, set the /etc/nsswitch.conf entry for publickey to nisplus.
Change this entry (or one similar to this):
publickey: nisplus [NOTFOUND=return] files

to:

publickey: nisplus
Log in as root on the NIS+ master server; create credentials for all group 14 users and all of the systems that will run sadmind -S 2.
Log in as root on the NIS+ master server; add all of the users for the AdminSuite to the NIS+ group 14 using the following command.
# nistbladm -m members=username,username... [name=sysadmin],group.org_dir
This command replaces the current member list with the one that is input; therefore, you must include all members you wish to be a part of group 14.
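For example, to make user1, user2, and user3 the complete membership of the sysadmin group, you could enter the following; quoting the indexed name keeps the shell from interpreting the brackets.

# nistbladm -m members=user1,user2,user3 '[name=sysadmin],group.org_dir'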
As root, add all of the users for the AdminSuite to the NIS+ admin group.
# nisgrpadm -a admin username
Verify that the NIS_GROUP environment variable is set to admin.
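For example, in the Bourne shell you could set and export the variable as shown below. Depending on your configuration, the value may need to be the fully qualified NIS+ group name (for example, admin.solar.com.), so the value shown here is an assumption.

# NIS_GROUP=admin; export NIS_GROUP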
On all the workstations on which you intend to run the admintool, enter the following command.
# keylogin -r
Reboot all of the workstations; verify that the nscd gets flushed.
On each system on which you want to run the application, log in and then keylogin. (You must be a member of group 14.)
After the keylogin, you can safely log out; your key is stored in the keyserv daemon until you explicitly keylogout or the system reboots.
This chapter contains reference information for features found in Host Manager.
When you select the Host Manager icon in the Solstice Launcher, the Host Manager's main window is displayed. The areas in the Host Manager's main window are shown in Figure 5-1.
The main window contains two areas: a menu bar and a display area. The menu bar usually contains four menus: File, Edit, View, and Help. For more information on these menus, see the online help reference (the section "Using Admin Help" describes how to access online help).
An important part of the Solstice AutoClient software is a Help utility called Admin Help. Admin Help provides detailed information about Host Manager and its functions.
To access Admin Help from the Host Manager main window, choose "About Host Manager" from the Help menu.
To access the online help from a command window, click on the Help button.
Figure 5-2 shows the Admin Help window.
The titles displayed in the top window pane identify the list of topics available for each level of help.
The text displayed in the bottom window pane describes information about using the current menu or command.
Use the scroll bars to the right of each pane to scroll through the help information displayed.
On the left side of the Admin Help window are buttons used to find information and navigate through the help system. The buttons are described in Table 5-1.
Table 5-1 Admin Help Buttons

| This Button ... | Is Used To ... | Notes |
|---|---|---|
| Topics | Displays a list of overview topics. | Click on a title in the top window pane to view the accompanying help text. |
| How To | Displays a list of step-by-step procedures. | Click on a title in the top window pane to view the accompanying help text. |
| Reference | Displays a list of more detailed information. | Click on a title in the top window pane to view the accompanying help text. |
| Previous | Returns to the last accessed help text. | The help viewer automatically returns to the previous help selection. |
| Done | Exits the help system. | The Admin Help window is closed. |
To view specific system entries in Host Manager's main window, choose Set Filter from the View menu. The Filter window is displayed and you have the option of setting from one to three filtering characteristics, as shown in Figure 5-3.
After you have chosen a method for filtering the entries that are displayed in the main window, click on OK.
Table 5-2 describes the common window buttons used in Host Manager.
Table 5-2 Common Window Buttons in Host Manager

| This Button ... | Is Used To ... |
|---|---|
| OK | Complete a task so that it can be processed. The window is closed after the task is completed. |
| Apply | Complete a task but leave the window open. (Not available on all windows.) |
| Reset | Reset all fields to their original contents (since the last successful operation). |
| Cancel | Cancel the task without submitting any changes and close the window. Fields are reset to their original contents. |
| Help | Access Admin Help. |
Clicking on OK after clicking on Apply might cause a duplicate operation, resulting in an error. Click on Cancel after clicking on Apply to dismiss the window.
Host Manager enables you to see most system attributes in the main window, shown in Figure 5-4. Choose Customize from the View menu to change your attribute viewing options.
Host Manager enables you to add, delete, modify, convert, and revert more than one system at the same time, which is called batching. The scrolling and highlighting capabilities of the main window enable you to select multiple systems, as shown in Figure 5-5. To select more than one system, click SELECT (by default, the left mouse button) on the first system. Then select each subsequent system by pressing the Control key and clicking SELECT.
See Chapter 6, Managing AutoClient Systems, for information on completing add, delete, modify, convert, and revert operations.
"Main Window Areas" describes two areas of Host Manager's main window: a menu bar area and a display area. The Host Manager main window also has a status area in the bottom of the window, which is shown in Figure 5-6.
In the left corner, the status area displays status information about pending changes, such as how many systems are waiting to be added, deleted, modified, and converted. In the right corner, the status area displays the current name service you are modifying with Host Manager.
The message "Total Changes Pending" reflects the number of systems that are waiting to be added, deleted, modified, and converted when you choose Save Changes from the File menu. After you choose "Save Changes" from the File menu, this message changes to "All Changes Successful." If any changes did not succeed, a message is written to the Errors pop-up window.
You can set up a log file to record each major operation completed with Host Manager or its command-line equivalents. After you enable logging, the date, time, server, user ID (UID), and description for every operation are written to the specified log file.
You need to follow the procedure described in "How to Enable Logging of Host Manager Operations" on each server where you run the Host Manager and want to maintain a logging file.
You do not need to quit Host Manager or the Solstice Launcher, if they are already started.
Become root.
Edit the /etc/syslog.conf file and add an entry at the bottom of the file that follows this format:
user.info filename
Note that filename must be the absolute path name of the file, for example: /var/log/admin.log.
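For instance, to log to the example file above, the added entry would look like the following; separate the selector and the file name with a tab character.

user.info	/var/log/admin.log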
Create the file, filename, if it does not already exist:
# touch filename
Make the changes to the /etc/syslog.conf file take effect by stopping and starting the syslog service:
# /etc/init.d/syslog stop
Stopping the syslog service.
# /etc/init.d/syslog start
syslog service starting.
#
Solstice AdminSuite operations will now be logged to the file you specified.
The following example shows entries written to the log file after several Host Manager operations:

Aug 30 10:34:23 lorna Host Mgr: [uid=100] Get host prototype
Aug 30 10:34:52 lorna Host Mgr: [uid=100] Adding host: frito
Aug 30 10:35:37 lorna Host Mgr: [uid=100] Get host prototype
Aug 30 10:35:59 lorna Host Mgr: [uid=100] Deleting host frito
Aug 30 10:36:07 lorna Host Mgr: [uid=100] Modifying sinister with sinister
Aug 30 14:39:21 lorna Host Mgr: [uid=0] Read hosts
Aug 30 14:39:43 lorna Host Mgr: [uid=0] Get timezone for lorna
Aug 30 14:39:49 lorna Host Mgr: [uid=0] Get host prototype
Aug 30 14:40:01 lorna Host Mgr: [uid=0] List supported architectures for lorna dirpath=/cdrom/cdrom0/s0