Solstice AutoClient 2.1 Administration Guide

Part I Solstice AutoClient Overview

This part provides an overview of the Solstice AutoClient software and contains these chapters.

Chapter 1, About the AutoClient Technology

"About the Technology" provides information on the AutoClient technology: AutoClient system characteristics, the advantages over other system types, and how the AutoClient technology works.

Chapter 2, About the AutoClient Product

"About the Product" provides information on what is new with the current product, disk space requirements, configuration issues, limitations, and other product information.

Chapter 3, Using Solstice AutoClient in a Name Service Environment

"Using Solstice in a Name Service" provides information on how to use the Solstice AutoClient software in a name service environment.

Chapter 4, Security

"Security" describes security issues and provides suggestions on how to use the Solstice AutoClient software in a manner that conforms to your site security policies.

Chapter 5, Host Manager Reference Information

"Host Manager Reference Information" provides information on various features of the Host Manager application.

Chapter 1 About the AutoClient Technology

The Solstice AutoClient product allows you to set up AutoClient systems and provide centralized administration for these systems. An AutoClient system is a system type that caches (locally stores copies of data as it is referenced) all of its needed system software from a server. AutoClient systems use Solaris diskless and cache file system (CacheFS) technologies.

CacheFS is a general-purpose file system caching mechanism that improves NFS server performance and scalability by reducing server and network load. (You can also use CacheFS with HSFS file systems.) The AutoClient technology improves ease of administration, enabling system administrators to maintain many AutoClient systems from a server. Changes do not have to be made on each individual system. Users may notice improved performance as well, on both AutoClient systems and servers.

For more information about CacheFS, see System Administration Guide, Volume I.



Note -

Throughout this guide, "AutoClient system" refers to any system that uses the AutoClient technology.


Overview of System Types

System types are defined primarily by how they access the root (/) and /usr file systems, including the swap area. For example, standalone and server systems mount these file systems from a local disk, while diskless and dataless clients mount them remotely, relying on servers to provide these services. Table 1-1 lists these and other differences for each system type.

Table 1-1 System Type Overview

System Type         Local File Systems              Local Swap?   Remote File Systems
Server              root (/), /usr, /home, /opt,    Yes           optional
                    /export, /export/home,
                    /export/root
Standalone System   root (/), /usr, /export/home    Yes           optional
Dataless Client     root (/)                        Yes           /usr, /home
Diskless Client     - none -                        No            root (/), swap, /usr, /home
AutoClient System   cached root (/), cached /usr    Yes           root (/), /usr, /home

Table 1-2 describes how the other clients compare to a standalone system.

Table 1-2 Comparison of Clients Relative to a Standalone System

System Type        Centralized Administration   Performance   System Disk Usage   Network Use
AutoClient System  better                       similar       better              similar
Diskless Client    better                       worse         better              worse
Dataless Client    similar                      worse         better              worse

Server Characteristics

A server system has the following file systems (see Table 1-1): root (/), /usr, /home, /opt, /export, /export/home, and /export/root.

Servers can also contain software to support other system types, such as the OS services required by diskless clients and AutoClient systems.

Standalone System Characteristics

A networked standalone system can share information with other systems in the network, but it can function autonomously because it has its own hard disk with enough space to contain the root (/), /usr, and /export/home file systems and swap space. The standalone system thus has local access to operating system software, executables, virtual memory space, and user-created files.

A non-networked standalone system is a standalone system with all the characteristics listed above, except that it is not connected to a network.

Dataless Clients

A dataless client has local storage for its root (/) file system and swap space. The dataless client cannot function if detached from the network, because its executables (/usr) and user files (/home) are located across the network on the disk of a server.


Note -

SunSoft plans to remove support for dataless clients after Solaris 2.5. You can add this system type now using Host Manager, but in future releases of the Solaris operating environment you will need to choose a different type. It is recommended that you use AutoClient systems instead of dataless clients.


A dataless client places far less demand on the server and the network than a diskless client does. Because dataless clients require less network access, a server can accommodate many more dataless clients than it can diskless clients. Also, since all the user files of all the dataless clients are stored centrally (on a server), they can be backed up and administered centrally.

Diskless Client Characteristics

A diskless client has no disk and depends on a server for all its software and storage area. A diskless client remotely mounts its root (/), /usr, and /home file systems from a server.

A diskless client generates significant network traffic due to its continual need to procure operating system software and virtual memory space from across the network. A diskless client cannot operate if it is detached from the network or if its server malfunctions.

AutoClient System Characteristics

An AutoClient system is nearly identical to a diskless client in terms of installation and administration. It uses a small local disk (70 Mbytes minimum) for swap space and for caching its individual root (/) file system and the shared /usr file system from a server.

The following figure shows how a server and an AutoClient system work together.


Note -

You must obtain a license for each AutoClient system you want to add to your network. See the Solstice AutoClient 2.1 Installation and Product Notes for licensing information.


Figure 1-1 AutoClient System Characteristics


Why Use an AutoClient System?

AutoClient technology provides many system administration advantages over existing system types.

Advantages Over Diskless Systems

AutoClient systems:

Advantages Over Dataless and Standalone Systems

AutoClient systems:

How an AutoClient System Works

The CacheFS technology is the important component of AutoClient systems. A cache is a local storage area for data. A cached file system is a local file system that stores files in the cache as they are referenced, and subsequent references to the same files are satisfied from the cache rather than again retrieving them from the server. This functionality reduces the load on the network and the server, and generally results in faster access for the AutoClient system. Note that when the cache becomes full, space is reclaimed on a least recently used (LRU) basis. Files that have been unreferenced for the longest time are discarded from the cache to free space for the files that are currently being referenced.

An AutoClient system uses its local disk for swap space and to cache its individual root (/) file system and the /usr file system from a server's back file systems. Figure 1-2 shows how an AutoClient system works.

Figure 1-2 How an AutoClient System Works


How an AutoClient System's Cache Is Updated

An AutoClient system uses consistency checking to keep a cached file system synchronized with its back file system. The following descriptions show how consistency checking is done for an AutoClient system:


Note -

Consistency checking for an AutoClient system differs from that for a system running standard CacheFS. AutoClient files (/ and /usr) are not likely to change very often, so consistency checking does not need to occur as frequently on an AutoClient system as it does on a system running CacheFS. This reduces traffic on your AutoClient network. See the System Administration Guide for detailed information about CacheFS consistency checking.


Also, if you add new files to an AutoClient system, its server's back file systems are updated immediately, because an AutoClient system uses a write-through cache. A write-through cache is one that immediately updates its back file system as data is changed or added to the cache.

Chapter 2 About the AutoClient Product

The Solstice AutoClient product allows you to set up AutoClient systems and administer changes to them. This chapter provides information regarding the AutoClient product so that you can successfully complete the tasks discussed in the subsequent chapters.


What's New in the Solstice AutoClient 2.1 Product

The Solstice AutoClient 2.1 product provides the following new features:

Solstice AutoClient Interoperability Support

Table 2-1 describes the server-client configurations that are supported by the Solstice AutoClient 2.1 software.

Table 2-1 Supported Server-Client Configurations

If You Have A ...                           You Can Add OS Services     For the Following
                                            and Support For ...         Releases ...
SPARC server running Solaris 2.3 or later   SPARC clients               Solaris 2.4 or later
                                            i386 clients                Solaris 2.4 or later
i386 server running Solaris 2.4 or later    SPARC clients               Solaris 2.4 or later
                                            i386 clients                Solaris 2.4 or later

Disk Space Requirements for AutoClient Servers and AutoClient Systems

Table 2-2 lists the disk space requirements for AutoClient servers and AutoClient systems.

Table 2-2 Disk Space Requirements for AutoClient Servers and Systems

System Type                    File System                Minimum Disk Space Requirements
Servers of AutoClient systems  root (/)                   1 Mbyte
                               /usr                       4 Mbytes
                               /var                       7.5 Mbytes
                               /export                    17 Mbytes per OS service (this is the
                                                          minimum space required for the OS;
                                                          depending upon the OS that you wish
                                                          to install, the space required could
                                                          be much greater)
                               /export                    20 Mbytes for each AutoClient system
AutoClient systems             cache for root (/) and     Minimum of 70 Mbytes
                               shared /usr

Note: When you add an AutoClient system to a server, the /export/root directory is specified by default to store the 20 Mbytes for each system. However, you can specify any directory that has available disk space. See "Adding AutoClient Systems" for detailed information.


Caution -

The AutoClient configuration uses the entire disk(s) on the system. (For more information on AutoClient disk configurations, see Table 6-3.) If data already exists on the disk(s), it will be overwritten. You should preserve the data elsewhere by backing it up before you add and boot a system. (See "Adding AutoClient Systems".)


Configuration and Transition Issues

With the Solaris 2.5 and later releases, you can add new AutoClient systems to your network, or you can make the AutoClient system conversions shown in Table 2-3.

Table 2-3 AutoClient System Conversions

You Can Convert A ...   To A ...
Generic System          AutoClient System
Standalone System       AutoClient System
Dataless System         AutoClient System
AutoClient System       Standalone System


Caution -

If you plan to convert existing generic, dataless, or standalone systems to AutoClient systems, you should consider this process a re-installation. Any existing system data will be overwritten when the AutoClient system is booted for the first time.



Note -

Supported configurations for AutoClient systems are systems with one or two disks only. Other disk configurations are not recommended for the AutoClient system type. Depending on the disk configuration you choose, all of one disk or all of two disks could be overwritten by the AutoClient product. (Disk configuration options are described in Table 6-3.)


Solstice AutoClient Product Limitations

When you set up your network with AutoClient systems, you need to consider the following limitations:

The Relationship Between AutoClient Systems and Host Manager

AutoClient systems are installed, configured, and maintained with the command-line interface or with Host Manager. Host Manager is a graphical user interface that allows for greater efficiency and ease of use in administering your AutoClient systems in a network environment. Host Manager enables system administrators to perform the following tasks:

Command-Line Equivalents of Host Manager Operations

Table 2-4 lists the commands that provide the same functionality as Host Manager and can be used without running an X Window System, such as the OpenWindows environment. Many of the tasks in Chapter 6, Managing AutoClient Systems, provide corresponding examples using the command-line equivalents.

Table 2-4 Command-Line Equivalents of Host Manager

Command        Description
admhostadd     Adds support for a new system or OS server.
admhostmod     Modifies an existing system or OS server. You can also add OS
               services to an existing OS server.
admhostdel     Deletes an existing system or OS server.
admhostls      Lists one or more system entries in the selected name service.
admhostls -h   Lists hardware information for one or more system entries in
               the selected name service.
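
For example, the two listing commands can be run directly from a shell prompt; the output (not shown here) depends on your name service and host entries:

$ admhostls
$ admhostls -h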

Files Modified by Host Manager

Table 2-5 describes the system files that may be modified by Host Manager when adding and maintaining your AutoClient systems.

Table 2-5 Files Modified by Host Manager

System File                      Where Modified                Description
bootparams                       /etc files, NIS, or NIS+      A database listing the servers that
                                                               provide the paths to a client's boot
                                                               and installation software and to a
                                                               client's root and swap areas
/etc/dfs/dfstab                  Server providing the file     A file containing a series of share
                                 services                      commands that make file resources
                                                               available to the client system
ethers                           /etc files, NIS, or NIS+      A database containing the client's
                                                               Ethernet address
hosts                            /etc files, NIS, or NIS+      A database containing the client's
                                                               host name and associated IP address
timezone                         /etc files, NIS, or NIS+      A database containing the client's
                                                               time zone
/export/root                     Server providing the file     A default directory that contains
                                 services                      root files for a diskless client or
                                                               AutoClient system
/export/swap                     Server providing the file     A default directory that contains
                                 services                      the swap file for a diskless client
/var/sadm/softinfo               Solaris 2.3 and 2.4 servers   A directory containing a list of OS
                                 providing OS services         services available on Solaris 2.3
                                                               and 2.4 servers
/var/sadm/system/admin/services  Solaris 2.5 or later servers  A directory containing a list of OS
                                 providing OS services         services available on a Solaris 2.5
                                                               or later server
/tftpboot                        Server providing the boot     A directory containing SPARC client
                                 services                      booting information
/rplboot                         Server providing the boot     A directory containing i386 client
                                 services                      booting information
/etc/inetd.conf                  Server providing the boot     A system file that starts the tftp
                                 services                      and rpl boot daemons
cred.org_dir                     NIS+                          A NIS+ table used to store the
                                                               host's DES and LOCAL credentials

Chapter 3 Using Solstice AutoClient in a Name Service Environment

The Solstice AutoClient software can be used in different name service environments. When you use each application or command-line equivalent, you must specify the name service environment whose data you wish to modify.


Available Name Service Environments

The Solstice AutoClient software can be used to manage information on the local system or across the network using a name service. The sources of information that can be managed by the Solstice AutoClient software are described in Table 3-1.

Table 3-1 Available Name Service Environments

Name Service   Select This Name Service To Manage ...
NIS+           NIS+ table information. This requires sysadmin group (group 14)
               membership and the appropriate ownership or permissions on the
               NIS+ tables to be modified.
NIS            NIS map information. You must be a member of the sysadmin group.
               If the NIS master server is running the Solaris 1.x OS Release,
               you must have explicit permissions on the NIS master server to
               update the maps. This means an entry for your host name and user
               name must reside in root's .rhosts file on the NIS master
               server. This entry is not required if the NIS master server is
               running the Solaris 2.x OS Release and the Name Services
               Transition Kit 1.2 software.
None           The /etc files on the local system. You must be a member of the
               sysadmin group on the local system.

See "Setting Up User Permissions to Use the Solstice AutoClient Software" for information on using the Solstice AutoClient software with or without a name service environment.

The /etc/nsswitch.conf File and the Solstice AutoClient Product

The Solstice AutoClient software allows you to select which name service databases will be updated (written to) when you make modifications with Host Manager. However, the /etc/nsswitch.conf file on each system specifies the policy for name service lookups (where data will be read from) on that system.


Caution -

It is up to you to make sure that the name service you select from Host Manager is consistent with the specifications in the /etc/nsswitch.conf file. If the selections are not consistent, Host Manager may behave in unexpected ways, resulting in errors or warnings. See "Selecting a Name Service Environment" for an example of the window from which you select a name service.


The /etc/nsswitch.conf file has no effect on how the system configuration files get updated. In the /etc/nsswitch.conf file, more than one source can be specified for the databases, and complex rules can be used to specify how a lookup can be performed from multiple sources. There is no defined syntax for using the rules in the /etc/nsswitch.conf file to perform updates.

Because of this, updates are controlled by the name service selection made when Host Manager is started. The administrator must decide where the update is to take place.

When using Host Manager, administrative operations can take place on multiple systems with a single operation. It is possible that each of these systems could have a different /etc/nsswitch.conf configuration. This situation can make it very difficult to administer your network. It is recommended that all of the systems have a consistent set of /etc/nsswitch.conf files and that the Solstice AutoClient software is used to administer the primary name service specified in the standard /etc/nsswitch.conf file.

With this release of the Solstice AutoClient product, you can define a more complex update policy for Host Manager by using the admtblloc command. For more information on this command, refer to the admtblloc(1M) man page and see "The admtblloc Command".

Selecting a Name Service Environment

After you start the Solstice Launcher and click on an application icon, a window is displayed prompting you to select a name service. Select the name service that is appropriate for your environment.

This example is from Host Manager's Load window.


Working With the Name Services Transition Kit 1.2

The Name Services Transition Kit 1.2 is designed to allow you to support a NIS server running Solaris 2.x. Installing the software and setting up the Solaris 2.x NIS servers is described in the Naming Services Transition Kit 1.2 Administrator's Guide. The Solstice AutoClient software can manage information using the NIS name service supported by Solaris 2.x NIS servers installed with the Name Services Transition Kit 1.2 software.

On NIS servers installed with the Solaris 2.x OS Release, the Name Services Transition Kit 1.2, and the Solstice AutoClient software, the configuration files stored in the /etc directory are modified by the Solstice AutoClient applications (these files are in turn automatically converted to NIS maps). If the NIS server is not installed with the Solstice AutoClient software, the source files in the directory specified by the $DIR variable in /var/yp/Makefile are used.
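
The source file location is set by a line near the top of /var/yp/Makefile; the default typically points at /etc and looks like this (verify the value in your own Makefile):

DIR =/etc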

Setting Up User Permissions to Use the Solstice AutoClient Software

To use the Solstice AutoClient software, membership in the sysadmin group (group 14) is required. See "Adding Users to the sysadmin Group" for more information.

Following are additional requirements to use the Solstice AutoClient software for each name service.

User Permissions in the NIS+ Environment

The requirements for using the Solstice AutoClient software are:

See Solaris Naming Administration Guide for information on adding users to a NIS+ group and granting permissions on NIS+ tables.

User Permissions in the NIS Environment

The requirements for using the Solstice AutoClient software are:


Note -

In order to manage NIS map information in domains other than your own, the other NIS domain masters must be on directly attached networks.


Adding Users to the sysadmin Group

The following procedures describe how to add users to the sysadmin group for each name service. If you have access to the Solstice AdminSuite software, you should use Group Manager instead of these procedures to add users to the sysadmin group.

How to Add a User to the sysadmin Group Using NIS+

  1. Log in to a system in your NIS+ domain as an authorized user with read and write access rights to the group table.

  2. Save the group table to a temporary file.


    $ niscat group.org_dir > /var/tmp/group-file
    
  3. Edit the file, adding the users you want to authorize to use the Solstice AutoClient software.

    The following sample shows users added to the sysadmin entry in the group file.


    .
    .
    .
    sysadmin::14:user1,user2,user3
    nobody::60001:
    noaccess::60002:

    In this example, user1,user2,user3 represent the user names you are adding to the sysadmin group.

  4. Merge the file with the NIS+ group table.


    $ /usr/lib/nis/nisaddent -mv -f /var/tmp/group-file group
    

    The results of the merge are displayed.

  5. Remove the temporary file.


    $ rm /var/tmp/group-file
    

Verification of Adding Users to the sysadmin Group

Verify that the user is a member of the sysadmin group by entering the following commands. Perform this step for each user you added to the file.


# su - user1
$ groups
staff sysadmin
$ exit

How to Add a User to the sysadmin Group Using NIS

  1. Log in as root on the NIS master server.

  2. Edit the group file (the default directory location is /etc).

    Add a comma-separated list of members to the sysadmin group.


    .
    .
    .
    sysadmin::14:user1,user2,user3
    

    Note -

    The directory location of the group file is specified in the NIS makefile using the $DIR variable. Consult this file if you are uncertain of the location of the group file.


  3. Change directory to the location of the NIS makefile (the default is /var/yp) and remake the NIS map.


    # cd /var/yp
    # make group
    

    Note -

    Depending on the size of the NIS map, it may take several minutes or several hours to update the maps and propagate the changes throughout the network.


  4. (Optional) If the NIS master server is running the Solaris 1.x OS Release, create a .rhosts entry in the root (/) directory on the NIS master server for users authorized to modify NIS maps. Use the following format:


    host-name user-name
    
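    For example, a hypothetical entry authorizing user jdoe to update maps from host mars would be:

    mars jdoe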

How to Add a User to the sysadmin Group Without a Name Service

Use this procedure if you will use the Solstice AutoClient software on the local system only.

  1. Become root on your system.

  2. Edit the /etc/group file.

    Add a comma-separated list of members to the sysadmin group.


    .
    .
    .
    sysadmin::14:user1,user2,user3
    
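
As in the NIS+ procedure, you can verify the result for each user you added (user1 is a placeholder user name):

# su - user1
$ groups
staff sysadmin
$ exit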

Setting Up Solstice AutoClient Name Service Policy

A name service policy is used to specify the location of system and network information managed by the Solstice AutoClient software. This information can be located in the /etc directory for a local system, or in the NIS+ or NIS name service.

The Solstice AutoClient software supports a mixed-mode name service policy. A mixed-mode name service policy enables you to specify different name services for configuration information.

You can use the admtblloc(1M) command to choose a mixture of name services for the Solstice AutoClient tools to populate. For example, you can set up Host Manager to populate local /etc files for bootparams information and to populate the NIS+ tables for the other host configuration information, as shown in Figure 3-1.

Figure 3-1 Example Mixed-Mode Name Service Policy



Caution -

If you choose to implement a mixed-mode name service policy, you must run the Solstice AutoClient software from the system containing information in the /etc directory.


The admtblloc Command

The admtblloc command is used to implement a mixed-mode name service policy in the Solstice AutoClient software. To use this command, you must have permission to use the software for each name service as described in "Setting Up User Permissions to Use the Solstice AutoClient Software".


Note -

The admtblloc command has no relation to the /etc/nsswitch.conf file used to set the system-wide name service selection policy in the Solaris 2.x operating environment. The admtblloc command is used to set the policy for all users of the Solstice AutoClient software graphical user interface tools or command line interfaces.


Specifying the Name Service Policy Using admtblloc

This example shows how to specify the name service policy specified in Figure 3-1 using the admtblloc command:


$ admtblloc -c NIS+ -d solar.com bootparams NONE

In this example,

-c NIS+ -d solar.com
    The NIS+ domain solar.com is the name service context (the name service and domain name specified in the Load window).

bootparams
    The configuration file for which the name service policy is being set.

NONE
    Specifies that the host running the Solstice AutoClient tool or command-line interface must use the bootparams file found in the local /etc directory.

After setting the mixed-mode name service policy specified in Figure 3-1, the Solstice AutoClient software will use the bootparams information stored in the /etc directory on the host running the Solstice AutoClient tool whenever the name service (specified in the Load window) is NIS+. The name service policy for the other configuration files (hosts, ethers, timezone, and credential) remains NIS+ unless you specify otherwise with admtblloc. The mixed-mode policy stays in effect for all users of the Solstice AutoClient software in the name service until you change it with the admtblloc command once again.
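
By analogy with the example above (assuming admtblloc accepts a name service name as the location argument in the same way it accepts NONE), you could later set the bootparams policy back to NIS+:

$ admtblloc -c NIS+ -d solar.com bootparams NIS+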


Note -

If you specify that the name service location of a configuration file is NONE using the admtblloc command, the /etc file on the current host running the Solstice AutoClient application or command-line interface is modified. You should log in to the host where you want to use the local /etc file and perform operations using the Solstice AutoClient on that system.


Viewing the Name Service Policy Using admtblloc

This example shows how to display the name service policy using the admtblloc command:


$ admtblloc
Name           Name Service  Path
 
Aliases        NIS+
Hosts          NIS+
Group          NIS+
Netgroup       NIS+
Protocols      NIS+
Bootparams     NONE
Auto.home      NIS+
RPC            NIS+
Timezone       NIS+
Netmasks       NIS+
Ethers         NIS+
Passwd         NIS+
Services       NIS+
Networks       NIS+
Locale         NIS+

In this example output,

Name
    The name of the configuration file.

Name Service
    The name service used to access the configuration file.

Path
    (Optional) The path to the ASCII source file on NIS servers in the NIS name service. The default is the /etc directory.

By default, the admtblloc command displays the policy for the name service to which the current host belongs. To display the name service policy for a different name service, specify the name service context.

This example shows how to display the name service policy for the NONE (local /etc files) name service context using the admtblloc command:


$ admtblloc -c NONE
Name           Name Service  Path
Aliases        NONE
Hosts          NONE
Group          NONE
Auto_home      NONE
Netgroup       NONE
Protocols      NONE
Bootparams     NONE
RPC            NONE
Timezone       NONE
Netmasks       NONE
Ethers         NONE
Passwd         NONE
Services       NONE
Networks       NONE
Locale         NONE

In this example,

-c
    Specifies the name service context.

NONE
    The local /etc files name service.

You can also use the admtblloc command to display the name service policy for a specified configuration file. This example shows how to display the name service policy for the hosts file in the default name service:


$ admtblloc Hosts
Hosts          NIS+

Note -

The configuration file names are case-sensitive.


Configuration Supported by the admtblloc Command

Following is a list of the configuration files the Solstice AutoClient software can use in a mixed-mode name service environment.


Note -

The admtblloc command can be used to set the name service policy for only the configuration files present in this list.


Refer to the admtblloc(1M) man page for more information about how to use this command.

Chapter 4 Security

An important part of using the Solstice AutoClient software is understanding its security features and setting up security policies to protect your administrative data.


Security Information

The Solstice AutoClient software uses the distributed system administration daemon (sadmind) to carry out security tasks when you perform administrative tasks across the network. The sadmind daemon executes the request on the server on behalf of the client process and controls who can access the Solstice AutoClient software.

Administering security involves authentication of the user and authorization of permissions.

User and group identities are used for authorization checking as follows:

Security Levels

Each request to change administration data contains a set of credentials with a UID and a set of GIDs to which the user belongs. The server uses these credentials to perform identity and permission checks. Three levels of authentication security are available.

The security levels are described in Table 4-1.

Table 4-1 Solstice AdminSuite Security Levels

Level   Level Name   Description
0       NONE         No identity checking is done by the server. All UIDs are
                     set to the nobody identity. This level is used mostly for
                     testing.
1       SYS          The server accepts the original user and group identities
                     from the client system and uses them as the identities for
                     the authorization checks. There is no checking to be sure
                     that the UID of the user represents the same user on the
                     server system; that is, it is assumed the administrator has
                     made the UIDs and GIDs consistent on all systems in the
                     network. Checks are made to see if the user has permission
                     to execute the request.
2       DES          Credentials are validated using DES authentication, and
                     checks are made to be sure that the user has permission to
                     execute the request. The user and group identities are
                     obtained from files on the server system by mapping the
                     user's DES network identity to a local UID and set of GIDs.
                     The file used depends on which name service is selected on
                     the server system. This level provides the most secure
                     environment for performing administrative tasks and
                     requires that a publickey entry exist for all server
                     systems where the sadmind daemon is running, and for all
                     users accessing the tools.

Note -

Level 1 (SYS) is the default security level used by sadmind.


Changing the Security Level

You can change the security level from Level 1 to Level 2 by editing the /etc/inetd.conf file on each system, and adding the -S 2 option to the sadmind entry. If you do this, make sure that the servers in the domain are set up to use DES security.
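
The resulting sadmind entry, shown in full in the procedures later in this chapter, looks like this:

100232/10	tli	rpc/udp wait root /usr/sbin/sadmind sadmind -S 2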

You do not need to maintain the same level of security on all systems in the network. You can run some systems, such as file servers requiring strict security, at security Level 2, while running other systems at the default Level 1 security.

See the description of how to set up security for NIS+ in NIS+ and FNS Administration Guide.

Name Service Information

The sadmind daemon uses information held by the name service. The three sources of information are the local /etc files, NIS, and NIS+.

On each system, the /etc/nsswitch.conf file lists several administrative files, followed by a list of one or more keywords that represent the name services to be searched for information. If more than one keyword is listed, they are searched in the order given. For example, the entry

group:	files nisplus

indicates that the security mechanism looks first in the local /etc/group file for an entry. If the entry exists, the security mechanism uses the information in this entry. If the entry doesn't exist, the NIS+ group file is searched.

By default, systems running the Solaris 2.4 and higher OS release have an entry for group 14 in the local /etc/group file. If you want to set up your system to use network-wide information, do not add members to the sysadmin group on the local system. Instead, update the group 14 entry found in the group table stored in the name service.

When running under Level 2 security, the security mechanisms use the public/private key information. Make sure that the entry for publickey is followed by either nis or nisplus (depending on which name service you are using), and remove the files designation. See NIS+ and FNS Administration Guide for more information about the nsswitch.conf file.
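
For example, on a network using NIS+, the publickey entry would read:

publickey:	nisplus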

Things to Consider When Creating a Security Policy

Consider the following when creating a security policy for using the Solstice AutoClient software in a name service environment.


Note -

Setting up a local policy does not disable a global policy. Name service access is determined by the nsswitch.conf file.


Creating a Level 2 DES Security System

Creating a Level 2 DES security system requires a number of steps that depend upon your system configuration. The following sections describe how to set up your system to have Level 2 DES security for systems using /etc, NIS, and NIS+ name services.

How to Create Level 2 DES Security for Systems Using /etc Name Service

  1. On each system that runs the sadmind daemon, edit the /etc/inetd.conf file.

    Change this line (or one similar to this):


    100232/10	tli	rpc/udp wait root /usr/sbin/sadmind sadmind

    to:


    100232/10	tli	rpc/udp wait root /usr/sbin/sadmind sadmind -S 2
    
  2. On each system that runs the sadmind daemon, set the /etc/nsswitch.conf entry for publickey to files.

    Change this entry (or one similar to this):


    publickey:	nis [NOTFOUND=return] files

    to:


    publickey:	files
    
  3. Create credentials for all group 14 users and all of the systems that will run sadmind -S 2.

    1. Log in as root to one of the systems that will run sadmind -S 2.

    2. Run the following command for each user that will run AdminSuite.


      # newkey -u username
      

      Note -

      You must run this command even for users who are not in group 14. If you are not in group 14 and do not have credentials, you are not a user according to sadmind; you will not be able to run any methods, even those that do not require root. You will have to supply the user's password to the newkey program.


    3. Run the following command for every host that you have configured to run secure sadmind.


      # newkey -h hostname
      

      You will have to provide the root password for each of these hosts to the newkey program.

    4. Copy the /etc/publickey file on this system to each of the hosts (put this file in /etc/publickey).

      This file contains all the credentials for each user and each host.


      Note -

      Do not run newkey on each of the systems. This seems to create a different public/private key pair, and the public key will not be valid across the network. You must create this file on one machine and then copy it to all the others.


    5. As root, enter the following command on each system to put root's private key in /etc/.rootkey.


      # keylogin -r
      

      By doing this, you will not have to keylogin as root on every system every time you want to run admintool; this creates an automatic root keylogin at boot time.

  4. Create an /etc/netid file for each user and each system; put this file on all of the systems.

    1. For each user in the publickey file, create an entry in /etc/netid that looks like the following:


      unix.uid@domainname	uid:gid,gid, ...
      
    2. List every group that this user is a member of; when sadmind runs with -S 2 and publickey is set to files, it checks /etc/netid rather than /etc/group to determine group 14 membership. (A hypothetical example follows step 4.)

    3. For each host in the publickey file, create an entry in /etc/netid that looks like the following:


      unix.hostname@domainname			0:hostname
      
    4. Copy this file to every system in /etc/netid.
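
    For example, hypothetical entries for a user with UID 789 who belongs to groups 14 and 10, and for a host named mars, in the domain solar.com, would look like this:

      unix.789@solar.com	789:14,10
      unix.mars@solar.com	0:mars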

  5. Reboot all of the machines.

  6. On each system that you want to run the application on, log in and then keylogin. (You must be a member of group 14.)

    After the keylogin, you can safely log out; your key is stored in the keyserv daemon until you explicitly keylogout or the system reboots.
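
    For example (keylogin prompts for your secure RPC password, which is normally the same as the login password supplied to newkey):

    $ keylogin
    Password: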

How to Create Level 2 DES Security for Systems Using NIS Name Service

  1. On each system that runs the sadmind daemon, edit the /etc/inetd.conf file.

    Change this line (or one similar to this):


    100232/10	tli	rpc/udp wait root /usr/sbin/sadmind sadmind

    to:


    100232/10	tli	rpc/udp wait root /usr/sbin/sadmind sadmind -S 2
    
  2. On each system that runs the sadmind daemon, set the /etc/nsswitch.conf entry for publickey to nis.

    Change this entry (or one similar to this):


    publickey:	nis [NOTFOUND=return] files

    to:


    publickey:	nis
    
  3. Create credentials for all group 14 users and all of the systems that will run sadmind -S 2.

    1. Log in as root on the NIS server.

    2. Run the following command for each user that will run AdminSuite.


      # newkey -u username -s files
      

      Note -

      You must run this command even for users who are not in group 14. If you are not in group 14 and do not have credentials, you are not a user according to sadmind; you will not be able to run any methods, even those that do not require root. You will have to supply the user's password to the newkey program.


    3. Run the following command for every host that you have configured to run secure sadmind.


      # newkey -h hostname
      

      You will have to provide the root password for each of these hosts to the newkey program.

    4. Copy the /etc/publickey file on this system to the source file that is specified in /var/yp/Makefile; then remake and push the NIS maps.


      # cd /var/yp; make
      
  4. Verify that you are a member of group 14 in the NIS group map.

    1. Log in as root.

    2. Change directories to the source file specified in /var/yp/Makefile.

    3. Manually edit the group file and add yourself to group 14, just as you did in the /etc/group file.

    4. Change directories to /var/yp and run make.


      # cd /var/yp; make
      

      You should see the group map pushed; a message appears indicating that this action has occurred.


      Note -

      The security system looks in the NIS maps for your group 14 access and will fail if you are not in group 14 there, regardless of whether your /etc/nsswitch.conf file has group: files nis.


      When sadmind is running in -S 2 mode, it uses the publickey entry to determine which name service to look at for user credentials. When the entry in /etc/nsswitch.conf is nis, it looks in the nis group map to ensure that the user is a member of group 14.

  5. As root, enter the following command on each system to put root's private key in /etc/.rootkey.


    # keylogin -r
    

    By doing this, you will not have to keylogin as root on every system every time you want to run AdminSuite; this creates an automatic root keylogin at boot time.

  6. To ensure that the nscd gets flushed, reboot all of the workstations.

  7. On each system on which you want to run the application, log in and then keylogin. (You must be a member of group 14.)

    After the keylogin, you can safely log out; your key is stored in the keyserv daemon until you explicitly keylogout or the system reboots.

How to Create Level 2 DES Security for Systems Using NIS+ Name Service

  1. On each system that runs the sadmind daemon, edit the /etc/inetd.conf file.

    Change this line:


    100232/10	tli	rpc/udp wait root /usr/sbin/sadmind sadmind

    to:


    100232/10	tli	rpc/udp wait root /usr/sbin/sadmind sadmind -S 2
    
  2. On each system that runs the sadmind daemon, set the /etc/nsswitch.conf entry for publickey to nisplus.

    Change this entry (or one similar to this):


    publickey:	nisplus [NOTFOUND=return] files

    to:


    publickey:	nisplus
    
  3. Log in as root on the NIS+ master server; create credentials for all group 14 users and all of the systems that will run sadmind -S 2.

    1. Create local credentials for the user.


      # nisaddcred -p uid -P username.domainname. local
      
    2. Create DES credentials for the user.


      # nisaddcred -p unix.uid@domainname -P username.domainname. des
      
  4. Log in as root on the NIS+ master server; add all of the users for the AdminSuite to the NIS+ group 14 using the following command.


    # nistbladm -m members=username,username,... [name=sysadmin],group.org_dir
    

    Note -

    The use of this function replaces the current member list with the one that is input; therefore, you must include all members you wish to be a part of group 14.


  5. As root, add all of the users for the AdminSuite to the NIS+ admin group.


    # nisgrpadm -a admin username
    

    Verify that the NIS_GROUP environment variable is set to admin.

  6. On all of the workstations on which you intend to run the application, enter the following command.


    # keylogin -r
    
  7. Reboot all of the workstations to ensure that the nscd gets flushed.

  8. On each system on which you want to run the application, log in and then keylogin. (You must be a member of group 14.)

    After the keylogin, you can safely log out; your key is stored in the keyserv daemon until you explicitly keylogout or the system reboots.

Chapter 5 Host Manager Reference Information

This chapter contains reference information for features found in Host Manager.


Main Window Areas

When you select the Host Manager icon in the Solstice Launcher, the Host Manager's main window is displayed. The areas in the Host Manager's main window are shown in Figure 5-1.

Figure 5-1 Host Manager Main Window Areas


The main window contains two areas: a menu bar and a display area. The menu bar usually contains four menus: File, Edit, View, and Help. For more information on these menus, see the online help reference (the section "Using Admin Help" describes how to access online help).

Using Admin Help

An important part of the Solstice AutoClient software is a Help utility called Admin Help. Admin Help provides detailed information about Host Manager and its functions.

Figure 5-2 shows the Admin Help window.

Figure 5-2 Admin Help Window


The titles displayed in the top window pane identify the list of topics available for each level of help.

The text displayed in the bottom window pane describes information about using the current menu or command.

Use the scroll bars to the right of each pane to scroll through the help information displayed.

On the left side of the Admin Help window are buttons used to find information and navigate through the help system. The buttons are described in Table 5-1.

Table 5-1 Admin Help Buttons

This Button ...   Is Used To ...                       Notes
Topics            Display a list of overview topics.   Click on a title in the top
                                                       window pane to view the
                                                       accompanying help text.
How To            Display a list of step-by-step       Click on a title in the top
                  procedures.                          window pane to view the
                                                       accompanying help text.
Reference         Display a list of more detailed      Click on a title in the top
                  information.                         window pane to view the
                                                       accompanying help text.
Previous          Return to the last accessed help     The help viewer automatically
                  text.                                returns to the previous help
                                                       selection.
Done              Exit the help system.                The Admin Help window is closed.

Filtering System Entries

To view specific system entries in Host Manager's main window, choose Set Filter from the View menu. The Filter window is displayed and you have the option of setting from one to three filtering characteristics, as shown in Figure 5-3.

Figure 5-3 Filtering System Entries With Host Manager


After you have chosen a method for filtering the entries that are displayed in the main window, click on OK.

Buttons

Table 5-2 describes the common window buttons used in Host Manager.

Table 5-2 Common Window Buttons in Host Manager

This Button ...   Is Used To ...
OK                Complete a task so that it can be processed. The window is
                  closed after the task is completed.
Apply             Complete a task but leave the window open. (Not available on
                  all windows.)
Reset             Reset all fields to their original contents (since the last
                  successful operation).
Cancel            Cancel the task without submitting any changes and close the
                  window. Fields are reset to their original contents.
Help              Access Admin Help.


Caution -

Clicking on OK after clicking on Apply might cause a duplicate operation, resulting in an error. Click on Cancel after clicking on Apply to dismiss the window.


Global Browsing Capabilities

Host Manager enables you to see most system attributes in the main window, shown in Figure 5-4. Choose Customize from the View menu to change your attribute viewing options.

Figure 5-4 Global Browsing Capabilities With Host Manager


Batching Operations

Host Manager enables you to add, delete, modify, convert, and revert more than one system at the same time, which is called batching. The scrolling and highlighting capabilities of the main window enable you to select multiple systems, as shown in Figure 5-5. To select more than one system, click SELECT (by default, the left mouse button) on the first system. Then select each subsequent system by pressing the Control key and clicking SELECT.

Figure 5-5 Selecting Multiple Entries Within Host Manager


See Chapter 6, Managing AutoClient Systems, for information on completing add, delete, modify, convert, and revert operations.

Status Area

"Main Window Areas" describes two areas of Host Manager's main window: a menu bar area and a display area. The Host Manager main window also has a status area in the bottom of the window, which is shown in Figure 5-6.

In the left corner, the status area displays status information about pending changes, such as how many systems are waiting to be added, deleted, modified, and converted. In the right corner, the status area displays the current name service you are modifying with Host Manager.

The message "Total Changes Pending" reflects the number of systems that are waiting to be added, deleted, modified, and converted when you choose Save Changes from the File menu. After you choose "Save Changes" from the File menu, this message changes to "All Changes Successful." If any changes did not succeed, a message is written to the Errors pop-up window.

Figure 5-6 Status Information Within Host Manager


Logging Host Manager Operations

You can set up a log file to record each major operation completed with Host Manager or its command-line equivalents. After you enable logging, the date, time, server, user ID (UID), and description for every operation are written to the specified log file.

You need to follow the procedure described in "How to Enable Logging of Host Manager Operations" on each server where you run the Host Manager and want to maintain a logging file.

How to Enable Logging of Host Manager Operations

You do not need to quit Host Manager or the Solstice Launcher, if they are already started.

  1. Become root.

  2. Edit the /etc/syslog.conf file and add an entry at the bottom of the file that follows this format:


    user.info filename
    

    Note that filename must be the absolute path name of the file, for example: /var/log/admin.log. (A complete hypothetical entry using this path is shown after this procedure.)

  3. Create the file, filename, if it does not already exist:


    # touch filename
    
  4. Make the changes to the /etc/syslog.conf file take effect by stopping and starting the syslog service:


    # /etc/init.d/syslog stop
    Stopping the syslog service.
    # /etc/init.d/syslog start
    syslog service starting.
    #

    Solstice AdminSuite operations will now be logged to the file you specified.
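
For example, if you use the hypothetical log file /var/log/admin.log from step 2, the entry added to /etc/syslog.conf would look like this (the fields are separated by tabs):

user.info	/var/log/admin.log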

Example of a Host Manager Log File


Aug 30 10:34:23 lorna Host Mgr: [uid=100] Get host prototype
Aug 30 10:34:52 lorna Host Mgr: [uid=100] Adding host: frito
Aug 30 10:35:37 lorna Host Mgr: [uid=100] Get host prototype
Aug 30 10:35:59 lorna Host Mgr: [uid=100] Deleting host frito
Aug 30 10:36:07 lorna Host Mgr: [uid=100] Modifying sinister with
sinister
Aug 30 14:39:21 lorna Host Mgr: [uid=0] Read hosts
Aug 30 14:39:43 lorna Host Mgr: [uid=0] Get timezone for lorna
Aug 30 14:39:49 lorna Host Mgr: [uid=0] Get host prototype
Aug 30 14:40:01 lorna Host Mgr: [uid=0] List supported
architectures for lorna dirpath=/cdrom/cdrom0/s0