System Administration Guide: Security Services

Part III Authentication Services and Secure Communication

Chapter 9, Using Authentication Services (Tasks)

Provides information about Diffie-Hellman authentication. 

Chapter 10, Using PAM

Provides information about the Pluggable Authentication Module (PAM) framework. 

Chapter 11, Using Solaris Secure Shell (Tasks)

Provides an introduction to Solaris Secure Shell, as well as step-by-step instructions. 

Chapter 12, Solaris Secure Shell Administration (Reference)

Provides a description of the files that are used to configure Solaris Secure Shell.  

Chapter 13, Introduction to SEAM

Provides overview information about Sun Enterprise Authentication Mechanism (SEAM). 

Chapter 14, Planning for SEAM

Provides a list of information that needs to be gathered and issues that need to be resolved before you configure SEAM. 

Chapter 15, Configuring SEAM (Tasks)

Provides step-by-step instructions for configuring SEAM. 

Chapter 16, SEAM Error Messages and Troubleshooting

Provides a list of SEAM error messages, how to fix the conditions that generate the messages, and how to troubleshoot some error conditions. 

Chapter 17, Administering Principals and Policies (Tasks)

Provides step-by-step instructions for administering SEAM principals and policies with the gkadmin GUI and at the command line.

Chapter 18, Using SEAM (Tasks)

Provides user instructions for SEAM. 

Chapter 19, SEAM (Reference)

Provides reference information about SEAM. 

Chapter 9 Using Authentication Services (Tasks)

This chapter provides information about the Diffie-Hellman authentication mechanism that can be used with Secure RPC.

The following is a list of the step-by-step instructions in this chapter.

Overview of Secure RPC

Secure RPC is an authentication method that authenticates both the host and the user who is making a request for a service. Secure RPC uses the Diffie-Hellman authentication mechanism. This authentication mechanism uses DES encryption. Applications that use Secure RPC include NFS and the NIS+ name service.

NFS Services and Secure RPC

NFS enables several hosts to share files over the network. Under the NFS service, a server holds the data and resources for several clients. The clients have access to the file systems that the server shares with the clients. Users who are logged in to the client machines can access the file systems by mounting the file systems from the server. To the user on the client machine, it appears as if the files are local to the client. One of the most common uses of NFS allows systems to be installed in offices, while keeping all user files in a central location. Some features of the NFS service, such as the mount -nosuid option, can be used to prohibit the opening of devices and file systems by unauthorized users.

The NFS service uses Secure RPC to authenticate users who make requests over the network. This process is known as Secure NFS. The authentication mechanism, AUTH_DH, uses DES encryption with Diffie-Hellman authentication to ensure authorized access. The AUTH_DH mechanism has also been called AUTH_DES.

DES Encryption

The Data Encryption Standard (DES) encryption functions use a 56-bit key to encrypt data. If two credential users or principals know the same DES key, they can communicate in private by using the key to encipher and decipher text. DES is a relatively fast encryption mechanism. A DES chip makes the encryption even faster. However, if the chip is not present, a software implementation is substituted.

The risk of using just the DES key is that an intruder can collect enough cipher-text messages that were encrypted with the same key to be able to discover the key and decipher the messages. For this reason, security systems such as Secure NFS change the keys frequently.

Kerberos Authentication

Kerberos is an authentication system that was developed at MIT. Encryption in Kerberos is based on DES. Kerberos V4 support is no longer supplied as part of Secure RPC. However, a client-side implementation of Kerberos V5, which uses RPCSEC_GSS, is included with this release. For more information, see Chapter 13, Introduction to SEAM.

Diffie-Hellman Authentication

The Diffie-Hellman (DH) method of authenticating a user is nontrivial for an intruder to crack. The client and the server have their own private key, which they use with the public key to devise a common key. The private key is also known as the secret key. The client and the server use the common key to communicate with each other by using an agreed-on encryption function, such as DES. This method was identified as DES authentication in previous Solaris releases.

Authentication is based on the ability of the sending system to use the common key to encrypt the current time. Then the receiving system can decrypt and check against its current time. Make sure to synchronize the time on the client and the server.
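
For example, you can synchronize a client's clock to a trusted time server with the rdate command before starting secure transactions (the host name timehost is hypothetical):


# rdate timehost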

The public keys and private keys are stored in an NIS or NIS+ database. NIS stores the keys in the publickey map. NIS+ stores the keys in the cred table. These files contain the public key and the private key for all potential users.

The system administrator is responsible for setting up NIS maps or NIS+ tables, and generating a public key and a private key for each user. The private key is stored in encrypted form with the user's password. This process makes the private key known only to the user.

Implementation of Diffie-Hellman Authentication

This section describes the series of transactions in a client-server session that use DH authentication (AUTH_DH).

Generating the Public Keys and Secret Keys

Sometime prior to a transaction, the administrator runs either the newkey or nisaddcred command to generate a public key and a secret key. Each user has a unique public key and secret key. The public key is stored in a public database. The secret key is stored in encrypted form in the same database. To change the key pair, use the chkey command.

Running the keylogin Command

Normally, the login password is identical to the secure RPC password. In this case, the keylogin command is not required. However, if the passwords are different, the users have to log in, and then run a keylogin command explicitly.

The keylogin command prompts the user for a secure RPC password. The command then uses the password to decrypt the secret key. The keylogin command then passes the decrypted secret key to a program that is called the keyserver. The keyserver is an RPC service with a local instance on every computer. The keyserver saves the decrypted secret key and waits for the user to initiate a secure RPC transaction with a server.

If both the login password and the RPC password are the same, the login process passes the secret key to the keyserver. If the passwords are required to be different and the user must always run the keylogin command, then the keylogin command can be included in the user's environment configuration file, such as the ~/.login, ~/.cshrc, or ~/.profile file. Then the keylogin command runs automatically whenever the user logs in.
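
For example, a user whose login shell reads the ~/.profile file could append a line like the following (a minimal sketch; the path shown is the standard location of the command):


/usr/bin/keylogin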

Generating the Conversation Key

When the user initiates a transaction with a server, the following occurs:

  1. The keyserver randomly generates a conversation key.

  2. The kernel uses the conversation key to encrypt the client's time stamp, among other things.

  3. The keyserver looks up the server's public key in the public key database. See the publickey(4) man page for more information.

  4. The keyserver uses the client's secret key and the server's public key to create a common key.

  5. The keyserver encrypts the conversation key with the common key.

First Contact With the Server

The transmission, which includes the encrypted time stamp and the encrypted conversation key, is then sent to the server. The transmission includes a credential and a verifier. The credential contains three components:

The window is the difference in time that the client says should be allowed between the server's clock and the client's time stamp. If the difference between the server's clock and the time stamp is greater than the window, the server rejects the client's request. Under normal circumstances, this rejection does not happen, because the client first synchronizes with the server before starting the RPC session.

The client's verifier contains the following:

The window verifier is needed in case somebody wants to impersonate a user. The impersonator can write a program that, instead of filling in the encrypted fields of the credential and verifier, just stuffs in random bits. The server decrypts the conversation key into some random key. The server then uses the key to try to decrypt the window and the time stamp. The result is random numbers. After a few thousand trials, however, the random window/time stamp pair is likely to pass the authentication system. The window verifier makes the process of guessing the right credential much more difficult.

Decrypting the Conversation Key

When the server receives the transmission from the client, the following occurs:

  1. The keyserver that is local to the server looks up the client's public key in the public key database.

  2. The keyserver uses the client's public key and the server's secret key to deduce the common key. The common key is the same common key that is computed by the client. Only the server and the client can calculate the common key because the calculation requires knowing one of the secret keys.

  3. The kernel uses the common key to decrypt the conversation key.

  4. The kernel calls the keyserver to decrypt the client's time stamp with the decrypted conversation key.

Storing Information on the Server

After the server decrypts the client's time stamp, the server stores four items of information in a credential table:

The server stores the first three items for future use. The server stores the time stamp to protect against replays. The server accepts only time stamps that are chronologically greater than the last time stamp seen, so any replayed transactions are guaranteed to be rejected.


Note –

Implicit in these procedures is the name of the caller, who must be authenticated in some manner. The keyserver cannot use DES authentication to authenticate the caller because it would create a deadlock. To solve this problem, the keyserver stores the secret keys by user ID (UID) and grants requests only to local root processes.


Returning the Verifier to the Client

The server returns a verifier to the client, which includes the following:

The server subtracts 1 from the time stamp to ensure that the time stamp is invalid, so the time stamp cannot be reused as a client verifier.

Client Authenticates the Server

The client receives the verifier and authenticates the server. The client knows that only the server could have sent the verifier because only the server knows what time stamp the client sent.

Additional Transactions

In each transaction after the first, the client returns the index ID to the server and sends another encrypted time stamp. The server sends back the client's time stamp minus 1, encrypted with the conversation key.

Administering Diffie-Hellman Authentication

A system administrator can implement policies that help secure the network. The level of security that is required differs with each site. This section provides instructions for some tasks that are associated with network security.

How to Restart the Keyserver

  1. Become superuser or assume an equivalent role.

  2. Verify whether the keyserv daemon is running.


    # ps -ef | grep keyserv
    root   100      1  16   Apr 11    ?         0:00 /usr/sbin/keyserv
    root  2215   2211   5   09:57:28  pts/0     0:00 grep keyserv
  3. Start the keyserver if the process isn't running.


    # /usr/sbin/keyserv
    

How to Set Up a root Key in NIS+ Credentials for Diffie-Hellman Authentication

For a detailed description of NIS+ security, see System Administration Guide: Naming and Directory Services (FNS and NIS+).

  1. Become superuser or assume an equivalent role.

  2. Edit the /etc/nsswitch.conf file, and add the following line:


    publickey: nisplus
  3. Initialize the NIS+ client.


    # nisinit -cH hostname
    

    hostname is the name of a trusted NIS+ server that contains an entry in its tables for the client machine.

  4. Add the client to the cred table by typing the following commands:


    # nisaddcred local
    # nisaddcred des
    
  5. Verify the setup by using the keylogin command.

    If you are prompted for a password, the procedure has succeeded.

Example—Setting Up a New Key for root on an NIS+ Client

The following example uses the host pluto to set up earth as an NIS+ client. You can ignore the warnings. The keylogin command is accepted, verifying that earth is correctly set up as a secure NIS+ client.


# nisinit -cH pluto
NIS Server/Client setup utility.
This machine is in the North.Abc.COM. directory.
Setting up NIS+ client ...
All done.
# nisaddcred local
# nisaddcred des 
DES principal name : unix.earth@North.Abc.COM
Adding new key for unix.earth@North.Abc.Com (earth.North.Abc.COM.)
 
Network password: xxx Press Return
Warning, password differs from login password.
Retype password: xxx Press Return
 
# keylogin
Password:
#

How to Set Up a New User Key That Uses NIS+ Credentials for Diffie-Hellman Authentication

  1. Add the user to the cred table on the root master server by typing the following command:


    # nisaddcred -p unix.UID@domain-name -P username.domain-name. des
    

    Note that, in this case, username.domain-name must end with a dot (.).

  2. Verify the setup by logging in as the client and typing the keylogin command.

Example—Setting Up a New Key for an NIS+ User

The following example shows how DES authorization is given to a user who is named george.


# nisaddcred -p unix.1234@North.Abc.com -P george.North.Abc.COM. des
DES principal name : unix.1234@North.Abc.COM
Adding new key for unix.1234@North.Abc.COM (george.North.Abc.COM.)
 
Password:
Retype password:
 
# rlogin rootmaster -l george
# keylogin
Password:
#

How to Set Up a root Key by Using NIS Credentials With Diffie-Hellman Authentication

  1. Become superuser on the client or assume an equivalent role.

  2. Edit the /etc/nsswitch.conf file, and add the following line:


    publickey: nis
  3. Create a new key pair by using the newkey command.


    # newkey -h hostname 
    

    hostname is the name of the client.

Example—Setting Up a New Key for root on a NIS Client

The following example shows how to set up earth as a secure NIS client.


# newkey -h earth
Adding new key for unix.earth@North.Abc.COM
New Password:
Retype password:
Please wait for the database to get updated...
Your new key has been successfully stored away.
#

How to Create a New User Key That Uses NIS Credentials With Diffie-Hellman Authentication

  1. Log in to the NIS master server as superuser or assume an equivalent role.

    Only system administrators, when logged in to the NIS master server, can generate a new key for a user.

  2. Create a new key for a user.


    # newkey -u username 
    

    username is the name of the user. The system prompts for a password. You can type a generic password. The private key is stored in an encrypted form by using the generic password.


    # newkey -u george
    Adding new key for unix.12345@Abc.North.Acme.COM
    New Password:
    Retype password:
    Please wait for the database to get updated...
    Your new key has been successfully stored away.
    #
  3. Tell the user to log in and type the chkey -p command.

    This command allows the user to re-encrypt his or her private key with a password known only to the user.


    earth% chkey -p
    Updating nis publickey database.
    Reencrypting key for unix.12345@Abc.North.Acme.COM
    Please enter the Secure-RPC password for george:
    Please enter the login password for george:
    Sending key change request to pluto...
    #

    Note –

    The chkey command can be used to create a new key-pair for a user.


How to Share and Mount Files With Diffie-Hellman Authentication

Prerequisite

The Diffie-Hellman publickey authentication must be enabled on the network. See How to Set Up a root Key in NIS+ Credentials for Diffie-Hellman Authentication and How to Set Up a root Key by Using NIS Credentials With Diffie-Hellman Authentication.

To share a file system with Diffie-Hellman authentication:
  1. Become superuser or assume an equivalent role.

  2. Share the file system with Diffie-Hellman authentication.


    # share -F nfs -o sec=dh /filesystem 
    
To mount a file system with Diffie-Hellman authentication:
  1. Become superuser or assume an equivalent role.

  2. Mount the file system with Diffie-Hellman authentication.


    # mount -F nfs -o sec=dh server:resource  mountpoint 
    

    The -o sec=dh option mounts the file system with AUTH_DH authentication.
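
    To share the file system with AUTH_DH at every reboot, the same share command can also be placed in the /etc/dfs/dfstab file (a sketch that reuses the /filesystem placeholder from the previous step):


    share -F nfs -o sec=dh /filesystem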

Chapter 10 Using PAM

This chapter covers the Pluggable Authentication Module (PAM) framework. PAM provides a method to “plug in” authentication services and provides support for multiple authentication services.

PAM (Overview)

The Pluggable Authentication Module (PAM) framework lets you “plug in” new authentication technologies without changing system entry services, such as login, ftp, telnet, and so on. You can also use PAM to integrate UNIX login with other security mechanisms like Kerberos. Mechanisms for account, session, and password management can also be “plugged in” by using this framework.

Benefits of Using PAM

The PAM framework allows you to configure the use of system entry services (ftp, login, telnet, or rsh, for example) for user authentication. Some benefits that PAM provides are as follows:

PAM Components

The PAM software consists of a library, several modules, and a configuration file. New versions of several commands or daemons that take advantage of the PAM interfaces are also included.

The following figure illustrates the relationship between the applications, the PAM library, the pam.conf file, and the PAM modules.

Figure 10–1 How PAM Works

Diagram shows how the PAM library is situated between the PAM modules and the applications that use the modules.

The applications, such as ftp, telnet, and login, use the PAM library to access the appropriate configuration policy. The pam.conf file defines which modules to use, and in what order the modules are to be used with each application. Responses from the modules are passed back through the library to the application.

The following sections describe the relationship between the PAM components and the applications.

PAM Library

The PAM library provides the framework to load the appropriate modules and to manage the stacking process. The PAM library provides a generic structure to which all of the modules can plug in. See the pam(3PAM) man page for more information.

Password-Mapping Feature

The stacking feature can require that a user remembers several passwords. With the password-mapping feature, the primary password is used to decrypt the other passwords. The user does not need to remember or enter multiple passwords. The other option is to synchronize the passwords across each authentication mechanism. This strategy could increase the security risk, because the mechanism security is limited by the least secure password method that is used in the stack.

Changes to PAM for the Solaris 9 Release

The Solaris 9 release includes several enhancements to the PAM service. The following list highlights the most important changes:

Changes to PAM for the Solaris 9 Update 2 Release

Update 2 includes a new binding control flag. This flag provides the ability to skip additional authentication if the service module returns success and if no preceding required modules have failed. The control flag is documented in the pam.conf(4) man page and in PAM Control Flags.

PAM (Tasks)

This section discusses some tasks that might be required to make the PAM framework fully functional. In particular, you should be aware of some security issues that are associated with the PAM configuration file.

PAM (Task Map)

Task 

Description 

For Instructions 

Plan for your PAM Installation 

Consider configuration issues and make decisions about them before you start the software configuration process. 

Planning for PAM

Add new PAM modules 

Sometimes, site-specific modules must be written and installed to cover requirements that are not part of the generic software. This procedure covers the installation process. 

How to Add a PAM Module

Block access through ~/.rhosts

Steps to further increase security by preventing access through ~/.rhosts. 

How to Prevent Unauthorized Access From Remote Systems With PAM

Initiate error reporting 

Steps to start the reporting of PAM error messages through syslog. 

How to Initiate PAM Error Reporting

Planning for PAM

When you are deciding how best to use PAM in your environment, start by focusing on these issues:

Here are some suggestions to consider before you change the PAM configuration file:

How to Add a PAM Module

  1. Become superuser or assume an equivalent role.

  2. Determine which control flags and which other options should be used.

    Refer to PAM Modules for information on the modules.

  3. Copy the new module to /usr/lib/security/sparcv9.

    In the Solaris 8 release, the module should be copied to /usr/lib/security.

  4. Set the permissions so that the module file is owned by root and the permissions are 555 (see the example after step 5).

  5. Edit the PAM configuration file, /etc/pam.conf, and add this module to the appropriate services.
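
    For example, a hypothetical site-specific module that is named pam_site.so.1 might be installed as follows on a 64-bit SPARC system; the module name is for illustration only:


    # cp pam_site.so.1 /usr/lib/security/sparcv9/pam_site.so.1
    # chown root /usr/lib/security/sparcv9/pam_site.so.1
    # chmod 555 /usr/lib/security/sparcv9/pam_site.so.1
    # vi /etc/pam.conf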

Verification

Test the new configuration before you reboot the system, in case the configuration file is misconfigured. Run commands such as rlogin, su, and telnet before you reboot. If the service is a daemon that is spawned only once when the system is booted, you must reboot the system before you can verify that the module has been added.

How to Prevent Unauthorized Access From Remote Systems With PAM

Remove the rlogin auth rhosts_auth.so.1 entry from the PAM configuration file. This step prevents the reading of the ~/.rhosts files during an rlogin session. Therefore, this step prevents unauthenticated access to the local system from remote systems. All rlogin access requires a password, regardless of the presence or contents of any ~/.rhosts or /etc/hosts.equiv files.
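
In a default configuration, the entry to remove looks similar to the following sketch; the exact line can vary by release:


rlogin  auth sufficient         pam_rhosts_auth.so.1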


Note –

To prevent other unauthenticated access to the ~/.rhosts files, remember to disable the rsh service. The best way to disable a service is to remove the service entry from the /etc/inetd.conf file. Changing the PAM configuration file does not prevent the service from being started.
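
For example, the in.rshd entry in the /etc/inetd.conf file can be commented out as shown in the following sketch; the exact fields can vary by release. After the edit, send a HUP signal to the inetd daemon so that it rereads the file.


#shell  stream  tcp     nowait  root    /usr/sbin/in.rshd       in.rshd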


How to Initiate PAM Error Reporting

  1. Edit the /etc/syslog.conf file to add any of the following entries for PAM error reporting:

    • auth.alert – Messages about conditions that should be fixed immediately

    • auth.crit – Critical messages

    • auth.err – Error messages

    • auth.info – Informational messages

    • auth.debug – Debugging messages

  2. Restart the syslog daemon, or send a SIGHUP signal to the daemon to activate the PAM error reporting.
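
    For example, the following command sends a SIGHUP signal to the syslogd daemon (a sketch; the pkill command is available in the Solaris 7 release and later):


    # pkill -HUP syslogd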

Example—Initiating PAM Error Reporting

In the following example, all alert messages are displayed on the console. Critical messages are mailed to root. Informational and debug messages are added to the /var/log/pamlog file.


auth.alert	/dev/console
auth.crit	'root'
auth.info;auth.debug	/var/log/pamlog

Each line in the log contains a time stamp, the name of the system that generated the message, and the message. The pamlog file is capable of logging a large amount of information.
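
Note that the syslogd daemon does not create a log file that does not already exist. Assuming the path from the example, you can create the file first:


# touch /var/log/pamlog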

PAM (Reference)

PAM uses run-time pluggable modules to provide authentication for system entry services. A stacking feature is provided to let you authenticate users through multiple services. A password-mapping feature is also provided so that users are not required to remember multiple passwords.

PAM Modules

Every PAM module implements a specific mechanism. When you set up PAM authentication, you need to specify both the module and the module type, which defines what the module does. More than one module type, such as auth, account, session, or password, can be associated with each module.

The following table describes every PAM module, and includes the module name and the module file name. The path of each module is determined by the instruction set that is available in the Solaris release that is installed. The default path to the modules is /usr/lib/security/$ISA. The value for $ISA could be sparc or i386. See the isalist(5) man page for more information.

Table 10–1 PAM Modules

Module Name and Module File Name 

Description 

authtok_check

pam_authtok_check.so.1

Provides support for password management. This module performs various checks on passwords. The checks are for the length of the password, for a circular shift of the login name, for password complexity, and for the amount of variation between new passwords and old passwords. See pam_authtok_check(5) for more information.

authtok_get

pam_authtok_get.so.1

Provides password prompting for authentication and password management. See pam_authtok_get(5) for more information.

authtok_store

pam_authtok_store.so.1

Provides support for authentication only. This module updates the authentication token for the user. After the successful update, the module stores the token in the specified repository or default repository. See pam_authtok_store(5) for more information.

dhkeys

pam_dhkeys.so.1

Provides support for Diffie-Hellman key management in authentication. This module supports Secure RPC authentication and Secure RPC authentication token management. See pam_dhkeys(5) for more information.

dial_auth

pam_dial_auth.so.1

Can only be used for authentication. This module uses data that is stored in the /etc/dialups and /etc/d_passwd files for authentication. This module is mainly used by the login command. See pam_dial_auth(5) for more information.

krb5

pam_krb5_auth.so.1

Provides support for authentication, account management, session management, and password management. Kerberos credentials are used for authentication. See pam_krb5(5) for more information.

ldap

pam_ldap.so.1

Provides support for authentication and password management. Data from an LDAP server are used for authentication. See pam_ldap(5) for more information.

projects

pam_projects.so.1

Provides support for account management. See pam_projects(5) for more information.

rhosts_auth

pam_rhosts_auth.so.1

Can only be used for authentication. This module uses data that is stored in the ~/.rhosts and /etc/hosts.equiv files through the ruserok() routine. This module is mainly used by the rlogin and rsh commands. See pam_rhosts_auth(5) for more information.

roles

pam_roles.so.1

Provides support for account management only. The RBAC user_attr database determines which roles a user can assume. See pam_roles(5) for more information.

sample

pam_sample.so.1

Provides support for authentication, account management, session management, and password management. Used for testing. See pam_sample(5) for more information.

smartcard

pam_smartcard.so.1

Provides support for authentication only. See pam_smartcard(5) for more information.

unix

pam_unix.so.1

Provides support for authentication, account management, session management, and password management. Any of the four module type definitions can be used with this module. This module uses UNIX passwords for authentication.  

In the Solaris environment, the selection of appropriate name services to get password records is controlled through the /etc/nsswitch.conf file. See pam_unix(5) for more information.

unix_account

pam_unix_account.so.1

Provides support for account management. This module retrieves password aging information from the repository that is specified in the nsswitch.conf file. Then the module verifies that the password and the user's account have not expired. See pam_unix_account(5) for more information.

unix_auth

pam_unix_auth.so.1

Provides support for authentication. This module verifies the password that is contained in the PAM handle. The module checks that the user's password matches the password in the specified repository or default repository. See pam_unix_auth(5) for more information.

unix_session

pam_unix_session.so.1

Provides support for session management. This module initiates session management by updating the /var/adm/lastlog file. See pam_unix_session(5) for more information.

For security reasons, these module files must be owned by root and must not be writable through group or other permissions. If the file is not owned by root, PAM does not load the module.

PAM Module Types

You need to understand the PAM module types because the types define the interface to the module. Here are the four types of run-time PAM modules:

PAM Configuration File

The PAM configuration file, /etc/pam.conf, determines the authentication services to be used, and the order in which the services are used. This file can be edited to select authentication mechanisms for each system entry application.

PAM Configuration File Syntax

The PAM configuration file consists of entries with the following syntax:


service_name module_type control_flag module_path module_options

service_name

Is the name of the service, for example, ftp, login, telnet.

module_type

Is the module type for the service. For more information, see PAM Module Types.

control_flag

Determines the continuation or failure behavior for the module. 

module_path

Specifies the path to the library object that implements the service. 

module_options

Specifies the options that are passed to the service modules. 

You can add comments to the pam.conf file by starting the line with a # (pound sign). Use spaces or tabs to delimit the fields.


Note –

An entry in the PAM configuration file is ignored if one of the following conditions exists: the line has fewer than four fields, an invalid value is given for module_type or control_flag, or the named module does not exist.


Valid Service Names for PAM

The following table lists:

Some module types are not appropriate for each service. For example, the password module type is appropriate for only the passwd command. Also, because the passwd command is not concerned with authentication, no auth module type is associated with the service.

Table 10–2 Valid Service Names for the /etc/pam.conf File

Service Name 

Daemon or Command 

Applicable Module Types 

cron

/usr/sbin/cron

auth, account

dtlogin

/usr/dt/bin/dtlogin

auth, account, session

dtsession

/usr/dt/bin/dtsession

auth

ftp

/usr/sbin/in.ftpd

auth, account, session

init

/usr/sbin/init

session

login

/usr/bin/login

auth, account, session

passwd

/usr/bin/passwd

password

ppp

/usr/bin/ppp

auth, account, session

rexd

/usr/sbin/rpc.rexd

account, session

rlogin

/usr/sbin/in.rlogind

auth, account, session

rsh

/usr/sbin/in.rshd

auth, account, session

sac

/usr/lib/saf/sac

session

ssh

/usr/bin/ssh

auth, account, session

su

/usr/bin/su

auth, account

telnet

/usr/sbin/in.telnetd

auth, account, session

ttymon

/usr/lib/saf/ttymon

session

uucp

/usr/sbin/in.uucpd

auth, account, session

PAM Control Flags

To determine the continuation or failure behavior from a module, you must select a control flag for each entry in the PAM configuration file, /etc/pam.conf. Each module in a stack can determine the success or failure of the request.

Continuation behavior defines whether any subsequent modules are checked. Depending on the response from a particular module, you can decide to skip any additional modules.

Failure behavior defines how error messages are logged or reported. Failures are either optional or required. A required failure causes that request to fail, even if other modules succeed. An optional failure does not always cause the request to fail.

Even though these flags apply to all module types, the following explanation assumes that these flags are being used for authentication modules. The control flags are as follows:

More information about these control flags is provided in the following section, which describes the default /etc/pam.conf file.

Generic pam.conf File

The generic /etc/pam.conf file specifies the following actions:

  1. When the login command is run, authentication must succeed for the pam_authtok_get, pam_dhkeys, pam_unix_auth, and pam_dial_auth modules.

  2. For the rlogin command, authentication through the pam_authtok_get, pam_dhkeys, and pam_unix_auth modules must succeed if authentication through pam_rhosts_auth fails.

  3. The sufficient control flag indicates that for the rlogin command, the successful authentication that is provided by the pam_rhosts_auth module is sufficient. The next entry is ignored.

  4. Most of the other commands that require authentication require successful authentication through the pam_authtok_get, pam_dhkeys, and pam_unix_auth modules.

  5. For the rsh command, authentication through the pam_rhosts_auth module is flagged as sufficient. No other authentication is required if authentication through the pam_rhosts_auth module succeeds.

The OTHER service name allows a default to be set for any other commands that require authentication and are not included in the file. The OTHER option simplifies administration of the file, since many commands that are using the same module can be covered by using only one entry. Also, the OTHER service name, when used as a “catch-all,” can ensure that each access is covered by one module. By convention, the OTHER entry is included at the bottom of the section for each module type.

Normally, the entry for the module_path is “root-relative.” If the file name that you enter for module_path does not begin with a slash (/), the path /usr/lib/security/$ISA precedes the file name. A full path name must be used for modules that are located in other directories. The values for the module_options can be found in the man pages for the module. For example, the UNIX module is covered in the pam_unix(5) man page.


login   auth required           pam_authtok_get.so.1
login   auth required           pam_dhkeys.so.1
login   auth required           pam_unix_auth.so.1
login   auth required           pam_dial_auth.so.1

In this example, the login service specifies authentication through all four authentication modules. A login command fails if any one of the modules returns an error.
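
The following sketch shows how the rlogin entries that are described in items 2 and 3 might look; the exact default entries can vary by release:


rlogin  auth sufficient         pam_rhosts_auth.so.1
rlogin  auth required           pam_authtok_get.so.1
rlogin  auth required           pam_dhkeys.so.1
rlogin  auth required           pam_unix_auth.so.1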

Chapter 11 Using Solaris Secure Shell (Tasks)

Solaris Secure Shell enables a user to securely access a remote host over an unsecured network. The shell provides commands for remote login and remote file transfer. The following is a list of the information in this chapter.

Introduction to Solaris Secure Shell

In Solaris Secure Shell, authentication is provided by the use of passwords, public keys, or both. All network traffic is encrypted. Thus, Solaris Secure Shell prevents a would-be intruder from being able to read an intercepted communication or from spoofing the system.

Solaris Secure Shell can also be used as an on-demand virtual private network, or VPN. A VPN can forward X Window system traffic or connect individual port numbers between the local machines and remote machines over an encrypted network link.

With Solaris Secure Shell, you can perform these actions:

Solaris Secure Shell supports two versions of the Secure Shell protocol. Version 1 is the original version of the protocol. Version 2 is more secure, and amends some of the basic security design flaws of Version 1. Version 1 is provided only to assist users who are migrating to Version 2. Users are strongly discouraged from using Version 1.


Note –

Hereafter in this text, v1 is used to represent Version 1, and v2 is used to represent Version 2.


The requirements for Solaris Secure Shell authentication are as follows:

The following table shows the authentication methods, the compatible protocol versions, the local host and remote host requirements, and the relative security. Note that the default method is password-based authentication.

Table 11–2 Authentication Methods for Solaris Secure Shell

Authentication Method (Protocol Version) 

Local Host Requirements 

Remote Host Requirements 

Security Level 

Password-based (v1 or v2) 

User account 

User account 

Medium 

RSA/DSA public key (v2) 

User account 

Private key in $HOME/.ssh/id_rsa or $HOME/.ssh/id_dsa

Public key in $HOME/.ssh/id_rsa.pub or $HOME/.ssh/id_dsa.pub

User account 

User's public key (id_rsa.pub or id_dsa.pub ) in $HOME/.ssh/authorized_keys

Strong  

RSA public key (v1) 

User account 

Private key in $HOME/.ssh/identity

Public key in $HOME/.ssh/identity.pub

User account 

User's public key (identity.pub ) in $HOME/.ssh/authorized_keys

Strong  

.rhosts with RSA (v1)

User account 

User account 

Local host name in /etc/hosts.equiv, /etc/shosts.equiv, $HOME/.rhosts, or $HOME/.shosts

Local host public key in $HOME/.ssh/known_hosts or /etc/ssh/ssh_known_hosts

Medium 

.rhosts only (v1 or v2)

User account 

User account 

Local host name in /etc/hosts.equiv, /etc/shosts.equiv, $HOME/.rhosts, or $HOME/.shosts

Weak 

Using Solaris Secure Shell (Task Map)

Task 

Description 

For Instructions 

Create a public/private key pair 

The use of public/private key pairs is the preferred method for authenticating yourself and encrypting your communications. 

How to Create a Public/Private Key Pair

Log in with Solaris Secure Shell 

Encrypted Secure Shell communication is enabled by logging in remotely through a process similar to using rsh.

How to Log In to Another Host With Solaris Secure Shell

Log in without a password with Solaris Secure Shell 

You can log in using Secure Shell without having to provide a password by using ssh-agent. The ssh-agent command can be run manually or from a startup script.

How to Log In With No Password With the ssh-agent Command

How to Set Up the ssh-agent Command to Run Automatically

Port forwarding in Solaris Secure Shell 

You can specify a local port or a remote port to be used in a Secure Shell connection over TCP. 

How to Use Solaris Secure Shell Port Forwarding

Copy files with Solaris Secure Shell 

You can copy remote files securely. 

How to Copy Files With Solaris Secure Shell

Transfer files with Solaris Secure Shell 

You can log in to a remote host with Secure Shell by using transfer commands that are similar to ftp.

How to Transfer Files Remotely With the sftp Command

Connect from a host inside a firewall to a host on the outside 

Secure shell provides commands that are compatible with HTTP or SOCKS5. The commands can be specified in a configuration file or on the command line. 

How to Set Up Default Connections to Hosts Outside a Firewall

Example—Connecting to Hosts Outside a Firewall From the Command Line

Using Solaris Secure Shell

How to Create a Public/Private Key Pair

The standard procedure for creating a Solaris Secure Shell public/private key pair follows. For additional options, see the ssh-keygen(1) man page.

  1. Start the key generation program.


    myLocalHost% ssh-keygen
    Generating public/private rsa key pair.
    …
  2. Enter the path to the file that will hold the key.

    By default, the file name id_rsa, which represents an RSA v2 key, appears in parentheses. You can select this file by pressing the Return key. Or, you can type an alternative filename.


    Enter file in which to save the key (/home/johndoe/.ssh/id_rsa): <Return>
    

    The public key name is created automatically. The string .pub is appended to the private key name.

  3. Enter a passphrase for using your key.

    This passphrase is used for encrypting your private key. A good passphrase is 10-30 characters long, mixes alphabetic and numeric characters, and avoids simple English prose and English names. A null entry means no passphrase is used. A null entry is strongly discouraged for user accounts. Note that the passphrase is not displayed when you type it in.


    Enter passphrase (empty for no passphrase): <Type the passphrase>
    
  4. Re-enter the passphrase to confirm it.


    Enter same passphrase again: <Type the passphrase>
    Your identification has been saved in /home/johndoe/.ssh/id_rsa.
    Your public key has been saved in /home/johndoe/.ssh/id_rsa.pub.
    The key fingerprint is:
    0e:fb:3d:57:71:73:bf:58:b8:eb:f3:a3:aa:df:e0:d1 johndoe@myLocalHost
  5. Check the results.

    The key fingerprint, which is a colon-separated series of 2-digit hexadecimal values, is displayed. Check that the path to the key is correct. In the example, the path is /home/johndoe/.ssh/id_rsa.pub. At this point, you have created a public/private key pair.

  6. Set up the authorized_keys file on the destination host.

    1. Copy the id_rsa.pub file to the destination host. Type the command on one line with no backslash.


      myLocalHost% cat $HOME/.ssh/id_rsa.pub | ssh myRemoteHost \
       'cat >> .ssh/authorized_keys && echo "Key uploaded successfully."'
      
    2. When you are prompted, supply your login password.

      When the file is copied, the phrase “Key uploaded successfully.” is displayed.

How to Log In to Another Host With Solaris Secure Shell

  1. Use the ssh command, specifying the name of the remote host.


    myLocalHost% ssh myRemoteHost
    
    The first time that you run the ssh command, a prompt questions the authenticity of the remote host:


    The authenticity of host 'myRemoteHost' can't be established.
    RSA key fingerprint in md5 is: 04:9f:bd:fc:3d:3e:d2:e7:49:fd:6e:18:4f:9c:26
    Are you sure you want to continue connecting(yes/no)? 

    This prompt is normal. You should type yes and continue. If you have used ssh in the past on this remote host, then the prompt is not normal. You should check for a breach in your security.

  2. Enter the Solaris Secure Shell passphrase and the account password when you are prompted for them.


    Enter passphrase for key '/home/johndoe/.ssh/id_rsa': <Return> 
    johndoe@myRemoteHost's password: <Return>
    Last login: Fri Jul 20 14:24:10 2001 from myLocalHost
    myRemoteHost%

    Conduct transactions on the remote host. The commands that you send are encrypted. Any responses that you receive are encrypted.


    Note –

    If you want to subsequently change your passphrase, use the ssh-keygen command with the -p option.


  3. When you are finished with your remote session, type exit or use your usual method for exiting your shell.


    myRemoteHost% exit
    myRemoteHost% logout
    Connection to myRemoteHost closed
    myLocalHost%

How to Log In With No Password With the ssh-agent Command

If you want to omit passphrase and password entry when you are using Solaris Secure Shell, you can use the agent daemon. Use the ssh-agent command at the beginning of the session. Then, store your private keys with the agent by using the ssh-add command. If you have different accounts on different hosts, add those keys that you intend to use in the session.

You can start the agent manually when needed as described in the following procedure. Or, you can set the agent to run automatically at the start of every session as described in How to Set Up the ssh-agent Command to Run Automatically.

  1. Start the agent daemon.

    The ssh-agent command starts the agent daemon and displays its process ID.


    myLocalHost% eval `ssh-agent`
    Agent pid 9892
    myLocalHost% 
  2. Add your private key to the agent daemon.

    The ssh-add command adds your private key to the agent daemon so that subsequent Secure Shell activity does not prompt you for the passphrase.


    myLocalHost% ssh-add
    Enter passphrase for /home/johndoe/.ssh/id_rsa:
    Identity added: /home/johndoe/.ssh/id_rsa(/home/johndoe/.ssh/id_rsa)
    myLocalHost%
  3. Start a Solaris Secure Shell session.


    myLocalHost% ssh myRemoteHost
    

Example—Using ssh-add Options

You can use ssh-add to add other keys to the daemon as well. For example, you might concurrently have DSA v2, RSA v2, and RSA v1 keys. To list all keys that are stored in the daemon, use the -l option. To delete a single key from the daemon, use the -d option. To delete all keys, use the -D option.


myLocalHost% eval `ssh-agent`
Agent pid 3347
myLocalHost% ssh-add
Enter passphrase for /home/johndoe/.ssh/id_rsa:
Identity added: /home/johndoe/.ssh/id_rsa(/home/johndoe/.ssh/id_rsa)
myLocalHost% ssh-add /home/johndoe/.ssh/id_dsa
Enter passphrase for /home/johndoe/.ssh/id_dsa: <type passphrase>
Identity added:
/home/johndoe/.ssh/id_dsa(/home/johndoe/.ssh/id_dsa)
myLocalHost% ssh-add -l
md5 1024 0e:fb:3d:53:71:77:bf:57:b8:eb:f7:a7:aa:df:e0:d1
/home/johndoe/.ssh/id_rsa(RSA)
md5 1024 c1:d3:21:5e:40:60:c5:73:d8:87:09:3a:fa:5f:32:53
/home/johndoe/.ssh/id_dsa(DSA)
myLocalHost% ssh-add -d
Identity removed:
/home/johndoe/.ssh/id_rsa(/home/johndoe/.ssh/id_rsa.pub)
/home/johndoe/.ssh/id_dsa(DSA)

How to Set Up the ssh-agent Command to Run Automatically

You can avoid providing your passphrase and password whenever you use Secure Shell by starting an agent daemon, ssh-agent. You can start the agent daemon from the .dtprofile script.

  1. To start the agent daemon automatically, add the following lines to the end of the $HOME/.dtprofile script:


    if [ "$SSH_AUTH_SOCK" = "" -a -x /usr/bin/ssh-agent ]; then
                    eval `/usr/bin/ssh-agent`
    fi
  2. To terminate the Secure Shell agent daemon when you exit the CDE session, add the following to the $HOME/.dt/sessions/sessionexit script:


    if [ "$SSH_AGENT_PID" != "" -a -x /usr/bin/ssh-agent ]; then
                    /usr/bin/ssh-agent -k
    fi

    This entry ensures that no one can use the Secure Shell agent after the CDE session is terminated.

  3. Start a Solaris Secure Shell session.


    myLocalHost% ssh myRemoteHost
    

    There is no prompt for a passphrase.

How to Use Solaris Secure Shell Port Forwarding

You can specify that a local port be forwarded to a remote host. Effectively, a socket is allocated to listen to the port on the local side. The connection from this port is made over a secure channel to the remote host. For example, you might specify port 143 to obtain email remotely with IMAP4. Similarly, a port can be specified on the remote side.


Note –

Secure Shell port forwarding must use TCP connections. Secure Shell does not support UDP connections.


    To set a local port to be forwarded, specify two ports. Specify the local port to listen to, and specify the remote host and port to forward to.


    myLocalHost% ssh -L localPort:remoteHost:remotePort 
    

    To set a remote port to receive a secure connection, specify two ports. Specify the remote port to listen to, and specify the local host and port to forward to.


    myLocalHost% ssh -R remotePort:localHost:localPort 
    

Example—Using Local Port Forwarding to Receive Mail

The following example demonstrates how you can use local port forwarding to receive mail securely from a remote server.


myLocalHost% ssh -L 9143:myRemoteHost:143 myRemoteHost 

This command forwards connections to port 9143 on myLocalHost to port 143, which is the IMAP server port on myRemoteHost. When the user launches a mail application, the user needs to specify the local port number. An example that uses the dtmail command is shown in Figure 11–1.

Note that the term localhost in this case and in Example—Using Remote Port Forwarding to Communicate Outside of a Firewall refers to the keyword that designates the user's local host. The localhost keyword should not be confused with myLocalHost. The myLocalHost variable is the hypothetical host name that identifies a local host in the examples in this chapter.

Figure 11–1 Specifying Port Forwarding for Email

Dialog box titled Mailer - Login. The IMAP Server field shows the server name followed by a colon and the port number.
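
Given the forwarding in this example, the value to type in the IMAP Server field of the dialog box would be the following; the port number 9143 is the arbitrary local port that was chosen above:


localhost:9143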

Example—Using Remote Port Forwarding to Communicate Outside of a Firewall

This example demonstrates how a user in an enterprise environment can forward connections from a host on an external network to a host inside a corporate firewall.


myLocalHost% ssh -R 9022:myLocalHost:22 myOutsideHost

This command forwards connections to port 9022 on myOutsideHost to port 22, the sshd server, on the local host.


myOutsideHost% ssh -p 9022 localhost
myLocalHost%

This command demonstrates how after the remote forwarding connection has been established, the user can use the ssh command to connect securely from the remote host.

How to Copy Files With Solaris Secure Shell

Use the scp command to copy encrypted files between hosts. You can copy encrypted files between either a local and remote host, or between two remote hosts. The command operates similarly to the rcp command except that the scp command prompts for passwords. See scp(1) for more information.

  1. Start the secure copy program.

    Specify the source file, user name at remote destination, and destination directory.


    myLocalHost% scp myfile.1 johndoe@myRemoteHost:~
    
  2. Type the Solaris Secure Shell passphrase when prompted.


    Enter passphrase for key '/home/johndoe/.ssh/id_rsa': <Return>
    myfile.1       25% |*******                      |    640 KB  0:20 ETA 
    myfile.1 

    After you type the passphrase, the progress meter is displayed. See the second line in the preceding output. The progress meter displays:

    • The file name

    • The percentage of the file that has been transferred at this point

    • A series of asterisks that are analogous to the percentage transferred

    • The quantity of data transferred

    • The estimated time of arrival, or ETA, of the complete file (that is, the remaining amount of time)

How to Transfer Files Remotely With the sftp Command

The sftp command works similarly to ftp, but uses a different set of subcommands. The following table lists some representative subcommands.

Table 11–3 Interactive sftp Subcommands
 Category

Subcommands 

Description 

Navigation 

cd path

Changes the remote directory to path

lcd path

Changes the local directory to path

Ownership 

chgrp group file

Changes the group for file to group, a numeric GID

chmod mode file

Changes the permissions of file

File copying 

get remote_file [local-path]

Retrieves a remote file and stores the file on the local host 

put local_file [remote_path]

Stores a local file on the remote host 

rename old_file new_file

Renames a remote file 

Directory listing 

ls [path]

Lists the contents of the remote directory 

Directory creation 

mkdir path

Creates a remote directory 

Miscellaneous 

exit, quit

Quits the sftp command
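
The following is a brief sketch of an interactive session that uses some of these subcommands; the host name and file name are hypothetical:


myLocalHost% sftp myRemoteHost
sftp> cd /tmp
sftp> get myfile.1
sftp> quit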

How to Set Up Default Connections to Hosts Outside a Firewall

You can use Solaris Secure Shell to make a connection from a host inside a firewall to a host on the other side of the firewall. This task is done by specifying a proxy command for ssh either in a configuration file or as an option on the command line. For more information, see Example—Connecting to Hosts Outside a Firewall From the Command Line.

In general, you can customize your ssh interactions through a configuration file, either your own personal file $HOME/.ssh/config or an administrative configuration file in /etc/ssh/ssh_config. See ssh_config(4). There are two types of proxy commands. One proxy command is for HTTP connections. The other proxy command is for SOCKS5 connections.

  1. Specify the proxy commands and hosts in a configuration file.

    Use the following syntax to add as many lines as you need:


    [Host outside_host]
    ProxyCommand proxy_command [-h proxy_server] \
    [-p proxy_port] outside_host|%h outside_port|%p

    where

    Host outside_host

    Limits the proxy command specification to instances when a remote host name is specified on the command line. If you use a wildcard for outside_host, you apply the specification to a set of hosts.

    proxy_command

    Specifies the proxy command. The command can be either of the following:

    • /usr/lib/ssh/ssh-http-proxy-connect for HTTP connections

    • /usr/lib/ssh/ssh-socks5-proxy-connect for SOCKS5 connections

    -h proxy_server and -p proxy_port

    These options specify a proxy server and a proxy port, respectively. If present, the proxies override any environment variables that specify proxy servers and proxy ports, such as HTTPPROXY, HTTPPROXYPORT, SOCKS5_PORT, SOCKS5_SERVER, and http_proxy. The http_proxy variable specifies a URL. If the options are not used, then the relevant environment variables must be set. See the ssh-socks5-proxy-connect(1) and ssh-http-proxy-connect(1) man pages.

    outside_host

    Designates a specific host to connect to. You can use %h to specify the host on the command line.

    outside_port

    Designates a specific port to connect to. You can use %p to specify the port on the command line. If you specify %h and %p without using the Host outside_host option, the proxy command is applied to the host argument whenever the ssh command is invoked.
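
    For example, a $HOME/.ssh/config file might contain an entry like the following sketch for HTTP connections; the host and proxy names are hypothetical:


    Host myOutsideHost
    ProxyCommand /usr/lib/ssh/ssh-http-proxy-connect -h myProxyServer \
    -p 8080 myOutsideHost 22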

  2. Run Solaris Secure Shell, specifying the outside host.

    For example, type the following:


    myLocalHost% ssh myOutsideHost
    

    This command looks for a proxy command specification for myOutsideHost in your personal configuration file. If the specification is not found, then the command looks in the system-wide configuration file, ssh_config. The proxy command is substituted for ssh.

Example—Connecting to Hosts Outside a Firewall From the Command Line

The -o option to the ssh command lets you type any line that is permitted in an ssh configuration file. In this case, the proxy command specification from the previous task is used.

  1. Specify the proxy commands and hosts in a configuration file.

  2. Run the ssh command. Include a proxy command specification as an argument to the -o option. For example, type the following:


    % ssh -o'Proxycommand=/usr/lib/ssh/ssh-http-proxy-connect \
    -h myProxyServer -p 8080 myOutsideHost 22' myOutsideHost
    

    This command substitutes the HTTP proxy command for ssh, uses port 8080 and myProxyServer as the proxy server, and connects to port 22 on myOutsideHost.

Chapter 12 Solaris Secure Shell Administration (Reference)

This chapter describes how Solaris Secure Shell works from the administrator's point of view and how it is configured. The following is a list of the reference information in this chapter.

A Typical Solaris Secure Shell Session

The Solaris Secure Shell daemon (sshd) is normally started at boot from the /etc/init.d/sshd script. The daemon listens for connections from clients. A Solaris Secure Shell session begins when the user runs the ssh, scp, or sftp command. A new sshd daemon is forked for each incoming connection. The forked daemons handle key exchange, encryption, authentication, command execution, and data exchange with the client. These session characteristics are determined by client-side configuration files, server-side configuration files, and potentially command-line parameters. The client and server must authenticate themselves to each other. After successful authentication, the user can execute commands remotely and copy data between hosts.

Session Characteristics

The server-side behavior of the sshd daemon is controlled by keyword settings in the /etc/ssh/sshd_config file and potentially the command-line options when sshd is started. For example, sshd_config controls which types of authentication are permitted for accessing the server.
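
A minimal sketch of /etc/ssh/sshd_config entries that control authentication follows; the keywords are standard, but the values shown are only an example:


Protocol 2,1
PasswordAuthentication yes
PubkeyAuthentication yes
PermitRootLogin no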

The behavior on the client side is controlled by the Solaris Secure Shell parameters in this order of precedence:

For example, a user can override a system-wide Cipher setting of blowfish by specifying -c 3des on the command line.
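
On the command line, the override would look like the following; the host name is hypothetical:


myLocalHost% ssh -c 3des myRemoteHost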

Authentication

The steps in the authentication process for Solaris Secure Shell are as follows:

  1. The user runs the ssh, scp, or sftp command.

  2. The client and server agree on a shared session key.

    In v1, the remote host sends its host (RSA) key and a server (RSA) key to the client. Note that the server key is typically generated every hour and stored in memory only. The client checks that the remote host key is stored in the $HOME/.ssh/known_hosts file on the local host. The client then generates a 256-bit random number and encrypts it with the remote host's host key and server key. The encrypted random number is used as a session key to encrypt all further communications in the session.

    In v2, the remote host uses DSA in its host key and does not generate a server key. Instead, the shared session key is derived through a Diffie-Hellman agreement.

  3. The local and remote hosts authenticate each other.

    In v1, the client can use .rhosts, .rhosts with RSA, RSA challenge-response, or password-based authentication. In v2, only .rhosts, DSA, and password-based authentication are permitted.

Command Execution and Data Forwarding

After authentication is complete, the user can use Solaris Secure Shell, generally by requesting a shell or executing a command. Through the ssh options, the user can make requests, such as allocating a pseudo-tty, forwarding X11 connections or TCP/IP connections, or enabling an ssh-agent over a secure connection. The basic components of a user session are as follows:

  1. The user requests a shell or the execution of a command, which begins the session mode.

    In this mode, data is sent or received through the terminal on the client side, and the shell or command on the server side.

  2. The user program terminates.

  3. All X11 forwarding and TCP/IP forwarding is stopped. Any X11 connections and TCP/IP connections that already exist remain open.

  4. The server sends the command exit to the client, and both sides exit.

Configuring the Solaris Secure Shell

The characteristics of a Solaris Secure Shell session are controlled by configuration files, which can be overridden to a certain degree by options on the command line.

Solaris Secure Shell Client Configuration

In most cases, the client-side characteristics of a Solaris Secure Shell session are governed by the system-wide configuration file, /etc/ssh/ssh_config, which is set up by the administrator. The settings in the system-wide configuration file can be overridden by the user's configuration in $HOME/.ssh/config. In addition, the user can override both configuration files on the command line.

The command-line options are client requests and are permitted or denied on the server side by the /etc/ssh/sshd_config file (see sshd_config(4)). The configuration file keywords and command options are introduced in the following sections and are described in detail in the ssh(1), scp(1), sftp(1), and ssh_config(4) man pages. Note that in the two client configuration files, the Host keyword indicates a host or wildcard expression to which all following keywords up to the next Host keyword apply.

Host-Specific Parameters

If it is useful to have different Solaris Secure Shell characteristics for different local hosts, the administrator can define separate sets of parameters in the /etc/ssh/ssh_config file to be applied according to host or wildcard expression. This task is done by grouping entries in the file by Host keyword. If the Host keyword is not used, the entries in the client configuration file apply to whichever local host a user is working on.
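
A hedged sketch of such grouping in /etc/ssh/ssh_config follows. The host pattern, keywords, and values are illustrative only; any keyword described in ssh_config(4) can appear under a Host entry.


    # Settings that apply to hosts in the example.com domain
    Host *.example.com
        ForwardX11 yes
        Cipher blowfish

    # Defaults for all other hosts
    Host *
        ForwardX11 no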

Client-Side Authentication Parameters

The authentication method is determined by setting one of the following keywords to “yes”:

The keyword UseRsh specifies that the rlogin and rsh commands be used, typically because the remote host does not support Secure Shell.

The Protocol keyword sets the Solaris Secure Shell protocol version to v1 or v2. You can specify both versions, separated by a comma. The first version is tried; if it fails, the second version is used.

The IdentityFile keyword specifies an alternate file name to hold the user's private key.

The keyword Cipher specifies the v1 encryption algorithm, which might be blowfish or 3des. The keyword Ciphers specifies an order of preference for the v2 encryption algorithms: 3des-cbc, blowfish-cbc, and aes128-cbc. The commands ssh and scp have a -c option for specifying the encryption algorithm on the command line.
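
These keywords might appear in a user's $HOME/.ssh/config file as in the following hedged sketch. The key file name is an assumption; the algorithm names are those listed above.


    Protocol 2,1
    IdentityFile ~/.ssh/my_dsa_key
    Cipher blowfish
    Ciphers 3des-cbc,blowfish-cbc,aes128-cbc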

Known Host File Parameters

The known host files (/etc/ssh/ssh_known_hosts and $HOME/.ssh/known_hosts) contain the public keys for all hosts with which the client can communicate by using Solaris Secure Shell. The GlobalKnownHostsFile keyword specifies an alternate file instead of /etc/ssh/ssh_known_hosts. The UserKnownHostsFile keyword specifies an alternate to $HOME/.ssh/known_hosts.

The StrictHostKeyChecking keyword requires new hosts to be added manually to the known hosts file, and refuses any host whose public key has changed or whose public key is not in the known hosts file. The keyword CheckHostIP enables the IP address for hosts in the known host files to be checked, in case a key has been changed due to DNS spoofing.

Client-Side X11 Forwarding and Port Forwarding Parameters

The LocalForward keyword specifies a local TCP/IP port to be forwarded over a secure channel to a specified port on a remote host. The GatewayPorts keyword enables remote hosts to connect to local forwarded ports.
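
For example, the following hedged LocalForward entry sends connections to local port 2001 over the secure channel to port 143 on a remote host. The port numbers and the host name mailhost are illustrative.


    LocalForward 2001 mailhost:143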

The command ssh enables port forwarding through these options:

The ForwardX11 keyword redirects X11 connections to the remote host with the DISPLAY environment variable set. The XAuthLocation keyword specifies the location of the xauth program.

Client-Side Connection and Other Parameters

The NumberOfPasswordPrompts keyword specifies how many times the user is prompted for a password before Solaris Secure Shell quits. The ConnectionAttempts keyword specifies how many tries (at one try per second) are made before Solaris Secure Shell either quits or falls back to rsh if the FallBackToRsh keyword is set.

The Compression keyword enables compression of transmitted data. The CompressionLevel keyword sets a level of 1 to 9, trading off between speed and amount of compression.

User specifies an alternate user name. Hostname specifies an alternate name for a remote host. ProxyCommand specifies an alternate command name for starting Solaris Secure Shell. Any command that can connect to your proxy server can be used. The command should read from its standard input and write to its standard output.

BatchMode disables password prompts, which is useful for scripts and other batch jobs.

KeepAlive specifies whether keepalive messages are sent, so that broken connections and host crashes are noticed and reported. LogLevel sets the verbosity level for ssh messages.

EscapeChar defines a single character that is used as a prefix for displaying special characters as plain text.

Solaris Secure Shell Server Configuration

The server-side characteristics of a Solaris Secure Shell session are governed by the /etc/ssh/sshd_config file, which is set up by the administrator.

Server-Side Authentication Parameters

Permitted authentication methods are indicated by these keywords:

HostKey and HostDSAKey identify files that hold host public keys when the default file name is not used. KeyRegenerationInterval defines how often the server key is regenerated.

Protocol specifies the version. Ciphers specifies the encryption algorithms for v2. ServerKeyBits defines the number of bits in the server's key.

Ports and Forwarding Parameters

AllowTCPForwarding specifies whether TCP forwarding is permitted.

GatewayPorts allows remote hosts to connect to ports forwarded for the client. Port specifies the port number that sshd listens on. ListenAddress designates a specific local address that sshd listens to. If there is no ListenAddress specification, sshd listens to all addresses by default.

X11Forwarding allows X11 forwarding. X11DisplayOffset specifies the first display number that is available for forwarding. This keyword prevents sshd from interfering with real X11 servers. XAuthLocation specifies the location of the xauth program.
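
A hedged sketch of these server-side keywords in /etc/ssh/sshd_config follows. The address and values shown are examples, not recommendations.


    Port 22
    ListenAddress 192.168.10.5
    AllowTCPForwarding yes
    GatewayPorts no
    X11Forwarding yes
    X11DisplayOffset 10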

Session Control Parameters

KeepAlive displays messages regarding broken connections and host crashes. LogLevel sets the verbosity level of messages from sshd. SyslogFacility provides a facility code for messages that are logged from sshd.

Server Connection and Other Parameters

The AllowGroups, AllowUsers, DenyGroups, and DenyUsers keywords control which users can or cannot use ssh.

The LoginGraceTime, MaxStartups, PermitRootLogin, and PermitEmptyPasswords keywords set controls on users who are logging in. StrictModes causes sshd to check file modes and ownership of the user's files and home directory before login. UseLogin specifies whether login is used for interactive login sessions. Turning this keyword on should not be necessary and is not recommended for the Solaris environment.

Subsystem configures a file transfer daemon for using sftp.
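
As a hedged example, a conservative set of these keywords in /etc/ssh/sshd_config might look like the following. The user names are placeholders; the sftp-server path shown is the one commonly used in the Solaris environment.


    PermitRootLogin no
    PermitEmptyPasswords no
    AllowUsers johndoe jadoe
    StrictModes yes
    Subsystem sftp /usr/lib/ssh/sftp-server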

Maintaining Known Hosts on a Site-Wide Basis

Each host that needs to talk to another host securely must have the server's public key stored in the local host's /etc/ssh/ssh_known_hosts file. Although it is most convenient to update the /etc/ssh/ssh_known_hosts files by a script, this practice is heavily discouraged because it opens a major security vulnerability.

The /etc/ssh/ssh_known_hosts file should only be distributed by a secure mechanism as follows:

To avoid the possibility of an intruder gaining access by inserting bogus public keys into a known_hosts file, you should use the jumpstart server as the known and trusted source of the ssh_known_hosts file. The ssh_known_hosts file can be distributed during installation and by regularly running scripts on the individual hosts that pull in the latest version by using scp. This approach is secure because each host already has the public key of the jumpstart server.
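
A minimal sketch of such a pull script follows. It assumes that the trusted jumpstart server is named jumpstart1 and that the host can already authenticate to it.


    #!/bin/sh
    # Pull the current site-wide known hosts file from the trusted jumpstart server
    scp jumpstart1:/etc/ssh/ssh_known_hosts /etc/ssh/ssh_known_hosts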

Solaris Secure Shell Files

The following table shows the important Solaris Secure Shell files and the suggested UNIX permissions.

Table 12–1 Solaris Secure Shell Files

File Name 

Description 

Suggested Permissions and Owner 

/etc/ssh/sshd_config

Contains configuration data for sshd, the Secure Shell daemon.

-rw-r--r-- root

/etc/ssh/ssh_host_key

Contains the host private key. 

-rw------- root

/etc/ssh/ssh_host_key.pub

Contains the host public key. Used to copy the host key to the local known_hosts file.

-rw-r--r-- root

/var/run/sshd.pid

Contains the process ID of the Secure Shell daemon, sshd, which listens for connections (if there are multiple daemons, the file contains the process ID of the daemon that was started last).

-rw-r--r-- root

$HOME/.ssh/authorized_keys

Lists the RSA keys that can be used with v1 to log into the user's account, or the DSA and RSA keys that can be used with v2. 

-rw-r--r-- johndoe

/etc/ssh/ssh_known_hosts

Contains the host public keys for all hosts with which the client may communicate securely. The file should be prepared by the administrator. 

-rw-r--r-- root

$HOME/.ssh/known_hosts

Contains the host public keys for all hosts with which the client may communicate securely. The file is maintained automatically. Whenever the user connects with an unknown host, the remote host key is added to the file. 

-rw-r--r-- johndoe

/etc/nologin

If this file exists, sshd refuses to let anyone except root log in. The contents are displayed to users who are attempting to log in.

-rw-r--r-- root

$HOME/.rhosts

Contains the host-user name pairs that specify the hosts to which the user can log in without a password. The file is used by Secure Shell, as well as by the rlogind and rshd daemons.

-rw-r--r-- johndoe

$HOME/.shosts

Contains the host-user name pairs that specify the hosts to which the user can log in without a password by using Secure Shell only.

-rw-r--r-- johndoe

/etc/hosts.equiv

Contains the hosts that are used in .rhosts authentication and Secure Shell authentication.

-rw-r--r-- root

/etc/ssh/shosts.equiv

Contains the hosts that are used in Secure Shell authentication. 

-rw-r--r-- root

$HOME/.ssh/environment

Contains initial assignments to environment variables that are made at login.

-rw------- johndoe

$HOME/.ssh/rc

Runs initialization routines before the user shell starts. 

-rw------- johndoe

/etc/ssh/sshrc

Runs host-specific initialization routines that are specified by an administrator for all users. 

-rw-r--r-- root

The following table summarizes the major Solaris Secure Shell commands.

Table 12–2 Solaris Secure Shell Commands

Command 

Description 

ssh

A program for logging in to a remote machine and for executing commands on a remote machine. The command is intended to replace rlogin and rsh, and provide secure encrypted communications between two untrusted hosts over an insecure network. X11 connections and arbitrary TCP/IP ports can also be forwarded over the secure channel.

sshd

The daemon for Solaris Secure Shell. This daemon listens for connections from clients and provides secure encrypted communications between two untrusted hosts over an insecure network.

ssh-keygen

Generates and manages authentication keys for ssh.

ssh-agent

A program that holds private keys that are used for public key authentication. ssh-agent is started at the beginning of an X-session or a login session, and all other windows or programs are started as clients to the ssh-agent program. Through the use of environment variables, the agent can be located and automatically used for authentication when users log in to other machines while using ssh.

ssh-add 

Adds RSA or DSA identities (keys) to the authentication agent, ssh-agent.

scp 

Securely copies files between hosts on a network by using ssh for data transfer. Unlike rcp, scp asks for passwords or passphrases (if they are needed for authentication).

sftp 

An interactive file transfer program, similar to ftp, that performs all operations over an encrypted ssh transport. sftp connects and logs into the specified host name and then enters an interactive command mode.
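
The following hedged example shows how these commands are typically combined: a key pair is generated, the agent is started and loaded, and files are then transferred securely. The host and file names are placeholders.


    myLocalHost% ssh-keygen -t dsa
    myLocalHost% eval `ssh-agent`
    myLocalHost% ssh-add
    myLocalHost% scp myfile.1 myRemoteHost:/tmp
    myLocalHost% sftp myRemoteHost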

Chapter 13 Introduction to SEAM

This chapter provides an introduction to Sun Enterprise Authentication Mechanism (SEAM).

What Is SEAM?

SEAM is a client/server architecture that provides secure transactions over networks. SEAM offers strong user authentication, as well as data integrity and data privacy. Authentication guarantees that the identities of both the sender and the recipient of a network transaction are true. SEAM can also verify the validity of data being passed back and forth (integrity) and encrypt the data during transmission (privacy). Using SEAM, you can log on to other machines, execute commands, exchange data, and transfer files securely. Additionally, SEAM provides authorization services, which allow administrators to restrict access to services and machines. Moreover, as a SEAM user, you can regulate other people's access to your account.

SEAM is a single-sign-on system, which means that you only need to authenticate yourself to SEAM once per session, and all subsequent transactions during the session are automatically secured. After SEAM has authenticated you, you do not need to authenticate yourself every time you use a SEAM-based command such as ftp or rsh, or access data on an NFS file system. Thus, you do not have to send your password over the network, where it can be intercepted, each time you use these services.

SEAM is based on the Kerberos V5 network authentication protocol that was developed at the Massachusetts Institute of Technology (MIT). People who have used Kerberos V5 should therefore find SEAM very familiar. Since Kerberos V5 is a de facto industry standard for network security, SEAM promotes interoperability with other systems. In other words, because SEAM works with systems that use Kerberos V5, it allows for secure transactions even over heterogeneous networks. Moreover, SEAM provides authentication and security both between domains and within a single domain.


Note –

Because SEAM is based on, and designed to interoperate with, Kerberos V5, this manual often uses the terms “Kerberos” and “SEAM” more or less interchangeably, for example, “Kerberos realm” or “SEAM-based utility.” Moreover, “Kerberos” and “Kerberos V5” are used interchangeably. The manual draws distinctions when necessary.


SEAM allows for flexibility in running Solaris applications. You can configure SEAM to allow both SEAM-based and non-SEAM-based requests for network services such as the NFS service, telnet, and ftp. As a result, current Solaris applications still work even if they are running on systems on which SEAM is not installed. Of course, you can also configure SEAM to allow only SEAM-based network requests.

Additionally, applications do not have to remain committed to SEAM if other security mechanisms are developed. Because SEAM is designed to integrate modularly into the Generic Security Service (GSS) API, applications that make use of the GSS-API can utilize whichever security mechanism best suits their needs.

How SEAM Works

The following is an overview of the SEAM authentication system. For a more detailed description, see How the Authentication System Works.

From the user's standpoint, SEAM is mostly invisible after the SEAM session has been started. Commands such as rsh or ftp work pretty much in their usual fashion. Initializing a SEAM session often involves no more than logging in and providing a Kerberos password.

The SEAM system revolves around the concept of a ticket. A ticket is a set of electronic information that serves as identification for a user or a service such as the NFS service. Just as your driver's license identifies you and indicates what driving permissions you have, so a ticket identifies you and your network access privileges. When you perform a SEAM-based transaction (for example, if you use rlogin to log in to another machine), you transparently send a request for a ticket to a Key Distribution Center, or KDC. The KDC accesses a database to authenticate your identity and returns a ticket that grants you permission to access the other machine. “Transparently” means that you do not need to explicitly request a ticket; it happens as part of the rlogin command. Because only the authenticated client can get a ticket for a specific service, another client cannot use rlogin under an assumed identity.

Tickets have certain attributes associated with them. For example, a ticket can be forwardable (which means that it can be used on another machine without a new authentication process), or postdated (not valid until a specified time). How tickets are used (for example, which users are allowed to obtain which types of ticket) is set by policies that are determined when SEAM is installed or administered.


Note –

You will frequently see the terms credential and ticket. In the greater Kerberos world, they are often used interchangeably. Technically, however, a credential is a ticket plus the session key for that session. This difference is explained in more detail in Gaining Access to a Service Using SEAM.


The following sections further explain the SEAM authentication process.

Initial Authentication: the Ticket-Granting Ticket

Kerberos authentication has two phases: an initial authentication that allows for all subsequent authentications, and the subsequent authentications themselves.

The following figure shows how the initial authentication takes place.

Figure 13–1 Initial Authentication for SEAM Session

Flow diagram shows a client requesting a TGT from the KDC, and then decrypting the TGT that the KDC returns to the client.

  1. A client (a user, or a service such as NFS) begins a SEAM session by requesting a ticket-granting ticket (TGT) from the Key Distribution Center (KDC). This request is often done automatically at login.

    A ticket-granting ticket is needed to obtain other tickets for specific services. Think of the ticket-granting ticket as similar to a passport. Like a passport, the ticket-granting ticket identifies you and allows you to obtain numerous “visas,” where the “visas” (tickets) are not for foreign countries but for remote machines or network services. Like passports and visas, the ticket-granting ticket and the other various tickets have limited lifetimes. The difference is that “Kerberized” commands notice that you have a passport and obtain the visas for you. You don't have to perform the transactions yourself.

    Another analogy for the ticket-granting ticket is that of a three-day ski pass that is good at four different ski resorts. You show the pass at whichever resort you decide to go to (until it expires) and you receive a lift ticket for that resort. Once you have the lift ticket, you can ski all you want at that resort. If you go to another resort the next day, you once again show your pass, and you get an additional lift ticket for the new resort. The difference is that the SEAM-based commands notice that you have the ski pass, and they get the lift ticket for you, so you don't have to perform the transactions yourself.

  2. The KDC creates a ticket–granting ticket and sends it back, in encrypted form, to the client. The client decrypts the ticket-granting ticket by using the client's password.

  3. Now in possession of a valid ticket-granting ticket, the client can request tickets for all sorts of network operations, such as rlogin or telnet, for as long as the ticket-granting ticket lasts. This ticket usually lasts for a few hours. Each time the client performs a unique network operation, it requests a ticket for that operation from the KDC.
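
As a hedged illustration, after the initial authentication a user can display the contents of the credential cache with the klist command. The output might look similar to the following; the principal, cache name, and times are hypothetical.


    % klist
    Ticket cache: /tmp/krb5cc_1001
    Default principal: joe@EXAMPLE.COM

    Valid starting           Expires                  Service principal
    14 Jun 02 09:00:00       14 Jun 02 19:00:00       krbtgt/EXAMPLE.COM@EXAMPLE.COM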

Subsequent Authentications

After the client has received the initial authentication, each individual authentication follows the pattern that is shown in the following figure.

Figure 13–2 Obtaining Access to a Service

Flow diagram shows the client using a TGT to request a ticket from the KDC, and then using the returned ticket for access to the server.

  1. The client requests a ticket for a particular service (say, to rlogin into another machine) from the KDC by sending the KDC its ticket-granting ticket as proof of identity.

  2. The KDC sends the ticket for the specific service to the client.

    For example, suppose user joe wants to access an NFS file system that has been shared with krb5 authentication required. Since he is already authenticated (that is, he already has a ticket-granting ticket), as he attempts to access the files, the NFS client system automatically and transparently obtains a ticket from the KDC for the NFS service.

    For example, suppose the user joe uses rlogin on the server boston. Since he is already authenticated (that is, he already has a ticket-granting ticket), he automatically and transparently obtains a ticket as part of the rlogin command. This ticket allows him to rlogin into boston as often as he wants until it expires. If joe wants to rlogin into the machine denver, he obtains another ticket, as in Step 1.

  3. The client sends the ticket to the server.

    When using the NFS service, the NFS client automatically and transparently sends the ticket for the NFS service to the NFS server.

  4. The server allows the client access.

These steps make it appear that the server doesn't ever communicate with the KDC. The server does, though; it registers itself with the KDC, just as the first client does. For simplicity's sake, we have left that part out.

The SEAM Remote Applications

What are the SEAM-based (or “Kerberized”) commands that a user such as joe can use? They are:

    • ftp

    • rcp

    • rlogin

    • rsh

    • telnet

These applications are the same as the Solaris applications of the same name, except that they use Kerberos principals to authenticate transactions, thereby providing Kerberos-based security. (See Principals for information on principals.)

Principals

A client in SEAM is identified by its principal. A principal is a unique identity to which the KDC can assign tickets. A principal can be a user, such as joe, or a service, such as nfs or telnet.

By convention, a principal name is divided into three parts: the primary, the instance, and the realm. A typical SEAM principal would be, for example, joe/admin@ENG.EXAMPLE.COM, where:

The following are all valid principal names:

Realms

A realm is a logical network, similar to a domain, which defines a group of systems under the same master KDC. Figure 13–3 shows how realms can relate to one another. Some realms are hierarchical (one realm being a superset of the other realm). Otherwise, the realms are non-hierarchical (or “direct”) and the mapping between the two realms must be defined. A feature of SEAM is that it permits authentication across realms. Each realm only needs to have a principal entry for the other realm in its KDC. The feature is called cross-realm authentication.

Figure 13–3 Realms

Diagram shows the ENG.EXAMPLE.COM realm in a non-hierarchical relationship with SEAMCO.COM, and in a hierarchical relationship with EXAMPLE.COM.

Realms and Servers

Each realm must include a server that maintains the master copy of the principal database. This server is called the master KDC server. Additionally, each realm should contain at least one slave KDC server, which contains duplicate copies of the principal database. Both the master KDC server and the slave KDC server create tickets that are used to establish authentication.

The realm can also include two additional types of SEAM servers. A SEAM network application server provides access to Kerberized applications (such as ftp, telnet, and rsh); if you have installed SEAM 1.0 or 1.0.1, the realm might include such a server. Realms can also include NFS servers, which provide NFS services by using Kerberos authentication.

The following figure shows what a hypothetical realm might contain.

Figure 13–4 A Typical Realm

Diagram shows a typical realm, EXAMPLE.COM, which contains a master KDC, three clients, two slave KDCs, and two application servers.

SEAM Security Services

In addition to providing secure authentication of users, SEAM provides two security services:

    • Integrity, which verifies the validity of the data that is passed back and forth over the network

    • Privacy, which encrypts the data before it is transmitted over the network

Currently, of the various Kerberized applications which are part of SEAM, only the ftp command allows users to change security service at runtime (“on the fly”). Developers can design their RPC-based applications to choose a security service by using the RPCSEC_GSS programming interface.

SEAM Releases

Components of the SEAM product have been included in several releases. The following table describes which components are included in each release. All components are described in the following sections.

Table 13–1 SEAM Release Contents

Release Name 

Contents 

SEAM 1.0 in Solaris Easy Access Server (SEAS) 3.0 

Full release of SEAM for the Solaris 2.6 and 7 releases 

SEAM in the Solaris 8 release 

SEAM client software only 

SEAM 1.0.1 in the Solaris 8 Admin Pack 

SEAM KDC and remote applications for the Solaris 8 release 

SEAM in the Solaris 9 release 

SEAM KDC and client software only 

SEAM 1.0.2 

SEAM remote applications for the Solaris 9 release 

SEAM 1.0 Components

Similar to the MIT distribution of Kerberos V5, SEAM includes the following:

In addition, SEAM includes the following:

SEAM Components in the Solaris 8 Release

The Solaris 8 release included only the client-side portions of SEAM, so many components are not included. This product enables systems that run the Solaris 8 release to become SEAM clients without having to install SEAM separately. To use these capabilities, you must install a KDC that uses either SEAS 3.0 or the Solaris 8 Admin Pack, the MIT distribution, or Windows 2000. The client-side components are not useful without a configured KDC to distribute tickets. The following components were included in this release:

SEAM 1.0.1 Components

The SEAM 1.0.1 release includes all components of the SEAM 1.0 release that are not already included in the Solaris 8 release. The components are as follows:

SEAM Components in the Solaris 9 Release

The Solaris 9 release includes all components of the SEAM 1.0 release, except for the remote applications and the preconfiguration procedure.

SEAM 1.0.2 Components

The SEAM 1.0.2 release includes the remote applications. These applications are the only part of SEAM 1.0 that have not been incorporated into the Solaris 9 release. The components for the remote applications are as follows:

Chapter 14 Planning for SEAM

This chapter should be studied by administrators who are involved in the installation and maintenance of SEAM. The chapter discusses several installation and configuration issues that administrators must resolve before they install or configure SEAM.

This is a list of the issues that a system administrator or other knowledgeable support staff should resolve:

Why Plan for SEAM?

Before you install SEAM, you must resolve several configuration issues. Although changing the configuration after the initial install is not impossible, it becomes more difficult with each new client that is added to the system. In addition, some changes require a full re-installation, so it is better to consider long-term goals when you plan your SEAM configuration.

Realms

A realm is a logical network, similar to a domain, which defines a group of systems that are under the same master KDC. As with establishing a DNS domain name, issues such as the realm name, the number and size of each realm, and the relationship of a realm to other realms for cross-realm authentication should be resolved before you configure SEAM.

Realm Names

Realm names can consist of any ASCII string. Usually, the realm name is the same as your DNS domain name, in uppercase. This convention helps differentiate problems with SEAM from problems with the DNS namespace, while using a name that is familiar. If you do not use DNS or you choose to use a different string, then you can use any string. However, the configuration process requires more work. The use of realm names that follow the standard Internet naming structure is wise.

Number of Realms

The number of realms that your installation requires depends on several factors:

Realm Hierarchy

When you are configuring multiple realms for cross-realm authentication, you need to decide how to tie the realms together. You can establish a hierarchical relationship between the realms that provides automatic paths to the related domains. Of course, all realms in the hierarchical chain must be configured properly. The automatic paths can ease the administration burden. However, if there are many levels of domains, you might not want to use the default path because it requires too many transactions.

You can also choose to establish the connection directly. A direct connection is most useful when too many levels exist between two hierarchical domains or when there is no hierarchical relationship. The connection must be defined in the /etc/krb5/krb5.conf file on all hosts that use the connection, so some additional work is required. For an introduction, see Realms, and for the configuration procedures for multiple realms, see Configuring Cross-Realm Authentication.

Mapping Host Names Onto Realms

The mapping of host names onto realm names is defined in the domain_realm section of the krb5.conf file. These mappings can be defined for a whole domain and for individual hosts, depending on the requirements. See the krb5.conf(4) man page for more information.
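
A hedged sketch of such mappings follows, showing one mapping for a whole domain and one for an individual host. The host name boston is illustrative.


    [domain_realm]
            .example.com = EXAMPLE.COM
            boston.example.com = EXAMPLE.COM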

Client and Service Principal Names

When you are using SEAM, it is strongly recommended that DNS services already be configured and running on all hosts. If DNS is used, it must be enabled on all systems or on none of them. If DNS is available, then the principal should contain the Fully Qualified Domain Name (FQDN) of each host. For example, if the host name is boston, the DNS domain name is example.com, and the realm name is EXAMPLE.COM, then the principal name for the host should be host/boston.example.com@EXAMPLE.COM. The examples in this book use the FQDN for each host.

For the principal names that include the FQDN of a host, it is important to match the string that describes the DNS domain name in the /etc/resolv.conf file. SEAM requires that the DNS domain name be in lowercase letters when you are entering the FQDN for a principal. The DNS domain name can include uppercase and lowercase letters, but only use lowercase letters when you are creating a host principal. For example, it doesn't matter if the DNS domain name is example.com, Example.COM, or any other variation. The principal name for the host would still be host/boston.example.com@EXAMPLE.COM.

SEAM can run without DNS services, but some key capabilities, such as the ability to communicate with other realms, will not work. If DNS is not configured, then a simple host name can be used as the instance name. In this case, the principal would be host/boston@EXAMPLE.COM. If DNS is enabled later, all host principals must be deleted and replaced in the KDC database.

Ports for the KDC and Admin Services

By default, port 88 and port 750 are used for the KDC, and port 749 is used for the KDC administration daemon. Different port numbers can be used. However, if you change the port numbers, then the /etc/services and /etc/krb5/krb5.conf files must be changed on every client. In addition, the /etc/krb5/kdc.conf file on each KDC must be updated.
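
For reference, the KDC ports appear in the kdcdefaults section of /etc/krb5/kdc.conf. The default values look like the following; if you choose different ports, change them consistently on every KDC and client.


    [kdcdefaults]
            kdc_ports = 88,750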

Slave KDCs

Slave KDCs generate credentials for clients just as the master KDC does. The slave KDCs provide backup if the master becomes unavailable. Each realm should have at least one slave KDC. Additional slave KDCs might be required, depending on these factors:

It is possible to add too many slave KDCs. Remember that the KDC database must be propagated to each server, so the more KDC servers that are installed, the longer it can take to get the data updated throughout the realm. Also, since each slave retains a copy of the KDC database, more slaves increase the risk of a security breach.

In addition, one or more slave KDCs can easily be configured to be swapped with the master KDC. The advantage of following this procedure on at least one slave KDC is that if the master KDC fails for any reason, you will have a preconfigured system that is easy to swap in as the master KDC. For instructions on how to configure a swappable slave KDC, see Swapping a Master KDC and a Slave KDC.

Database Propagation

The database that is stored on the master KDC must be regularly propagated to the slave KDCs. One of the first issues to resolve is how often to update the slave KDCs. The desire to have up-to-date information that is available to all clients needs to be weighed against the amount of time it takes to complete the update. For more information about database propagation, see Administering the Kerberos Database.

In large installations with many KDCs in one realm, it is possible for one or more slaves to propagate the data so that the process is done in parallel. This strategy reduces the amount of time that the update takes, but it also increases the level of complexity in administering the realm.

Clock Synchronization

All hosts that participate in the Kerberos authentication system must have their internal clocks synchronized within a specified maximum amount of time. This maximum, known as the clock skew, provides another Kerberos security check. If the clock skew is exceeded between any of the participating hosts, requests are rejected.

One way to synchronize all the clocks is to use the Network Time Protocol (NTP) software. See Synchronizing Clocks between KDCs and SEAM Clients for more information. Other ways of synchronizing the clocks are available, so the use of NTP is not required. However, some form of synchronization should be used to prevent access failures because of clock skew.
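
As a minimal sketch, an NTP client on a Solaris system can be enabled by copying the supplied template to /etc/inet/ntp.conf and starting the xntpd service. Sites that do not broadcast NTP need to add a server entry for their time server to /etc/inet/ntp.conf first.


    # cp /etc/inet/ntp.client /etc/inet/ntp.conf
    # /etc/init.d/xntpd start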

Online Help URL

The online help URL is used by the SEAM Administration Tool, so the URL should be defined properly to enable the “Help Contents” menu to work. The HTML version of this manual can be installed on any appropriate server. Alternatively, you can decide to use the collections at http://docs.sun.com.

The URL should point to the section titled “SEAM Administration Tool” in the “Administering Principals and Policies” chapter in this book. You can choose another HTML page, if another location is more appropriate.

Chapter 15 Configuring SEAM (Tasks)

This chapter provides configuration and installation procedures for KDC servers, SEAM clients, SEAM NFS servers, and network application servers.

Configuring SEAM (Task Map)

Parts of the configuration process depend on other parts and must be done in a specific order. These procedures often establish services that are required to use SEAM. Other procedures are not dependent on any order, and can be done when appropriate. The following task map shows a suggested order for a SEAM installation.

Table 15–1 First Steps: SEAM Configuration Order

Task 

Description 

For Instructions 

1. Plan for your SEAM installation 

Lets you resolve configuration issues before you start the software configuration process. Planning ahead saves you time and other resources in the long run. 

Chapter 14, Planning for SEAM

2. (Optional) Install NTP 

Configures the Network Time Protocol (NTP) software, or another clock synchronization protocol. In order for SEAM to work properly, the clocks on all systems in the realm must be synchronized. 

Synchronizing Clocks between KDCs and SEAM Clients

3. Configure the master KDC server 

Configures and builds the master KDC server and database for a realm. 

How to Configure a Master KDC

4. (Optional) Configure a slave KDC server 

Configures and builds a slave KDC server for a realm. 

How to Configure a Slave KDC

5. (Optional) Increase security on the KDC servers 

Prevents security breaches on the KDC servers. 

How to Restrict Access to KDC Servers

6. (Optional) Configure swappable KDC servers 

Makes the task of swapping the master KDC and a slave KDC easier. 

How to Configure a Swappable Slave KDC

Once the required steps have been completed, the following procedures can be used as needed.

Table 15–2 Next Steps: Additional SEAM Tasks

Task 

Description 

For Instructions 

Configure cross-realm authentication 

Enables communications from one realm to another realm. 

Configuring Cross-Realm Authentication

Configure SEAM clients 

Enables a client to use SEAM services. 

Configuring SEAM Clients

Configure SEAM NFS server 

Enables a server to share a file system that requires Kerberos authentication. 

Configuring SEAM NFS Servers

Configuring KDC Servers

After you install the SEAM software, you must configure the KDC servers. Configuring a master KDC and at least one slave KDC provides the service that issues credentials. These credentials are the basis for SEAM, so the KDCs must be installed before you attempt other tasks.

The most significant difference between a master KDC and a slave KDC is that only the master KDC can handle database administration requests. For instance, changing a password or adding a new principal must be done on the master KDC. These changes can then be propagated to the slave KDCs. Both the slave KDC and master KDC generate credentials. This feature provides redundancy in case the master KDC cannot respond.

How to Configure a Master KDC

In this procedure, the following configuration parameters are used:

    • Realm name = EXAMPLE.COM

    • DNS domain name = example.com

    • Master KDC = kdc1.example.com

    • Slave KDC = kdc2.example.com

    • admin principal = kws/admin

    • Online help URL = http://denver:8888/ab2/coll.384.1/SEAM/@AB2PageView/6956

  1. Complete the prerequisites for configuring a master KDC.

    This procedure requires that DNS be running. For specific naming instructions if this master is to be swappable, see Swapping a Master KDC and a Slave KDC.

  2. Become superuser on the master KDC.

  3. Edit the Kerberos configuration file (krb5.conf).

    You need to change the realm names and the names of the servers. See the krb5.conf(4) man page for a full description of this file.


    kdc1 # cat /etc/krb5/krb5.conf
    [libdefaults]
            default_realm = EXAMPLE.COM
    
    [realms]
                    EXAMPLE.COM = {
                    kdc = kdc1.example.com
                    kdc = kdc2.example.com
                    admin_server = kdc1.example.com
            }
    
    [domain_realm]
            .example.com = EXAMPLE.COM
    #
    # if the domain name and realm name are equivalent, 
    # this entry is not needed
    #
    [logging]
            default = FILE:/var/krb5/kdc.log
            kdc = FILE:/var/krb5/kdc.log
    
    [appdefaults]
        gkadmin = {
            help_url = http://denver:8888/ab2/coll.384.1/SEAM/@AB2PageView/6956
            }

    In this example, the lines for domain_realm, kdc, admin_server, and all domain_realm entries were changed. In addition, the line that defines the help_url was edited.

  4. Edit the KDC configuration file (kdc.conf).

    You need to change the realm name. See the kdc.conf(4) man page for a full description of this file.


    kdc1 # cat /etc/krb5/kdc.conf
    [kdcdefaults]
            kdc_ports = 88,750
    
    [realms]
            EXAMPLE.COM= {
                    profile = /etc/krb5/krb5.conf
                    database_name = /var/krb5/principal
                    admin_keytab = /etc/krb5/kadm5.keytab
                    acl_file = /etc/krb5/kadm5.acl
                    kadmind_port = 749
                    max_life = 8h 0m 0s
                    max_renewable_life = 7d 0h 0m 0s
            }

    In this example, the realm name definition in the realms section was changed.

  5. Create the KDC database by using the kdb5_util command.

    The kdb5_util command creates the KDC database. Also, when used with the -s option, this command creates a stash file that is used to authenticate the KDC to itself before the kadmind and krb5kdc daemons are started.


    kdc1 # /usr/sbin/kdb5_util create -r EXAMPLE.COM -s
    Initializing database '/var/krb5/principal' for realm 'EXAMPLE.COM'
    master key name 'K/M@EXAMPLE.COM'
    You will be prompted for the database Master Password.
    It is important that you NOT FORGET this password.
    Enter KDC database master key: <type the key>
    Re-enter KDC database master key to verify: <type it again>
    

    The -r option followed by the realm name is not required if the realm name is equivalent to the domain name in the server's name space.

  6. Edit the Kerberos access control list file (kadm5.acl).

    Once populated, the /etc/krb5/kadm5.acl file should contain all principal names that are allowed to administer the KDC. The first entry that is added might look similar to the following:


    kws/admin@EXAMPLE.COM   *

    This entry gives the kws/admin principal in the EXAMPLE.COM realm the ability to modify principals or policies in the KDC. The default installation includes an asterisk (*) to match all admin principals. This default could be a security risk, so it is more secure to include a list of all of the admin principals. See the kadm5.acl(4) man page for more information.
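
    For example, an ACL that names specific administrators instead of relying on the wildcard might look similar to the following, where jdb/admin is a hypothetical second admin principal:


    kws/admin@EXAMPLE.COM   *
    jdb/admin@EXAMPLE.COM   *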

  7. Start the kadmin.local command.

    The next sub-steps create principals that are used by SEAM.


    kdc1 # /usr/sbin/kadmin.local
    kadmin.local: 
    1. Add administration principals to the database.

      You can add as many admin principals as you need. You must add at least one admin principal to complete the KDC configuration process. For this example, a kws/admin principal is added. You can substitute an appropriate principal name instead of “kws.”


      kadmin.local: addprinc kws/admin
      Enter password for principal kws/admin@EXAMPLE.COM: <type the password>
      Re-enter password for principal kws/admin@EXAMPLE.COM: <type it again>
      Principal "kws/admin@EXAMPLE.COM" created.
      kadmin.local: 
    2. Create a keytab file for the kadmind service.

      This command sequence creates a special keytab file with principal entries for kadmin and changepw. These principals are needed for the kadmind service. Note that when the principal instance is a host name, the FQDN must be entered in lowercase letters, regardless of the case of the domainname in the /etc/resolv.conf file.


      kadmin.local: ktadd -k /etc/krb5/kadm5.keytab kadmin/kdc1.example.com
      Entry for principal kadmin/kdc1.example.com with kvno 3, encryption type DES-CBC-CRC
                added to keytab WRFILE:/etc/krb5/kadm5.keytab.
      kadmin.local: ktadd -k /etc/krb5/kadm5.keytab changepw/kdc1.example.com
      Entry for principal changepw/kdc1.example.com with kvno 3, encryption type DES-CBC-CRC 
                added to keytab WRFILE:/etc/krb5/kadm5.keytab.
      kadmin.local: 
    3. Quit kadmin.local.

      You have added all of the required principals for the next steps.


      kadmin.local: quit
      
  8. Start the Kerberos daemons.


    kdc1 # /etc/init.d/kdc start
    kdc1 # /etc/init.d/kdc.master start
    
  9. Start kadmin.

    At this point, you can add principals by using the SEAM Administration Tool. To do so, you must log on with one of the admin principal names that you created earlier in this procedure. However, the following command-line example is shown for simplicity.


    kdc1 # /usr/sbin/kadmin -p kws/admin
    Enter password: <Type kws/admin password>
    kadmin: 
    1. Create the master KDC host principal.

      The host principal is used by Kerberized applications (such as klist and kprop). Note that when the principal instance is a host name, the FQDN must be entered in lowercase letters, regardless of the case of the domainname in the /etc/resolv.conf file.


      kadmin: addprinc -randkey host/kdc1.example.com
      Principal "host/kdc1.example.com@EXAMPLE.COM" created.
      kadmin: 
    2. (Optional) Create the master KDC root principal.

      This principal is used for authenticated NFS-mounting. So, the principal might not be necessary on a master KDC. Note that when the principal instance is a host name, the FQDN must be entered in lowercase letters, regardless of the case of the domainname in the /etc/resolv.conf file.


      kadmin: addprinc root/kdc1.example.com
      Enter password for principal root/kdc1.example.com@EXAMPLE.COM: <type the password>
      Re-enter password for principal root/kdc1.example.com@EXAMPLE.COM: <type it again>
      Principal "root/kdc1.example.com@EXAMPLE.COM" created.
      kadmin: 
    3. Add the master KDC's host principal to the master KDC's keytab file.

      Adding the host principal to the keytab file allows this principal to be used automatically.


      kadmin: ktadd host/kdc1.example.com
      kadmin: Entry for principal host/kdc1.example.com with
        kvno 3, encryption type DES-CBC-CRC added to keytab
        WRFILE:/etc/krb5/krb5.keytab
      kadmin: 
    4. Quit kadmin.


      kadmin: quit
      
  10. Add an entry for each KDC into the propagation configuration file (kpropd.acl).

    See the kprop(1M) man page for a full description of this file.


    kdc1 # cat /etc/krb5/kpropd.acl
    host/kdc1.example.com@EXAMPLE.COM
    host/kdc2.example.com@EXAMPLE.COM
  11. (Optional) Synchronize the master KDC's clock by using NTP or another clock synchronization mechanism.

    It is not required to install and use the Network Time Protocol (NTP). However, every clock must be within the default time that is defined in the libdefaults section of the krb5.conf file in order for authentication to succeed. See Synchronizing Clocks between KDCs and SEAM Clients for information about NTP.

How to Configure a Slave KDC

In this procedure, a new slave KDC named kdc3 is configured. This procedure uses the following configuration parameters:

    • Realm name = EXAMPLE.COM

    • DNS domain name = example.com

    • Master KDC = kdc1.example.com

    • Slave KDC = kdc3.example.com

    • admin principal = kws/admin

  1. Complete the prerequisites for configuring a slave KDC.

    The master KDC must be configured. For specific instructions if this slave is to be swappable, see Swapping a Master KDC and a Slave KDC.

  2. On the master KDC, become superuser.

  3. On the master KDC, start kadmin.

    You must log on with one of the admin principal names that you created when you configured the master KDC.


    kdc1 # /usr/sbin/kadmin -p kws/admin
    Enter password: <Enter kws/admin password>
    kadmin: 
    1. On the master KDC, add slave host principals to the database, if not already done.

      In order for the slave to function, it must have a host principal. Note that when the principal instance is a host name, the FQDN must be entered in lowercase letters, regardless of the case of the domainname in the /etc/resolv.conf file.


      kadmin: addprinc -randkey host/kdc3.example.com
      Principal "host/kdc3@EXAMPLE.COM" created.
      kadmin: 
    2. (Optional) On the master KDC, create the slave KDC root principal.

      This principal is only needed if the slave will be NFS-mounting an authenticated file system. Note that when the principal instance is a host name, the FQDN must be entered in lowercase letters, regardless of the case of the domainname in the /etc/resolv.conf file.


      kadmin: addprinc root/kdc3.example.com
      Enter password for principal root/kdc3.example.com@EXAMPLE.COM: <type the password>
      Re-enter password for principal root/kdc3.example.com@EXAMPLE.COM: <type it again>
      Principal "root/kdc3.example.com@EXAMPLE.COM" created.
      kadmin: 
    3. Quit kadmin.


      kadmin: quit
      
  4. On the master KDC, edit the Kerberos configuration file (krb5.conf).

    You need to add an entry for each slave. See the krb5.conf(4) man page for a full description of this file.


    kdc1 # cat /etc/krb5/krb5.conf
    [libdefaults]
            default_realm = EXAMPLE.COM
    
    [realms]
                    EXAMPLE.COM = {
                    kdc = kdc1.example.com
                    kdc = kdc2.example.com
                    kdc = kdc3.example.com
                    admin_server = kdc1.example.com
            }
    
    [domain_realm]
            .example.com = EXAMPLE.COM
    #
    # if the domain name and realm name are equivalent, 
    # this entry is not needed
    #        
    [logging]
            default = FILE:/var/krb5/kdc.log
            kdc = FILE:/var/krb5/kdc.log
    
    [appdefaults]
        gkadmin = {
            help_url = http://denver:8888/ab2/coll.384.1/SEAM/@AB2PageView/6956
  5. On the master KDC, add an entry for each slave KDC into the database propagation configuration file (kpropd.acl).

    See the kprop(1M) man page for a full description of this file.


    kdc1 # cat /etc/krb5/kpropd.acl
    host/kdc1.example.com@EXAMPLE.COM
    host/kdc2.example.com@EXAMPLE.COM
    host/kdc3.example.com@EXAMPLE.COM
    
  6. On all slave KDCs, copy the KDC administration files from the master KDC server.

    This step needs to be followed on all slave KDCs, since the master KDC server has updated information that each KDC server needs. You can use ftp or a similar transfer mechanism to grab copies of the following files from the master KDC:

    • /etc/krb5/krb5.conf

    • /etc/krb5/kdc.conf

    • /etc/krb5/kpropd.acl

  7. On the new slave, add the slave's host principal to the slave's keytab file by using kadmin.

    You must log on with one of the admin principal names that you created when you configured the master KDC. This entry allows kprop and other Kerberized applications to function. Note that when the principal instance is a host name, the FQDN must be entered in lowercase letters, regardless of the case of the domainname in the /etc/resolv.conf file.


    kdc3 # /usr/sbin/kadmin -p kws/admin
    Enter password: <Type kws/admin password>
    kadmin: ktadd host/kdc3.example.com
    kadmin: Entry for principal host/kdc3.example.com with
      kvno 3, encryption type DES-CBC-CRC added to keytab
      WRFILE:/etc/krb5/krb5.keytab
    kadmin: quit
    
  8. On the master KDC, add slave KDC names to the cron job, which automatically runs the backups, by running crontab -e.

    Add the name of each slave KDC server at the end of the kprop_script line.


    10 3 * * * /usr/lib/krb5/kprop_script kdc2.example.com kdc3.example.com
    

    You might also want to change the time of the backups. This configuration starts the backup process every day at 3:10 AM.

  9. On the master KDC, back up and propagate the database by using kprop_script.

    If a backup copy of the database is already available, it is not necessary to complete another backup. See How to Manually Propagate the Kerberos Database to the Slave KDCs for further instructions.


    kdc1 # /usr/lib/krb5/kprop_script kdc3.example.com
    Database propagation to kdc3.example.com: SUCCEEDED
  10. On the new slave, create a stash file by using kdb5_util.


    kdc3 # /usr/sbin/kdb5_util stash
    kdb5_util: Cannot find/read stored master key while reading master key
    kdb5_util: Warning: proceeding without master key
    
    Enter KDC database master key: <type the key>
    
  11. (Optional) On the new slave KDC, synchronize the clock with the master KDC by using NTP or another clock synchronization mechanism.

    It is not required to install and use the Network Time Protocol (NTP). However, every clock must be within the default time that is defined in the libdefaults section of the krb5.conf file in order for authentication to succeed. See Synchronizing Clocks between KDCs and SEAM Clients for information about NTP.

  12. On the new slave, start the KDC daemon (krb5kdc).


    kdc3 # /etc/init.d/kdc start
    

Configuring Cross-Realm Authentication

You have several ways of linking realms together so that users in one realm can be authenticated in another realm. Normally, this cross-realm authentication is accomplished by establishing a secret key that is shared between the two realms. The relationship of the realms can be either hierarchical or direct (see Realm Hierarchy).

How to Establish Hierarchical Cross-Realm Authentication

The example in this procedure uses two realms, ENG.EAST.EXAMPLE.COM and EAST.EXAMPLE.COM. Cross-realm authentication will be established in both directions. This procedure must be completed on the master KDC in both realms.

  1. Complete the prerequisites for establishing hierarchical cross-realm authentication.

    The master KDC for each realm must be configured. To fully test the authentication process, several clients or slave KDCs must be installed.

  2. Become superuser on the first master KDC.

  3. Create ticket-granting ticket service principals for the two realms.

    You must log on with one of the admin principal names that was created when you configured the master KDC.


    # /usr/sbin/kadmin -p kws/admin
    Enter password: <Type kws/admin password>
    kadmin: addprinc krbtgt/ENG.EAST.EXAMPLE.COM@EAST.EXAMPLE.COM
    Enter password for principal krbtgt/ENG.EAST.EXAMPLE.COM@EAST.EXAMPLE.COM: <type the password>
    kadmin: addprinc krbtgt/EAST.EXAMPLE.COM@ENG.EAST.EXAMPLE.COM
    Enter password for principal krbtgt/EAST.EXAMPLE.COM@ENG.EAST.EXAMPLE.COM: <type the password>
    kadmin: quit
    

    Note –

    The password that is entered for each service principal must be identical in both KDCs. Thus, the password for the service principal krbtgt/ENG.EAST.EXAMPLE.COM@EAST.EXAMPLE.COM must be the same in both realms.


  4. Add entries to the Kerberos configuration file to define domain names for every realm (krb5.conf).


    # cat /etc/krb5/krb5.conf
    [libdefaults]
     .
     .
    [domain_realm]
            .eng.east.example.com = ENG.EAST.EXAMPLE.COM
            .east.example.com = EAST.EXAMPLE.COM
    

    In this example, domain names for the ENG.EAST.EXAMPLE.COM and EAST.EXAMPLE.COM realms are defined. It is important to include the subdomain first, since the file is searched top down.

  5. Copy the Kerberos configuration file to all clients in this realm.

    In order for cross-realm authentication to work, all systems (including slave KDCs and other servers) must have the new version of the Kerberos configuration file (/etc/krb5/krb5.conf) installed.

  6. Repeat these steps in the second realm.

How to Establish Direct Cross-Realm Authentication

The example in this procedure uses two realms: ENG.EAST.EXAMPLE.COM and SALES.WEST.EXAMPLE.COM. Cross-realm authentication will be established in both directions. This procedure must be completed on the master KDC in both realms.

  1. Complete the prerequisites for establishing direct cross-realm authentication.

    The master KDC for each realm must be configured. To fully test the authentication process, several clients or slave KDCs must be installed.

  2. Become superuser on one of the master KDC servers.

  3. Create ticket-granting ticket service principals for the two realms.

    You must log on with one of the admin principal names that was created when you configured the master KDC.


    # /usr/sbin/kadmin -p kws/admin
    Enter password: <Type kws/admin password>
    kadmin: addprinc krbtgt/ENG.EAST.EXAMPLE.COM@SALES.WEST.EXAMPLE.COM
    Enter password for principal 
      krbtgt/ENG.EAST.EXAMPLE.COM@SALES.WEST.EXAMPLE.COM: <type the password>
    kadmin: addprinc krbtgt/SALES.WEST.EXAMPLE.COM@ENG.EAST.EXAMPLE.COM
    Enter password for principal 
      krbtgt/SALES.WEST.EXAMPLE.COM@ENG.EAST.EXAMPLE.COM: <type the password>
    kadmin: quit
    

    Note –

    The password that is entered for each service principal must be identical in both KDCs. Thus, the password for the service principal krbtgt/ENG.EAST.EXAMPLE.COM@SALES.WEST.EXAMPLE.COM must be the same in both realms.


  4. Add entries in the Kerberos configuration file to define the direct path to the remote realm (krb5.conf).

    This example shows the clients in the ENG.EAST.EXAMPLE.COM realm. You would need to swap the realm names to get the appropriate definitions in the SALES.WEST.EXAMPLE.COM realm.


    # cat /etc/krb5/krb5.conf
    [libdefaults]
     .
     .
    [capaths]
        ENG.EAST.EXAMPLE.COM = {
            SALES.WEST.EXAMPLE.COM = .
        }
    
        SALES.WEST.EXAMPLE.COM = {
             ENG.EAST.EXAMPLE.COM = .
        }
    
  5. Copy the Kerberos configuration file to all clients in the current realm.

    In order for cross-realm authentication to work, all systems (including slave KDCs and other servers) must have the new version of the Kerberos configuration file (krb5.conf) installed.

  6. Repeat these steps for the second realm.

Configuring SEAM NFS Servers

NFS services use UNIX user IDs (UIDs) to identify a user and cannot directly use principals. To translate the principal to a UID, a credential table that maps user principals to UNIX UIDs must be created. The procedures in this section focus on the tasks that are necessary to configure a SEAM NFS server, to administer the credential table, and to initiate Kerberos security modes for NFS-mounted file systems. The following task map describes the tasks that are covered in this section.

Table 15–3 Configuring SEAM NFS Servers (Task Map)

Task 

Description 

For Instructions 

Configure a SEAM NFS server 

Enables a server to share a file system that requires Kerberos authentication. 

How to Configure SEAM NFS Servers

Create a credential table 

Generates a credential table. 

How to Create a Credential Table

Change the credential table that maps user principals to UNIX UIDs 

Updates information in the credential table. 

How to Add a Single Entry to the Credential Table

Share a file system with Kerberos authentication 

Shares a file system with security modes so that Kerberos authentication is required. 

How to Set Up a Secure NFS Environment With Multiple Kerberos Security Modes

How to Configure SEAM NFS Servers

In this procedure, the following configuration parameters are used:

  1. Complete the prerequisites for configuring a SEAM NFS server.

    The master KDC must be configured. To fully test the process, you need several clients.

  2. (Optional) Install the NTP client or other clock synchronization mechanism.

    It is not required to install and use the Network Time Protocol (NTP). However, every clock must be within the maximum clock skew that is defined in the libdefaults section of the krb5.conf file in order for authentication to succeed. See Synchronizing Clocks between KDCs and SEAM Clients for information about NTP.

  3. Start kadmin.

    You can use the SEAM Administration Tool to add a principal, as explained in How to Create a New Principal. To do so, you must log on with one of the admin principal names that you created when you configured the master KDC. However, the following example shows how to add the required principals by using the command line.


    denver # /usr/sbin/kadmin -p kws/admin
    Enter password: <Type kws/admin password>
    kadmin: 
    a. Create the server's NFS service principal.

      Note that when the principal instance is a host name, the FQDN must be entered in lowercase letters, regardless of the case of the domain name in the /etc/resolv.conf file.


      kadmin: addprinc -randkey nfs/denver.example.com
      Principal "nfs/denver.example.com" created.
      kadmin:
    b. (Optional) Create a root principal for the NFS server.


      kadmin: addprinc root/denver.example.com
      Enter password for principal root/denver.example.com@EXAMPLE.COM: <type the password>
      Re-enter password for principal root/denver.example.com@EXAMPLE.COM: <type it again>
      Principal "root/denver.example.com@EXAMPLE.COM" created.
      kadmin: 
    c. Add the server's NFS service principal to the server's keytab file.


      kadmin: ktadd nfs/denver.example.com
      kadmin: Entry for principal nfs/denver.example.com with
        kvno 3, encryption type DES-CBC-CRC added to keytab
        WRFILE:/etc/krb5/krb5.keytab
      kadmin: 
    d. Quit kadmin.


      kadmin: quit
      
  4. Create the gsscred table.

    See How to Create a Credential Table for more information.

  5. Share the NFS file system with Kerberos security modes.

    See How to Set Up a Secure NFS Environment With Multiple Kerberos Security Modes for more information.

  6. On each client, authenticate both the user principal and the root principal.

How to Create a Credential Table

The gsscred credential table is used by an NFS server to map SEAM principals to a UID. In order for NFS clients to mount file systems from an NFS server with Kerberos authentication, this table must be created or made available.

  1. Edit /etc/gss/gsscred.conf and change the mechanism.

    Change the mechanism to files.

  2. Create the credential table by using gsscred.


    # gsscred -m kerberos_v5 -a
    

    The gsscred command gathers information from all sources that are listed with the passwd entry in the /etc/nsswitch.conf file. You might need to temporarily remove the files entry, if you do not want the local password entries included in the credential table. See the gsscred(1M) man page for more information.
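
    For example, if the passwd entry lists both files and a network name service (the nis source below is an assumption for this sketch), you could temporarily remove files before you generate the table so that only the network name service entries are included, and then restore the original line:


    # grep '^passwd' /etc/nsswitch.conf
    passwd:     files nis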

How to Add a Single Entry to the Credential Table

This procedure requires that the gsscred table has already been created on the NFS server.

  1. Become superuser on an NFS server.

  2. Add an entry to the table by using gsscred.


    # gsscred -m mech [ -n name [ -u uid ]] -a
    

    mech

    Defines the security mechanism to be used. 

    name

    Defines the principal name for the user, as defined in the KDC. 

    uid

    Defines the UID for the user, as defined in the password database. 

    -a

    Adds the UID to principal name mapping.  

Example—Adding a Single Entry to the Credential Table

In the following example, an entry is added for the user named sandy, which is mapped to UID 3736. The UID is pulled from the password file if it is not included on the command line.


# gsscred -m kerberos_v5 -n sandy -u 3736 -a

How to Set Up a Secure NFS Environment With Multiple Kerberos Security Modes

  1. Become superuser on the NFS server.

  2. Verify that there is an NFS service principal in the keytab file.

    The klist command reports if there is a keytab file and displays the principals. If the results show that there is no keytab file or that there is no NFS service principal, you need to verify the completion of all of the steps in How to Configure SEAM NFS Servers.


    # klist -k
    Keytab name: FILE:/etc/krb5/krb5.keytab
    KVNO Principal
    ---- ---------------------------------------------------------
       3 nfs/denver.example.com@EXAMPLE.COM
  3. Enable Kerberos security modes in the /etc/nfssec.conf file.

    Edit the /etc/nfssec.conf file and remove the “#” from in front of the Kerberos security modes.


    # cat /etc/nfssec.conf
     .
     .
    #
    # Uncomment the following lines to use Kerberos V5 with NFS
    #
    krb5            390003  kerberos_v5     default -               # RPCSEC_GSS
    krb5i           390004  kerberos_v5     default integrity       # RPCSEC_GSS
    krb5p           390005  kerberos_v5     default privacy         # RPCSEC_GSS
  4. Edit the /etc/dfs/dfstab file and add the sec= option with the required security modes to the appropriate entries.


    share -F nfs -o sec=mode file-system
    

    mode

    Specifies the security modes to be used when sharing. When using multiple security modes, the first mode in the list is used as the default by the automounter. 

    file-system

    Defines the path to the file system to be shared. 

    All clients that attempt to access files from the named file system require Kerberos authentication. To access files, both the user principal and the root principal on the NFS client should be authenticated.

  5. Make sure that the NFS service is running on the server.

    If this command is the first share command or set of share commands that you have initiated, it is likely that the NFS daemons are not running. The following commands kill the daemons and restart them.


    # /etc/init.d/nfs.server stop
    # /etc/init.d/nfs.server start
    
  6. (Optional) If the automounter is being used, edit the auto_master database to select a security mode other than the default.

    You can skip this step if you are not using the automounter to access the file system, or if the default selection for the security mode is acceptable.


    file-system  auto_home  -nosuid,sec=mode
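
    For example, assuming the default auto_home entry that manages /home, an entry that selects the krb5i mode (an arbitrary choice for this sketch) might look like the following:


    /home  auto_home  -nosuid,sec=krb5i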
    
  7. (Optional) Manually issue the mount command to access the file system by using a non-default mode.

    Instead of using the automounter, you can specify the security mode directly on the mount command. However, this approach does not take advantage of the automounter:


    # mount -F nfs -o sec=mode file-system
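
    For example, to mount the /export/home file system from the server denver.example.com with the krb5p mode (the mount point /mnt is only an illustration):


    # mount -F nfs -o sec=krb5p denver.example.com:/export/home /mnt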
    

Example—Sharing a File System With One Kerberos Security Mode

In this example, the dfstab file line means that Kerberos authentication must succeed before any files can be accessed through the NFS service.


# grep krb /etc/dfs/dfstab
share -F nfs -o sec=krb5 /export/home

Example—Sharing a File System With Multiple Kerberos Security Modes

In this example, all three Kerberos security modes have been selected. If no security mode is specified when a mount request is made, the first mode that is listed is used on all NFS V3 clients (in this case, krb5). See the nfssec.conf(4) man page for more information.


# grep krb /etc/dfs/dfstab
share -F nfs -o sec=krb5:krb5i:krb5p /export/home

Configuring SEAM Clients

SEAM clients include any host on the network, other than a KDC server, that needs to use SEAM services. This section provides a procedure for installing a SEAM client, as well as specific information about using root authentication to mount NFS file systems.

How to Configure a SEAM Client

In this procedure, the following configuration parameters are used:

  1. Become superuser.

  2. Edit the Kerberos configuration file (krb5.conf).

    To change the file from the SEAM default version, you need to change the realm names and the names of the servers. You also need to identify the path to the help files for gkadmin.


    client1 # cat /etc/krb5/krb5.conf
    [libdefaults]
            default_realm = EXAMPLE.COM
    
    [realms]
                    EXAMPLE.COM = {
                    kdc = kdc1.example.com
                    kdc = kdc2.example.com
                    admin_server = kdc1.example.com
            }
    
    [domain_realm]
            .example.com = EXAMPLE.COM
    #
    # if the domain name and realm name are equivalent, 
    # this entry is not needed
    #
    [logging]
            default = FILE:/var/krb5/kdc.log
            kdc = FILE:/var/krb5/kdc.log
    
    [appdefaults]
        gkadmin = {
            help_url = http://denver:8888/ab2/coll.384.1/SEAM/@AB2PageView/6956
        }
    
  3. (Optional) Synchronize the client's clock with the master KDC's clock by using NTP or another clock synchronization mechanism.

    It is not required to install and use the Network Time Protocol (NTP). However, every clock must be within the maximum clock skew that is defined in the libdefaults section of the krb5.conf file in order for authentication to succeed. See Synchronizing Clocks between KDCs and SEAM Clients for information about NTP.

  4. (Optional) Create a user principal if a user principal does not already exist.

    You need to create a user principal only if the user associated with this host does not have a principal assigned already. See How to Create a New Principal for instructions on using the SEAM Administration Tool. The following is a command-line example.


    client1 # /usr/sbin/kadmin -p kws/admin
    Enter password: <Type kws/admin password>
    kadmin: addprinc mre
    Enter password for principal mre@EXAMPLE.COM: <type the password>
    Re-enter password for principal mre@EXAMPLE.COM: <type it again>
    kadmin: 
  5. Create a root principal.

    Note that when the principal instance is a host name, the FQDN must be entered in lowercase letters, regardless of the case of the domain name in the /etc/resolv.conf file.


    kadmin: addprinc root/client1.example.com
    Enter password for principal root/client1.example.com@EXAMPLE.COM: <type the password>
    Re-enter password for principal root/client1.example.com@EXAMPLE.COM: <type it again>
    kadmin: quit
    
  6. (Optional) To use Kerberos with NFS, enable Kerberos security modes in the /etc/nfssec.conf file.

    Edit the /etc/nfssec.conf file and remove the “#” from in front of the Kerberos security modes.


    # cat /etc/nfssec.conf
     .
     .
    #
    # Uncomment the following lines to use Kerberos V5 with NFS
    #
    krb5            390003  kerberos_v5     default -               # RPCSEC_GSS
    krb5i           390004  kerberos_v5     default integrity       # RPCSEC_GSS
    krb5p           390005  kerberos_v5     default privacy         # RPCSEC_GSS
  7. (Optional) If you want a user on the SEAM client to automatically mount NFS file systems that use Kerberos authentication, you must authenticate the root user.

    This process is done most securely by using the kinit command. However, users will need to use kinit as root every time they need to mount a file system that is secured by Kerberos. You can choose to use a keytab file instead. For detailed information about the keytab file requirement, see Setting Up Root Authentication to Mount NFS File Systems.


    client1 # /usr/bin/kinit root/client1.example.com
    Password for root/client1.example.com@EXAMPLE.COM: <Type password>
    

    To use the keytab file option, add the root principal to the client's keytab by using kadmin:


    client1 # /usr/sbin/kadmin -p kws/admin
    Enter password: <Type kws/admin password>
    kadmin: ktadd root/client1.example.com
    kadmin: Entry for principal root/client1.example.com with
      kvno 3, encryption type DES-CBC-CRC added to keytab
      WRFILE:/etc/krb5/krb5.keytab
    kadmin: quit
    
  8. If you want the client to warn users about Kerberos ticket expiration, create an entry in the /etc/krb5/warn.conf file.

    See the warn.conf(4) man page for more information.

Example—Setting Up a SEAM Client Using a Non-SEAM KDC

It is possible to set up a SEAM client to work with a non-SEAM KDC. In this case, a line must be included in the /etc/krb5/krb5.conf file in the realms section. This line changes the protocol that is used when the client is communicating with the Kerberos password-changing server. The format of this line follows.


[realms]
                EXAMPLE.COM = {
                kdc = kdc1.example.com
                kdc = kdc2.example.com
                admin_server = kdc1.example.com
                kpasswd_protocol = SET_CHANGE
        }

Setting Up Root Authentication to Mount NFS File Systems

If users want to access a non-Kerberized NFS file system, either the file system can be mounted as root, or it can be mounted automatically through the automounter whenever users access it (without requiring root permissions).

Mounting a Kerberized NFS file system is very much the same, but it does present an additional obstacle. To mount a Kerberized NFS file system, users must use the kinit command as root to obtain credentials for the client's root principal, because a client's root principal is typically not in the client's keytab. This step is required even when the automounter is set up. This step also forces all users to know their system's root password and the root principal's password.

To bypass this step, you can add a client's root principal to the client's keytab file, which automatically provides credentials for root. Although this solution enables users to mount NFS file systems without running the kinit command and enhances ease-of-use, it is a security risk. For example, if someone gains access to a system with the root principal in its keytab, this person can obtain credentials for root. So make sure that you take the appropriate security precautions. See Administering Keytab Files for more information.

Synchronizing Clocks between KDCs and SEAM Clients

All hosts that participate in the Kerberos authentication system must have their internal clocks synchronized within a specified maximum amount of time (known as clock skew). This requirement provides another Kerberos security check. If the clock skew is exceeded between any of the participating hosts, client requests are rejected.

The clock skew also determines how long application servers must keep track of all Kerberos protocol messages, in order to recognize and reject replayed requests. So, the longer the clock skew value, the more information that application servers have to collect.

The default value for the maximum clock skew is 300 seconds (five minutes). You can change this default in the libdefaults section of the krb5.conf file.
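
For example, an explicit entry in the libdefaults section might look like the following sketch. The clockskew relation name is an assumption based on the standard Kerberos configuration file format; check the krb5.conf(4) man page on your release before relying on it.


[libdefaults]
        default_realm = EXAMPLE.COM
        clockskew = 300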


Note –

For security reasons, do not increase the clock skew beyond 300 seconds.


Since it is important to maintain synchronized clocks between the KDCs and SEAM clients, you should use the Network Time Protocol (NTP) software to synchronize them. NTP public domain software from the University of Delaware is included in the Solaris software, starting with the Solaris 2.6 release.


Note –

Another way to synchronize clocks is to use the rdate command and cron jobs, a process that can be less involved than using NTP. However, this section focuses on using NTP. And, if you use the network to synchronize the clocks, the clock synchronization protocol must itself be secure.


NTP enables you to manage precise time or network clock synchronization, or both, in a network environment. NTP is basically a server/client implementation. You pick one system to be the master clock (the NTP server). Then, you set up all your other systems (the NTP clients) to synchronize their clocks with the master clock.

To synchronize the clocks, NTP uses the xntpd daemon, which sets and maintains the UNIX system time-of-day in agreement with Internet standard time servers. The following shows an example of this server/client NTP implementation.

Figure 15–1 Synchronizing Clocks by Using NTP

Diagram shows a central NTP server as the master clock for NTP clients and Kerberos clients that are running the xntpd daemon.

To ensure that the KDCs and SEAM clients maintain synchronized clocks, implement the following steps:

  1. Set up an NTP server on your network (this server can be any system, except the master KDC). See “Managing Network Time Protocol (Tasks)” in System Administration Guide: Resource Management and Network Services to find the NTP server task.

  2. As you configure the KDCs and SEAM clients on the network, set them up to be NTP clients of the NTP server. See “Managing Network Time Protocol (Tasks)” in System Administration Guide: Resource Management and Network Services to find the NTP client task.
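
As a rough sketch, a KDC or SEAM client that acts as an NTP client can point its /etc/inet/ntp.conf file at the designated NTP server and start the xntpd daemon, as shown below. The NTP server name is hypothetical, and the complete setup is described in the referenced NTP tasks.


# cat /etc/inet/ntp.conf
server ntpserver1.example.com
# /etc/init.d/xntpd start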

Swapping a Master KDC and a Slave KDC

Use the procedures in this section to make it easier to swap a master KDC with a slave KDC. Swap the master KDC with a slave KDC only if the master KDC server fails for some reason, or if the master KDC needs to be reinstalled (for example, because new hardware is installed).

How to Configure a Swappable Slave KDC

Perform this procedure on the slave KDC server that you want to have available to become the master KDC.

  1. Use alias names for the master KDC and the swappable slave KDC during the KDC installation.

    When you define the host names for the KDCs, make sure that each system has an alias included in DNS. Also, use the alias names when you define the hosts in the /etc/krb5/krb5.conf file.
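
    For example, if masterkdc and slavekdc are DNS aliases (CNAME records) for the current master KDC and the swappable slave KDC, the realms section of the krb5.conf file might look like the following sketch; the alias names are assumptions for this example:


    [realms]
            EXAMPLE.COM = {
                    kdc = masterkdc.example.com
                    kdc = slavekdc.example.com
                    admin_server = masterkdc.example.com
            }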

  2. Follow the steps to install a slave KDC.

    Prior to any swap, this server should function as any other slave KDC in the realm. See How to Configure a Slave KDC for instructions.

  3. Move the master KDC commands.

    To prevent the master KDC commands from being run from this slave KDC, move the kprop, kadmind and kadmin.local commands to a reserved place.


    kdc4 # mv /usr/lib/krb5/kprop /usr/lib/krb5/kprop.save
    kdc4 # mv /usr/lib/krb5/kadmind /usr/lib/krb5/kadmind.save
    kdc4 # mv /usr/sbin/kadmin.local /usr/sbin/kadmin.local.save
    
  4. Comment out the kprop line in the root crontab file.

    This step prevents the slave KDC from propagating its copy of the KDC database.


    kdc4 # crontab -e
    #ident  "@(#)root       1.20    01/11/06 SMI"
    #
    # The root crontab should be used to perform accounting data collection.
    #
    # The rtc command is run to adjust the real time clock if and when
    # daylight savings time changes.
    #
    10 3 * * * /usr/sbin/logadm
    15 3 * * 0 /usr/lib/fs/nfs/nfsfind
    1 2 * * * [ -x /usr/sbin/rtc ] && /usr/sbin/rtc -c > /dev/null 2>&1
    30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
    #10 3 * * * /usr/lib/krb5/kprop_script kdc1.example.sun.com #SUNWkr5ma
    

How to Swap a Master KDC and a Slave KDC

This procedure requires that the slave KDC server has been set up as a swappable slave (see How to Configure a Swappable Slave KDC). In this procedure, the master KDC server that is being swapped out is named kdc1. The slave KDC that will become the new master KDC is named kdc4.

  1. On the old master KDC, kill the kadmind process.


    kdc1 # /etc/init.d/kdc.master stop
    

    When you kill the kadmind process, you prevent any changes from being made to the KDC database.

  2. On the old master KDC, comment out the kprop line in the root crontab file.


    kdc1 # crontab -e
    #ident  "@(#)root       1.20    01/11/06 SMI"
    #
    # The root crontab should be used to perform accounting data collection.
    #
    # The rtc command is run to adjust the real time clock if and when
    # daylight savings time changes.
    #
    10 3 * * * /usr/sbin/logadm
    15 3 * * 0 /usr/lib/fs/nfs/nfsfind
    1 2 * * * [ -x /usr/sbin/rtc ] && /usr/sbin/rtc -c > /dev/null 2>&1
    30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
    #10 3 * * * /usr/lib/krb5/kprop_script kdc2.example.sun.com #SUNWkr5ma
    

    This step prevents the old master from propagating its copy of the KDC database.

  3. On the old master KDC, run kprop_script to back up and propagate the database.


    kdc1 # /usr/lib/krb5/kprop_script kdc4.example.com
    Database propagation to kdc4.example.com: SUCCEEDED
  4. On the old master KDC, move the master KDC commands.

    To prevent the master KDC commands from being run, move the kprop, kadmind and kadmin.local commands to a reserved place.


    kdc1 # mv /usr/lib/krb5/kprop /usr/lib/krb5/kprop.save
    kdc1 # mv /usr/lib/krb5/kadmind /usr/lib/krb5/kadmind.save
    kdc1 # mv /usr/sbin/kadmin.local /usr/sbin/kadmin.local.save
    kdc1 # mv /etc/krb5/kadm5.acl /etc/krb5/kadm5.acl.save
    
  5. On the DNS server, change the alias names for the master KDC.

    To change the servers, edit the example.com zone file and change the entry for masterkdc.


    masterkdc IN CNAME kdc4
  6. On the DNS server, restart the Internet domain name server.

    Run the following command on both servers to get the new alias information:


    # pkill -1 in.named
  7. On the new master KDC, move the master KDC commands.


    kdc4 # mv /usr/lib/krb5/kprop.save /usr/lib/krb5/kprop
    kdc4 # mv /usr/lib/krb5/kadmind.save /usr/lib/krb5/kadmind
    kdc4 # mv /usr/sbin/kadmin.local.save /usr/sbin/kadmin.local
    
  8. On the new master KDC, edit the Kerberos access control list file (kadm5.acl).

    Once populated, the /etc/krb5/kadm5.acl file should contain all principal names that are allowed to administer the KDC. The first entry that is added might look similar to the following:


    kws/admin@EXAMPLE.COM   *

    This entry gives the kws/admin principal in the EXAMPLE.COM realm the ability to modify principals or policies in the KDC. The default installation includes an asterisk (*) to match all admin principals. This default could be a security risk, so it is more secure to include a list of all of the admin principals. See the kadm5.acl(4) man page for more information.
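
    For example, a more restrictive file might list each administrator explicitly. The second principal below is hypothetical, and the privilege letters (c for changepw, i for inquire, l for list) are described in the kadm5.acl(4) man page:


    kws/admin@EXAMPLE.COM   *
    ops/admin@EXAMPLE.COM   cil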

  9. On the new master KDC, create a keytab file for kadmin by using kadmin.local.

    This command sequence creates a special keytab file with principal entries for kadmin and changepw. These principals are needed for the kadmind service.


    kdc4 # /usr/sbin/kadmin.local
    kadmin.local: ktadd -k /etc/krb5/kadm5.keytab kadmin/kdc4.example.com
    Entry for principal kadmin/kdc4.example.com with kvno 3, encryption type DES-CBC-CRC
              added to keytab WRFILE:/etc/krb5/kadm5.keytab.
    kadmin.local: ktadd -k /etc/krb5/kadm5.keytab changepw/kdc4.example.com
    Entry for principal changepw/kdc4.example.com with kvno 3, encryption type DES-CBC-CRC 
              added to keytab WRFILE:/etc/krb5/kadm5.keytab.
    kadmin.local: quit
    
  10. On the new master KDC, start kadmind.


    kdc4 # /etc/init.d/kdc.master start
    
  11. Enable the kprop line in the root crontab file.


    kdc4 # crontab -e
    #ident  "@(#)root       1.19    98/07/06 SMI"
    #
    # The root crontab should be used to perform accounting data collection.
    #
    # The rtc command is run to adjust the real time clock if and when
    # daylight savings time changes.
    #
    10 3 * * * /usr/sbin/logadm
    15 3 * * 0 /usr/lib/fs/nfs/nfsfind
    1 2 * * * [ -x /usr/sbin/rtc ] && /usr/sbin/rtc -c > /dev/null 2>&1
    30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
    10 3 * * * /usr/lib/krb5/kprop_script kdc1.example.sun.com #SUNWkr5ma
    

Administering the Kerberos Database

The Kerberos database is the backbone of Kerberos and must be maintained properly. This section provides some procedures on how to administer the Kerberos database, such as backing up and restoring the database, setting up parallel propagation, and administering the stash file. The steps to initially set up the database are in How to Configure a Master KDC.

Backing Up and Propagating the Kerberos Database

Propagating the Kerberos database from the master KDC to the slave KDCs is one of the most important configuration tasks. If propagation doesn't happen often enough, the master KDC and the slave KDCs will lose synchronization. So, if the master KDC goes down, the slave KDCs will not have the most recent database information. Also, if a slave KDC has been configured as a master KDC for purposes of load balancing, the clients that use that slave KDC as a master KDC will not have the latest information. Therefore, you must make sure that propagation occurs often enough, based on how often you change the Kerberos database.

When you configure the master KDC, you set up the kprop_script command in a cron job to automatically back up the Kerberos database to the /var/krb5/slave_datatrans dump file and propagate it to the slave KDCs. But, as with any file, the Kerberos database can become corrupted. If data corruption occurs on a slave KDC, you might never notice, since the next automatic propagation of the database installs a fresh copy. However, if corruption occurs on the master KDC, the corrupted database is propagated to all of the slave KDCs during the next propagation. And, the corrupted backup overwrites the previous uncorrupted backup file on the master KDC.

Because there is no “safe” backup copy in this scenario, you should also set up a cron job to periodically copy the slave_datatrans dump file to another location or to create another separate backup copy by using the dump command of kdb5_util. Then, if your database becomes corrupted, you can restore the most recent backup on the master KDC by using the load command of kdb5_util.
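
For example, a root crontab entry similar to the following keeps a separate daily copy of the dump file; the destination file name is only an illustration:


0 5 * * * /usr/bin/cp /var/krb5/slave_datatrans /var/krb5/slave_datatrans.daily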

Another important note: Because the database dump file contains principal keys, you need to protect the file from being accessed by unauthorized users. By default, the database dump file has read and write permissions only for root. To protect against unauthorized access, use only the kprop command to propagate the database dump file, which encrypts the data that is being transferred. Also, kprop propagates the data only to the slave KDCs, which minimizes the chance of accidentally sending the database dump to unauthorized hosts.


Caution – Caution –

If the Kerberos database is updated after it has been propagated and if the database subsequently is corrupted before the next propagation, the KDC slaves will not contain the updates. The updates will be lost. For this reason, if you add significant updates to the Kerberos database before a regularly scheduled propagation, you should manually propagate the database to avoid data loss.


The kpropd.acl File

The kpropd.acl file on a KDC provides a list of host principal names, one per line, that specifies the systems from which the KDC can receive an updated database through propagation. If the master KDC is used to propagate all the slave KDCs, the kpropd.acl file on each slave needs to contain only the host principal name of the master KDC.

However, the SEAM installation and subsequent configuration steps in this book instruct you to add the same kpropd.acl file to the master KDC and the slave KDCs. This file contains all the KDC host principal names. This configuration allows you to propagate from any KDC, in case the propagating KDCs become temporarily unavailable. And, by keeping an identical copy on all KDCs, you make the configuration easy to maintain.
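
For example, with a master KDC named kdc1 and slave KDCs named kdc2 and kdc3 (host names that follow the examples in this book), the identical kpropd.acl file on every KDC would contain entries similar to the following:


host/kdc1.example.com@EXAMPLE.COM
host/kdc2.example.com@EXAMPLE.COM
host/kdc3.example.com@EXAMPLE.COM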

The kprop_script Command

The kprop_script command uses the kprop command to propagate the Kerberos database to other KDCs. If the kprop_script command is run on a slave KDC, it propagates the slave KDC's copy of the Kerberos database to other KDCs. The kprop_script command accepts a list of host names as arguments, separated by spaces, which denote the KDCs to propagate to.

When the kprop_script command is run, it creates a backup of the Kerberos database in the /var/krb5/slave_datatrans file and copies the file to the specified KDCs. The Kerberos database is locked until the propagation is finished.
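
For example, the following command, run on the master KDC, backs up the database and propagates it to two slave KDCs (the slave host names are placeholders):


# /usr/lib/krb5/kprop_script kdc2.example.com kdc3.example.com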

How to Back Up the Kerberos Database

  1. Become superuser on the master KDC.

  2. Back up the Kerberos database by using the dump command of the kdb5_util command.


    # /usr/sbin/kdb5_util dump [-verbose] [-d dbname] [filename [principals...]]

    -verbose

    Prints the name of each principal and policy that is being backed up. 

    dbname

    Defines the name of the database to back up. Note that “.db” is appended to whatever database name is specified, and you can specify an absolute path for the file. If the -d option is not specified, the default database name is /var/krb5/principal, which actually becomes /var/krb5/principal.db.

    filename

    Defines the file that is used to back up the database. You can specify an absolute path for the file. If you don't specify a file, the database is dumped to standard output. 

    principal

    Defines a list of one or more principals (separated by a space) to back up. You must use fully-qualified principal names. If you don't specify any principals, the entire database is backed up. 

Example—Backing Up the Kerberos Database

In the following example, the Kerberos database is backed up to a file called dumpfile. Because the -verbose option is specified, each principal is printed as it is backed up.


# kdb5_util dump -verbose dumpfile 
kadmin/kdc1.eng.example.com@ENG.EXAMPLE.COM 
krbtgt/eng.example.com@ENG.EXAMPLE.COM 
kadmin/history@ENG.EXAMPLE.COM 
pak/admin@ENG.EXAMPLE.COM 
pak@ENG.EXAMPLE.COM
changepw/kdc1.eng.example.com@ENG.EXAMPLE.COM

In the following example, the pak and pak/admin principals are backed up from the Kerberos database.


# kdb5_util dump -verbose dumpfile pak/admin@ENG.EXAMPLE.COM pak@ENG.EXAMPLE.COM
pak/admin@ENG.EXAMPLE.COM
pak@ENG.EXAMPLE.COM

How to Restore the Kerberos Database

  1. Become superuser on the master KDC.

  2. Restore the Kerberos database by using the load command of kdb5_util.


    # /usr/sbin/kdb5_util load [-verbose] [-d dbname] [-update] [filename] 

    -verbose

    Prints the name of each principal and policy that is being restored. 

    dbname

    Defines the name of the database to restore. Note that “.db” is appended to whatever database name is specified, and you can specify an absolute path for the file. If the -d option is not specified, the default database name is /var/krb5/principal, which actually becomes /var/krb5/principal.db.

    -update

    Updates the existing database. Otherwise, a new database is created or the existing database is overwritten. 

    filename

    Defines the file from which to restore the database. You can specify an absolute path for the file.  

Example—Restoring the Kerberos Database

In the following example, the dumpfile file is loaded into the database1.db database in the current directory. Since the -update option isn't specified, the restore creates a new database.


# kdb5_util load -d database1 dumpfile

How to Manually Propagate the Kerberos Database to the Slave KDCs

This procedure shows you how to propagate the Kerberos database by using the kprop command. Use this procedure if you need to synchronize a slave KDC with the master KDC outside the periodic cron job. And, unlike the kprop_script, you can use kprop to propagate just the current database backup without first making a new backup of the Kerberos database.

  1. Become superuser on the master KDC.

  2. (Optional) Back up the database by using the kdb5_util command.


    # /usr/sbin/kdb5_util dump /var/krb5/slave_datatrans
    
  3. Propagate the database to a slave KDC by using the kprop command.


    # /usr/lib/krb5/kprop -f /var/krb5/slave_datatrans slave_KDC
    

If you want to back up the database and propagate it to a slave KDC outside the periodic cron job, you can also use the kprop_script command as follows:


# /usr/lib/krb5/kprop_script slave_KDC

Setting Up Parallel Propagation

In most cases, the master KDC is used exclusively to propagate its Kerberos database to the slave KDCs. However, if your site has many slave KDCs, you might consider load-sharing the propagation process, known as parallel propagation.

Parallel propagation allows specific slave KDCs to share the propagation duties with the master KDC. This sharing of duties enables the propagation to be done faster and lightens the work for the master KDC.

For example, say your site has one master KDC and six slave KDCs (shown in Figure 15–2), where slave-1 through slave-3 consist of one logical grouping and slave-4 through slave-6 consist of another logical grouping. To set up parallel propagation, you could have the master KDC propagate the database to slave-1 and slave-4, and those KDC slaves could in turn propagate the database to the KDC slaves in their group.

Figure 15–2 Example of Parallel Propagation Configuration

Diagram shows a master KDC with two propagation slaves. Each propagation slave propagates to its slaves the master KDC database.

How to Set Up Parallel Propagation

The following is not a detailed step-by-step procedure, but a high-level list of configuration steps to enable parallel propagation.

  1. On the master KDC, change the kprop_script entry in its cron job to include arguments for only the KDC slaves that will perform the succeeding propagation (propagation slaves).

  2. On each propagation slave, add a kprop_script entry to its cron job, which must include arguments for the slaves to propagate. To successfully propagate in parallel, the cron job should be set up to run after the propagation slave is itself propagated with the new Kerberos database.


    Note –

    How long it will take for a propagation slave to be propagated depends on factors such as network bandwidth and the size of the database.


  3. On each slave KDC, set up the appropriate permissions to be propagated. This step is done by adding the host principal name of its propagating KDC to its kpropd.acl file.

Example—Setting Up Parallel Propagation

Using the example in Figure 15–2, the master KDC's kprop_script entry would look similar to the following:


0 3 * * * /usr/lib/krb5/kprop_script slave-1.example.com slave-4.example.com

The slave-1's kprop_script entry would look similar to the following:


0 4 * * * /usr/lib/krb5/kprop_script slave-2.example.com slave-3.example.com

Note that the propagation on the slave starts an hour after it is propagated by the master.
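
By the same pattern, slave-4's kprop_script entry would name the remaining slaves in its group from Figure 15–2:


0 4 * * * /usr/lib/krb5/kprop_script slave-5.example.com slave-6.example.com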

The kpropd.acl file on the propagation slaves would contain the following entry:


host/master.example.com@EXAMPLE.COM

The kpropd.acl file on the KDC slaves being propagated by slave-1 would contain the following entry:


host/slave-1.example.com@EXAMPLE.COM

Administering the Stash File

The stash file contains the master key for the Kerberos database. The stash file is created automatically when you create a Kerberos database. If the stash file gets corrupted, you can use the stash command of the kdb5_util utility to replace the corrupted file. The only time you should need to remove a stash file is after removing the Kerberos database with the destroy command of kdb5_util. Because the stash file isn't automatically removed with the database, you have to remove it to finish the cleanup.

How to Remove a Stash File

  1. Become superuser on the KDC that contains the stash file.

  2. Remove the stash file.


    # rm stash-file
    

    In this example, stash-file is the path to the stash file. By default, the stash file is located at /var/krb5/.k5.realm.

If you need to re-create the stash file, you can use the -f option of the kdb5_util command.
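
For example, the following sketch re-creates the stash file for the EXAMPLE.COM realm; the command prompts for the master key password. See the kdb5_util(1M) man page for the exact syntax on your release.


# /usr/sbin/kdb5_util stash -f /var/krb5/.k5.EXAMPLE.COM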

Increasing Security

Follow these steps to increase security on SEAM application servers and on KDC servers.

How to Restrict Access to KDC Servers

Both master KDC servers and slave KDC servers have copies of the KDC database stored locally. Restricting access to these servers so that the databases are secure is important to the overall security of the SEAM installation.

  1. Disable remote services in the /etc/inetd.conf file.

    To provide a secure KDC server, all nonessential network services should be disabled by commenting out the entry that starts the service in the /etc/inetd.conf file. In most circumstances, the only services that would need to run would be time and krb5_prop. In addition, any services that use loopback tli (ticlts, ticotsord, and ticots) can be left enabled. After you edit the file, it should look similar to the following (to shorten the example, many comments have been removed):


    kdc1 # cat /etc/inetd.conf
    #
    #ident  "@(#)inetd.conf 1.33    98/06/02 SMI"   /* SVr4.0 1.5   */
      .
      .
    #name     dgram   udp     wait    root    /usr/sbin/in.tnamed     in.tnamed
    #
    #shell    stream  tcp     nowait  root    /usr/sbin/in.rshd       in.rshd
    #login    stream  tcp     nowait  root    /usr/sbin/in.rlogind    in.rlogind
    #exec     stream  tcp     nowait  root    /usr/sbin/in.rexecd     in.rexecd
    #comsat   dgram   udp     wait    root    /usr/sbin/in.comsat     in.comsat
    #talk     dgram   udp     wait    root    /usr/sbin/in.talkd      in.talkd
    #
    #uucp     stream  tcp     nowait  root    /usr/sbin/in.uucpd      in.uucpd
    #
    #finger   stream  tcp     nowait  nobody  /usr/sbin/in.fingerd    in.fingerd
    #
    # Time service is used for clock synchronization.
    #
    time      stream  tcp     nowait  root    internal
    time      dgram   udp     wait    root    internal
    # 
      .
      .
    #
    100234/1  tli rpc/ticotsord wait  root    /usr/lib/gss/gssd     gssd 
    #dtspc    stream  tcp     nowait  root    /usr/dt/bin/dtspcd      /usr/dt/bin/dtspcd
    #100068/2-5 dgram rpc/udp wait    root    /usr/dt/bin/rpc.cmsd    rpc.cmsd
    100134/1 tli rpc/ticotsord wait   root    /usr/lib/ktkt_warnd kwarnd
    krb5_prop stream  tcp     nowait  root    /usr/lib/krb5/kpropd  kpropd

    Reboot the KDC server after the changes are made.

  2. Restrict access to the hardware that supports the KDC.

    In order to restrict physical access, make sure that the KDC server and its monitor are located in a secure facility. Users should not be able to access this server in any way.

  3. Store KDC database backups on local disks or on the KDC slaves.

    Make tape backups of your KDC only if the tapes are stored securely. Follow the same practice for copies of keytab files. It would be best to store these files on a local file system that is not shared with other systems. The storage file system can be on either the master KDC server or any of the slave KDCs.

Chapter 16 SEAM Error Messages and Troubleshooting

This chapter provides resolutions for error messages that you might receive when you use SEAM, as well as some troubleshooting tips for various problems. This is a list of the error messages and troubleshooting information in this chapter.

SEAM Error Messages

This section provides information about SEAM error messages, including why each error occurs and a way to fix it.

SEAM Administration Tool Error Messages


Unable to view the list of principals or policies; use the Name field.

Cause:

The admin principal that you logged in with does not have the list privilege (l) in the Kerberos ACL file (kadm5.acl), so you cannot view the principal list or policy list.

Solution:

You must enter the principal and policy names in the Name field to work on them, or you need to log on with a principal that has the appropriate privileges.


JNI: Java array creation failed


JNI: Java class lookup failed


JNI: Java field lookup failed


JNI: Java method lookup failed


JNI: Java object lookup failed


JNI: Java object field lookup failed


JNI: Java string access failed


JNI: Java string creation failed

Cause:

A serious problem exists with the Java Native Interface that is used by the SEAM Administration Tool (gkadmin).

Solution:

Exit gkadmin and restart it. If the problem persists, please report a bug.

Common SEAM Error Messages (A-M)

This section provides an alphabetical list (A-M) of common error messages for the SEAM commands, SEAM daemons, PAM framework, GSS interface, the NFS service, and the Kerberos library.


major_error minor_error gssapi error importing name

Cause:

An error occurred while a service name was being imported.

Solution:

Make sure that the service principal is in the host's keytab file.


Bad krb5 admin server hostname while initializing kadmin interface

Cause:

An invalid host name is configured for admin_server in the krb5.conf file.

Solution:

Make sure that the correct host name for the master KDC is specified on the admin_server line in the krb5.conf file.


Cannot contact any KDC for requested realm

Cause:

No KDC responded in the requested realm.

Solution:

Make sure that at least one KDC (either the master or slave) is reachable or that the krb5kdc daemon is running on the KDCs. Check the /etc/krb5/krb5.conf file for the list of configured KDCs (kdc = kdc_name).


Cannot determine realm for host

Cause:

Kerberos cannot determine the realm name for the host.

Solution:

Make sure that there is a default realm name, or that the domain name mappings are set up in the Kerberos configuration file (krb5.conf).


Cannot find KDC for requested realm

Cause:

No KDC was found in the requested realm.

Solution:

Make sure that the Kerberos configuration file (krb5.conf) specifies a KDC in the realm section.


cannot initialize realm realm_name

Cause:

The KDC might not have a stash file.

Solution:

Make sure that the KDC has a stash file. If not, create a stash file by using the kdb5_util command, and try running the krb5kdc command again. The easiest way to start krb5kdc is to run the /etc/init.d/kdc script.


Cannot resolve KDC for requested realm

Cause:

Kerberos cannot determine any KDC for the realm.

Solution:

Make sure that the Kerberos configuration file (krb5.conf) specifies a KDC in the realm section.


Cannot reuse password

Cause:

The password that you entered has been used before by this principal.

Solution:

Choose a password that has not been chosen before, at least not within the number of previous passwords that are kept in the KDC database for each principal (this number is set by the principal's policy).


Can't get forwarded credentials

Cause:

Credential forwarding could not be established.

Solution:

Make sure that the principal has forwardable credentials.


Can't open/find Kerberos configuration file

Cause:

The Kerberos configuration file (krb5.conf) was unavailable.

Solution:

Make sure that the krb5.conf file is available in the correct location and has the correct permissions. This file should be writable by root and readable by everyone else.


Client/server realm mismatch in initial ticket request

Cause:

A realm mismatch between the client and server occurred in the initial ticket request.

Solution:

Make sure that the server you are communicating with is in the same realm as the client, or that the realm configurations are correct.


Client or server has a null key

Cause:

The principal has a null key.

Solution:

Modify the principal to have a non-null key by using the cpw command of kadmin.


Communication failure with server while initializing kadmin interface

Cause:

The host that was entered for the admin server, also called the master KDC, did not have the kadmind daemon running.

Solution:

Make sure that you specified the correct host name for the master KDC. If you specified the correct host name, make sure that kadmind is running on the master KDC that you specified.


Credentials cache file permissions incorrect

Cause:

You do not have the appropriate read or write permissions on the credentials cache (/tmp/krb5cc_uid).

Solution:

Make sure that you have read and write permissions on the credentials cache.


Credentials cache I/O operation failed XXX

Cause:

Kerberos had a problem writing to the system's credentials cache (/tmp/krb5cc_uid).

Solution:

Make sure that the credentials cache has not been removed, and that there is space left on the device by using the df command.


Decrypt integrity check failed

Cause:

You might have an invalid ticket.

Solution:
  1. Make sure that your credentials are valid. Destroy your tickets with kdestroy and create new tickets with kinit.

  2. Make sure that the target host has a keytab file with the correct version of the service key. Use kadmin to view the key version number of the service principal (for example, host/FQDN_hostname) in the Kerberos database. Also, use klist -k on the target host to make sure that it has the same key version number.


df: cannot statvfs filesystem: Invalid argument

Cause:

The df command cannot access the Kerberized NFS file system, which is currently mounted, to generate its report, because you no longer have the appropriate root credentials. Destroying your credentials for a mounted Kerberized file system does not automatically unmount the file system.

Solution:

You must create new root credentials to access the Kerberized file system. If you no longer require access to the Kerberized file system, unmount the file system.


failed to obtain credentials cache

Cause:

During kadmin initialization, a failure occurred when kadmin tried to obtain credentials for the admin principal.

Solution:

Make sure that you used the correct principal and password when you executed kadmin.


Field is too long for this implementation

Cause:

The message size that was being sent by a Kerberized application was too long. The maximum message size that can be handled by Kerberos is 65535 bytes. In addition, there are limits on individual fields within a protocol message that is sent by Kerberos.

Solution:

Make sure that your Kerberized applications are sending valid message sizes.


GSS-API (or Kerberos) error

Cause:

This message is a generic GSS-API or Kerberos error message and can be caused by several different problems.

Solution:

Check the /etc/krb5/kdc.log file to find the more specific GSS-API error message that was logged when this error occurred.


Hostname cannot be canonicalized

Cause:

Kerberos cannot make the host name fully qualified.

Solution:

Make sure that the host name is defined in DNS and that the host-name-to-address and address-to-host-name mappings are consistent.


Illegal cross-realm ticket

Cause:

The ticket that was sent did not have the correct cross-realm path. The realms might not have the correct trust relationships set up.

Solution:

Make sure that the realms you are using have the correct trust relationships.


Improper format of Kerberos configuration file

Cause:

The Kerberos configuration file (krb5.conf) has invalid entries.

Solution:

Make sure that all the relations in the krb5.conf file are followed by the “=” sign and a value. Also, verify that the brackets are present in pairs for each subsection.


Inappropriate type of checksum in message

Cause:

The message contained an invalid checksum type.

Solution:

Check which valid checksum types are specified in the krb5.conf and kdc.conf files.


Incorrect net address

Cause:

There was a mismatch in the network address. The network address in the ticket that was being forwarded was different from the network address where the ticket was processed. This message might occur when tickets are being forwarded.

Solution:

Make sure that the network addresses are correct. Destroy your tickets with kdestroy, and create new tickets with kinit.


Invalid flag for file lock mode

Cause:

An internal Kerberos error occurred.

Solution:

Please report a bug.


Invalid message type specified for encoding

Cause:

Kerberos could not recognize the message type that was sent by the Kerberized application.

Solution:

If you are using a Kerberized application that was developed by your site or a vendor, make sure that it is using Kerberos correctly.


Invalid number of character classes

Cause:

The password that you entered for the principal does not contain enough password classes, as enforced by the principal's policy.

Solution:

Make sure that you enter a password with the minimum number of password classes that the policy requires.


KADM err: Memory allocation failure

Cause:

There is not enough memory to run kadmin.

Solution:

Free up memory and try running kadmin again.


KDC can't fulfill requested option

Cause:

The KDC did not allow the requested option. A possible problem might be that postdating or forwardable options were being requested, and the KDC did not allow it. Another problem might be that you requested the renewal of a TGT, but you didn't have a renewable TGT.

Solution:

Determine whether you are requesting an option that the KDC does not allow, or whether you are requesting a type of ticket that is not available.


KDC policy rejects request

Cause:

The KDC policy did not allow the request. For example, the request to the KDC did not include an IP address, or forwarding was requested but the KDC did not allow it.

Solution:

Make sure that you are using kinit with the correct options. If necessary, modify the policy that is associated with the principal or change the principal's attributes to allow the request. You can modify the policy or principal by using kadmin.


KDC reply did not match expectations

Cause:

The KDC reply did not contain the expected principal name, or other values in the response were incorrect.

Solution:

Make sure that the KDC you are communicating with complies with RFC 1510, or that the request you are sending is a Kerberos V5 request, or that the KDC is available.


Key table entry not found

Cause:

There is no entry for the service principal in the network application server's keytab file.

Solution:

Add the appropriate service principal to the server's keytab file so that it can provide the Kerberized service.


Key version number for principal in key table is incorrect

Cause:

A principal's key version is different in the keytab file and in the Kerberos database. Either a service's key has been changed, or you might be using an old service ticket.

Solution:

If a service's key has been changed (for example, by using kadmin), you need to extract the new key and store it in the host's keytab file where the service is running.

Alternately, you might be using an old service ticket that has an older key. You might want to run the kdestroy command and then the kinit command again.


login: load_modules: can not open module /usr/lib/security/pam_krb5.so.1

Cause:

Either the Kerberos PAM module is missing or it is not a valid executable binary.

Solution:

Make sure that the Kerberos PAM module is in the /usr/lib/security directory and that it is a valid executable binary. Also, make sure that the /etc/pam.conf file contains the correct path to pam_krb5.so.1.


Looping detected inside krb5_get_in_tkt

Cause:

Kerberos made several attempts to get the initial tickets but failed.

Solution:

Make sure that at least one KDC is responding to authentication requests.


Master key does not match database

Cause:

The loaded database dump was not created from a database that contains the master key, which is located in /var/krb5/.k5.REALM.

Solution:

Make sure that the master key in the loaded database dump matches the master key that is located in /var/krb5/.k5.REALM.


Matching credential not found

Cause:

The matching credential for your request was not found. Your request requires credentials that are unavailable in the credentials cache.

Solution:

Destroy your tickets with kdestroy and create new tickets with kinit.


Message out of order

Cause:

Messages that were sent using sequential-order privacy arrived out of order. Some messages might have been lost in transit.

Solution:

You should reinitialize the Kerberos session.


Message stream modified

Cause:

There was a mismatch between the computed checksum and the message checksum. The message might have been modified while in transit, which can indicate a security leak.

Solution:

Make sure that the messages are being sent across the network correctly. Since this message can also indicate the possible tampering of messages while they are being sent, destroy your tickets using kdestroy and reinitialize the Kerberos services that you are using.

Common SEAM Error Messages (N-Z)

This section provides an alphabetical list (N-Z) of common error messages for the SEAM commands, SEAM daemons, PAM framework, GSS interface, the NFS service, and the Kerberos library.


No credentials cache file found

Cause:

Kerberos could not find the credentials cache (/tmp/krb5cc_uid).

Solution:

Make sure that the credential file exists and is readable. If it isn't, try running the kinit command again.


Operation requires “privilege” privilege

Cause:

The admin principal that was being used does not have the appropriate privilege configured in the kadm5.acl file.

Solution:

Use a principal that has the appropriate privileges. Or, configure the principal that was being used to have the appropriate privileges by modifying the kadm5.acl file. Usually, a principal with “/admin” as part of its name has the appropriate privileges.


PAM-KRB5: Kerberos V5 authentication failed: password incorrect

Cause:

Your UNIX password and Kerberos password are different. Most non-Kerberized commands, such as login, are set up through PAM to automatically authenticate with Kerberos by using the same password that you specified for your UNIX password. If your passwords are different, the Kerberos authentication fails.

Solution:

You must enter your Kerberos password when prompted.


Password is in the password dictionary

Cause:

The password that you entered is in a password dictionary that is being used. Your password is not a good choice for a password.

Solution:

Choose a password that has a mix of password classes.


Permission denied in replay cache code

Cause:

The system's replay cache could not be opened. The server might have been first run under a user ID that is different from your current user ID.

Solution:

Make sure that the replay cache has the appropriate permissions. The replay cache is stored on the host where the Kerberized server application is running (/usr/tmp/rc_service_name). Instead of changing the permissions on the current replay cache, you can also remove the replay cache before you run the Kerberized server under a different user ID.


Protocol version mismatch

Cause:

Most likely, a Kerberos V4 request was sent to the KDC. SEAM supports only the Kerberos V5 protocol.

Solution:

Make sure that your applications are using the Kerberos V5 protocol.


Request is a replay

Cause:

The request has already been sent to this server and processed. The tickets might have been stolen, and someone else is trying to reuse the tickets.

Solution:

Wait for a few minutes and reissue the request.


Requested principal and ticket don't match

Cause:

The service principal that you are connecting to and the service ticket that you have do not match.

Solution:

Make sure that DNS is functioning properly. If you are using another vendor's software, make sure that the software is using principal names correctly.
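For example, you might confirm that forward and reverse name-service lookups for the server agree. The host name and address below are hypothetical; both lookups should resolve to the same fully qualified host name that appears in the service principal.

% getent hosts boston.example.com
% getent hosts 192.168.1.10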


Requested protocol version not supported

Cause:

Most likely, a Kerberos V4 request was sent to the KDC. SEAM supports only the Kerberos V5 protocol.

Solution:

Make sure that your applications are using the Kerberos V5 protocol.


Required parameters in krb5.conf missing while initializing kadmin interface

Cause:

There is a missing parameter (such as the admin_server parameter) in the krb5.conf file.

Solution:

Determine which parameter is missing and add it to the krb5.conf file.


Server rejected authentication (during sendauth exchange)

Cause:

The server that you are trying to communicate with rejected the authentication. Most often this error occurs during Kerberos database propagation. Some common causes might be problems with the kpropd.acl file, DNS, or the keytab file.

Solution:

If you get this error when you are running applications other than kprop, investigate whether the server's keytab file is correct.


Set gss service nfs@<host> failed. Check nfs service credential.

Cause:

This message is generated by syslog after a share command has failed with an “invalid argument” message. The most likely cause of this message is that either there is no keytab file or there is no NFS service principal in the keytab file.

Solution:

To isolate the problem, run klist -k to see if the keytab file exists and if there is an NFS service principal for the host in the keytab file.
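For example, on a hypothetical NFS server, the check might look like the following; the exact keytab entries depend on your configuration.

# klist -k
Keytab name: FILE:/etc/krb5/krb5.keytab
KVNO Principal
---- --------------------------------------------------------------
   3 nfs/boston.example.com@EXAMPLE.COM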


The ticket isn't for us


Ticket/authenticator don't match

Cause:

There was a mismatch between the ticket and the authenticator. The principal name in the request might not have matched the service principal's name, because the ticket was sent with the fully qualified domain name (FQDN) of the principal while the service expected a non-FQDN name, or vice versa.

Solution:

If you get this error when you are running applications other than kprop, investigate whether the server's keytab file is correct.


Ticket expired

Cause:

Your tickets have expired.

Solution:

Destroy your tickets with kdestroy and create new tickets with kinit.


Ticket is ineligible for postdating

Cause:

The principal does not allow its tickets to be postdated.

Solution:

Modify the principal with kadmin to allow postdating.


Ticket not yet valid

Cause:

The postdated ticket is not valid yet.

Solution:

Create new tickets with the correct date, or wait until the current tickets are valid.


Truncated input file detected

Cause:

The database dump file that was being used in the operation is not a complete dump file.

Solution:

Create the dump file again, or use a different database dump file.


Wrong principal in request

Cause:

There was an invalid principal name in the ticket. This error might indicate a DNS or FQDN problem.

Solution:

Make sure that the principal of the service matches the principal in the ticket.

SEAM Troubleshooting

This section provides troubleshooting information for the SEAM software.

Problems Mounting a Kerberized NFS File System

In this example, the setup allows one reference to the different interfaces and allows a single service principal instead of three service principals in the server's keytab file.

Problems Authenticating as root

If authentication fails when you try to become superuser on your system and you have already added the root principal to your host's keytab file, there are two potential problems to check. First, make sure that the root principal in the keytab file has a fully-qualified name as its instance. If it does, check the /etc/resolv.conf file to make sure that the system is correctly set up as a DNS client.
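For example, you might verify both conditions with commands similar to the following. The host name, realm, addresses, and key version number shown are hypothetical.

# klist -k | grep root
   4 root/boston.example.com@EXAMPLE.COM
# cat /etc/resolv.conf
domain example.com
nameserver 192.168.1.1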

Chapter 17 Administering Principals and Policies (Tasks)

This chapter provides procedures for managing principals and the policies that are associated with them. This chapter also shows how to manage a host's keytab file.

This chapter should be used by anyone who needs to administer principals and policies. Before you use this chapter, you should be familiar with principals and policies, including any planning considerations. Refer to Chapter 13, Introduction to SEAM and Chapter 14, Planning for SEAM, respectively.

This is a list of the information in this chapter.

Ways to Administer Principals and Policies

The Kerberos database on the master KDC contains all of your realm's Kerberos principals, their passwords, policies, and other administrative information. To create and delete principals, and to modify their attributes, you can use the kadmin or gkadmin commands.

The kadmin command provides an interactive command-line interface that enables you to maintain Kerberos principals, policies, and keytab files. There are two versions of the kadmin command:

Other than kadmin using Kerberos to authenticate the user, the capabilities of the two versions are identical. The local version is necessary to enable you to set up enough of the database so that you can use the remote version.

Also, SEAM provides the SEAM Administration Tool, gkadmin, which is an interactive graphical user interface (GUI) that provides essentially the same capabilities as the kadmin command. See SEAM Administration Tool for more information.

SEAM Administration Tool

The SEAM Administration Tool is an interactive graphical user interface (GUI) that enables you to maintain Kerberos principals and policies. This tool provides much the same capabilities as the kadmin command. However, this tool does not support the management of keytab files. You must use the kadmin command to administer keytab files, which is described in Administering Keytab Files.

Similar to the kadmin command, the SEAM Tool uses Kerberos authentication and encrypted RPC to operate securely from anywhere on the network. The SEAM Tool enables you to do the following:

The SEAM Tool also provides context-sensitive help and general online help.

The following task maps provide pointers to the various tasks that you can do with the SEAM Tool:

Also, go to SEAM Tool Panel Descriptions for descriptions of all the principal attributes and policy attributes that you can either specify or view in the SEAM Tool.

Command-Line Equivalents of the SEAM Tool

This section lists the kadmin commands that provide the same capabilities as the SEAM Tool. These commands can be used without running an X Window system. Even though most procedures in this chapter use the SEAM Tool, many procedures also provide corresponding examples that use the command-line equivalents.

Table 17–1 Command-Line Equivalents of the SEAM Tool

SEAM Tool Procedure 

Equivalent kadmin Command

View the list of principals 

list_principals or get_principals

View a principal's attributes 

get_principal

Create a new principal 

add_principal

Duplicate a principal 

No command-line equivalent 

Modify a principal 

modify_principal or change_password

Delete a principal 

delete_principal

Set up defaults for creating new principals 

No command-line equivalent 

View the list of policies 

list_policies or get_policies

View a policy's attributes 

get_policy

Create a new policy 

add_policy

Duplicate a policy 

No command-line equivalent 

Modify a policy 

modify_policy

Delete a policy 

delete_policy

Files Modified by the SEAM Tool

The only file that the SEAM Tool modifies is the $HOME/.gkadmin file. This file contains the default values for creating new principals. You can update this file by choosing Properties from the Edit menu.

Print and Online Help Features of the SEAM Tool

The SEAM Tool provides both print features and online help features. From the Print menu, you can send the following to a printer or a file:

From the Help menu, you can access context-sensitive help and general help. When you choose Context-Sensitive Help from the Help menu, the Context-Sensitive Help window is displayed and the tool is switched to help mode. In help mode, when you click on any fields, labels, or buttons on the window, help on that item is displayed in the Help window. To switch back to the tool's normal mode, click Dismiss in the Help window.

You can also choose Help Contents, which opens an HTML browser that provides pointers to the general overview and task information that is provided in this chapter.

Working With Large Lists in the SEAM Tool

As your site starts to accumulate a large number of principals and policies, the time that the SEAM Tool takes to load and display the principal and policy lists becomes increasingly long. Thus, your productivity with the tool decreases. There are several ways to work around this problem.

First, you can completely eliminate the time to load the lists by not having the SEAM Tool load the lists. You can set this option by choosing Properties from the Edit menu, and unchecking the Show Lists field. Of course, when the tool doesn't load the lists, it can't display the lists, and you can no longer use the list panels to select principals or policies. Instead, you must type a principal or policy name in the new Name field that is provided, then select the operation that you want to perform on it. In effect, typing a name is equivalent to selecting an item from the list.

Another way to work with large lists is to cache them. In fact, caching the lists for a limited time is set as the default behavior for the SEAM Tool. The SEAM Tool must still initially load the lists into the cache, but after that, the tool can use the cache rather than retrieve the lists again. This option eliminates the need to keep loading the lists from the server, which is what takes so long.

You can set list caching by choosing Properties from the Edit menu. There are two cache settings. You can choose to cache the list forever, or you can specify a time limit when the tool must reload the lists from the server into the cache.

Caching the lists still enables you to use the list panels to select principals and policies, so caching does not affect how you use the SEAM Tool the way the first option does. Also, even though caching doesn't show you changes made by other administrators, you can still see the latest list information based on your own changes, because your changes update the lists both on the server and in the cache. And, if you want to see other administrators' changes and get the latest copy of the lists, you can use the Refresh menu whenever you want to refresh the cache from the server.

How to Start the SEAM Tool

  1. Start the SEAM Tool by using the gkadmin command.


    $ /usr/sbin/gkadmin
    

    The SEAM Administration Login window is displayed.

    Dialog box titled SEAM Administration Login shows four fields for Principal Name, Password, Realm, and Master KDC. Shows OK and Start Over buttons.
  2. If you don't want to use the default values, specify new default values.

    The window automatically fills in with default values. The default principal name is determined by taking your current identity from the USER environment variable and appending /admin to it (username/admin). The default Realm and Master KDC fields are selected from the /etc/krb5/krb5.conf file. If you ever want to retrieve the default values, click Start Over.


    Note –

    The administration operations that each Principal Name can perform are dictated by the Kerberos ACL file, /etc/krb5/kadm5.acl. For information about limited privileges, see Using the SEAM Tool With Limited Kerberos Administration Privileges.


  3. Enter a password for the specified principal name.

  4. Click OK.

    The following window is displayed.

    Dialog box titled Seam Administration Tool shows a list of principals and a list filter. Shows Modify, Create New, Delete, and Duplicate buttons.

Administering Principals

This section provides the step-by-step instructions to administer principals with the SEAM Tool. This section also provides examples of equivalent command lines, when available.

Administering Principals (Task Map)

Task 

Description 

For Instructions 

View the list of principals 

View the list of principals by clicking the Principals tab. 

How to View the List of Principals

View a principal's attributes 

View a principal's attributes by selecting the Principal in the Principal List, then clicking the Modify button. 

How to View a Principal's Attributes

Create a new principal 

Create a new principal by clicking the Create New button in the Principal List panel. 

How to Create a New Principal

Duplicate a principal 

Duplicate a principal by selecting the principal to duplicate in the Principal List, then clicking the Duplicate button. 

How to Duplicate a Principal

Modify a principal 

Modify a principal by selecting the principal to modify in the Principal List, then clicking the Modify button. 

Note that you cannot modify a principal's name. To rename a principal, you must duplicate the principal, specify a new name for it, save it, and then delete the old principal. 

How to Modify a Principal

Delete a principal 

Delete a principal by selecting the principal to delete in the Principal List, then clicking the Delete button. 

How to Delete a Principal

Set up defaults for creating new principals 

Set up defaults for creating new principals by choosing Properties from the Edit menu. 

How to Set Up Defaults for Creating New Principals

Modify the Kerberos administration privileges (kadm5.acl File)

Command-line only. The Kerberos administration privileges determine what operations a principal can perform on the Kerberos database, such as add and modify. You need to edit the /etc/krb5/kadm5.acl file to modify the Kerberos administration privileges for each principal.

How to Modify the Kerberos Administration Privileges

Automating the Creation of New Principals

Even though the SEAM Tool provides ease of use, it doesn't provide a way to automate the creation of new principals. Automation is especially useful if you need to add 10 or even 100 new principals in a short time. However, by using the kadmin.local command in a Bourne shell script, you can automate this task.

The following shell script line is an example of how to automate the creation of new principals:

sed -e 's/^\(.*\)$/ank +needchange -pw \1 \1/' < princnames |
        time /usr/sbin/kadmin.local> /dev/null

This example is split over two lines for readability. The script reads in a file called princnames that contains principal names and their passwords, and adds them to the Kerberos database. You would have to create the princnames file, which contains a principal name and its password on each line, separated by one or more spaces. The +needchange option configures the principal so that the user is prompted for a new password the first time that the user logs in with the principal. This practice helps to ensure that the passwords in the princnames file are not a security risk.
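For example, a princnames file for this script might contain entries like the following. The principal names and passwords shown here are hypothetical.

mre    t7Rq!f2s
sarah  9kWd#pr4
tjones x2bTun!9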

You can build more elaborate scripts. For example, your script could use the information in the name service to obtain the list of user names for the principal names. What you do and how you do it is determined by your site needs and your scripting expertise.

How to View the List of Principals

An example of the command-line equivalent follows this procedure.

  1. If necessary, start the SEAM Tool.

    See How to Start the SEAM Tool for details.

  2. Click the Principals tab.

    The list of principals is displayed.

    Dialog box titled Seam Administration Tool shows a list of principals and a list filter. Shows Modify, Create New, Delete, and Duplicate buttons.
  3. Display a specific principal or a sublist of principals.

    Type a filter string in the Filter field, and press Return. If the filter succeeds, the list of principals that match the filter is displayed.

    The filter string must consist of one or more characters. Because the filter mechanism is case sensitive, you need to use the appropriate uppercase and lowercase letters for the filter. For example, if you type the filter string ge, the filter mechanism displays only the principals with the ge string in them (for example, george or edge).

    If you want to display the entire list of principals, click Clear Filter.

Example—Viewing the List of Principals (Command Line)

In the following example, the list_principals command of kadmin is used to list all the principals that match test*. Wildcards can be used with the list_principals command.


kadmin: list_principals test*
test1@EXAMPLE.COM
test2@EXAMPLE.COM
kadmin: quit

How to View a Principal's Attributes

An example of the command-line equivalent follows this procedure.

  1. If necessary, start the SEAM Tool.

    See How to Start the SEAM Tool for details.

  2. Click the Principals tab.

  3. Select the principal in the list that you want to view, then click Modify.

    The Principal Basics panel that contains some of the principal's attributes is displayed.

  4. Continue to click Next to view all the principal's attributes.

    Three windows contain attribute information. Choose Context-Sensitive Help from the Help menu to get information about the various attributes in each window. Or, for all the principal attribute descriptions, go to SEAM Tool Panel Descriptions.

  5. When you are finished viewing, click Cancel.

Example—Viewing a Principal's Attributes

The following example shows the first window when you are viewing the jdb/admin principal.

Dialog box titled SEAM Administration Tool shows account data for the jdb/admin principal.  Shows account expiration date and comments.

Example—Viewing a Principal's Attributes (Command Line)

In the following example, the get_principal command of kadmin is used to view the attributes of the jdb/admin principal.


kadmin: getprinc jdb/admin
Principal: jdb/admin@EXAMPLE.COM
Expiration date: Fri Aug 25 17:19:05 PDT 2000
Last password change: [never]
Password expiration date: Wed Apr 14 11:53:10 PDT 1999
Maximum ticket life: 1 day 16:00:00
Maximum renewable life: 1 day 16:00:00
Last modified: Thu Jan 14 11:54:09 PST 1999 (admin/admin@EXAMPLE.COM)
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 1
Key: vno 1, DES cbc mode with CRC-32, no salt
Attributes: REQUIRES_HW_AUTH
Policy: [none]
kadmin: quit

How to Create a New Principal

An example of the command-line equivalent follows this procedure.

  1. If necessary, start the SEAM Tool.

    See How to Start the SEAM Tool for details.


    Note –

    If you are creating a new principal that might need a new policy, you should create the new policy before you create the new principal. Go to How to Create a New Policy.


  2. Click the Principals tab.

  3. Click New.

    The Principal Basics panel that contains some attributes for a principal is displayed.

  4. Specify a principal name and a password.

    Both the principal name and password are mandatory.

  5. Specify values for the principal's attributes, and continue to click Next to specify more attributes.

    Three windows contain attribute information. Choose Context-Sensitive Help from the Help menu to get information about the various attributes in each window. Or, for all the principal attribute descriptions, go to SEAM Tool Panel Descriptions.

  6. Click Save to save the principal, or click Done on the last panel.

  7. If needed, set up Kerberos administration privileges for the new principal in the /etc/krb5/kadm5.acl file.

    See How to Modify the Kerberos Administration Privileges for more details.

Example—Creating a New Principal

The following example shows the Principal Basics panel when a new principal called pak is created. The policy is set to testuser.

Dialog box titled SEAM Administration Tool shows account data for the pak principal.  Shows password, account expiration date, and testuser policy.

Example—Creating a New Principal (Command Line)

In the following example, the add_principal command of kadmin is used to create a new principal called pak. The principal's policy is set to testuser.


kadmin: add_principal -policy testuser pak
Enter password for principal "pak@EXAMPLE.COM": <type the password>
Re-enter password for principal "pak@EXAMPLE.COM": <type the password again>
Principal "pak@EXAMPLE.COM" created.
kadmin: quit

How to Duplicate a Principal

This procedure explains how to use all or some of the attributes of an existing principal to create a new principal. No command-line equivalent exists for this procedure.

  1. If necessary, start the SEAM Tool.

    See How to Start the SEAM Tool for details.

  2. Click the Principals tab.

  3. Select the principal in the list that you want to duplicate, then click Duplicate.

    The Principal Basics panel is displayed. All the attributes of the selected principal are duplicated except for the Principal Name and Password fields, which are empty.

  4. Specify a principal name and a password.

    Both the principal name and the password are mandatory. To make an exact duplicate of the principal you selected, click Save and skip to Step 7.

  5. Specify different values for the principal's attributes, and continue to click Next to specify more attributes.

    Three windows contain attribute information. Choose Context-Sensitive Help from the Help menu to get information about the various attributes in each window. Or, for all the principal attribute descriptions, go to SEAM Tool Panel Descriptions.

  6. Click Save to save the principal, or click Done on the last panel.

  7. If needed, set up Kerberos administration privileges for the principal in the /etc/krb5/kadm5.acl file.

    See How to Modify the Kerberos Administration Privileges for more details.

How to Modify a Principal

An example of the command-line equivalent follows this procedure.

  1. If necessary, start the SEAM Tool.

    See How to Start the SEAM Tool for details.

  2. Click the Principals tab.

  3. Select the principal in the list that you want to modify, then click Modify.

    The Principal Basics panel that contains some of the attributes for the principal is displayed.

  4. Modify the principal's attributes, and continue to click Next to modify more attributes.

    Three windows contain attribute information. Choose Context-Sensitive Help from the Help menu to get information about the various attributes in each window. Or, for all the principal attribute descriptions, go to SEAM Tool Panel Descriptions.


    Note –

    You cannot modify a principal's name. To rename a principal, you must duplicate the principal, specify a new name for it, save it, and then delete the old principal.


  5. Click Save to save the principal, or click Done on the last panel.

  6. Modify the Kerberos administration privileges for the principal in the /etc/krb5/kadm5.acl file.

    See How to Modify the Kerberos Administration Privileges for more details.

Example—Modifying a Principal's Password (Command Line)

In the following example, the change_password command of kadmin is used to modify the password for the jdb principal. The change_password command does not let you change the password to a password that is in the principal's password history.


kadmin: change_password jdb
Enter password for principal "jdb": <type the new password>
Re-enter password for principal "jdb": <type the password again>
Password for "jdb@EXAMPLE.COM" changed.
kadmin: quit

To modify other attributes for a principal, you must use the modify_principal command of kadmin.

How to Delete a Principal

An example of the command-line equivalent follows this procedure.

  1. If necessary, start the SEAM Tool.

    See How to Start the SEAM Tool for details.

  2. Click the Principals tab.

  3. Select the principal in the list that you want to delete, then click Delete.

    After you confirm the deletion, the principal is deleted.

  4. Remove the principal from the Kerberos access control list (ACL) file, /etc/krb5/kadm5.acl.

    See How to Modify the Kerberos Administration Privileges for more details.

Example—Deleting a Principal (Command Line)

In the following example, the delete_principal command of kadmin is used to delete the pak principal.


kadmin: delete_principal pak
Are you sure you want to delete the principal "pak@EXAMPLE.COM"? (yes/no): yes
Principal "pak@EXAMPLE.COM" deleted.
Make sure that you have removed this principal from all ACLs before reusing.
kadmin: quit

How to Set Up Defaults for Creating New Principals

No command-line equivalent exists for this procedure.

  1. If necessary, start the SEAM Tool.

    See How to Start the SEAM Tool for details.

  2. Choose Properties from the Edit Menu.

    The Properties window is displayed.

    Dialog box titled Properties shows defaults for new principals and list controls. Defaults for principals cover security and other options.
  3. Select the defaults that you want when you create new principals.

    Choose Context-Sensitive Help from the Help menu for information about the various attributes in each window.

  4. Click Save.

How to Modify the Kerberos Administration Privileges

Even though your site probably has many user principals, you usually want only a few users to be able to administer the Kerberos database. Privileges to administer the Kerberos database are determined by the Kerberos access control list (ACL) file, kadm5.acl. The kadm5.acl file enables you to allow or disallow privileges for individual principals. Or, you can use the '*' wildcard in the principal name to specify privileges for groups of principals.

  1. Become superuser on the master KDC.

  2. Edit the /etc/krb5/kadm5.acl file.

    An entry in the kadm5.acl file must have the following format:


    principal   privileges  [principal-target]

    principal

    Specifies the principal to which the privileges are granted. Any part of the principal name can include the '*' wildcard, which is useful for providing the same privileges for a group of principals. For example, if you want to specify all principals with the admin instance, you would use */admin@realm.

    Note that a common use of an admin instance is to grant separate privileges (such as administration access to the Kerberos database) to a separate Kerberos principal. For example, the user jdb might have a principal for his administrative use, called jdb/admin. This way, the user jdb obtains jdb/admin tickets only when he or she actually needs to use those privileges.

    privileges

    Specifies which operations can or cannot be performed by the principal. This field consists of a string of one or more of the following list of characters or their uppercase counterparts. If the character is uppercase (or not specified), then the operation is disallowed. If the character is lowercase, then the operation is permitted. 

     

    a

    [Dis]allows the addition of principals or policies. 

     

    d

    [Dis]allows the deletion of principals or policies. 

     

    m

    [Dis]allows the modification of principals or policies. 

     

    c

    [Dis]allows the changing of passwords for principals. 

     

    i

    [Dis]allows inquiries to the Kerberos database. 

     

    l

    [Dis]allows the listing of principals or policies in the Kerberos database. 

     

    x or *

    Allows all privileges (admcil).

    principal-target

    When a principal is specified in this field, the privileges apply to the principal only when the principal operates on the principal-target. Any part of the principal name can include the '*' wildcard, which is useful for grouping principals.

Example—Modifying the Kerberos Administration Privileges

The following entry in the kadm5.acl file gives any principal in the EXAMPLE.COM realm with the admin instance all the privileges on the Kerberos database.


*/admin@EXAMPLE.COM *

The following entry in the kadm5.acl file gives the jdb@EXAMPLE.COM principal the privilege to add, list, and inquire about any principal that has the root instance.


jdb@EXAMPLE.COM ali */root@EXAMPLE.COM

Administering Policies

This section provides step-by-step instructions to administer policies with the SEAM Tool. This section also provides examples of equivalent command lines, when available.

Administering Policies (Task Map)

Task 

Description 

For Instructions 

View the list of policies 

View the list of policies by clicking the Policies tab. 

How to View the List of Policies

View a policy's attributes 

View a policy's attributes by selecting the policy in the Policy List, then clicking the Modify button. 

How to View a Policy's Attributes

Create a new policy 

Create a new policy by clicking the Create New button in the Policy List panel. 

How to Create a New Policy

Duplicate a policy 

Duplicate a policy by selecting the policy to duplicate in the Policy List, then clicking the Duplicate button. 

How to Duplicate a Policy

Modify a policy 

Modify a policy by selecting the policy to modify in the Policy List, then clicking the Modify button. 

Note that you cannot modify a policy's name. To rename a policy, you must duplicate the policy, specify a new name for it, save it, and then delete the old policy. 

How to Modify a Policy

Delete a policy 

Delete a policy by selecting the policy to delete in the Policy List, then clicking the Delete button. 

How to Delete a Policy

How to View the List of Policies

An example of the command-line equivalent follows this procedure.

  1. If necessary, start the SEAM Tool.

    See How to Start the SEAM Tool for details.

  2. Click the Policies tab.

    The list of policies is displayed.

    Dialog box titled SEAM Administration Tool shows a list of policies and a policy filter. Shows Modify, Create New, Delete, and Duplicate buttons.
  3. Display a specific policy or a sublist of policies.

    Type a filter string in the Filter field, and press Return. If the filter succeeds, the list of policies that match the filter is displayed.

    The filter string must consist of one or more characters. Because the filter mechanism is case sensitive, you need to use the appropriate uppercase and lowercase letters for the filter. For example, if you type the filter string ge, the filter mechanism displays only the policies with the ge string in them (for example, george or edge).

    If you want to display the entire list of policies, click Clear Filter.

Example—Viewing the List of Policies (Command Line)

In the following example, the list_policies command of kadmin is used to list all the policies that match *user*. Wildcards can be used with the list_policies command.


kadmin: list_policies *user*
testuser
enguser
kadmin: quit

How to View a Policy's Attributes

An example of the command-line equivalent follows this procedure.

  1. If necessary, start the SEAM Tool.

    See How to Start the SEAM Tool for details.

  2. Click the Policies tab.

  3. Select the policy in the list that you want to view, then click Modify.

    The Policy Details panel is displayed.

  4. When you are finished viewing, click Cancel.

Example—Viewing a Policy's Attributes

The following example shows the Policy Details panel when you are viewing the enguser policy.

Dialog box titled SEAM Administration Tool shows policy details of the enguser policy. Shows Save, Previous, Done, and Cancel buttons

Example—Viewing a Policy's Attributes (Command Line)

In the following example, the get_policy command of kadmin is used to view the attributes of the enguser policy.


kadmin: get_policy enguser
Policy: enguser
Maximum password life: 2592000
Minimum password life: 0
Minimum password length: 8
Minimum number of password character classes: 2
Number of old keys kept: 3
Reference count: 0
kadmin: quit

The reference count is the number of principals that use this policy.

How to Create a New Policy

An example of the command-line equivalent follows this procedure.

  1. If necessary, start the SEAM Tool.

    See How to Start the SEAM Tool for details.

  2. Click the Policies tab.

  3. Click New.

    The Policy Details panel is displayed.

  4. Specify a name for the policy in the Policy Name field.

    The policy name is mandatory.

  5. Specify values for the policy's attributes.

    Choose Context-Sensitive Help from the Help menu for information about the various attributes in this window. Or, go to Table 17–5 for all the policy attribute descriptions.

  6. Click Save to save the policy, or click Done.

Example—Creating a New Policy

In the following example, a new policy called build11 is created. The Minimum Password Classes is set to 3.

Dialog box titled SEAM Administration Tool shows policy details of the build11 policy.  Shows Save, Previous, Done, and Cancel buttons.

Example—Creating a New Policy (Command Line)

In the following example, the add_policy command of kadmin is used to create the build11 policy. This policy requires at least 3 character classes in a password.


$ kadmin
kadmin: add_policy -minclasses 3 build11
kadmin: quit

How to Duplicate a Policy

This procedure explains how to use all or some of the attributes of an existing policy to create a new policy. No command-line equivalent exists for this procedure.

  1. If necessary, start the SEAM Tool.

    See How to Start the SEAM Tool for details.

  2. Click the Policies tab.

  3. Select the policy in the list that you want to duplicate, then click Duplicate.

    The Policy Details panel is displayed. All the attributes of the selected policy are duplicated, except for the Policy Name field, which is empty.

  4. Specify a name for the duplicated policy in the Policy Name field.

    The policy name is mandatory. To make an exact duplicate of the policy you selected, click Save and skip to Step 6.

  5. Specify different values for the policy's attributes.

    Choose Context-Sensitive Help from the Help menu for information about the various attributes in this window. Or, go to Table 17–5 for all the policy attribute descriptions.

  6. Click Save to save the policy, or click Done.

How to Modify a Policy

An example of the command-line equivalent follows this procedure.

  1. If necessary, start the SEAM Tool.

    See How to Start the SEAM Tool for details.

  2. Click the Policies tab.

  3. Select the policy in the list that you want to modify, then click Modify.

    The Policy Details panel is displayed.

  4. Modify the policy's attributes.

    Choose Context-Sensitive Help from the Help menu for information about the various attributes in this window. Or, go to Table 17–5 for all the policy attribute descriptions.


    Note –

    You cannot modify a policy's name. To rename a policy, you must duplicate the policy, specify a new name for it, save it, and then delete the old policy.


  5. Click Save to save the policy, or click Done.

Example—Modifying a Policy (Command Line)

In the following example, the modify_policy command of kadmin is used to modify the minimum length of a password to five characters for the build11 policy.


$ kadmin
kadmin: modify_policy -minlength 5 build11
kadmin: quit

How to Delete a Policy

An example of the command-line equivalent follows this procedure.

  1. If necessary, start the SEAM Tool.

    See How to Start the SEAM Tool for details.

  2. Click the Policies tab.


    Note –

    Before you delete a policy, you must cancel the policy from all principals that are currently using it. To do so, you need to modify the principals' Policy attribute. The policy cannot be deleted if any principal is using it.


  3. Select the policy in the list that you want to delete, then click Delete.

    After you confirm the deletion, the policy is deleted.

Example—Deleting a Policy (Command Line)

In the following example, the delete_policy command of the kadmin command is used to delete the build11 policy.


kadmin: delete_policy build11 
Are you sure you want to delete the policy "build11"? (yes/no): yes
kadmin: quit

Before you delete a policy, you must cancel the policy from all principals that are currently using it. To do so, you need to use the modify_principal -policy command of kadmin on the affected principals. The delete_policy command fails if the policy is in use by a principal.
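For example, assuming that a hypothetical principal pak still uses the build11 policy, you might clear the assignment with the -clearpolicy option of modify_principal before deleting the policy (you could instead assign a different policy with the -policy option):

kadmin: modify_principal -clearpolicy pak
Principal "pak@EXAMPLE.COM" modified.
kadmin: quit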

SEAM Tool Reference

This section provides reference information for the SEAM Tool.

SEAM Tool Panel Descriptions

This section provides descriptions for each principal and policy attribute that you can either specify or view in the SEAM Tool. The attributes are organized by the panel in which they are displayed.

Table 17–2 Attributes for the Principal Basics Panel

Attribute 

Description 

Principal Name 

The name of the principal (the primary/instance part of a fully-qualified principal name). A principal is a unique identity to which the KDC can assign tickets.

If you are modifying a principal, you cannot edit a principal's name. 

Password 

The password for the principal. You can use the Generate Random Password button to create a random password for the principal. 

Policy 

A menu of available policies for the principal. 

Account Expires 

The date and time on which the principal's account expires. When the account expires, the principal can no longer get a ticket-granting ticket (TGT) and might be unable to log in. 

Last Principal Change  

The date on which information for the principal was last modified. (Read-only) 

Last Changed By 

The name of the principal that last modified the account for this principal. (Read-only) 

Comments 

Comments that are related to the principal (for example, “Temporary Account”). 

Table 17–3 Attributes for the Principal Details Panel

Attribute 

Description 

Last Success 

The date and time when the principal last logged in successfully. (Read-only) 

Last Failure 

The date and time when the last login failure for the principal occurred. (Read-only) 

Failure Count 

The number of times that there has been a login failure for the principal. (Read-only) 

Last Password Change 

The date and time when the principal's password was last changed. (Read-only) 

Password Expires 

The date and time when the principal's current password expires. 

Key Version 

The key version number for the principal. This attribute is normally changed only when a password has been compromised. 

Maximum Lifetime (seconds) 

The maximum length of time for which a ticket can be granted for the principal (without renewal). 

Maximum Renewal (seconds) 

The maximum length of time for which an existing ticket can be renewed for the principal. 

Table 17–4 Attributes of the Principal Flags Panel

Attribute (Radio Buttons) 

Description 

Disable Account 

When checked, prevents the principal from logging in. This attribute provides an easy way to temporarily freeze a principal account. 

Require Password Change 

When checked, expires the principal's current password, which forces the user to use the kpasswd command to create a new password. This attribute is useful if there is a security breach, and you need to make sure that old passwords are replaced.

Allow Postdated Tickets 

When checked, allows the principal to obtain postdated tickets.  

For example, you might need to use postdated tickets for cron jobs that must run after hours, but you cannot obtain tickets in advance because of short ticket lifetimes.

Allow Forwardable Tickets 

When checked, allows the principal to obtain forwardable tickets. 

Forwardable tickets are tickets that are forwarded to the remote host to provide a single-sign-on session. For example, if you are using forwardable tickets and you authenticate yourself through ftp or rsh, then other services, such as NFS services, are available without your being prompted for another password.

Allow Renewable Tickets 

When checked, allows the principal to obtain renewable tickets. 

A principal can automatically extend the expiration date or time of a ticket that is renewable (rather than having to get a new ticket after the first ticket expires). Currently, the NFS service is the ticket service that can renew tickets. 

Allow Proxiable Tickets 

When checked, allows the principal to obtain proxiable tickets. 

A proxiable ticket is a ticket that can be used by a service on behalf of a client to perform an operation for the client. With a proxiable ticket, a service can take on the identity of a client and obtain a ticket for another service, but the service cannot obtain a ticket-granting ticket. 

Allow Service Tickets 

When checked, allows service tickets to be issued for the principal. 

You should not allow service tickets to be issued for the kadmin/hostname and changepw/hostname principals. This practice ensures that these principals can only update the KDC database.

Allow TGT-Based Authentication 

When checked, allows the service principal to provide services to another principal. More specifically, this attribute allows the KDC to issue a service ticket for the service principal. 

This attribute is valid only for service principals. When unchecked, service tickets cannot be issued for the service principal. 

Allow Duplicate Authentication 

When checked, allows the user principal to obtain service tickets for other user principals. 

This attribute is valid only for user principals. When unchecked, the user principal can still obtain service tickets for service principals, but not for other user principals. 

Required Preauthentication 

When checked, the KDC will not send a requested ticket-granting ticket (TGT) to the principal until the KDC can authenticate (through software) that the principal is really the principal that is requesting the TGT. This preauthentication is usually done through an extra password, for example, from a DES card. 

When unchecked, the KDC does not need to preauthenticate the principal before the KDC sends a requested TGT to the principal. 

Required Hardware Authentication 

When checked, the KDC will not send a requested ticket-granting ticket (TGT) to the principal until the KDC can authenticate (through hardware) that it is really the principal that is requesting the TGT. Hardware preauthentication can occur, for example, on a Java ring reader. 

When unchecked, the KDC does not need to preauthenticate the principal before the KDC sends a requested TGT to the principal. 

Table 17–5 Attributes for the Policy Basics Pane

Attribute 

Description 

Policy Name 

The name of the policy. A policy is a set of rules that govern a principal's password and tickets. 

If you are modifying a policy, you cannot edit a policy's name. 

Minimum Password Length 

The minimum length for the principal's password. 

Minimum Password Classes 

The minimum number of different character types that are required in the principal's password. 

For example, a minimum classes value of 2 means that the password must have at least two different character types, such as letters and numbers (hi2mom). A value of 3 means that the password must have at least three different character types, such as letters, numbers, and punctuation (hi2mom!). And so on.  

A value of 1 sets no restriction on the number of password character types. 

Saved Password History 

The number of previous passwords that have been used by the principal, and a list of the previous passwords that cannot be reused. 

Minimum Password Lifetime (seconds) 

The minimum time that the password must be used before it can be changed. 

Maximum Password Lifetime (seconds) 

The maximum time that the password can be used before it must be changed. 

Principals Using This Policy 

The number of principals to which this policy currently applies. (Read-only) 

Using the SEAM Tool With Limited Kerberos Administration Privileges

All features of the SEAM Administration Tool are available if your admin principal has all the privileges to administer the Kerberos database. But it is possible to have limited privileges, such as being allowed to view only the list of principals or to change a principal's password. With limited Kerberos administration privileges, you can still use the SEAM Tool. However, various parts of the SEAM Tool will change based on the Kerberos administration privileges that you do not have. Table 17–6 shows how the SEAM Tool changes based on your Kerberos administration privileges.

The most visual change to the SEAM Tool occurs when you don't have the list privilege. Without the list privilege, the List panels do not display the list of principals and policies for you to manipulate. Instead, you must use the Name field in the List panels to specify a principal or policy that you want to manipulate.

If you log in to the SEAM Tool and you do not have sufficient privileges to perform tasks with it, the following message is displayed and you are sent back to the SEAM Administration Login window:


Insufficient privileges to use gkadmin: ADMCIL. Please try using another principal.

To change the privileges for a principal to administer the Kerberos database, go to How to Modify the Kerberos Administration Privileges.

Table 17–6 Using SEAM Tool With Limited Kerberos Administration Privileges

Disallowed Privilege 

Change to the SEAM Tool 

a (add)

The Create New and Duplicate buttons are unavailable in the Principal List and Policy List panels. Without the add privilege, you cannot create new principals or policies or duplicate them. 

d (delete)

The Delete button is unavailable in the Principal List and Policy List panels. Without the delete privilege, you cannot delete principals or policies. 

m (modify)

The Modify button is unavailable in the Principal List and Policy List panels. Without the modify privilege, you cannot modify principals or policies.  

Also, with the Modify button unavailable, you cannot modify a principal's password, even if you have the change password privilege. 

c (change password)

The Password field in the Principal Basics panel is read-only and cannot be changed. Without the change password privilege, you cannot modify a principal's password.  

Note that even if you have the change password privilege, you must also have the modify privilege to change a principal's password. 

i (inquiry to database)

The Modify and Duplicate buttons are unavailable in the Principal List and Policy List panels. Without the inquiry privilege, you cannot modify or duplicate a principal or policy.  

Also, with the Modify button unavailable, you cannot modify a principal's password, even if you have the change password privilege. 

l (list)

The lists of principals and policies in the List panels are unavailable. Without the list privilege, you must use the Name field in the List panels to specify the principal or policy that you want to manipulate. 

Administering Keytab Files

Every host that provides a service must have a local file, called a keytab (short for key table). The keytab contains the key for the appropriate service principal, called a service key. A service key is used by a service to authenticate itself to the KDC and is known only by Kerberos and the service itself. For example, if you have a Kerberized NFS server, that server must have a keytab file that contains its nfs service principal.

To add a service key to a keytab file, you add the appropriate service principal to a host's keytab file by using the ktadd command of kadmin. Because you are adding a service principal to a keytab file, the principal must already exist in the Kerberos database so that kadmin can verify its existence. On the master KDC, the keytab file is located at /etc/krb5/kadm5.keytab, by default. On application servers that provide Kerberized services, the keytab file is located at /etc/krb5/krb5.keytab, by default.

A keytab is analogous to a user's password. Just as it is important for users to protect their passwords, it is equally important for application servers to protect their keytab files. You should always store keytab files on a local disk, and make them readable only by the root user. Also, you should never send a keytab file over an unsecured network.

A special case is adding a root principal to a host's keytab file. If you want users on a SEAM client to be able to mount Kerberized NFS file systems (file systems that use Kerberos authentication) automatically, you must add the client's root principal to the client's keytab file. Otherwise, users must use the kinit command as root to obtain credentials for the client's root principal whenever they want to mount a Kerberized NFS file system, even when they are using the automounter. See Setting Up Root Authentication to Mount NFS File Systems for detailed information.
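For example, on a hypothetical client named boston without a root principal in its keytab file, the user would have to run a command similar to the following as root before mounting a Kerberized NFS file system:

boston # /usr/bin/kinit root/boston.example.com
Password for root/boston.example.com@EXAMPLE.COM:  <type password>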


Note –

When you set up a master KDC, you need to add the kadmin/admin and kadmin/changepw principals to the kadm5.keytab file. This step enables the KDC to decrypt administrators' Kerberos tickets to determine whether it should give the administrators access to the database.


Another command that you can use to administer keytab files is the ktutil command. ktutil is an interactive command that enables you to manage a local host's keytab file without having Kerberos administration privileges, because ktutil doesn't interact with the Kerberos database as kadmin does. So, after a principal is added to a keytab file, you can use ktutil to view the keylist in a keytab file or to temporarily disable authentication for a service.

Administering Keytabs Task Map

Task 

Description 

For Instructions 

Add a service principal to a keytab file 

Use the ktadd command of kadmin to add a service principal to a keytab file.

How to Add a Service Principal to a Keytab File

Remove a service principal from a keytab file 

Use the ktremove command of kadmin to remove a service from a keytab file.

How to Remove a Service Principal From a Keytab File

Display the keylist (Principals) in a keytab file 

Use the ktutil command to display the keylist in a keytab file.

How to Display the Keylist (Principals) in a Keytab File

Temporarily disable authentication for a service on a host 

This procedure is a quick way to temporarily disable authentication for a service on a host without having to have kadmin privileges.

Before you use ktutil to delete the service principal from the server's keytab file, copy the original keytab file to a temporary location. When you want to enable the service again, copy the original keytab file back to its proper location.

How to Temporarily Disable Authentication for a Service on a Host

How to Add a Service Principal to a Keytab File

  1. Make sure that the principal already exists in the Kerberos database.

    See How to View the List of Principals for more information.

  2. Become superuser on the host that needs a principal added to its keytab file.

  3. Start the kadmin command.


    # /usr/sbin/kadmin
    
  4. Add a principal to a keytab file by using the ktadd command.


    kadmin: ktadd [-k keytab] [-q] [principal | -glob principal-exp]

    -k keytab

    Specifies the keytab file. By default, /etc/krb5/krb5.keytab is used.

    -q

    Displays less verbose information. 

    principal

    Specifies the principal to be added to the keytab file. You can add the following service principals: host, root, nfs, and ftp.

    -glob principal-exp

    Specifies a principal expression. All principals that match the principal-exp are added to the keytab file. The rules for principal expressions are the same as for the list_principals command of kadmin.

  5. Quit the kadmin command.


    kadmin: quit
    

Example—Adding a Service Principal to a Keytab File

In the following example, the kadmin/admin and kadmin/changepw principals are added to a master KDC's keytab file. For this example, the keytab file must be the file that is specified in the kdc.conf file.


kdc1 # /usr/sbin/kadmin.local
kadmin.local: ktadd -k /etc/krb5/kadm5.keytab kadmin/admin kadmin/changepw
Entry for principal kadmin/admin@EXAMPLE.COM with kvno 3, encryption type DES-CBC-CRC
  added to keytab WRFILE:/etc/krb5/kadm5.keytab.
Entry for principal kadmin/changepw@EXAMPLE.COM with kvno 3, encryption type DES-CBC-CRC
  added to keytab WRFILE:/etc/krb5/kadm5.keytab.
kadmin.local: quit

In the following example, denver's host principal is added to denver's keytab file, so that the KDC can authenticate denver's network services.


denver # /usr/sbin/kadmin
kadmin: ktadd host/denver.example.com@EXAMPLE.COM
kadmin: Entry for principal host/denver.example.com@EXAMPLE.COM with kvno 2,
  encryption type DES-CBC-CRC added to keytab WRFILE:/etc/krb5/krb5.keytab.
kadmin: quit

How to Remove a Service Principal From a Keytab File

  1. Become superuser on the host with a service principal that must be removed from its keytab file.

  2. Start the kadmin command.


    # /usr/sbin/kadmin
    
  3. (Optional) To display the current list of principals (keys) in the keytab file, use the ktutil command.

    See How to Display the Keylist (Principals) in a Keytab File for detailed instructions.

  4. Remove a principal from the keytab file by using the ktremove command.


    kadmin: ktremove [-k keytab] [-q] principal [kvno | all | old ]

    -k keytab

    Specifies the keytab file. By default, /etc/krb5/krb5.keytab is used.

    -q

    Displays less verbose information. 

    principal

    Specifies the principal to be removed from the keytab file. 

    kvno

    Removes all entries for the specified principal whose key version number matches kvno.

    all

    Removes all entries for the specified principal. 

    old

    Removes all entries for the specified principal, except those entries with the highest key version number. 

  5. Quit the kadmin command.


    kadmin: quit
    

Example—Removing a Service Principal From a Keytab

In the following example, denver's host principal is removed from denver's keytab file.


denver # /usr/sbin/kadmin
kadmin: ktremove host/denver.example.com@EXAMPLE.COM
kadmin: Entry for principal host/denver.example.com@EXAMPLE.COM with kvno 3
  removed from keytab WRFILE:/etc/krb5/krb5.keytab.
kadmin: quit

How to Display the Keylist (Principals) in a Keytab File

  1. Become superuser on the host with the keytab file.


    Note –

    Although you can create keytab files that are owned by other users, the default location for the keytab file requires root ownership.


  2. Start the ktutil command.


    # /usr/bin/ktutil
    
  3. Read the keytab file into the keylist buffer by using the read_kt command.


    ktutil: read_kt keytab
    
  4. Display the keylist buffer by using the list command.


    ktutil: list
    

    The current keylist buffer is displayed.

  5. Quit the ktutil command.


    ktutil: quit
    

Example—Displaying the Keylist (Principals) in a Keytab File

The following example displays the keylist in the /etc/krb5/krb5.keytab file on the denver host.


denver # /usr/bin/ktutil
    ktutil: read_kt /etc/krb5/krb5.keytab
    ktutil: list
slot KVNO Principal
---- ---- ---------------------------------------
   1    5 host/denver@EXAMPLE.COM
    ktutil: quit

How to Temporarily Disable Authentication for a Service on a Host

At times, you might need to temporarily disable the authentication mechanism for a service, such as rlogin or ftp, on a network application server. For example, you might want to stop users from logging in to a system while you are performing maintenance procedures. The ktutil command enables you to accomplish this task by removing the service principal from the server's keytab file, without requiring kadmin privileges. To enable authentication again, you just need to copy the original keytab file that you saved back to its original location.


Note –

By default, most services are set up to require authentication. If a service is not set up to require authentication, then the service will still work, even if you disable authentication for the service.


  1. Become superuser on the host with the keytab file.


    Note –

    Although you can create keytab files that are owned by other users, the default location for the keytab file requires root ownership.


  2. Save the current keytab file to a temporary file.

  3. Start the ktutil command.


    # /usr/bin/ktutil
    
  4. Read the keytab file into the keylist buffer by using the read_kt command.


    ktutil: read_kt keytab
    
  5. Display the keylist buffer by using the list command.


    ktutil: list
    

    The current keylist buffer is displayed. Note the slot number for the service that you want to disable.

  6. To temporarily disable a host's service, remove the specific service principal from the keylist buffer by using the delete_entry command.


    ktutil: delete_entry slot-number
    

    Here, slot-number specifies the slot number of the service principal to be deleted, as displayed by the list command.

  7. Write the keylist buffer to the keytab file by using the write_kt command.


    ktutil: write_kt keytab
    
  8. Quit the ktutil command.


    ktutil: quit
    
  9. When you want to re-enable the service, copy the temporary (original) keytab file back to its original location.

Example—Temporarily Disabling a Service on a Host

In the following example, the host service on the denver host is temporarily disabled. To re-enable the host service on denver, you would copy the saved krb5.keytab.temp file back to the /etc/krb5/krb5.keytab file, as shown after the example.


denver # cp /etc/krb5/krb5.keytab /etc/krb5/krb5.keytab.temp
denver # /usr/bin/ktutil
    ktutil: read_kt /etc/krb5/krb5.keytab
    ktutil: list
slot KVNO Principal
---- ---- ---------------------------------------
   1    8 root/denver@EXAMPLE.COM
   2    5 host/denver@EXAMPLE.COM
    ktutil: delete_entry 2
    ktutil: list
slot KVNO Principal
---- ---- ---------------------------------------
   1    8 root/denver@EXAMPLE.COM
    ktutil: write_kt /etc/krb5/krb5.keytab
    ktutil: quit
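
The restore step is a simple copy. The following command is illustrative:

denver # cp /etc/krb5/krb5.keytab.temp /etc/krb5/krb5.keytab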

Chapter 18 Using SEAM (Tasks)

This chapter is intended for anyone on a system with SEAM installed on it. This chapter includes information on tickets: obtaining, viewing, and destroying them. This chapter also includes information on choosing or changing a Kerberos password.

This is a list of the information in this chapter:

For an overview of SEAM, see Chapter 13, Introduction to SEAM.

Ticket Management

This section explains how to obtain, view, and destroy tickets. For an introduction to tickets, see How SEAM Works.

Do You Need to Worry About Tickets?

With SEAM 1.0 or 1.0.1 installed, Kerberos is built into the login command, and you will obtain tickets automatically when you log in.

Most of the Kerberized commands also automatically destroy your tickets when they exit. However, you might want to explicitly destroy your Kerberos tickets with kdestroy when you are finished with them, just to be sure. See How to Destroy Tickets for more information on kdestroy.

For information on ticket lifetimes, see Ticket Lifetimes.

How to Create a Ticket

Normally, a ticket is created automatically when you log in, and you need not do anything special to obtain a ticket. However, you might need to create a ticket if your ticket expires.

To create a ticket, use the kinit command.


% /usr/bin/kinit
 

kinit prompts you for your password. For the full syntax of the kinit command, see the kinit(1) man page.

Example—Creating a Ticket

This example shows a user, jennifer, creating a ticket on her own system:


% kinit
Password for jennifer@ENG.EXAMPLE.COM:  <type password>
 

Here, the user david creates a ticket that is valid for three hours with the -l option:


% kinit -l 3h david@EXAMPLE.ORG
Password for david@EXAMPLE.ORG:  <type password>
 

This example shows the user david creating a forwardable ticket (with the -f option) for himself. With this forwardable ticket, he can, for example, log in to a second system.


% kinit -f david@EXAMPLE.ORG
Password for david@EXAMPLE.ORG:     <type password>
 

For more on how forwarding tickets works, see Types of Tickets.
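
The kinit command can also request a renewable ticket with the -r option. The principal name and durations in the following command are examples only; see the kinit(1) man page for the full syntax:

% kinit -r 10h -l 1h david@EXAMPLE.ORG
Password for david@EXAMPLE.ORG:  <type password>

Such a ticket expires after one hour, but it can be renewed (up to its 10-hour renewable lifetime) without retyping the password, as described in Types of Tickets.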

How to View Tickets

Not all tickets are alike. One ticket might be, for example, forwardable; another ticket might be postdated; and a third ticket might be both forwardable and postdated. You can see which tickets you have, and what their attributes are, by using the klist command with the -f option:


% /usr/bin/klist -f

The following symbols indicate the attributes that are associated with each ticket, as displayed by klist:

F    Forwardable
f    Forwarded
P    Proxiable
p    Proxy
D    Postdatable
d    Postdated
R    Renewable
I    Initial
i    Invalid

Types of Tickets describes the various attributes that a ticket can have.

Example—Viewing Tickets

This example shows that the user jennifer has an initial ticket, which is forwardable (F) and postdated (d), but not yet validated (i):


% /usr/bin/klist -f
Ticket cache: /tmp/krb5cc_74287
Default principal: jennifer@ENG.EXAMPLE.COM
 
Valid starting                 Expires                 Service principal
09 Mar 99 15:09:51  09 Mar 99 21:09:51  nfs/EXAMPLE.SUN.COM@EXAMPLE.SUN.COM
        renew until 10 Mar 99 15:12:51, Flags: Fdi
 

The following example shows that the user david has two tickets that were forwarded (f) to his host from another host. The tickets are also forwardable (F):


% klist -f
Ticket cache: /tmp/krb5cc_74287
Default principal: david@EXAMPLE.SUN.COM
 
Valid starting                 Expires                 Service principal
07 Mar 99 06:09:51  09 Mar 99 23:33:51  host/EXAMPLE.COM@EXAMPLE.COM
        renew until 10 Mar 99 17:09:51, Flags: fF
 
Valid starting                 Expires                 Service principal
08 Mar 99 08:09:51  09 Mar 99 12:54:51  nfs/EXAMPLE.COM@EXAMPLE.COM
        renew until 10 Mar 99 15:22:51, Flags: fF

How to Destroy Tickets

Usually, tickets are destroyed automatically when the commands that created them exit. However, you might want to explicitly destroy your Kerberos tickets when you are finished with them, just to be sure. Tickets can be stolen, and if they are stolen, the person who stole them can use them until they expire (although the thief must also be able to decrypt them).

To destroy your tickets, use the kdestroy command.


% /usr/bin/kdestroy

kdestroy destroys all your tickets. You cannot use this command to selectively destroy a particular ticket.

If you are going to be away from your system and are concerned about an intruder using your permissions, you should use either kdestroy or a screen saver that locks the screen.


Note –

One way to help ensure that your tickets are always destroyed is to add the kdestroy command to the .logout file in your home directory.

In instances where the PAM module has been configured (which is the default and usual case), tickets are destroyed automatically upon logout, so adding a call to kdestroy to your .logout file is not necessary. However, if the PAM module has not been configured, or if you don't know whether it has been, you might want to add kdestroy to your .logout file to ensure that your tickets are destroyed when you exit your system.
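
For example, a C shell user's .logout file might contain the following line (shown here for illustration; adjust the approach for your shell):

/usr/bin/kdestroy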


Password Management

With SEAM installed, you now have two passwords: your regular Solaris password, and a Kerberos password. You can make both passwords the same, or they can be different.

Non-Kerberized commands, such as login, are typically set up through PAM to authenticate with both Kerberos and UNIX. If the two passwords are different, you must provide both so that each mechanism can authenticate you. However, if both passwords are the same, the first password that you enter for UNIX is also accepted by Kerberos.

Unfortunately, using the same password for both Kerberos and UNIX can compromise security. That is, if someone discovers your Kerberos password, then your UNIX password is no longer a secret. However, using the same passwords for UNIX and Kerberos is still more secure than in a site without Kerberos, because passwords in a Kerberos environment are not sent across the network. Usually, your site will have a policy to help you determine your options.

Your Kerberos password is the only way Kerberos can verify your identity. If someone discovers your Kerberos password, Kerberos security becomes meaningless, because that person can masquerade as you. That person can send email that comes from “you,” read, edit, or delete your files, or log into other hosts as you. No one will be able to tell the difference. For this reason, it is vital that you choose a good password and keep it secret. You should never reveal your password to anyone else, not even your system administrator. Additionally, you should change your password frequently, particularly any time that you believe someone might have discovered it.

Advice on Choosing a Password

Your password can include almost any character that you can type. The main exceptions are the Control keys and the Return key. A good password is a password that you can remember readily, but which no one else can easily guess. Examples of bad passwords include the following:

A good password is at least eight characters long. Moreover, a password should include a mix of characters, such as uppercase and lowercase letters, numbers, and punctuation marks. Examples of passwords that would be good if they didn't appear in this manual include the following:


Caution –

Don't use these examples. Passwords that appear in manuals are the first passwords that an intruder will try.


Changing Your Password

You can change your Kerberos password in two ways: with the standard Solaris passwd command (with SEAM installed, passwd prompts for a new Kerberos password as well as a new UNIX password), or with the kpasswd command, which changes only your Kerberos password.

After you change your password, it takes some time for the change to propagate through a system (especially over a large network). Depending on how your system is set up, this delay might range from a few minutes to an hour or more. If you need to get new Kerberos tickets shortly after you change your password, try the new password first. If the new password doesn't work, try again using the old password.

Kerberos V5 allows system administrators to set criteria about allowable passwords for each user. Such criteria are defined by the policy that is set for each user (or by a default policy). See Administering Policies for more on policies.

For example, suppose that user jennifer's policy (call it jenpol) mandates that passwords be at least eight letters long and include a mix of at least two kinds of characters. kpasswd will therefore reject an attempt to use “sloth” as a password.


% kpasswd
kpasswd: Changing password for jennifer@ENG.EXAMPLE.COM.
Old password:   <jennifer types her existing password>
kpasswd: jennifer@ENG.EXAMPLE.COM's password is controlled by
the policy jenpol
which requires a minimum of 8 characters from at least 2 classes 
(the five classes are lowercase, uppercase, numbers, punctuation,
and all other characters).
New password: <jennifer types  'sloth'>
New password (again):  <jennifer re-types 'sloth'>
kpasswd: New password is too short.
Please choose a password which is at least 4 characters long. 

Here, jennifer uses “slothrop49” as a password. “slothrop49” meets the criteria, because it is over eight characters long and contains two different kinds of characters (numbers and lowercase letters).


% kpasswd
kpasswd: Changing password for jennifer@ENG.EXAMPLE.COM.
Old password:  <jennifer types her existing password>
kpasswd: jennifer@ENG.EXAMPLE.COM's password is controlled by
the policy jenpol
which requires a minimum of 8 characters from at least 2 classes 
(the five classes are lowercase, uppercase, numbers, punctuation,
and all other characters).
New password:  <jennifer types  'slothrop49'>
New password (again):  <jennifer re-types 'slothrop49'>
Kerberos password changed.

Examples—Changing Your Password

In the following example, user david changes both his UNIX password and Kerberos password with passwd.


% passwd
	passwd:  Changing password for david
	Enter login (NIS+) password:         <type the current UNIX password>
	New password:                        <type the new UNIX password>
	Re-enter password:                   <confirm the new UNIX password>
	Old KRB5 password:                   <type the current Kerberos password>
	New KRB5 password:                   <type the new Kerberos password>
	Re-enter new KRB5 password:          <confirm the new Kerberos password>

In the preceding example, passwd asks for both the UNIX password and the Kerberos password. However, if try_first_pass is set in the PAM module, which is the default configuration, the Kerberos password is automatically set to the UNIX password. In that case, user david must use kpasswd to set his Kerberos password to something else, as shown next.

This example shows user david changing only his Kerberos password with kpasswd.


% kpasswd
kpasswd: Changing password for david@ENG.EXAMPLE.COM.
Old password:           <type the current Kerberos password>
New password:           <type the new Kerberos password>
New password (again):   <confirm the new Kerberos password>
Kerberos password changed.
 

In this example, user david changes the password for the Kerberos principal david/admin (which is not a valid UNIX user). He must use kpasswd.


% kpasswd david/admin
kpasswd:  Changing password for david/admin.
Old password:           <type the current Kerberos password>
New password:           <type the new Kerberos password>
New password (again):   <type the new Kerberos password>
Kerberos password changed. 
 

Chapter 19 SEAM (Reference)

This chapter lists many of the files, commands, and daemons that are part of the SEAM product. In addition, this chapter provides detailed information about how the Kerberos authentication system works.

This is a list of the reference information in this chapter.

SEAM Files

Table 19–1 SEAM Files

File Name 

Description 

~/.gkadmin

Default values for creating new principals in the SEAM Administration Tool

~/.k5login

List of principals to grant access to a Kerberos account

/etc/init.d/kdc

init script to start or stop krb5kdc

/etc/init.d/kdc.master

init script to start or stop kadmind

/etc/krb5/kadm5.acl

Kerberos access control list file; includes principal names of KDC administrators and their Kerberos administration privileges

/etc/krb5/kadm5.keytab

Keytab file for kadmin service on master KDC

/etc/krb5/kdc.conf

KDC configuration file

/etc/krb5/kpropd.acl

Kerberos database propagation configuration file

/etc/krb5/krb5.conf

Kerberos realm configuration file

/etc/krb5/krb5.keytab

Keytab file for network application servers

/etc/krb5/warn.conf

Kerberos warning configuration file

/etc/pam.conf

PAM configuration file

/tmp/krb5cc_uid

Default credentials cache (uid is the decimal UID of the user)

/tmp/ovsec_adm.xxxxxx

Temporary credentials cache for the lifetime of the password changing operation (xxxxxx is a random string)

/var/krb5/.k5.REALM

KDC stash file; contains encrypted copy of the KDC master key

/var/krb5/kadmin.log

Log file for kadmind

/var/krb5/kdc.log

Log file for the KDC

/var/krb5/principal.db

Kerberos principal database

/var/krb5/principal.kadm5

Kerberos administrative database; contains policy information

/var/krb5/principal.kadm5.lock

Kerberos administrative database lock file

/var/krb5/principal.ok

Kerberos principal database initialization file; created when the Kerberos database is initialized successfully

/var/krb5/slave_datatrans

Backup file of the KDC that the kprop_script script uses for propagation

PAM Configuration File

The default PAM configuration file includes entries for the authentication service, account management, session management, and password management modules.

For the authentication module, new entries are created for rlogin, login, and dtlogin when SEAM 1.0 or 1.0.1 is installed. An example of these entries follows. All of these services use the new PAM library, /usr/lib/security/pam_krb5.so.1, to provide Kerberos authentication.

These entries use the try_first_pass option, which requests authentication by using the user's initial password. Using the initial password means that the user is not prompted for another password, even if multiple mechanisms are listed.


# cat /etc/pam.conf
 .
 .
rlogin auth optional /usr/lib/security/pam_krb5.so.1 try_first_pass
login auth optional /usr/lib/security/pam_krb5.so.1 try_first_pass
dtlogin auth optional /usr/lib/security/pam_krb5.so.1 try_first_pass
other auth optional /usr/lib/security/pam_krb5.so.1 try_first_pass

For the account management module, dtlogin has a new entry that uses the Kerberos library, as follows. An other entry is included to provide a default rule. Currently, no actions are taken by the other entry.


dtlogin account optional /usr/lib/security/pam_krb5.so.1 
other account optional /usr/lib/security/pam_krb5.so.1

The last two entries in the /etc/pam.conf file are shown next. The other entry for session management destroys user credentials. The new other entry for password management selects the Kerberos library.


other session optional /usr/lib/security/pam_krb5.so.1 
other password optional /usr/lib/security/pam_krb5.so.1 try_first_pass

SEAM Commands

This section lists some commands that are included in the SEAM product.

Table 19–2 SEAM Commands

Command 

Description 

/usr/lib/krb5/kprop

Kerberos database propagation program

/usr/sbin/gkadmin

Kerberos database administration GUI program; used to manage principals and policies

/usr/sbin/kadmin

Remote Kerberos database administration program (run with Kerberos authentication); used to manage principals, policies, and keytab files

/usr/sbin/kadmin.local

Local Kerberos database administration program (run without Kerberos authentication; must be run on master KDC); used to manage principals, policies, and keytab files

/usr/sbin/kdb5_util

Creates Kerberos databases and stash files

SEAM Daemons

The following table lists the daemons that the SEAM product uses.

Table 19–3 SEAM Daemons

Daemon 

Description 

/usr/lib/krb5/kadmind

Kerberos database administration daemon

/usr/lib/krb5/kpropd

Kerberos database propagation daemon

/usr/lib/krb5/krb5kdc

Kerberos ticket processing daemon

SEAM Terminology

The following sections present terms and their definitions. These terms are used throughout the SEAM documentation, and an understanding of them is essential to grasping SEAM concepts.

Kerberos-Specific Terminology

You need to understand the terms in this section in order to administer KDCs.

The Key Distribution Center or KDC is the component of SEAM that is responsible for issuing credentials. These credentials are created by using information that is stored in the KDC database. Each realm needs at least two KDCs, a master and at least one slave. All KDCs generate credentials, but only the master KDC handles any changes to the KDC database.

A stash file contains an encrypted copy of the master key for the KDC. This key is used when a server is rebooted to automatically authenticate the KDC before starting the kadmind and krb5kdc commands. Because this file includes the master key, the file and any backups of the file should be kept secure. If the encryption is compromised, then the key could be used to access or modify the KDC database.
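
For example, the stash file is typically created at the same time as the KDC database, by supplying the -s option to the kdb5_util command. The realm name and host name in the following command are examples only:

kdc1 # /usr/sbin/kdb5_util create -r EXAMPLE.COM -s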

Authentication-Specific Terminology

You need to know the terms in this section to understand the authentication process. Programmers and system administrators should be familiar with these terms.

A client is the software that runs on a user's workstation. The SEAM software that runs on the client makes many requests during the authentication process, so it is important to differentiate the actions of this software from those of the user.

The terms server and service are often used interchangeably. To clarify, the term server is used to define the physical system that SEAM software is running on. The term service corresponds to a particular function that is being supported on a server (for instance, nfs). Documentation often mentions servers as part of a service, but this definition clouds the meaning of the terms. Therefore, the term server refers to the physical system. The term service refers to the software.

The SEAM product includes three types of keys. One key is the private key. The private key is given to each user principal and is known only to the user of the principal and to the KDC. For user principals, the key is based on the user's password. For servers and services, the key is known as a service key. The service key serves the same purpose as the private key, but is used by servers and services. The third type of key is a session key. A session key is a key that is generated by the authentication service or the ticket-granting service. A session key is generated to provide secure transactions between a client and a service.

A ticket is an information packet that is used to securely pass the identity of a user to a server or service. A ticket is valid for only a single client and a particular service on a specific server. A ticket contains the principal name of the service, the principal name of the user, the IP address of the user's host, a time stamp, and a value to define the lifetime of the ticket. A ticket is created with a random session key to be used by the client and the service. After a ticket has been created, it can be reused until the ticket expires.

A credential is a packet of information that includes a ticket and a matching session key. Credentials are often encrypted by using either a private key or a service key, depending on which software decrypts the credential.

An authenticator is another type of information packet. When used with a ticket, an authenticator can be used to authenticate a user principal. An authenticator includes the principal name of the user, the IP address of the user's host, and a time stamp. Unlike a ticket, an authenticator can be used only once, usually when access to a service is requested. An authenticator is encrypted by using the session key for that client and that server.

Types of Tickets

Tickets have properties that govern how they can be used. These properties are assigned to the ticket when it is created, although you can modify a ticket's properties later. For example, a ticket can change from forwardable to forwarded. You can view ticket properties with the klist command. See How to View Tickets.

Tickets can be described by one or more of the following terms:

Forwardable/forwarded

A forwardable ticket can be sent from one host to another, obviating the need for a client to reauthenticate itself. For example, if the user david obtains a forwardable ticket while on user jennifer's machine, he can log in to his own machine without having to get a new ticket (and thus authenticate himself again). See Example—Creating a Ticket for an example of a forwardable ticket.

Initial

An initial ticket is a ticket that is issued directly, not based on a ticket-granting ticket. Some services, such as applications that change passwords, can require tickets to be marked initial in order to assure themselves that the client can demonstrate a knowledge of its secret key. An initial ticket indicates that the client has recently authenticated itself (instead of relying on a ticket-granting ticket, which might have been around for a long time).

Invalid

An invalid ticket is a postdated ticket that has not yet become usable. An invalid ticket will be rejected by an application server until it becomes validated. To be validated, a ticket must be presented to the KDC by the client in a TGS request, with the VALIDATE flag set, after its start time has passed.

Postdatable/postdated

A postdated ticket is a ticket that does not become valid until some specified time after its creation. Such a ticket is useful, for example, for batch jobs that are intended to run late at night, since the ticket, if stolen, cannot be used until the batch job is run. When a postdated ticket is issued, it is issued as invalid and remains that way until its start time has passed and the client requests validation by the KDC. A postdated ticket is normally valid until the expiration time of the ticket-granting ticket. However, if the ticket is marked renewable, its lifetime is normally set to be equal to the duration of the full life of the ticket-granting ticket.

Proxiable/proxy

At times, it is necessary for a principal to allow a service to perform an operation on its behalf. An example might be when a principal requests a service to run a print job on a third host. The service must be able to take on the identity of the client, but need only do so for that single operation. In that case, the server is said to be acting as a proxy for the client. The principal name of the proxy must be specified when the ticket is created.

A proxiable ticket is similar to a forwardable ticket, except that it is valid only for a single service, whereas a forwardable ticket grants the service the complete use of the client's identity. A forwardable ticket can therefore be thought of as a sort of super-proxy.

Renewable

Because it is a security risk to have tickets with very long lives, tickets can be designated as renewable. A renewable ticket has two expiration times: the time at which the current instance of the ticket expires, and the maximum lifetime for any ticket. If a client wants to continue to use a ticket, the client renews it before the first expiration occurs. For example, a ticket can be valid for one hour, with all tickets having a maximum lifetime of 10 hours. If the client that is holding the ticket wants to keep it for more than an hour, the client must renew it within that hour. When a ticket reaches the maximum ticket lifetime (10 hours), it automatically expires and cannot be renewed.
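
For example, a renewable ticket-granting ticket that was obtained with the -r option of kinit can usually be renewed, before its current instance expires, by running kinit with the -R option (see the kinit(1) man page):

% kinit -R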

For information on how to view tickets to see what their attributes are, see How to View Tickets.

Ticket Lifetimes

Any time a principal obtains a ticket, including a ticket-granting ticket, the ticket's lifetime is set as the smallest of the following lifetime values:

Figure 19–1 shows how a TGT's lifetime is determined and where the four lifetime values come from. Even though this figure shows how a TGT's lifetime is determined, basically the same thing happens when any principal obtains a ticket. The only differences are that kinit doesn't provide a lifetime value, and the service principal that provides the ticket provides a maximum lifetime value (instead of the krbtgt/realm principal).

Figure 19–1 How a TGT's Lifetime is Determined

Diagram shows that a ticket lifetime is the smallest value allowed by the kinit command, the user principal, the site default, and the ticket granter.
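
For example, even if you request a long lifetime on the kinit command line, the KDC issues a ticket no longer than the smallest of the other maximums, so a request such as the following (illustrative) can still return a ticket with a shorter lifetime:

% kinit -l 24h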

The renewable ticket lifetime is also determined from the minimum of four values, but renewable lifetime values are used instead, as follows:

Principal Names

Each ticket is identified by a principal name. The principal name can identify a user or a service. Here are examples of several principal names.

Table 19–4 Examples of Principal Names

Principal Name 

Description 

root/boston.example.com@EXAMPLE.COM

A principal that is associated with the root account on an NFS client. This principal is called a root principal and is needed for authenticated NFS-mounting to succeed.

host/boston.example.com@EXAMPLE.COM

A principal that is used by the network application servers, such as ftpd and telnetd. This principal is also used with the pam_krb5 authentication module. This principal is called a host principal or service principal.

username@EXAMPLE.COM

A principal for a user. 

username/admin@EXAMPLE.COM

An admin principal that can be used to administer the KDC database.

nfs/boston.example.com@EXAMPLE.COM

A principal that is used by the NFS service. This principal can be used instead of a host principal.

K/M@EXAMPLE.COM

The master key name principal. There is one master key name principal that is associated with each master KDC. 

kadmin/history@EXAMPLE.COM

A principal that includes a key that is used to keep password histories for other principals. Each master KDC has one of these principals. 

kadmin/kdc1.example.com@EXAMPLE.COM

A principal for the master KDC server that allows access to the KDC by using kadmind.

changepw/kdc1.example.com@EXAMPLE.COM

A principal for the master KDC server that allows access to the KDC when you are changing passwords. 

krbtgt/EXAMPLE.COM@EXAMPLE.COM

This principal is used when you generate a ticket-granting ticket. 

How the Authentication System Works

Applications allow you to log in to a remote system if you can provide a ticket that proves your identity, and a matching session key. The session key contains information that is specific to the user and the service that is being accessed. A ticket and session key are created by the KDC for all users when they first log in. The ticket and the matching session key form a credential. While using multiple networking services, a user can gather many credentials. The user needs to have a credential for each service that runs on a particular server. For instance, access to the ftp service on a server named boston requires one credential. Access to the ftp service on another server requires its own credential.

The process of creating and storing the credentials is transparent. Credentials are created by the KDC, which sends them to the requester. When a credential is received, it is stored in a credential cache.

Gaining Access to a Service Using SEAM

In order for a user to access a specific service on a specific server, the user must obtain two credentials. The first credential is for the ticket-granting service; the ticket in this credential is known as the ticket-granting ticket (TGT). Once the ticket-granting service has decrypted this credential, the service creates a second credential for the server that the user is requesting access to. This second credential can then be used to request access to the service on the server. After the server has successfully decrypted the second credential, the user is given access. The following sections describe this process in more detail.

Obtaining a Credential for the Ticket-Granting Service

  1. To start the authentication process, the client sends a request to the authentication server for a specific user principal. This request is sent without encryption. There is no secure information included in the request, so it is not necessary to use encryption.

  2. When the request is received by the authentication service, the principal name of the user is looked up in the KDC database. If a principal matches, the authentication service obtains the private key for that principal. The authentication service then generates a session key to be used by the client and the ticket-granting service (call it session key 1) and a ticket for the ticket-granting service (ticket 1). This ticket is also known as the ticket-granting ticket (TGT). Both the session key and the ticket are encrypted by using the user's private key, and the information is sent back to the client.

  3. The client uses this information to decrypt session key 1 and ticket 1, by using the private key for the user principal. Since the private key should only be known by the user and the KDC database, the information in the packet should be safe. The client stores the information in the credentials cache.

During this process, a user is normally prompted for a password. If the password the user enters is the same as the password that was used to build the private key stored in the KDC database, then the client can successfully decrypt the information that is sent by the authentication service. Now the client has a credential to be used with the ticket-granting service. The client is ready to request a credential for a server.
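
Although this exchange is transparent, you can see the resulting credential in the credentials cache. In klist output, the credential for the ticket-granting service appears as a service principal of the form krbtgt/realm@realm. The principal names and times in the following output are illustrative:

% klist
Ticket cache: /tmp/krb5cc_74287
Default principal: jennifer@ENG.EXAMPLE.COM

Valid starting                 Expires                 Service principal
09 Mar 99 15:09:51  09 Mar 99 21:09:51  krbtgt/ENG.EXAMPLE.COM@ENG.EXAMPLE.COM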

Figure 19–2 Obtaining a Credential for the Ticket-Granting Service

Flow diagram shows a client requesting a credential for server access from the KDC, and using a password to decrypt the returned credential.

Obtaining a Credential for a Server

  1. To request access to a specific server, a client must first have obtained a credential for that server from the authentication service. See Obtaining a Credential for the Ticket-Granting Service. The client then sends a request to the ticket-granting service, which includes the service principal name, ticket 1, and an authenticator that was encrypted with session key 1. Ticket 1 was originally encrypted by the authentication service by using the service key of the ticket-granting service.

  2. Because the service key of the ticket-granting service is known to the ticket-granting service, ticket 1 can be decrypted. The information in ticket 1 includes session key 1, so the ticket-granting service can decrypt the authenticator. At this point, the user principal is authenticated with the ticket-granting service.

  3. Once the authentication is successful, the ticket-granting service generates a session key for the user principal and the server (session key 2), and a ticket for the server (ticket 2). Session key 2 and ticket 2 are then encrypted by using session key 1. Since session key 1 is known only to the client and the ticket-granting service, this information is secure and can be safely sent over the network.

  4. When the client receives this information packet, the client decrypts the information by using session key 1, which it had stored in the credential cache. The client has obtained a credential to be used with the server. Now the client is ready to request access to a particular service on that server.

Figure 19–3 Obtaining a Credential for a Server

Flow diagram shows a client sending a request encrypted with Session Key 1 to the KDC, and then decrypting the returned credential with the same key.

Obtaining Access to a Specific Service

  1. To request access to a specific service, the client must first have obtained a credential for the ticket-granting service from the authentication server, and a server credential from the ticket-granting service. See Obtaining a Credential for the Ticket-Granting Service and Obtaining a Credential for a Server. The client can then send a request to the server that includes ticket 2 and another authenticator. The authenticator is encrypted by using session key 2.

  2. Ticket 2 was encrypted by the ticket-granting service with the service key for the service. Since the service key is known by the service principal, the service can decrypt ticket 2 and get session key 2. Session key 2 can then be used to decrypt the authenticator. If the authenticator is successfully decrypted, the client is given access to the service.

Figure 19–4 Obtaining Access to a Specific Service

Flow diagram shows a client using Ticket 2 and an authenticator encrypted with Session Key 2 to obtain access permission to the server.

Using the gsscred Table

The gsscred table is used by an NFS server when the server is trying to identify a SEAM user. The NFS services use UNIX IDs to identify users. These IDs are not part of a user principal or credential. The gsscred table provides a mapping from UNIX UIDs (from the password file) to principal names. The table must be created and administered after the KDC database is populated.
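
For example, the gsscred command can create and populate the table for the kerberos_v5 mechanism from the password table. A typical invocation (shown here for illustration; see the gsscred(1M) man page) is:

# gsscred -m kerberos_v5 -a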

When a client request comes in, the NFS services try to map the principal name to a UNIX ID. If the mapping fails, the gsscred table is consulted. With the kerberos_v5 mechanism, a root/hostname principal is automatically mapped to UID 0, and the gsscred table is not consulted. Thus, there is no way to do special remappings of root through the gsscred table.