Solaris Naming Setup and Configuration Guide

Creating a Root Replica Server

To have regularly available NIS+ service, you should always create one or more root replica servers. Having replicas can also speed network-request resolution because multiple servers are available to handle requests.

For performance reasons, you should have no more than a few replicas per domain. If your network includes multiple subnets or different sites connected by a Wide Area Network (WAN), you may need additional replicas.

See Solaris Naming Administration Guide for additional information on how to determine the optimum number of replicas.

"How to Create a Root Replica" shows the machine client1 being configured as a root replica for the doc.com. domain. This procedure uses the NIS+ nisserver script. (You can also use the NIS+ command set to configure a replica server as described in "Using NIS+ Commands to Configure a Replica Server".)

Prerequisites to Running nisserver

Before you can run nisserver to create a replica:

Information You Need

You need:

How to Create a Root Replica

  1. To create a root replica, type the following command as superuser (root) on the NIS+ domain's root master server.


    master1# nisserver -R -d doc.com. -h client1
    This script sets up a NIS+ replica server for domain doc.com.
    Domain name  : doc.com.
    NIS+ server  : client1
    Is this information correct? (type 'y' to accept, 'n' to change)

    The -R option indicates that a replica should be configured. The -d option specifies the NIS+ domain name (doc.com., in this example). The -h option specifies the client machine (client1, in this example) that will become the root replica.

  2. Type y to continue.

    Typing n causes the script to prompt you for the correct information. (See "How to Change Incorrect Information" for what you need to do if you type n.)


    Is this information correct? (type 'y' to accept, 'n' to change) 
    y
    This script will set up machine "client1" as an NIS+ replica server for domain 
    doc.com. without NIS compatibility. The NIS+ server daemon, rpc.nisd, must 
    be running on client1 with the proper options to serve this domain. 
    Do you want to continue? (type 'y' to continue, 'n' to exit this script)
  3. Type y to continue.

    Typing n safely stops the script. The script will exit on its own if rpc.nisd is not running on the client machine.


    Do you want to continue? (type 'y' to continue, 'n' to exit this script)
    y
    The system client1 is now configured as a replica server for domain doc.com..
    The NIS+ server daemon, rpc.nisd, must be running on client1 with the proper 
    options to serve this domain. If you want to run this replica in NIS (YP) 
    compatibility mode, edit the /etc/init.d/rpc file on the replica server '
    to uncomment the line which sets EMULYP to "-Y". This will ensure that 
    rpc.nisd will boot in NIS-compatibility mode. Then, restart rpc.nisd with 
    the "-Y" option. These actions should be taken after this script completes.

    Note -

    The above notice refers to an optional step. You need to modify only the /etc/init.d/rpc file if you want the root replica to be NIS compatible and it is not now NIS compatible. That is, the file needs modification only if you want the root replica to fulfill NIS client requests and it was not already configured as an NIS-compatible server. See "Configuring a Client as an NIS+ Server" for more information on creating NIS-compatible servers.


  4. [Optional] Configure the replica to run in NIS (YP) compatibility mode.

    If you want this replica to run in NIS compatibility mode, follow these steps (a command sketch follows this list):

    1. Kill rpc.nisd.

    2. Edit the server's /etc/init.d/rpc file to uncomment the line that sets EMULYP to -Y.

      In other words, delete the # character from the start of the EMULYP line.

    3. Restart rpc.nisd.
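
    Taken together, the sequence on the replica might look like the following sketch. It assumes client1 is the replica, that rpc.nisd is installed in /usr/sbin as on a standard Solaris system, and that pgrep and pkill are available; verify the exact options against the rpc.nisd(1M) man page.


    client1# pgrep -x rpc.nisd
    client1# pkill -x rpc.nisd
    client1# vi /etc/init.d/rpc
    client1# /usr/sbin/rpc.nisd -Y

    The pgrep command confirms that the daemon is running, pkill stops it, the edit uncomments the line that sets EMULYP to "-Y", and the last command restarts rpc.nisd in NIS-compatibility mode.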

  5. Load your namespace data on to the new replica server.

    You can do this in two ways:

    • The preferred method of loading data on to a new replica server is to use the NIS+ backup and restore capabilities to back up the master server, then "restore" that data on to the new replica server. This step is described in detail in "How to Load Namespace Data--nisrestore Method".

    • Run nisping. Running nisping initiates a full resync of all NIS+ data from the master server to this new replica. If your namespace is large, this can take a long time, during which your master server is very busy and slow to respond and your new replica is unable to answer NIS+ requests. This step is described in detail in "How to Load Namespace Data--nisping Method". A command sketch of both methods follows this list.
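
    The following sketch shows both methods for the doc.com. example used in this chapter. The backup directory /var/master1_bakup is only illustrative, and the backup must be copied to, or mounted on, client1 before nisrestore is run; verify options against the nisbackup(1M), nisrestore(1M), and nisping(1M) man pages.


    master1# nisbackup -a /var/master1_bakup
    client1# nisrestore -a /var/master1_bakup

    With the nisping method, run nisping for the domain directory and its org_dir and groups_dir subdirectories:


    master1# nisping doc.com.
    master1# nisping org_dir.doc.com.
    master1# nisping groups_dir.doc.com.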

    When you have finished loading your namespace data, the machine client1 is now an NIS+ root replica. The new root replica can handle requests from the clients of the root domain. Because there are now two servers available to the domain, information requests can be fulfilled faster.
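
    To confirm that the new replica is listed, you can display the root directory object. The following check assumes the doc.com. example; niscat -o prints an object's internal representation, including its servers (see the niscat(1) man page).


    master1# niscat -o doc.com.

    The output should now show client1 as a replica alongside master1.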

    Using these procedures, you can create as many root replicas as you need. You can also use these procedures to create replica servers for subdomains.

How to Set Up Multihomed NIS+ Replica Servers

The procedure for setting up a multihomed NIS+ server is the same as setting up a single interface server. The only difference is that there are more interfaces that need to be defined in the hosts database (/etc/hosts and /etc/inet/ipnodes files, and NIS+ hosts and ipnodes tables). Once the host information is defined, use the nisclient and nisserver scripts to set up the multihomed NIS+ server.


Caution -

When setting up a multihomed NIS+ server, the server's primary name must be the same as the nodename for the system. This is a requirement of both Secure RPC and nisclient.

If these names differ, Secure RPC authentication will not work properly, causing NIS+ problems.
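
As a quick check, you can compare the system's nodename with the primary name in its hosts entry. This sketch assumes the hostB example used below; on standard Solaris systems the nodename is stored in /etc/nodename and reported by uname -n.


    hostB# uname -n
    hostB
    hostB# grep hostB /etc/hosts
    192.168.11.y hostB hostB-11
    192.168.12.x hostB hostB-12
    192.168.14.z hostB hostB-14

The first name following each address (the primary name) must match the nodename.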


This procedure applies to any multihomed NIS+ server other than the root master, including non-root master servers and replicas. The following example creates a replica for the root domain. For information about setting up a multihomed root master server, see "How to Set Up a Multihomed NIS+ Root Master Server".

  1. Add the server host information into the hosts or ipnodes file.

    For example, for the hostB system with three interfaces:


    192.168.11.y hostB hostB-11
    192.168.12.x hostB hostB-12
    192.168.14.z hostB hostB-14
     
  2. On the root master server, use either nispopulate or nisaddent to load the new host information into the hosts or ipnodes table.

    For example:


    hostA# nispopulate -F -d sun.com hosts

    where the example shows sun.com as the NIS+ root domain name. Issue the nispopulate command, specifying your own NIS+ root domain name.
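
    To confirm that the new entries were loaded, you can dump the NIS+ hosts table and look for the new host. This check assumes the sun.com domain and the hostB entries shown above:


    hostA# niscat hosts.org_dir.sun.com. | grep hostB

    Each of the three interface entries should appear in the output.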

  3. On the root master server, use the nisclient script to create the credential for the new client.

    For example:


    hostA# nisclient -c -d sun.com hostB

    where the example shows sun.com as the root domain name. Issue the nisclient command, specifying your own root domain name.

  4. On the new server (hostB in this example), use nisclient to initialize the machine as an NIS+ client, if it is not already one.

    For example:


    hostB# nisclient -i -d sun.com

    where the example shows sun.com as the root domain name. Issue the nisclient command, specifying your own root domain name.

  5. On the root master server, use nisserver to create a non-root master.

    For example:


    hostA# nisserver -M -d eng.sun.com -h hostB.sun.com.

    where the example shows eng.sun.com as the NIS+ domain name and hostB.sun.com. as the fully-qualified hostname of the NIS+ server. Issue the nisserver command, specifying your own NIS+ domain name and the fully-qualified hostname of the NIS+ server.

  6. On the root master server, use nisserver to set up a replica server.

    For example:


    hostA# nisserver -R -d sun.com -h hostB.sun.com.

    where the example shows sun.com as the NIS+ domain name and hostB.sun.com. as the fully-qualified hostname of the replica server. Issue the nisserver command, specifying your own NIS+ domain name and the fully-qualified hostname of the replica server.

After completing the steps for setting up a multihomed NIS+ replica server, the remainder of the setup is exactly the same as for a single-interface server.