Sun N1 Grid Engine 6.1 Installation Guide

Chapter 5 Upgrading From a Previous Release of N1 Grid Engine Software

This chapter describes the steps necessary to upgrade your existing N1 Grid Engine software to N1 Grid Engine 6.1 software.


Note –

The upgrade procedure can only upgrade your software from version 6.0 update 2 or later. If you are running an older version of the N1 Grid Engine software, such as 5.3, 6.0, or 6.0 update 1, first upgrade to version 6.0 update 10 and then upgrade again to version 6.1.


About Upgrading the Software

The upgrade procedure is nondestructive. It installs the N1 Grid Engine 6.1 software on the master host, reusing the cluster configuration information from the older version of the software. The older version of the software is not removed or modified in any way.


Note –

The LD_LIBRARY_PATH variable is not set by the N1 Grid Engine 6.1 software. Remove any existing LD_LIBRARY_PATH settings from your 6.0 environment before you start a 6.1 installation.

Before you begin the upgrade process, make sure that you source the existing sge-root/sge-cell/common/settings.sh or sge-root/sge-cell/common/settings.csh file.
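With Bourne or Korn shell syntax, sourcing the existing environment might look like the following sketch. The SGE_ROOT and SGE_CELL values are assumptions; substitute your own paths.

```shell
# Sketch: source the existing 6.0 settings file before starting the upgrade.
# SGE_ROOT and SGE_CELL are hypothetical values -- substitute your own.
SGE_ROOT=/opt/n1ge6
SGE_CELL=default
SETTINGS="$SGE_ROOT/$SGE_CELL/common/settings.sh"
if [ -r "$SETTINGS" ]; then
    . "$SETTINGS"          # Bourne/Korn shell syntax
else
    echo "settings file not found: $SETTINGS" >&2
fi
```

Under csh or tcsh, source the settings.csh file instead.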


Two upgrade scenarios are available: upgrading a cluster that uses classic or local Berkeley DB spooling, and upgrading a cluster that uses a Berkeley DB spooling server.

Before You Upgrade

Before you start any upgrade procedure, make sure that the cluster holds no running or pending jobs. After the upgrade completes, all jobs will be gone. You should also back up the cluster configuration. To back up the cluster configuration, use the backup functionality of the N1 Grid Engine installer (inst_sge -bup).
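A pre-upgrade check along these lines can confirm the cluster is drained before you back it up. This is only a sketch; it assumes the settings file has already been sourced so that qstat is on the PATH.

```shell
# Sketch: warn if any running or pending jobs remain before the upgrade.
if command -v qstat >/dev/null 2>&1; then
    JOBS=$(qstat -u '*' 2>/dev/null | wc -l)
    if [ "$JOBS" -gt 0 ]; then
        echo "cluster still holds jobs; drain it before upgrading" >&2
    else
        echo "cluster is empty; safe to run inst_sge -bup"
    fi
else
    echo "qstat not found; source the settings file first" >&2
fi
```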

Procedure: How to Upgrade to 6.1 Software Using Classic/Berkeley DB Spooling

  1. Shut down the entire cluster.

    Type the following command:


    % qconf -ke all -ks -km
    
  2. Back up your cluster.

    Type the following command:


    # inst_sge -bup
    
  3. If you use Berkeley DB, update the database structures.


    Note –

    You do not have to perform this step if you use classic spooling.


    Because of changes in the Berkeley DB software, the internal database structures have changed. To adjust the structures, follow these steps:

    1. On the qmaster host, type the following command as sgeadmin user:


      # $SGE_ROOT/utilbin/<arch>/db_dump -f /tmp/dump.out -h <db_home> sge
      

      Note –

      If you are using a Berkeley DB RPC Server, you must execute this command on the RPC Server host rather than on the qmaster host.


    2. Verify that the command executed correctly and that the file /tmp/dump.out is not empty.

    3. If the command succeeded, type the following commands as sgeadmin user to remove the previous Berkeley DB database files:


      # cd <db_home>
      # rm -f *
      

      Caution –

      Do not delete the directory. Only delete the files within the directory.
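The dump-verify-clear sequence above can be sketched as follows. The db_dump invocation is commented out because it must run as the sgeadmin user against your real database; DB_HOME is a hypothetical path.

```shell
# Sketch of the dump, verify, and clear sequence with placeholder paths.
DB_HOME=/var/spool/bdb          # hypothetical Berkeley DB home
DUMP=/tmp/dump.out
# "$SGE_ROOT/utilbin/$ARCH/db_dump" -f "$DUMP" -h "$DB_HOME" sge
: > "$DUMP"                     # stand-in file so the check below can run
if [ -s "$DUMP" ]; then
    echo "dump verified; the old database files may be removed"
    # (cd "$DB_HOME" && rm -f ./*)   # delete the files, never the directory
else
    echo "dump is empty -- do NOT delete the database files" >&2
fi
```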


  4. Extract the new binaries and common files to the $SGE_ROOT directory.

    Replace the old files with the new files.

  5. Restore the Berkeley DB database.


    Note –

    You do not have to perform this step if you use classic spooling.


    On the qmaster host, type the following command as sgeadmin user:


    # $SGE_ROOT/utilbin/<arch>/db_load -f /tmp/dump.out -h <db_home> sge
    
  6. Initiate the upgrade procedure.

    Type the following command:


    % ./inst_sge -upd
    

    Follow the instructions and answer the questions presented to you.

    The upgrade creates new settings and rc-script files. The old files are saved under the same filenames with a time stamp attached. The upgrade procedure starts the qmaster and scheduler daemons automatically. You must start the execution host daemons manually.

    After the upgrade procedure completes, check the newly created settings files and copy the new rc-scripts to your startup location (for example, /etc/init.d).

  7. Restart the execution daemons.

    Use the sgeexecd rc file.

    Upgrade is complete.
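The final restart step might look like the following on a Solaris-style system. The paths are assumptions; adjust them to your actual startup location.

```shell
# Sketch: install the regenerated rc-script and restart the execution daemon.
SGE_ROOT=/opt/n1ge6             # hypothetical install root
SGE_CELL=default
RC="$SGE_ROOT/$SGE_CELL/common/sgeexecd"
if [ -x "$RC" ]; then
    cp "$RC" /etc/init.d/sgeexecd   # example startup location; adjust per OS
    /etc/init.d/sgeexecd start
else
    echo "rc-script not found: $RC" >&2
fi
```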

Procedure: How to Upgrade to 6.1 Software Using Berkeley DB Server


64-bit SPARC only –

For 64-bit Solaris systems, no 64-bit Berkeley DB server was available, so the 32-bit server was used instead. To perform a backup or a restore, use the 32-bit db_dump and db_load binaries; the Solaris 32-bit packages must also be installed. For all N1 Grid Engine releases and patches earlier than 6.0 Update 9, use the ./inst_sge -bup command to perform a backup.

In N1 Grid Engine 6.0 Update 10, the backup still looks for the libdb-4.2.so file, which is not available. This causes the backup to fail. In this case, type the following command to enable the backup to function:


% touch $SGE_ROOT/lib/sol-sparc/libdb-4.2.so

  1. Shut down the entire cluster.

    Type the following command:


    % qconf -ke all -ks -km
    
  2. Shut down the Berkeley DB server.

    Type the following command:


    % sgebdb stop
    
  3. Back up your cluster.

    Type the following command:


    # inst_sge -bup
    
  4. Update the database structures.

    Because of changes in the Berkeley DB software, the internal database structures have changed. To adjust the structures, follow these steps:

    1. On the qmaster host, type the following command as sgeadmin user:


      # $SGE_ROOT/utilbin/<arch>/db_dump -f /tmp/dump.out -h <db_home> sge
      

      Note –

      If you are using a Berkeley DB RPC Server, you must execute this command on the RPC Server host rather than on the qmaster host.


    2. Verify that the command executed correctly and that the file /tmp/dump.out is not empty.

    3. If the command succeeded, type the following commands as sgeadmin user to remove the previous Berkeley DB database files:


      # cd <db_home>
      # rm -f *
      

      Caution –

      Do not delete the directory. Only delete the files within the directory.


  5. Extract the new binaries and common files to the $SGE_ROOT directory.

    Replace the old files with the new files.

  6. Restore the Berkeley DB database.

    On the qmaster host, type the following command as sgeadmin user:


    # $SGE_ROOT/utilbin/<arch>/db_load -f /tmp/dump.out -h <db_home> sge
    

    Note –

    If you are using a Berkeley DB RPC Server, you must execute this command on the RPC Server host rather than on the qmaster host.


  7. Initiate the upgrade procedure.

    Type the following command:


    % ./inst_sge -upd
    

    Follow the instructions and answer the questions presented to you.

    The upgrade creates new settings and rc-script files. The old files are saved under the same filenames with a time stamp attached. The upgrade procedure starts the qmaster and scheduler daemons automatically. You must start the execution host daemons manually.

    After the upgrade procedure completes, check the newly created settings files and copy the new rc-scripts to your startup location (for example, /etc/init.d).

  8. Restart the execution daemons.

    Use the sgeexecd rc file.

    Upgrade is complete.

Procedure: How to Upgrade the Software from 5.3 to 6.0 Update 2

Before You Begin

Please review Plan the Installation for the information that you will need during the upgrade process. If you have decided to use an administrative user, as described in User Names, you should create that user now. This procedure assumes that you have already extracted the grid engine software, as described in Loading the Distribution Files on a Workstation.


Note –

While you can run N1 Grid Engine 6.0 software concurrently with your older version of grid engine software, it is best to run the upgrade procedure when no jobs are running.


  1. Log in to the master host as root.

  2. Load the distribution files.

    For details, see Loading the Distribution Files on a Workstation.

  3. Ensure that you have set the $SGE_ROOT environment variable by typing:


    # echo $SGE_ROOT
    
    • If the $SGE_ROOT environment variable is not set, set it now by typing:


      # SGE_ROOT=sge-root; export SGE_ROOT
      
  4. Change to the installation directory, sge-root.

    • If the directory where the installation files reside is visible from the master host, change directories (cd) to the installation directory sge-root, and then proceed to Step 5.

    • If the directory is not visible and cannot be made visible, do the following:

      1. Create a local installation directory, sge-root, on the master host.

      2. Copy the installation files to the local installation directory sge-root across the network (for example, by using ftp or rcp).

      3. Change directories (cd) to the local sge-root directory.

  5. Run the upgrade command on the master host, and respond to the prompts.

    This command starts the master host installation procedure. You are asked several questions, and you might be required to run some administrative actions.

    The syntax of the upgrade command is:

    inst_sge -upd 5.3-sge-root-directory 5.3-cell-name

    In the following example, the 5.3 sge-root directory is /sge/gridware and the cell name is default.


    # ./inst_sge -upd /sge/gridware default
    Welcome to the Grid Engine Upgrade
    ----------------------------------
    
    Before you continue with the installation please read these hints:
    
       - Your terminal window should have a size of at least
         80x24 characters
    
       - The INTR character is often bound to the key Ctrl-C.
         The term >Ctrl-C< is used during the upgrade if you
         have the possibility to abort the upgrade
    
    The upgrade procedure will take approximately 5-10 minutes.
    After this upgrade you will get a running qmaster and schedd with
    the configuration of your old installation. If the upgrade was
    successfully completed it is necessary to install your execution hosts
    with the install_execd script.
    
    Hit <RETURN> to continue >> 
  6. Choose an administrative account owner.

    In the following example, the value of sge-root is /opt/n1ge6, and the administrative user is sgeadmin.


    Grid Engine admin user account
    ------------------------------
    
    The current directory
    
       /opt/n1ge6
    
    is owned by user
    
       sgeadmin
    
    If user >root< does not have write permissions in this directory on *all*
    of the machines where Grid Engine will be installed (NFS partitions not
    exported for user >root< with read/write permissions) it is recommended to
    install Grid Engine that all spool files will be created under the user id
    of user >sgeadmin<.
    
    IMPORTANT NOTE: The daemons still have to be started by user >root<.
    
    Do you want to install Grid Engine as admin user >sgeadmin< (y/n) [y] >>
  7. Verify the sge-root directory setting.

    In the following example, the value of sge-root is /opt/n1ge6.


    Checking $SGE_ROOT directory
    ----------------------------
    
    The Grid Engine root directory is:
    
       $SGE_ROOT = /opt/n1ge6
    
    If this directory is not correct (e.g. it may contain an automounter
    prefix) enter the correct path to this directory or hit <RETURN>
    to use default [/opt/n1ge6] >>
  8. Set up the TCP/IP services for the grid engine software.

    1. You will be notified if the TCP/IP services have not been configured.


      Grid Engine TCP/IP service >sge_qmaster<
      ----------------------------------------
      
      There is no service >sge_qmaster< available in your >/etc/services< file
      or in your NIS/NIS+ database.
      
      You may add this service now to your services database or choose a port number.
      It is recommended to add the service now. If you are using NIS/NIS+ you should
      add the service at your NIS/NIS+ server and not to the local >/etc/services<
      file.
      
      Please add an entry in the form
      
         sge_qmaster <port_number>/tcp
      
      to your services database and make sure to use an unused port number.
      
      Please add the service now or press <RETURN> to go to entering a port number >> 
    2. Start a new terminal session or window to add the information to the /etc/services file or your NIS maps.

    3. Add the correct ports to the /etc/services file or your NIS services map, as described in Network Services.

      The following example shows how you might edit your /etc/services file.


      ...
      sge_qmaster     536/tcp
      sge_execd       537/tcp
      

      Note –

      In this example, the entries for both sge_qmaster and sge_execd are added to /etc/services. Subsequent steps in this example assume that both entries have been made.


      Save your changes.
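Before returning to the installer, you can confirm that the new entry is visible. On systems that provide getent, it consults /etc/services and NIS in the configured order; this is a sketch, not part of the installer.

```shell
# Sketch: check whether the sge_qmaster service entry is resolvable yet.
if command -v getent >/dev/null 2>&1; then
    getent services sge_qmaster || echo "sge_qmaster not registered yet" >&2
else
    grep '^sge_qmaster' /etc/services || echo "no local entry found" >&2
fi
```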

    4. Return to the window where the installation script is running.


      Please add the service now or press <RETURN> to go to entering a port number >> 

      Press <RETURN>. You will see the following output:


      sge_qmaster 536
      
      Service >sge_qmaster< is now available.
      
      Hit <RETURN> to continue >> 

      Grid Engine TCP/IP service >sge_execd<
      --------------------------------------
      
      Using the service
      
         sge_execd
      
      for communication with Grid Engine.
      
      Hit <RETURN> to continue >> 
  9. Enter the name of your cell.

    The use of grid engine system cells is described in Cells.


    Grid Engine cells
    -----------------
    
    Grid Engine supports multiple cells.
    
    If you are not planning to run multiple Grid Engine clusters or if you don't
    know yet what is a Grid Engine cell it is safe to keep the default cell name
    
       default
    
    If you want to install multiple cells you can enter a cell name now.
    
    The environment variable
    
       $SGE_CELL=<your_cell_name>
    
    will be set for all further Grid Engine commands.
    
    Enter cell name [default] >> 
    • If you have decided to use cells, then enter the cell name now.

    • If you have decided not to use cells, then press <RETURN> to continue.


      Using cell >default<. 
      Hit <RETURN> to continue >> 

    Press <RETURN> to continue.

  10. Specify a spool directory.

    For guidelines on disk space requirements for the spool directory, see Disk Space Requirements. For information on where the spool directory is installed, see Spool Directories Under the Root Directory.


    Grid Engine qmaster spool directory
    -----------------------------------
    
    The qmaster spool directory is the place where the qmaster daemon stores
    the configuration and the state of the queuing system.
    
    The admin user >sgeadmin< must have read/write access
    to the qmaster spool directory.
    
    If you will install shadow master hosts or if you want to be able to start
    the qmaster daemon on other hosts (see the corresponding sectionin the
    Grid Engine Installation and Administration Manual for details) the account
    on the shadow master hosts also needs read/write access to this directory.
    
    The following directory
    
    [/opt/n1ge6/default/spool/qmaster]
    
    will be used as qmaster spool directory by default!
    
    Do you want to select another qmaster spool directory (y/n) [n] >> 
    • If you want to accept the default spool directory, press <RETURN> to continue.

    • If you do not want to accept the default spool directory, then answer y.

      In the following example, the /my/spool directory is specified as the master host spool directory.


      Do you want to select another qmaster spool directory (y/n) [n] >> y
      
      Please enter a qmaster spool directory now! >>/my/spool
      
  11. Set the correct file permissions.


    Verifying and setting file permissions
    --------------------------------------
    
    Did you install this version with >pkgadd< or did you already
    verify and set the file permissions of your distribution (y/n) [y] >> n
    
    Verifying and setting file permissions
    --------------------------------------
    
    We may now verify and set the file permissions of your Grid Engine
    distribution.
    
    This may be useful since due to unpacking and copying of your distribution
    your files may be unaccessible to other users.
    
    We will set the permissions of directories and binaries to
    
       755 - that means executable are accessible for the world
    
    and for ordinary files to
    
       644 - that means readable for the world
    
    Do you want to verify and set your file permissions (y/n) [y] >> y
    
    Verifying and setting file permissions and owner in >3rd_party<
    Verifying and setting file permissions and owner in >bin<
    Verifying and setting file permissions and owner in >ckpt<
    Verifying and setting file permissions and owner in >examples<
    Verifying and setting file permissions and owner in >install_execd<
    Verifying and setting file permissions and owner in >install_qmaster<
    Verifying and setting file permissions and owner in >mpi<
    Verifying and setting file permissions and owner in >pvm<
    Verifying and setting file permissions and owner in >qmon<
    Verifying and setting file permissions and owner in >util<
    Verifying and setting file permissions and owner in >utilbin<
    Verifying and setting file permissions and owner in >catman<
    Verifying and setting file permissions and owner in >doc<
    Verifying and setting file permissions and owner in >man<
    Verifying and setting file permissions and owner in >inst_sge<
    Verifying and setting file permissions and owner in >bin<
    Verifying and setting file permissions and owner in >lib<
    Verifying and setting file permissions and owner in >utilbin<
    
    Your file permissions were set
    
    Hit <RETURN> to continue >> 
  12. Specify whether all of your grid engine system hosts are located in a single DNS domain.


    Select default Grid Engine hostname resolving method
    ----------------------------------------------------
    
    Are all hosts of your cluster in one DNS domain? If this is
    the case the hostnames
    
       >hostA< and >hostA.foo.com<
    
    would be treated as eqal, because the DNS domain name >foo.com<
    is ignored when comparing hostnames.
    
    Are all hosts of your cluster in a single DNS domain (y/n) [y] >>   
    • If all of your grid engine system hosts are located in a single DNS domain, then answer y.


      Are all hosts of your cluster in a single DNS domain (y/n) [y] >> y 
      
      Ignoring domainname when comparing hostnames.
      
      Hit <RETURN> to continue >> 
    • If all of your grid engine system hosts are not located in a single DNS domain, then answer n.


      Are all hosts of your cluster in a single DNS domain (y/n) [y] >> n 
      
      The domainname is not ignored when comparing hostnames.
      
      Hit <RETURN> to continue >> 
      
      Default domain for hostnames
      ----------------------------
      
      Sometimes the primary hostname of machines returns the short hostname
      without a domain suffix like >foo.com<.
      
      This can cause problems with getting load values of your execution hosts.
      If you are using DNS or you are using domains in your >/etc/hosts< file or
      your NIS configuration it is usually safe to define a default domain
      because it is only used if your execution hosts return the short hostname
      as their primary name.
      
      If your execution hosts reside in more than one domain, the default domain
      parameter must be set on all execution hosts individually.
      
      Do you want to configure a default domain (y/n) [y] >> 

      Press <RETURN> to continue.

      1. If you want to specify a default domain, then answer y.

        In the following example, sun.com is specified as the default domain.


        Do you want to configure a default domain (y/n) [y] >> y
        
        
        Please enter your default domain >> sun.com
        
        Using >sun.com< as default domain. Hit <RETURN> to continue >>
      2. If you do not want to specify a default domain, then answer n.

        In the following example, no default domain is configured.


        Do you want to configure a default domain (y/n) [y] >> n
        
  13. Press <RETURN> to continue.


    Making directories
    ------------------
    
    creating directory: default/common
    creating directory: /opt/n1ge6/default/spool/qmaster
    creating directory: /opt/n1ge6/default/spool/qmaster/job_scripts
    Hit <RETURN> to continue >> 
  14. Specify whether you want to use classic spooling or Berkeley DB.

    For more information on how to determine the type of spooling mechanism you want, please see Database Server and Spooling Host.


    Setup spooling
    --------------
    Your SGE binaries are compiled to link the spooling libraries
    during runtime (dynamically). So you can choose between Berkeley DB 
    spooling and Classic spooling method.
    Please choose a spooling method (berkeleydb|classic) [berkeleydb] >> 
    • If you want to specify Berkeley DB spooling, press <RETURN> to continue.


      Please choose a spooling method (berkeleydb|classic) [berkeleydb] >> 
      
      The Berkeley DB spooling method provides two configurations!
      
      1) Local spooling:
      The Berkeley DB spools into a local directory on this host (qmaster host)
      This setup is faster, but you can't setup a shadow master host
      
      2) Berkeley DB Spooling Server:
      If you want to setup a shadow master host, you need to use
      Berkeley DB Spooling Server!
      In this case you have to choose a host with a configured RPC service.
      The qmaster host connects via RPC to the Berkeley DB. This setup is more
      failsafe, but results in a clear potential security hole. RPC communication
      (as used by Berkeley DB) can be easily compromised. Please only use this
      alternative if your site is secure or if you are not concerned about
      security. Check the installation guide for further advice on how to achieve
      failsafety without compromising security.
      
      Do you want to use a Berkeley DB Spooling Server? (y/n) [n] >> 
      • If you want to use a Berkeley DB spooling server, enter y.


        Do you want to use a Berkeley DB Spooling Server? (y/n) [n] >> y
        
        Berkeley DB Setup
        
        -----------------
        Please, log in to your Berkeley DB spooling host and execute "inst_sge -db"
        Please do not continue, before the Berkeley DB installation with
        "inst_sge -db" is completed, continue with <RETURN>
        

        Note –

        Do not press <RETURN> until you have completed the Berkeley DB installation on the spooling server.


        1. Start a new terminal session or window.

        2. Log in to the spooling server.

        3. Install the software, as described in How to Install the Berkeley DB Spooling Server.

        4. After you have installed the software on the spooling server, return to the master installation window, and press <RETURN> to continue.

        5. Enter the name of the spooling server.

          In the following example, vector is the host name of the spooling server.


          Berkeley Database spooling parameters
          -------------------------------------
          
          Please enter the name of your Berkeley DB Spooling Server! >> vector
          
        6. Enter the name of the spooling directory.

          In the following example, /opt/n1ge6/default/spooldb is the spooling directory.


          Please enter the Database Directory now!
          
          Default: [/opt/n1ge6/default/spooldb] >> 
          Dumping bootstrapping information
          Initializing spooling database
          
          Hit <RETURN> to continue >> 
      • If you do not want to use a Berkeley DB spooling server, enter n.


        Do you want to use a Berkeley DB Spooling Server? (y/n) [n] >> n
        
        
        Hit <RETURN> to continue >> 

        Berkeley Database spooling parameters
        -------------------------------------
        
        Please enter the Database Directory now, even if you want to spool locally
        it is necessary to enter this Database Directory. 
        
        Default: [/opt/n1ge6/default/spool/spooldb] >> 

        Specify an alternate directory, or press <RETURN> to continue.


        creating directory: /opt/n1ge6/default/spool/spooldb
        Dumping bootstrapping information
        Initializing spooling database
        
        Hit <RETURN> to continue >> 
    • If you want to specify classic spooling, then enter classic.


      Please choose a spooling method (berkeleydb|classic) [berkeleydb] >> classic
      

      Dumping bootstrapping information
      Initializing spooling database
      
      Hit <RETURN> to continue >> 
  15. Enter a group ID range.

    For more information, see Group IDs.


    Grid Engine group id range
    --------------------------
    
    When jobs are started under the control of Grid Engine an additional group id
    is set on platforms which do not support jobs. This is done to provide maximum
    control for Grid Engine jobs.
    
    This additional UNIX group id range must be unused group id's in your system.
    Each job will be assigned a unique id during the time it is running.
    Therefore you need to provide a range of id's which will be assigned
    dynamically for jobs.
    
    The range must be big enough to provide enough numbers for the maximum number
    of Grid Engine jobs running at a single moment on a single host. E.g. a range
    like >20000-20100< means, that Grid Engine will use the group ids from
    20000-20100 and provides a range for 100 Grid Engine jobs at the same time
    on a single host.
    
    You can change at any time the group id range in your cluster configuration.
    
    Please enter a range >> 20000-20100
    
    Using >20000-20100< as gid range. Hit <RETURN> to continue >> 
  16. Verify the spooling directory for the execution daemon.

    For information on spooling, see Spool Directories Under the Root Directory.


    Grid Engine cluster configuration
    ---------------------------------
    
    Please give the basic configuration parameters of your Grid Engine
    installation:
    
       <execd_spool_dir>
    
    The pathname of the spool directory of the execution hosts. User >sgeadmin<
    must have the right to create this directory and to write into it.
    
    Default: [/opt/n1ge6/default/spool] >>  
  17. Enter the email address of the user who should receive problem reports.

    In this example, the user who will receive problem reports is me@my.domain.


    Grid Engine cluster configuration (continued)
    ---------------------------------------------
    
    <administator_mail>
    
    The email address of the administrator to whom problem reports are sent.
    
    It's is recommended to configure this parameter. You may use >none<
    if you do not wish to receive administrator mail.
    
    Please enter an email address in the form >user@foo.com<.
    
    Default: [none] >> me@my.domain
    

    Once you answer this question, the installation process is complete. Several screens of information will be displayed before the script exits. The commands that are noted in those screens are also documented in this chapter.

    The upgrade process uses your existing configuration to customize the installation. You will see output similar to the following:


    Creating >act_qmaster< file
    Creating >sgemaster< script
    Creating >sgeexecd< script
    creating directory: /tmp/centry
    Reading in complex attributes.
    Reading in administrative hosts.
    Reading in execution hosts.
    Reading in submit hosts.
    Reading in users:
        User "as114086".
        User "md121042".
    Reading in usersets:
        Userset "defaultdepartment".
        Userset "deadlineusers".
        Userset "admin".
        Userset "bchem1".
        Userset "bchem2".
        Userset "bchem3".
        Userset "bchem4".
        Userset "damtp7".
        Userset "damtp8".
        Userset "damtp9".
        Userset "econ1".
        Userset "staff".
    Reading in calendars:
        Calendar "always_disabled".
        Calendar "always_suspend".
        Calendar "test".
    Reading in projects:
        Project "ap1".
        Project "ap2".
        Project "high".
        Project "low".
        Project "p1".
        Project "p2".
        Project "staff".
    Reading in parallel environments:
        PE "bench_tight".
        PE "make".
    Creating settings files for >.profile/.cshrc<

    Caution – Caution –

    Do not rename any of the binaries of the distribution. If you use any scripts or tools in your cluster that monitor the daemons, make sure to check for the new names.


  18. Create the environment variables for use with the grid engine software.


    Note –

    If no cell name was specified during installation, the value of cell is default.


    • If you are using a C shell, type the following command:


      % source sge-root/cell/common/settings.csh
      
    • If you are using a Bourne shell or Korn shell, type the following command:


      $ . sge-root/cell/common/settings.sh
      
  19. Install or upgrade the execution hosts.

    There are two ways to put the N1 Grid Engine 6.1 software on your execution hosts: installation or upgrade. If you install the execution hosts, the local spool directory configuration and some execd parameters are overwritten. If you upgrade the execution hosts, those files remain untouched.

    • Upgrade the software on the execution host.

      You need to log into each execution host, and run the following command:


      # sge-root/inst_sge -x -upd
      
    • Install the software on the execution host.

      1. If you only have a few execution hosts, you can install them interactively.

        You need to log into each execution host, and run the following command:


        # sge-root/inst_sge -x
        

        Complete instructions for installing execution hosts interactively are in How to Install Execution Hosts.

      2. If you have a large number of execution hosts, you should consider installing them non-interactively.

        Instructions for installing execution hosts in an automated way are in Using the inst_sge Utility and a Configuration Template.

  20. If you have configured load sensors on your execution hosts, you will need to copy these load sensors to the new directory location.

  21. Check your complexes.

    Both the structure of complexes and the rules for configuring complexes have changed. You can use qconf -sc to list your complexes. Review the log file that was generated during the master host upgrade, update.pid. The update.pid file is placed in the master host spool directory, which is sge-root/cell/spool/ by default.

    If necessary, you can use qconf -mc to reconfigure your complexes. For details, see Chapter 3, Configuring Complex Resource Attributes, in Sun N1 Grid Engine 6.1 Administration Guide.
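The post-upgrade check might be scripted as follows. The spool path is the documented default, and the SGE_ROOT and SGE_CELL values are assumptions; adjust them for your cell.

```shell
# Sketch: locate the upgrade log and list the complexes for review.
SGE_ROOT=/opt/n1ge6             # hypothetical install root
SGE_CELL=default
SPOOL="$SGE_ROOT/$SGE_CELL/spool"
LOG=$(ls "$SPOOL"/update.* 2>/dev/null | head -1)
if [ -n "$LOG" ]; then
    cat "$LOG"                  # review what the upgrade changed
fi
# qconf -sc                     # list complexes; qconf -mc edits them
```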

  22. Reconfigure your queues.

    During the upgrade process, a single default cluster queue is created. Within this queue you will find all of your installed execution hosts. It is recommended that you reconfigure your queues. For details, see Configuring Queues in Sun N1 Grid Engine 6.1 Administration Guide.