Sun Cluster Software Installation Guide for Solaris OS

Installing the Software

This section provides information and procedures to install software on the cluster nodes.

The following task map lists the tasks that you perform to install software on multiple-node or single-node clusters. Complete the procedures in the order that is indicated.

Table 2–1 Task Map: Installing the Software

1. Plan the layout of your cluster configuration and prepare to install software.
   Instructions: How to Prepare for Cluster Software Installation

2. (Optional) Install Cluster Control Panel (CCP) software on the administrative console.
   Instructions: How to Install Cluster Control Panel Software on an Administrative Console

3. Install the Solaris OS on all nodes.
   Instructions: How to Install Solaris Software

4. (Optional) SPARC: Install Sun StorEdge Traffic Manager software.
   Instructions: SPARC: How to Install Sun Multipathing Software

5. (Optional) SPARC: Install VERITAS File System software.
   Instructions: SPARC: How to Install VERITAS File System Software

6. Install Sun Cluster software packages and any Sun Java System data services for the Solaris 8 or Solaris 9 OS that you will use.
   Instructions: How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer)

7. Set up directory paths.
   Instructions: How to Set Up the Root Environment

8. Establish the cluster or additional cluster nodes.
   Instructions: Establishing the Cluster

Procedure: How to Prepare for Cluster Software Installation

Before you begin to install software, make the following preparations.

Steps
  1. Ensure that the hardware and software that you choose for your cluster configuration are supported for this release of Sun Cluster software.

    Contact your Sun sales representative for the most current information about supported cluster configurations.

  2. Read the following manuals for information that can help you plan your cluster configuration and prepare your installation strategy.

  3. Have available all related documentation, including third-party documents.

    The following is a partial list of products whose documentation you might need to reference during cluster installation:

    • Solaris OS

    • Solstice DiskSuite or Solaris Volume Manager software

    • Sun StorEdge QFS software

    • SPARC: VERITAS Volume Manager

    • SPARC: Sun Management Center

    • Third-party applications

  4. Plan your cluster configuration.


    Caution –

    Plan your cluster installation completely. Identify requirements for all data services and third-party products before you begin Solaris and Sun Cluster software installation. Failure to do so might result in installation errors that require that you completely reinstall the Solaris and Sun Cluster software.

    For example, the Oracle Real Application Clusters Guard option of Oracle Real Application Clusters has special requirements for the hostnames that you use in the cluster. Another example with special requirements is Sun Cluster HA for SAP. You must accommodate these requirements before you install Sun Cluster software because you cannot change hostnames after you install Sun Cluster software.

    Also note that neither Oracle Real Application Clusters nor Sun Cluster HA for SAP is supported for use in x86 based clusters.


  5. Obtain all necessary patches for your cluster configuration.

    See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.

    1. Copy the patches that are required for Sun Cluster into a single directory.

      The directory must be on a file system that is accessible by all nodes. The default patch directory is /var/cluster/patches/.


      Tip –

      After you install Solaris software on a node, you can view the /etc/release file to see the exact version of Solaris software that is installed.


    2. (Optional) If you are using SunPlex Installer, you can create a patch-list file.

      If you specify a patch-list file, SunPlex Installer installs only the patches that are listed in that file. For information about creating a patch-list file, refer to the patchadd(1M) man page.

    3. Record the path to the patch directory.
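    The patch-staging substeps above can be sketched as a short Bourne-shell fragment. The function name, the commented-out staging source path, and the record file are illustrative only; the default directory is the documented /var/cluster/patches/.

```shell
#!/bin/sh
# Sketch: stage Sun Cluster patches in one directory that is accessible
# by all nodes, then record the directory path for later steps.
stage_patches() {
    patchdir=${1:-/var/cluster/patches}     # documented default location
    mkdir -p "$patchdir"
    # Copy patches from your staging area (hypothetical source path):
    # cp /net/installserver/export/patches/* "$patchdir"/
    # Record the patch-directory path for use during installation:
    echo "$patchdir" > "$patchdir/.patchdir-path"
}
```

    For example, `stage_patches /var/cluster/patches` creates the default directory and writes its path into a record file inside it.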

Next Steps

If you want to use Cluster Control Panel software to connect from an administrative console to your cluster nodes, go to How to Install Cluster Control Panel Software on an Administrative Console.

Otherwise, choose the Solaris installation procedure to use.

Procedure: How to Install Cluster Control Panel Software on an Administrative Console


Note –

You are not required to use an administrative console. If you do not use an administrative console, perform administrative tasks from one designated node in the cluster.


This procedure describes how to install the Cluster Control Panel (CCP) software on an administrative console. The CCP provides a single interface from which to start the cconsole(1M), ctelnet(1M), and crlogin(1M) tools. Each of these tools provides a multiple-window connection to a set of nodes, as well as a common window. You can use the common window to send input to all nodes at one time.

You can use any desktop machine that runs the Solaris 8 or Solaris 9 OS as an administrative console. You can also use the administrative console as a documentation server. If you are using Sun Cluster on a SPARC based system, you can use the administrative console as a Sun Management Center console or server as well. See the Sun Management Center documentation for information about how to install Sun Management Center software. See the Sun Cluster 3.1 8/05 Release Notes for Solaris OS for additional information about how to install Sun Cluster documentation.

Before You Begin

Ensure that a supported version of the Solaris OS and any Solaris patches are installed on the administrative console. All platforms require at least the End User Solaris Software Group.

Steps
  1. Become superuser on the administrative console.

  2. Insert the Sun Cluster 2 of 2 CD-ROM in the CD-ROM drive of the administrative console.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.

  3. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 8 for Solaris 8, 9 for Solaris 9, or 10 for Solaris 10.


    # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
    
  4. Install the SUNWccon package.


    # pkgadd -d . SUNWccon
    
  5. (Optional) Install the SUNWscman package.


    # pkgadd -d . SUNWscman
    

    When you install the SUNWscman package on the administrative console, you can view Sun Cluster man pages from the administrative console before you install Sun Cluster software on the cluster nodes.

  6. (Optional) Install the Sun Cluster documentation packages.


    Note –

    If you do not install the documentation on your administrative console, you can still view HTML or PDF documentation directly from the CD-ROM. Use a web browser to view the Solaris_arch/Product/sun_cluster/index.html file on the Sun Cluster 2 of 2 CD-ROM, where arch is sparc or x86.


    1. Determine whether the SUNWsdocs package is already installed on the administrative console.


      # pkginfo | grep SUNWsdocs
      application SUNWsdocs     Documentation Navigation for Solaris 9

      If the SUNWsdocs package is not yet installed, you must install it before you install the documentation packages.

    2. Choose the Sun Cluster documentation packages to install.

      The following documentation collections are available in both HTML and PDF format:

      Sun Cluster 3.1 9/04 Software Collection for Solaris OS (SPARC Platform Edition)
        HTML package: SUNWscsdoc    PDF package: SUNWpscsdoc

      Sun Cluster 3.1 9/04 Software Collection for Solaris OS (x86 Platform Edition)
        HTML package: SUNWscxdoc    PDF package: SUNWpscxdoc

      Sun Cluster 3.x Hardware Collection for Solaris OS (SPARC Platform Edition)
        HTML package: SUNWschw      PDF package: SUNWpschw

      Sun Cluster 3.x Hardware Collection for Solaris OS (x86 Platform Edition)
        HTML package: SUNWscxhw     PDF package: SUNWpscxhw

      Sun Cluster 3.1 9/04 Reference Collection for Solaris OS
        HTML package: SUNWscref     PDF package: SUNWpscref

    3. Install the SUNWsdocs package, if not already installed, and your choice of Sun Cluster documentation packages.


      Note –

      All documentation packages have a dependency on the SUNWsdocs package. The SUNWsdocs package must exist on the system before you can successfully install a documentation package on that system.



      # pkgadd -d . SUNWsdocs pkg-list
      
  7. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    
  8. Create an /etc/clusters file on the administrative console.

    Add your cluster name and the physical node name of each cluster node to the file.


    # vi /etc/clusters
    clustername node1 node2
    

    See the /opt/SUNWcluster/bin/clusters(4) man page for details.

  9. Create an /etc/serialports file.

    Add an entry for each node in the cluster to the file. Specify the physical node name, the hostname of the console-access device, and the port number. Examples of a console-access device are a terminal concentrator (TC), a System Service Processor (SSP), and a Sun Fire system controller.


    # vi /etc/serialports
    node1 ca-dev-hostname port
    node2 ca-dev-hostname port
    
    node1, node2

    Physical names of the cluster nodes

    ca-dev-hostname

    Hostname of the console-access device

    port

    Serial port number

    Note these special instructions to create an /etc/serialports file:

    • For a Sun Fire 15000 system controller, use telnet(1) port number 23 for the serial port number of each entry.

    • For all other console-access devices, use the telnet serial port number, not the physical port number. To determine the telnet serial port number, add 5000 to the physical port number. For example, if a physical port number is 6, the telnet serial port number is 5006.

    • For Sun Enterprise 10000 servers, also see the /opt/SUNWcluster/bin/serialports(4) man page for details and special considerations.
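    The port-number rule above (telnet serial port = physical port + 5000) can be sketched as a small helper that prints one /etc/serialports line. The function name and the node and device names in the example are hypothetical.

```shell
#!/bin/sh
# Sketch: build one /etc/serialports entry for a terminal concentrator.
# The telnet serial port number is the physical port number plus 5000.
serialports_entry() {
    node=$1; cadev=$2; physport=$3
    echo "$node $cadev $((physport + 5000))"
}
```

    For example, `serialports_entry phys-node1 tc01 6` prints `phys-node1 tc01 5006`, matching the worked example in the text. For a Sun Fire 15000 system controller, you would instead write the fixed port number 23.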

  10. (Optional) For convenience, set the directory paths on the administrative console.

    1. Add the /opt/SUNWcluster/bin/ directory to the PATH.

    2. Add the /opt/SUNWcluster/man/ directory to the MANPATH.

    3. If you installed the SUNWscman package, also add the /usr/cluster/man/ directory to the MANPATH.
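    The three path settings above might look like the following Bourne-shell fragment in the root .profile on the administrative console (use the setenv equivalents for csh); the fallback MANPATH default shown is an assumption.

```shell
# Candidate root .profile additions on the administrative console.
PATH=$PATH:/opt/SUNWcluster/bin
MANPATH=${MANPATH:-/usr/share/man}:/opt/SUNWcluster/man
# Only if you installed the SUNWscman package:
MANPATH=$MANPATH:/usr/cluster/man
export PATH MANPATH
```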

  11. Start the CCP utility.


    # /opt/SUNWcluster/bin/ccp &
    

    Click the cconsole, crlogin, or ctelnet button in the CCP window to launch that tool. Alternatively, you can start any of these tools directly. For example, to start ctelnet, type the following command:


    # /opt/SUNWcluster/bin/ctelnet &
    

    See the procedure “How to Remotely Log In to Sun Cluster” in Beginning to Administer the Cluster in Sun Cluster System Administration Guide for Solaris OS for additional information about how to use the CCP utility. Also see the ccp(1M) man page.

Next Steps

Determine whether the Solaris OS is already installed to meet Sun Cluster software requirements.

Procedure: How to Install Solaris Software

Follow these procedures to install the Solaris OS on each node in the cluster or to install the Solaris OS on the master node that you will flash archive for a JumpStart installation. See How to Install Solaris and Sun Cluster Software (JumpStart) for more information about JumpStart installation of a cluster.


Tip –

To speed installation, you can install the Solaris OS on each node at the same time.


If your nodes are already installed with the Solaris OS but do not meet Sun Cluster installation requirements, you might need to reinstall the Solaris software. Follow the steps in this procedure to ensure subsequent successful installation of Sun Cluster software. See Planning the Solaris OS for information about required root-disk partitioning and other Sun Cluster installation requirements.

Before You Begin

Perform the following tasks:

Steps
  1. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    • If Cluster Control Panel (CCP) software is installed and configured on your administrative console, use the cconsole(1M) utility to display the individual console screens.

      Use the following command to start the cconsole utility:


      # /opt/SUNWcluster/bin/cconsole clustername &
      

      The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.

    • If you do not use the cconsole utility, connect to the consoles of each node individually.

  2. Install the Solaris OS as instructed in your Solaris installation documentation.


    Note –

    You must install all nodes in a cluster with the same version of the Solaris OS.


    You can use any method that is normally used to install Solaris software. During Solaris software installation, perform the following steps:

    1. Install at least the End User Solaris Software Group.


      Tip –

      To avoid the need to manually install Solaris software packages, install the Entire Solaris Software Group Plus OEM Support.


      See Solaris Software Group Considerations for information about additional Solaris software requirements.

    2. Choose Manual Layout to set up the file systems.

      • Create a file system of at least 512 Mbytes for use by the global-device subsystem.

        If you intend to use SunPlex Installer to install Sun Cluster software, you must create the file system with a mount-point name of /globaldevices. The /globaldevices mount-point name is the default that is used by scinstall.


        Note –

        Sun Cluster software requires a global-devices file system for installation to succeed.


      • Specify that slice 7 is at least 20 Mbytes in size.

        If you intend to use SunPlex Installer to install Solstice DiskSuite software (Solaris 8) or configure Solaris Volume Manager software (Solaris 9 or Solaris 10), also set this file system's mount point to /sds.


        Note –

        If you intend to use SunPlex Installer to install Sun Cluster HA for NFS or Sun Cluster HA for Apache, SunPlex Installer must also install Solstice DiskSuite software (Solaris 8) or configure Solaris Volume Manager software (Solaris 9 or Solaris 10).


      • Create any other file-system partitions that you need, as described in System Disk Partitions.

    3. For ease of administration, set the same root password on each node.

  3. If you are adding a node to an existing cluster, prepare the cluster to accept the new node.

    1. On any active cluster member, start the scsetup(1M) utility.


      # scsetup
      

      The Main Menu is displayed.

    2. Choose the menu item, New nodes.

    3. Choose the menu item, Specify the name of a machine which may add itself.

    4. Follow the prompts to add the node's name to the list of recognized machines.

      The scsetup utility prints the message Command completed successfully if the task is completed without error.

    5. Quit the scsetup utility.

    6. From the active cluster node, display the names of all cluster file systems.


      % mount | grep global | egrep -v node@ | awk '{print $1}'
      
    7. On the new node, create a mount point for each cluster file system in the cluster.


      % mkdir -p mountpoint
      

      For example, if the mount command returned the file-system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.
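    The two substeps above can be combined into one sketch: collect the cluster file-system names on an active node, then create a matching mount point for each name on the new node. The function name is illustrative, and the prefix argument exists only so the sketch can be tried safely outside a cluster.

```shell
#!/bin/sh
# Sketch: create a mount point for each cluster file-system name given.
# In practice, feed it the names printed by:
#   mount | grep global | egrep -v node@ | awk '{print $1}'
make_mountpoints() {
    prefix=$1; shift
    for fs in "$@"; do
        mkdir -p "${prefix}${fs}"
    done
}
```

    For example, `make_mountpoints "" /global/dg-schost-1` creates /global/dg-schost-1 on the new node.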

  4. If you are adding a node and VxVM is installed on any node in the cluster, perform the following tasks.

    1. Ensure that the same vxio number is used on the VxVM-installed nodes.


      # grep vxio /etc/name_to_major
      vxio NNN
      
    2. Ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.

    3. If the vxio number is already in use on a node that does not have VxVM installed, change the /etc/name_to_major entry to use a different number.
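    To compare the vxio entry across nodes, a small helper like the following might be run on each node; the function name is illustrative, and the file path defaults to the standard Solaris location.

```shell
#!/bin/sh
# Sketch: print the vxio major number recorded in a name_to_major file,
# so the value can be compared across cluster nodes.
vxio_major() {
    awk '$1 == "vxio" { print $2 }' "${1:-/etc/name_to_major}"
}
```

    Run `vxio_major` on every node: VxVM-installed nodes must all print the same number, and that number must not appear for any other driver on the remaining nodes.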

  5. If you installed the End User Solaris Software Group, use the pkgadd command to manually install any additional Solaris software packages that you might need.

    The following Solaris packages are required to support some Sun Cluster functionality.


    Note –

    Install packages in the order in which they are listed in the following table.


    RSMAPI, RSMRDT drivers, or SCI-PCI adapters (SPARC based clusters only):
      Solaris 8 or Solaris 9: SUNWrsm SUNWrsmx SUNWrsmo SUNWrsmox
      Solaris 10: SUNWrsm SUNWrsmo

    SunPlex Manager:
      SUNWapchr SUNWapchu

    • For the Solaris 8 or Solaris 9 OS, use the following command:


      # pkgadd -d . packages
      
    • For the Solaris 10 OS, use the following command:


      # pkgadd -G -d . packages
      

      You must add these packages only to the global zone. The -G option adds packages to the current zone only. This option also specifies that the packages are not propagated to any existing non-global zone or to any non-global zone that is created later.

  6. Install any required Solaris OS patches and hardware-related firmware and patches, including those for storage-array support. Also download any needed firmware that is contained in the hardware patches.

    See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.

  7. x86: Set the default boot file to kadb.


    # eeprom boot-file=kadb
    

    The setting of this value enables you to reboot the node if you are unable to access a login prompt.

  8. Update the /etc/inet/hosts file on each node with all IP addresses that are used in the cluster.

    Perform this step regardless of whether you are using a naming service. See IP Addresses for a listing of Sun Cluster components whose IP addresses you must add.
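    The entries that you add in this step might resemble the following fragment; all addresses and hostnames shown are hypothetical examples, not required values.

```
# Example /etc/inet/hosts additions (illustrative addresses and names)
192.168.10.11   phys-schost-1
192.168.10.12   phys-schost-2
192.168.10.21   schost-lh-1     # logical hostname used by a data service
```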

  9. If you will use ce adapters for the cluster interconnect, add the following entry to the /etc/system file.


    set ce:ce_taskq_disable=1

    This entry becomes effective after the next system reboot.
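    A cautious way to add the entry is to append it only if it is not already present. The function name is illustrative, and the file path is a parameter so that the sketch can be tried on a copy before you touch /etc/system.

```shell
#!/bin/sh
# Sketch: append the ce interconnect entry to an /etc/system-style file
# unless an entry for ce:ce_taskq_disable already exists.
add_ce_entry() {
    f=$1
    grep -q 'ce:ce_taskq_disable' "$f" 2>/dev/null || \
        echo 'set ce:ce_taskq_disable=1' >> "$f"
}
```

    Running the helper twice leaves only a single entry in the file; the change still takes effect only after the next system reboot.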

  10. (Optional) On Sun Enterprise 10000 servers, configure the /etc/system file to use dynamic reconfiguration.

    Add the following entry to the /etc/system file on each node of the cluster:


    set kernel_cage_enable=1

    This entry becomes effective after the next system reboot. See your server documentation for more information about dynamic reconfiguration.

Next Steps

If you intend to use Sun multipathing software, go to SPARC: How to Install Sun Multipathing Software.

If you intend to install VxFS, go to SPARC: How to Install VERITAS File System Software.

Otherwise, install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).

See Also

See the Sun Cluster System Administration Guide for Solaris OS for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration.

Procedure: SPARC: How to Install Sun Multipathing Software

Perform this procedure on each node of the cluster to install and configure Sun multipathing software for fiber channel (FC) storage. Multipathing software manages multiple I/O paths to the shared cluster storage.

Before You Begin

Perform the following tasks:

Steps
  1. Become superuser.

  2. For the Solaris 8 or Solaris 9 OS, install on each node Sun StorEdge Traffic Manager software and any necessary patches.

  3. Enable multipathing functionality.

    • For the Solaris 8 or 9 OS, change the value of the mpxio-disable parameter to no.

      Modify this entry in the /kernel/drv/scsi_vhci.conf file on each node.


      mpxio-disable="no";
    • For the Solaris 10 OS, issue the following command on each node:


      Caution –

      If Sun Cluster software is already installed, do not issue this command. Running the stmsboot command on an active cluster node might cause Solaris services to go into the maintenance state. Instead, follow instructions in the stmsboot(1M) man page for using the stmsboot command in a Sun Cluster environment.



      # /usr/sbin/stmsboot -e
      
      -e

      Enables Solaris I/O multipathing

      See the stmsboot(1M) man page for more information.

  4. For the Solaris 8 or Solaris 9 OS, determine whether your version of Sun StorEdge SAN Foundation software includes built-in support for your storage array.

    If the software does not include built-in support for your storage array, edit the /kernel/drv/scsi_vhci.conf file on each node to include the necessary entries. For more information, see the release notes for your storage device.

  5. For the Solaris 8 or Solaris 9 OS, shut down each node and perform a reconfiguration boot.

    The reconfiguration boot creates the new Solaris device files and links.


    # shutdown -y -g0 -i0
    ok boot -r
    
  6. After the reconfiguration reboot is finished on all nodes, perform any additional tasks that are necessary to complete the configuration of your storage array.

    See installation instructions for your storage array in the Sun Cluster Hardware Administration Collection for details.

Troubleshooting

If you installed Sun multipathing software after Sun Cluster software was installed on the cluster, DID mappings might require updating. Issue the following commands on each node of the cluster to regenerate the DID namespace.

# scdidadm -C
# scdidadm -r
(Solaris 8 or 9 only) # cfgadm -c configure
# scgdevs

See the scdidadm(1M) and scgdevs(1M) man pages for more information.

Next Steps

If you intend to install VxFS, go to SPARC: How to Install VERITAS File System Software.

Otherwise, install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).

Procedure: SPARC: How to Install VERITAS File System Software

Perform this procedure on each node of the cluster.

Steps
  1. Follow the procedures in your VxFS installation documentation to install VxFS software on each node of the cluster.

  2. Install any Sun Cluster patches that are required to support VxFS.

    See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.

  3. In the /etc/system file on each node, set the following values.


    set rpcmod:svc_default_stksize=0x8000
    set lwp_default_stksize=0x6000

    These changes become effective at the next system reboot.

    • Sun Cluster software requires a minimum rpcmod:svc_default_stksize setting of 0x8000. Because VxFS installation sets the value of the rpcmod:svc_default_stksize variable to 0x4000, you must manually set the value to 0x8000 after VxFS installation is complete.

    • You must set the lwp_default_stksize variable in the /etc/system file to override the VxFS default value of 0x4000.
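    After editing, you might verify that both required settings are in place with a check like the following; the function name is illustrative, and the file path is a parameter so the sketch can be tested against a copy.

```shell
#!/bin/sh
# Sketch: succeed only if an /etc/system-style file already carries both
# stack-size settings that Sun Cluster requires after VxFS installation.
stksize_ok() {
    grep -q 'set rpcmod:svc_default_stksize=0x8000' "$1" &&
    grep -q 'set lwp_default_stksize=0x6000' "$1"
}
```

    For example, `stksize_ok /etc/system || echo "stack sizes need fixing"` flags a node that still has the VxFS-installed 0x4000 values.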

Next Steps

Install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).

Procedure: How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer)

Follow this procedure to use the Sun Java™ Enterprise System (Java ES) installer program to perform one or more of the following installation tasks:


Note –

Do not use this procedure to install the following kinds of data service packages:


Before You Begin

Perform the following tasks:

Steps
  1. (Optional) To use the installer program with a GUI, ensure that the display environment of the cluster node to install is set to display the GUI.


    % xhost +
    % setenv DISPLAY nodename:0.0
    
  2. Become superuser on the cluster node to install.

  3. Insert the Sun Cluster 1 of 2 CD-ROM in the CD-ROM drive.

  4. Change to the directory of the CD-ROM where the installer program resides.


    # cd /cdrom/cdrom0/Solaris_arch/
    

    In the Solaris_arch/ directory, arch is sparc or x86.

  5. Start the Java ES installer program.


    # ./installer
    
  6. Follow instructions on the screen to install Sun Cluster framework software and data services on the node.

    When prompted whether to configure Sun Cluster framework software, choose Configure Later.

    After installation is finished, you can view any available installation log. See the Sun Java Enterprise System 2005Q5 Installation Guide for additional information about using the Java ES installer program.

  7. Install additional packages if you intend to use any of the following features.

    • Remote Shared Memory Application Programming Interface (RSMAPI)

    • SCI-PCI adapters for the interconnect transport

    • RSMRDT drivers


    Note –

    Use of the RSMRDT driver is restricted to clusters that run an Oracle9i release 2 SCI configuration with RSM enabled. Refer to Oracle9i release 2 user documentation for detailed installation and configuration instructions.


    1. Determine which packages you must install.

      The following table lists the Sun Cluster 3.1 8/05 packages that each feature requires, in the order in which you must install each group of packages. The Java ES installer program does not automatically install these packages.


      Note –

      Install packages in the order in which they are listed in the following table.


      RSMAPI:
        SUNWscrif

      SCI-PCI adapters:
        Solaris 8 and 9: SUNWsci SUNWscid SUNWscidx
        Solaris 10: SUNWscir SUNWsci SUNWscidr SUNWscid

      RSMRDT drivers:
        SUNWscrdt

    2. Insert the Sun Cluster 2 of 2 CD-ROM, if it is not already inserted in the CD-ROM drive.

    3. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 8 for Solaris 8, 9 for Solaris 9, or 10 for Solaris 10.


      # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
      
    4. Install the additional packages.


      # pkgadd -d . packages
      
  8. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    
  9. Ensure that the /usr/java/ directory is a symbolic link to the minimum or latest version of Java software.

    Sun Cluster software requires at least version 1.4.2_03 of Java software.

    1. Determine what directory the /usr/java/ directory is symbolically linked to.


      # ls -l /usr/java
      lrwxrwxrwx   1 root   other    9 Apr 19 14:05 /usr/java -> /usr/j2se/
    2. Determine what version or versions of Java software are installed.

      The following are examples of commands that you can use to display the version of the Java software that is installed at each location.


      # /usr/j2se/bin/java -version
      # /usr/java1.2/bin/java -version
      # /usr/jdk/jdk1.5.0_01/bin/java -version
      
    3. If the /usr/java/ directory is not symbolically linked to a supported version of Java software, recreate the symbolic link to link to a supported version of Java software.

      The following example shows the creation of a symbolic link to the /usr/j2se/ directory, which contains Java 1.4.2_03 software.


      # rm /usr/java
      # ln -s /usr/j2se /usr/java
      
Next Steps

If you want to install Sun StorEdge QFS file system software, follow the procedures for initial installation in the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.

Otherwise, to set up the root user environment, go to How to Set Up the Root Environment.

Procedure: How to Set Up the Root Environment


Note –

In a Sun Cluster configuration, user initialization files for the various shells must verify that they are run from an interactive shell. The files must verify this before they attempt to output to the terminal. Otherwise, unexpected behavior or interference with data services might occur. See Customizing a User's Work Environment in System Administration Guide, Volume 1 (Solaris 8) or in System Administration Guide: Basic Administration (Solaris 9 or Solaris 10) for more information.


Perform this procedure on each node in the cluster.

Steps
  1. Become superuser on a cluster node.

  2. Modify PATH and MANPATH entries in the .cshrc or .profile file.

    1. Set the PATH to include /usr/sbin/ and /usr/cluster/bin/.

    2. Set the MANPATH to include /usr/cluster/man/.

    See your volume manager documentation and other application documentation for additional file paths to set.
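    For a Bourne-shell .profile, the two path settings above might look like the following fragment; the fallback MANPATH default shown is an assumption, and csh users would use setenv in .cshrc instead.

```shell
# Candidate root .profile additions on each cluster node.
PATH=$PATH:/usr/sbin:/usr/cluster/bin
MANPATH=${MANPATH:-/usr/share/man}:/usr/cluster/man
export PATH MANPATH
```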

  3. (Optional) For ease of administration, set the same root password on each node, if you have not already done so.

Next Steps

Configure Sun Cluster software on the cluster nodes. Go to Establishing the Cluster.