This chapter explains how to install and configure Sun Cluster HA for PostgreSQL and contains the following sections:
Overview of Installing and Configuring Sun Cluster HA for PostgreSQL
Planning the Sun Cluster HA for PostgreSQL Installation and Configuration
Verifying the Sun Cluster HA for PostgreSQL Installation and Configuration
Sun Cluster HA for PostgreSQL enables the Sun Cluster software to manage PostgreSQL by providing components to perform the orderly startup, shutdown, and fault monitoring of PostgreSQL.
You can configure Sun Cluster HA for PostgreSQL as a failover service. You cannot configure Sun Cluster HA for PostgreSQL as a multiple-masters service or as a scalable service.
When a PostgreSQL database cluster is managed by the Sun Cluster HA for PostgreSQL data service, the PostgreSQL instance becomes a failover PostgreSQL resource across the Sun Cluster nodes. The failover is managed by the Sun Cluster HA for PostgreSQL data service, which runs within the global zone and HA containers.
For conceptual information about failover data services, multiple-masters data services, and scalable data services, see Sun Cluster Concepts Guide for Solaris OS.
The following table summarizes the tasks for installing and configuring Sun Cluster HA for PostgreSQL and provides cross-references to detailed instructions for performing these tasks. Perform the tasks in the order that they are listed in the table.
Table 1 Tasks for Installing and Configuring Sun Cluster HA for PostgreSQL
Task | Instructions
---|---
Plan the installation | Planning the Sun Cluster HA for PostgreSQL Installation and Configuration
Install and configure the PostgreSQL software |
Verify the installation and configuration | How to Verify the Installation and Configuration of PostgreSQL
Install Sun Cluster HA for PostgreSQL packages |
Register and configure Sun Cluster HA for PostgreSQL resources |
Verify the Sun Cluster HA for PostgreSQL installation and configuration | Verifying the Sun Cluster HA for PostgreSQL Installation and Configuration
Tune the Sun Cluster HA for PostgreSQL fault monitor |
Debug Sun Cluster HA for PostgreSQL |
This section contains the information you need to plan your Sun Cluster HA for PostgreSQL installation and configuration.
Sun Cluster HA for PostgreSQL is supported in Solaris Containers. Sun Cluster offers two concepts for Solaris Containers:

Zones are containers that are running again after a reboot of the node. These containers are combined with resource groups by specifying nodename:zonename as a valid "nodename" in the resource group's node list.

HA containers are managed by the Sun Cluster HA for Solaris Containers agent and are represented by a resource of a resource group.
The PostgreSQL agent offers three options for a cluster configuration. Two of these options leverage the Write Ahead Log (WAL) file shipping feature and require the installation of the PostgreSQL pg_standby utility. The options for cluster configuration are the following:
Traditional HA configuration with shared storage. In this configuration, you have a cluster with an active PostgreSQL resource, where the database directories reside on a global or a failover file system.
WAL file shipping between two PostgreSQL failover resources. In this configuration, you have two independent PostgreSQL resources in a cluster or in different clusters. One of the resources acts as a primary server and receives the client requests. The other resource acts as a standby server, applying the PostgreSQL WAL files shipped from the primary server.

WAL file shipping without shared storage. This configuration does not require shared storage; the PostgreSQL WAL file shipping replaces it. The configuration consists of three resource groups. Two single-node resource groups contain, respectively, the designated primary database resource and the designated standby database resource. The third resource group contains a logical host and a Rolechanger resource. The Rolechanger resource is responsible for transforming the designated standby into an acting primary on a node outage of the designated primary.
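As a rough illustration of the WAL-file-shipping options, the standby side of such a setup is typically driven by the pg_standby utility through the standby's recovery configuration. The following fragment is a sketch only; the archive location, trigger-file path, and file names are placeholders and not values from this guide. Consult the PostgreSQL documentation for the exact syntax of your release.

```
# postgresql.conf on the designated primary (illustrative paths):
archive_command = 'cp %p /shared/walarchive/%f'

# recovery.conf on the designated standby (illustrative paths):
restore_command = 'pg_standby -t /tmp/pgsql.trigger /shared/walarchive %f %p'
```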
The configuration restrictions in the subsections that follow apply only to Sun Cluster HA for PostgreSQL.
Your data service configuration might not be supported if you do not observe these restrictions.
The PostgreSQL database cluster is where the database files and the configuration files are stored. The database cluster, represented by the configuration variable PGDATA, needs to be placed on the shared storage.
Sun Cluster HA for PostgreSQL requires that PostgreSQL listens on localhost. Otherwise, the monitoring of your data service will not work. For more information, see Preparing Your PostgreSQL Installation for Cluster Control.
The postgresql.conf file is one of the central configuration files for a specific PostgreSQL database cluster.
The postgresql.conf file must be stored in the PGDATA path. You cannot register Sun Cluster HA for PostgreSQL if the file postgresql.conf is not in the directory referenced in the PGDATA variable. The other configuration files can be kept elsewhere. For more information about registration, see Registering and Configuring Sun Cluster HA for PostgreSQL.
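Because registration fails when postgresql.conf is missing from the PGDATA directory, a quick pre-registration check can save a failed attempt. The following sketch uses a throwaway directory under /tmp to stand in for a real database cluster; the path is illustrative only.

```shell
# Illustrative pre-check: registration requires postgresql.conf
# directly under $PGDATA. /tmp/pgdata-demo stands in for a real
# PostgreSQL database cluster directory.
PGDATA=/tmp/pgdata-demo
mkdir -p "$PGDATA"
touch "$PGDATA/postgresql.conf"

if [ -f "$PGDATA/postgresql.conf" ]; then
    echo "OK: postgresql.conf present in PGDATA"
else
    echo "ERROR: postgresql.conf missing from PGDATA" >&2
fi
```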
Sun Cluster HA for PostgreSQL requires a database to which it can connect and in which it can manipulate a table for monitoring purposes. The password policy of this database for access from localhost must be either trust or password. The password policy for access from other hosts can be whatever is appropriate. For more information about setting the password policy, see Registering and Configuring Sun Cluster HA for PostgreSQL. For more information about the password policy, go to http://www.postgresql.org.
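In pg_hba.conf terms, this means the localhost entry for the monitoring database must use the trust or password method. The database and user names in the following fragment are placeholders for whatever you configure as the fault-monitor database and user; they are not names from this guide.

```
# pg_hba.conf fragment (illustrative names):
# TYPE  DATABASE  USER    ADDRESS       METHOD
host    scmondb   scuser  127.0.0.1/32  password
```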
The PostgreSQL configuration in an HA container uses the smf component of Sun Cluster HA for Solaris Containers. The registration of the Sun Cluster HA for PostgreSQL data service in an HA container defines an smf service to control the PostgreSQL database. The name of this smf service is generated in this naming scheme: svc:/application/sczone-agents:resource-name. No other smf service with exactly this name can exist.
The associated smf manifest is automatically created during the registration process in this location and naming scheme: /var/svc/manifest/application/sczone-agents/resource-name.xml. No other manifest can coexist with this name.
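The generated names follow directly from the resource name, so you can predict both the smf service FMRI and the manifest path before registration. The resource name below is purely an example.

```shell
# Derive the smf service name and manifest path that the registration
# process would generate for a given resource name (example name only).
resource_name=pgsql-rs
fmri="svc:/application/sczone-agents:${resource_name}"
manifest="/var/svc/manifest/application/sczone-agents/${resource_name}.xml"
echo "$fmri"
echo "$manifest"
```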
The pg_standby utility must be configured with a trigger file. After a failover from the primary to the standby triggers a role conversion, an automatic failback cannot occur because the old primary is now out of synchronization. To invoke an actual copy, the PostgreSQL user needs to copy, customize, and execute the two example scripts resilver-step1 and resilver-step2.
To minimize the data loss on a planned failover, you should switch the PostgreSQL transaction logs before you perform the failover. For information about switching transaction logs, see http://www.postgresql.org.
The PostgreSQL WAL file shipping without shared storage configuration cannot be deployed with HA containers managed by the HA container agent.
The configuration requirements in this section apply only to Sun Cluster HA for PostgreSQL.
If your data service configuration does not conform to these requirements, the data service configuration might not be supported.
The dependencies between the Sun Cluster HA for PostgreSQL components are described in the following table.
Table 2 Dependencies Between Sun Cluster HA for PostgreSQL Components
Component | Dependency
---|---
PostgreSQL resource in a Solaris 10 global zone or non-global zone, or in Solaris 9 | SUNW.HAStoragePlus (required only if the configuration uses a failover file system or file systems in a zone). SUNW.LogicalHostName
PostgreSQL resource in a Solaris 10 HA container | Sun Cluster HA for Solaris Containers boot resource. SUNW.HAStoragePlus. SUNW.LogicalHostName (required only if the zone's boot resource does not manage the zone's IP address).
You set these dependencies when you register and configure Sun Cluster HA for PostgreSQL. For more information, see Registering and Configuring Sun Cluster HA for PostgreSQL.
If more elaborate dependencies are required, see the r_properties(5) and rg_properties(5) man pages for further dependencies and affinities settings.
Sun Cluster HA for PostgreSQL requires a parameter file to pass configuration information to the data service. You must create a directory for this file. Because the directory must be available on each node that is to host the PostgreSQL database, place the directory on the shared storage. If Sun Cluster HA for PostgreSQL is configured for an HA container, this file must be available in this zone. The parameter file is created automatically when the resource is registered.
For the WAL file shipping without shared storage configuration, the rsync utility is required. As an additional requirement, you need to link some PostgreSQL configuration files outside the PGDATA directory. Otherwise, these files are destroyed during the resilvering of the primary database. Information about how to perform these steps is available in the comments of the resilver1 script. The PostgreSQL users on both nodes require a passwordless login on each node.
This section explains only the special requirements for installing PostgreSQL for use with Sun Cluster HA for PostgreSQL. For complete information about installing and configuring PostgreSQL, see http://www.postgresql.org. For complete information about installing and configuring a Solaris Container, see System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
Determine whether the Sun-supplied PostgreSQL is already installed and whether the installed version fulfills your requirements. To do this, check whether at least the following three packages are installed on your system:
SUNWpostgr
SUNWpostgr-libs
SUNWpostgr-server-data
To determine which PostgreSQL version is installed, run the following commands.

# su - non-root-user
$ postmaster --version
If these packages are not installed, decide whether you want to install PostgreSQL from the Solaris media or build PostgreSQL on your own. If the installed version does not fulfill your needs, you must build PostgreSQL on your own.
For each PostgreSQL database that you are installing and configuring, choose the following tasks according to your zone type.
Determine whether you have to configure Sun Cluster HA for PostgreSQL to run in a global zone, in a non-global zone, or in an HA container configuration. The global zone configuration procedure is applicable if you install PostgreSQL on Solaris 9 or in the global zone of Solaris 10. The HA container configuration procedure is applicable if you install PostgreSQL in an HA container.
To install and configure PostgreSQL in a global zone configuration, complete the following tasks:
To install and configure PostgreSQL in a zone configuration, complete the following tasks:
To install and configure PostgreSQL in an HA container configuration, complete the following tasks:
For a complete example of deploying in a global zone, see Deployment Example: Installing PostgreSQL in the Global Zone.
As superuser, register the SUNW.HAStoragePlus and the SUNW.gds resource types.

# clresourcetype register SUNW.HAStoragePlus SUNW.gds
Create a failover resource group.
# clresourcegroup create PostgreSQL-resource-group
Create a resource for PostgreSQL's disk storage.

# clresource create -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=PostgreSQL-instance-mount-points \
PostgreSQL-has-resource
(Optional) If you plan to access the database from a logical host, create a resource for the PostgreSQL logical hostname.

# clreslogicalhostname create -g PostgreSQL-resource-group \
PostgreSQL-logical-hostname-resource-name
Enable the failover resource group.
# clresourcegroup online -M PostgreSQL-resource-group
Create a directory for the Sun Cluster HA for PostgreSQL parameter file.
# mkdir PostgreSQL-instance-mount-points/parameter-dir
For complete information about installing PostgreSQL, go to http://www.postgresql.org.
For a complete example of deployment in a global zone, see Deployment Example: Installing PostgreSQL in the Global Zone.
Determine the following requirements for the deployment of PostgreSQL with Sun Cluster:
See if the PostgreSQL version that you need is already installed on each cluster node by searching the most probable root paths where you find bin/postmaster:
/usr
Root path for PostgreSQL shipped with Solaris OS.

/usr/local/psql
Root path for the PostgreSQL build without a prefix.

/your-path
Fully customized root path for PostgreSQL. This is where to place the binaries on the shared storage. A known convention is /path/postgresql-x.y.z.
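The search for an existing installation can be scripted. The sketch below fabricates an installation under /tmp/pg-demo so the loop has something to find; on a real node you would check only the candidate root paths and skip the demo setup.

```shell
# Demo setup only: fake an installation so the search finds something.
mkdir -p /tmp/pg-demo/bin
touch /tmp/pg-demo/bin/postmaster
chmod +x /tmp/pg-demo/bin/postmaster

# Search the probable root paths for bin/postmaster.
for root in /usr /usr/local/psql /tmp/pg-demo; do
    if [ -x "$root/bin/postmaster" ]; then
        echo "PostgreSQL found under $root"
    fi
done
```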
Determine the number of PostgreSQL resources to deploy.
Determine which cluster file systems will be used by each PostgreSQL resource.
Make sure that a C compiler, make, and the readline package are installed. These packages are needed to build PostgreSQL from the source code downloaded from http://www.postgresql.org.
The following assumptions are made:
The compiler gcc and the gmake package are installed in /usr/sfw.
The readline package is installed under /usr/local.
The PostgreSQL database software will be installed on the shared storage in the directory version in the failover file system /global/postgres.
The PostgreSQL database cluster will be installed in the same file system as the database software, in the directory /global/postgres/data.
The home directory of the postgres user is /global/postgres.
The PostgreSQL build directory is in /tmp/postgres/version, and the software is already downloaded and extracted in this place.
As superuser, create the home directory for the PostgreSQL user on one node.

# mkdir /global/postgres
Add a group for PostgreSQL on every node.
# groupadd -g 1000 postgres
Add a user who owns the PostgreSQL installation on every node.
# useradd -u 1000 -g postgres -d /global/postgres -s /usr/bin/ksh postgres
# chown -R postgres:postgres /global/postgres
Switch to the PostgreSQL user.
# su - postgres
Set your PATH variable.
$ PATH=$PATH:/usr/local/bin:/usr/sfw/bin:/usr/ccs/bin
$ export PATH
Set your LD_LIBRARY_PATH variable.
$ LD_LIBRARY_PATH=/usr/sfw/lib:/usr/local/lib:/usr/lib:
$ export LD_LIBRARY_PATH
Switch to your build directory.
$ cd /tmp/postgres/version
Configure the PostgreSQL build.
$ ./configure --prefix=/global/postgres/version
Complete, verify, and install the build.

$ gmake
$ gmake check
$ gmake install
For a complete example of deploying in a zone, see Deployment Example: Installing PostgreSQL in a Non-Global Zone.
As superuser, register the SUNW.HAStoragePlus and the SUNW.gds resource types.

# clresourcetype register SUNW.HAStoragePlus SUNW.gds
Install and boot the zone pgs-zone on all nodes that are to host Sun Cluster HA for PostgreSQL.
Create a failover resource group.
# clresourcegroup create -n node-1:pgs-zone,node-2:pgs-zone PostgreSQL-resource-group
Create a resource for the PostgreSQL zone's disk storage.

# clresource create -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=PostgreSQL-instance-mount-points \
PostgreSQL-has-resource
(Optional) Create a resource for the PostgreSQL logical hostname.

# clreslogicalhostname create -g PostgreSQL-resource-group \
PostgreSQL-logical-hostname-resource-name

Enable the failover resource group.

# clresourcegroup online -M PostgreSQL-resource-group
For complete information about installing PostgreSQL, go to http://www.postgresql.org.
For a complete example of deploying in an HA container, see Deployment Example: Installing PostgreSQL in a Non-Global HA Container.
Determine the following requirements for the deployment of PostgreSQL with Sun Cluster:
See if the PostgreSQL version that you need is already installed on each cluster node by searching the most probable root paths where you find bin/postmaster:
/usr
Root path for PostgreSQL shipped with Solaris OS.

/usr/local/psql
Root path for the PostgreSQL build without a prefix.

/your-path
Fully customized root path for PostgreSQL. This is where to place the binaries on the shared storage. A known convention is /path/postgresql-x.y.z.
Determine the number of PostgreSQL resources to deploy.
Determine which cluster file systems will be used by each PostgreSQL resource.
Make sure that a C compiler, make, and the readline package are installed. These packages are needed to build PostgreSQL from the source code downloaded from http://www.postgresql.org.
The following assumptions are made:
The zone postgres-zone is installed and configured on every node.
The compiler gcc and the gmake package are installed in /usr/sfw.
The readline package is installed under /usr/local.
The PostgreSQL database software will be installed on the shared storage, in the directory version in the failover file system /postgres.
The PostgreSQL database cluster will be installed in the same file system as the database software, in the directory /postgres/data.
The home directory of the postgres user is /postgres.
The PostgreSQL build directory is in /tmp/postgres/version, and the software is already downloaded and extracted in this place.
As superuser, log in to the zone.

# zlogin pgsql-zone
Add a group for PostgreSQL.
# groupadd -g 1000 postgres
Add a user who owns the PostgreSQL installation on every node.
# useradd -u 1000 -g postgres -d /postgres -m -s /usr/bin/ksh postgres
Switch to the PostgreSQL user.
# su - postgres
Set your PATH variable.
$ PATH=$PATH:/usr/local/bin:/usr/sfw/bin:/usr/ccs/bin
$ export PATH
Set your LD_LIBRARY_PATH variable.
$ LD_LIBRARY_PATH=/usr/sfw/lib:/usr/local/lib:/usr/lib:
$ export LD_LIBRARY_PATH
Switch to your build directory.
$ cd /tmp/postgres/version
Configure the PostgreSQL build.
$ ./configure --prefix=/postgres/version
Complete, verify, and install the build.
$ gmake
$ gmake check
$ gmake install
For a complete example of deploying in an HA container, see Deployment Example: Installing PostgreSQL in a Non-Global HA Container.
As superuser, register the SUNW.HAStoragePlus and the SUNW.gds resource types.

# clresourcetype register SUNW.HAStoragePlus SUNW.gds
Create a failover resource group.
# clresourcegroup create PostgreSQL-resource-group
Create a resource for the PostgreSQL zone's disk storage.

# clresource create -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=PostgreSQL-instance-mount-points \
PostgreSQL-has-resource
(Optional) If you want protection against a total adapter failure of your public network, create a resource for the PostgreSQL logical hostname.

# clreslogicalhostname create -g PostgreSQL-resource-group \
PostgreSQL-logical-hostname-resource-name
Place the resource group in the managed state.
# clresourcegroup online -M PostgreSQL-resource-group
Install the zone.
Install the zone according to the Sun Cluster HA for Solaris Containers agent documentation, assuming that the resource name is pgsql-zone-rs and that the zone name is pgsql-zone.
Verify the zone's installation.
# zoneadm -z pgsql-zone boot
# zoneadm -z pgsql-zone halt
Register the zone's boot component.
Copy the container resource boot component configuration file.
# cp /opt/SUNWsczone/sczbt/util/sczbt_config zones-target-configuration-file
Use a plain text editor to set the following variables:
RS=pgsql-zone-rs
RG=PostgreSQL-resource-group
PARAMETERDIR=pgsql-zone-parameter-directory
SC_NETWORK=true|false
SC_LH=PostgreSQL-logical-hostname-resource-name
FAILOVER=true|false
HAS_RS=PostgreSQL-has-resource
Zonename=pgsql-zone
Zonebootopt=zone-boot-options
Milestone=zone-boot-milestone
Mounts=
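A minimal sketch of populating a copy of the configuration file follows. All values and the file path are placeholders, not values from your actual configuration; substitute your own before running sczbt_register.

```shell
# Write placeholder values into a demo copy of sczbt_config,
# then verify one of the variables was set as intended.
cfg=/tmp/sczbt_config.demo
cat > "$cfg" <<'EOF'
RS=pgsql-zone-rs
RG=PostgreSQL-resource-group
PARAMETERDIR=/global/postgres/pgsql-zone-params
SC_NETWORK=true
SC_LH=pgsql-lh-rs
FAILOVER=true
HAS_RS=pgsql-has-rs
Zonename=pgsql-zone
Zonebootopt=
Milestone=multi-user-server
Mounts=
EOF
grep '^Zonename=' "$cfg"
```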
Create the parameter directory for your zone's resource.
# mkdir pgsql-zone-parameter-directory
Execute the Sun Cluster HA for Solaris Container's registration script.
# /opt/SUNWsczone/sczbt/util/sczbt_register -f zones-target-configuration-file
Enable the Solaris Container resource.
# clresource enable pgsql-zone-rs
Bring the resource group online.

# clresourcegroup online PostgreSQL-resource-group
For complete information about installing PostgreSQL, go to http://www.postgresql.org.
For a complete example of deploying in an HA container, see Deployment Example: Installing PostgreSQL in a Non-Global HA Container.
Determine the following requirements for the deployment of PostgreSQL with Sun Cluster:
See if the PostgreSQL version that you need is already installed on each cluster node by searching the most probable root paths where you find bin/postmaster:
/usr
Root path for PostgreSQL shipped with Solaris OS.

/usr/local/psql
Root path for the PostgreSQL build without a prefix.

/your-path
Fully customized root path for PostgreSQL. This is where to place the binaries on the shared storage. A known convention is /path/postgresql-x.y.z.
Determine the number of PostgreSQL resources to deploy.
Determine which cluster file systems will be used by each PostgreSQL resource.
Make sure that a C compiler, make, and the readline package are installed. These packages are needed to build PostgreSQL from the source code downloaded from http://www.postgresql.org.
The following assumptions are made:
The zone postgres-zone is installed and configured on every node.
The compiler gcc and the gmake package are installed in /usr/sfw.
The readline package is installed under /usr/local.
The PostgreSQL database software will be installed on the shared storage, in the directory version in the failover file system /postgres.
The PostgreSQL database cluster will be installed in the same file system as the database software, in the directory /postgres/data.
The home directory of the postgres user is /postgres.
The PostgreSQL build directory is in /tmp/postgres/version, and the software is already downloaded and extracted in this place.
Log in to the zone.
# zlogin postgres-zone
Add a group for PostgreSQL.
# groupadd -g 1000 postgres
Add a user who owns the PostgreSQL installation on every node.
# useradd -u 1000 -g postgres -d /postgres -m -s /usr/bin/ksh postgres
Switch to the PostgreSQL user.
# su - postgres
Set your PATH variable.
$ PATH=$PATH:/usr/local/bin:/usr/sfw/bin:/usr/ccs/bin
$ export PATH
Set your LD_LIBRARY_PATH variable.
$ LD_LIBRARY_PATH=/usr/sfw/lib:/usr/local/lib:/usr/lib:
$ export LD_LIBRARY_PATH
Switch to your build directory.
$ cd /tmp/postgres/version
Configure the PostgreSQL build.
$ ./configure --prefix=/postgres/version
Complete, verify, and install the build.
$ gmake
$ gmake check
$ gmake install
Before you install the Sun Cluster HA for PostgreSQL packages, verify that each PostgreSQL instance that you created is correctly configured to run in a cluster. The instance is the PostgreSQL database cluster together with the associated postmaster processes. This verification does not confirm that the PostgreSQL databases are highly available because the Sun Cluster HA for PostgreSQL data service is not yet configured.
Perform this procedure for each PostgreSQL instance that you created in Installing and Configuring PostgreSQL. During the verification you will complete the PostgreSQL postinstallation steps.
Determine whether you are in a non-global zone or in the global zone. If you are in an HA container, use /postgres instead of /global/postgres as your directory prefix in this procedure.
Switch to the PostgreSQL user if necessary.
# su - postgres
(Optional) Set the PATH and LD_LIBRARY_PATH variables.
$ PATH=$PATH:/usr/local/bin:/usr/sfw/bin:/usr/ccs/bin
$ export PATH
$ LD_LIBRARY_PATH=/usr/sfw/lib:/usr/local/lib:/usr/lib:
$ export LD_LIBRARY_PATH
Set the PGDATA variable.
The PGDATA variable points to the directory where the PostgreSQL database cluster is installed. The PostgreSQL database cluster is a directory that contains the configuration and the data files for all the databases.
$ PGDATA=/global/postgres/data
$ export PGDATA
Create the data directory and the logs directory.
$ mkdir /global/postgres/data
$ mkdir /global/postgres/logs
Initialize the PostgreSQL cluster.
$ cd ~/postgres-version
$ ./bin/initdb -D $PGDATA
Start the PostgreSQL database server.
$ ./bin/pg_ctl -l /global/postgres/logs/firstlog start
Create and delete a test database.
$ ./bin/createdb test
$ ./bin/dropdb test
If you are in a non-global zone, leave this zone and return to the target zone.
If you did not install the Sun Cluster HA for PostgreSQL packages during your initial Sun Cluster installation, perform this procedure to install the packages. To install the packages, use the Sun Java Enterprise System Installation Wizard.
You need to install the Sun Cluster HA for PostgreSQL packages in the global cluster and not in the zone cluster.
Perform this procedure on each cluster node where you are installing the Sun Cluster HA for PostgreSQL packages.
You can run the Sun Java Enterprise System Installation Wizard with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar.
Even if you plan to configure this data service to run in non-global zones, install the packages for this data service in the global zone. The packages are propagated to any existing non-global zones and to any non-global zones that are created after you install the packages.
Ensure that you have the Sun Java Availability Suite DVD-ROM.
If you intend to run the Sun Java Enterprise System Installation Wizard with a GUI, ensure that your DISPLAY environment variable is set.
On the cluster node where you are installing the data service packages, become superuser.
Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.
If the Volume Management daemon vold(1M) is running and configured to manage DVD-ROM devices, the daemon automatically mounts the DVD-ROM on the /cdrom directory.
Change to the Sun Java Enterprise System Installation Wizard directory of the DVD-ROM.
Start the Sun Java Enterprise System Installation Wizard.
# ./installer
When you are prompted, accept the license agreement.
If any Sun Java Enterprise System components are installed, you are prompted to select whether to upgrade the components or install new software.
From the list of Sun Cluster agents under Availability Services, select the data service for PostgreSQL.
If you require support for languages other than English, select the option to install multilingual packages.
English language support is always installed.
When prompted whether to configure the data service now or later, choose Configure Later.
Choose Configure Later to perform the configuration after the installation.
Follow the instructions on the screen to install the data service packages on the node.
The Sun Java Enterprise System Installation Wizard displays the status of the installation. When the installation is complete, the wizard displays an installation summary and the installation logs.
(GUI only) If you do not want to register the product and receive product updates, deselect the Product Registration option.
The Product Registration option is not available with the CLI. If you are running the Sun Java Enterprise System Installation Wizard with the CLI, omit this step.
Exit the Sun Java Enterprise System Installation Wizard.
Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.
See Registering and Configuring Sun Cluster HA for PostgreSQL to register Sun Cluster HA for PostgreSQL and to configure the cluster for the data service.
Before you perform the procedures in this section, ensure that the Sun Cluster HA for PostgreSQL data service packages are installed.
The configuration and registration file in the /opt/SUNWscPostgreSQL/util directory exists to register the Sun Cluster HA for PostgreSQL resources. This file defines the dependencies that are required between the Sun Cluster HA for PostgreSQL component and other resources. For information about these dependencies, see Dependencies Between Sun Cluster HA for PostgreSQL Components.
This section covers the following main topics:
Sun Cluster HA for PostgreSQL provides a script that automates the process of configuring the PostgreSQL resource. This script obtains configuration parameters from the pgs_config file. A template for this file is in the /opt/SUNWscPostgreSQL/util directory. To specify configuration parameters for the PostgreSQL resource, copy the pgs_config file to another directory and edit this pgs_config file.
This configuration file needs to be accessible from the zone where PostgreSQL is installed.
Each configuration parameter in the pgs_config file is defined as a keyword-value pair. The pgs_config file already contains the required keywords and equals signs. For more information, see Listing of pgs_config. When you edit the /myplace/pgs_config file, add the required value to each keyword.
The keyword-value pairs in the pgs_config file are as follows:
RS=PostgreSQL-resource
RG=PostgreSQL-resource-group
PORT=80
LH=PostgreSQL-logical-hostname-resource-name
HAS_RS=PostgreSQL-has-resource
PFILE=pgsql-parameter-file
ZONE=pgsql-zone
ZONE_BT=pgsql-zone-rs
PROJECT=pgsql-zone-project
USER=pgsql-user
PGROOT=pgsql-root-directory
PGDATA=pgsql-data-directory
PGPORT=pgsql-port
PGHOST=pgsql-host
PGLOGFILE=pgsql-log-file
LD_LIBRARY_PATH=pgsql-ld-library-path
ENVSCRIPT=pgsql-environment-script
SCDB=pgsql-mon-db
SCUSER=pgsql-mon-user
SCTABLE=pgsql-mon-table
SCPASS=pgsql-mon-pwd
NOCONRET=pgsql-noconn-rtcode
STDBY_RS=PostgreSQL-standby-resource
STDBY_RG=PostgreSQL-standby-resource-group
STDBY_USER=PostgreSQL-standby-user
STDBY_HOST=PostgreSQL-standby-host
STDBY_PARFILE=PostgreSQL-standby-parameter-file
STDBY_PING=Number-of-packets
ROLECHG_RS=PostgreSQL-rolechanger-resource
SSH_PASSDIR=PostgreSQL-user-passphrase-directory
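Because every parameter is a simple keyword=value pair, a copied pgs_config file can be checked mechanically before you run the registration script. The file path, keyword subset, and values below are placeholders for this sketch only.

```shell
# Write a minimal demo pgs_config, then verify that a set of
# always-required keywords (per this chapter) carry values.
cfg=/tmp/pgs_config.demo
cat > "$cfg" <<'EOF'
RS=pgsql-rs
RG=pgsql-rg
USER=postgres
PGROOT=/global/postgres/postgresql-8.1.2
PGDATA=/global/postgres/data
PGLOGFILE=/global/postgres/logs/pgsql.log
PFILE=/global/postgres/params/pgs_params
EOF
for key in RS RG USER PGROOT PGDATA PGLOGFILE PFILE; do
    val=$(grep "^${key}=" "$cfg" | cut -d= -f2-)
    if [ -z "$val" ]; then
        echo "missing value for $key" >&2
    else
        echo "$key=$val"
    fi
done
```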
The meaning and permitted values of the keywords in the pgs_config file are as follows:
RS

Specifies the name that you are assigning to the PostgreSQL resource. You must specify a value for this keyword.
RG

Specifies the name of the resource group where the PostgreSQL resource will reside. You must specify a value for this keyword.
PORT

In a global zone configuration, specifies the value of a dummy port; this value is needed only if you specified the LH value for the PostgreSQL resource. This variable is used only at registration time. If you will not specify an LH, omit this value.

In an HA container configuration, omit this value.
LH

In a global zone configuration, specifies the name of the SUNW.LogicalHostName resource for the PostgreSQL resource. This name must be the SUNW.LogicalHostname resource name that you assigned when you created the resource in How to Enable a Zone to Run PostgreSQL in an HA Container Configuration. If you did not register a SUNW.LogicalHostname resource, omit this value.

In an HA container or WAL file shipping without shared storage configuration, omit this value.
HAS_RS

Specifies the names of resources on which your PostgreSQL resource depends, for example the SUNW.HAStoragePlus resource. This name must be the SUNW.HAStoragePlus resource name that you assigned when you created the resource in How to Enable a PostgreSQL Database to Run in a Global Zone Configuration. Dependencies on additional resources can be specified here, separated by commas. In a WAL file shipping without shared storage configuration, omit this value.
PFILE

Specifies the name of the parameter file where the PostgreSQL-specific parameters of the PostgreSQL resource are stored. This file is automatically created at registration time. You must specify a value for this keyword.
ZONE

Specifies the name of the HA container to host the PostgreSQL database. Omit this value if you configure a global zone environment.
ZONE_BT

Specifies the name of the zone boot resource in an HA container configuration. Omit this value if you configure a global zone environment.
PROJECT

Specifies the name of the resource management project in the HA container. Omitting this value in an HA container configuration results in the default project for USER. Leave the value blank for a global zone configuration.
Specifies the name of the Solaris user who owns the PostgreSQL database. You must specify a value for this keyword.
Specifies the name of the directory in which PostgreSQL is installed. For example, if PostgreSQL version 8.1.2 is installed in /global/postgres/postgresql-8.1.2, the variable PGROOT must be set to /global/postgres/postgresql-8.1.2. A valid PGROOT directory contains the file pg_ctl in its bin subdirectory. You must specify a value for this keyword.
Examples for PGROOT:
/usr
Root path for PostgreSQL shipped with the Solaris OS.
/usr/local/psql
Root path for a PostgreSQL build without a prefix.
/your-path
Fully customized root path for PostgreSQL. This is where to place the binaries on the shared storage. A common convention is /path/postgresql-x.y.z.
Specifies the name of the directory where the “PostgreSQL data cluster” is initialized. This directory is where the data directories and at least the postgresql.conf file are located. You must specify a value for this keyword.
Specifies the port on which the PostgreSQL server will listen.
Specifies the hostname or directory that is used by the probe. If PGHOST is a hostname, the probe uses the hostname to connect to the database. If PGHOST is a directory, the probe expects the UNIX domain socket in this directory to establish its connection. The PGHOST variable is referenced only by the probe; the database must be configured to match this setting.
Specifies the name of the log file of PostgreSQL. All server messages will be found in this file. You must specify a value for this keyword.
Specifies the libraries needed to start the PostgreSQL server and utilities. This parameter is optional.
Specifies the name of a script to source PostgreSQL—specific environment variables. In a global zone configuration, the script type is either C shell or Korn shell, according to the login shell of the PostgreSQL user. In an HA container configuration, the script type must be a valid Korn shell script.
This parameter is optional.
Specifies the name of the PostgreSQL database that will be monitored. You must specify a value for this keyword.
Specifies the name of the PostgreSQL database user, which is needed to monitor the condition of the database. This user will be created during the installation process. You must specify a value for this keyword.
Specifies the name of the table that will be modified to monitor the health of the PostgreSQL application. This table will be created during the installation process. You must specify a value for this keyword.
Specifies the password for SCUSER. If no password is specified, the user set by SCUSER needs to be allowed to log in from the localhost without a password challenge.
This parameter is optional.
Specifies the return code for failed database connections, which must be a value below 100. For more information, see Tuning the Sun Cluster HA for PostgreSQL Fault Monitor.
Specifies the name you assigned to the PostgreSQL standby resource. You must specify a value for this keyword on the primary if you configure WAL file shipping as a replacement for shared storage.
Specifies the name of the resource group where the PostgreSQL standby resource resides. You must specify a value for this keyword on the primary if you configure WAL file shipping as a replacement for shared storage.
Specifies the name of the Solaris user who owns the PostgreSQL standby database. You must specify a value for this keyword on the primary if you configure WAL file shipping as a replacement for shared storage.
Specifies the name of the cluster node that hosts the designated standby database. You must specify a value for this keyword on the primary if you configure WAL file shipping as a replacement for shared storage.
Specifies the name of the parameter file of the PostgreSQL standby resource. You must specify a value for this keyword on the primary if you configure WAL file shipping as a replacement for shared storage.
Specifies the number of packets the primary uses to ping the standby host. This value is optional; the default is five packets.
Specifies the name of the PostgreSQL Rolechanger resource. You must specify a value for this keyword on the standby host if you configure WAL file shipping as a replacement for shared storage.
Specifies the directory where an ssh passphrase is stored at registration time. This parameter is optional.
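As noted earlier for the PGROOT keyword, a valid PGROOT contains the file pg_ctl in its bin subdirectory. A quick shell check of this rule can be sketched as follows; the path is a scratch stand-in created only for illustration:

```shell
# Scratch stand-in for a real PostgreSQL installation directory.
PGROOT=/tmp/pgroot-demo
mkdir -p "$PGROOT/bin"
touch "$PGROOT/bin/pg_ctl" && chmod +x "$PGROOT/bin/pg_ctl"

# A valid PGROOT contains an executable bin/pg_ctl.
if [ -x "$PGROOT/bin/pg_ctl" ]; then
    echo "PGROOT looks valid: $PGROOT"
else
    echo "PGROOT is not valid: $PGROOT" >&2
fi
```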
For illustration purposes, two examples of the pgs_config file are provided. The first example shows the pgs_config file for a global zone configuration and the second example shows the pgs_config file for an HA container configuration.
This example shows a pgs_config file in which configuration parameters are set as follows:
The name of the PostgreSQL resource is postgres-rs.
The name of the resource group for the PostgreSQL resource is postgres-rg.
The value of the dummy port for the PostgreSQL resource is 80.
The name of the SUNW.LogicalHostname resource is postgres-lh.
The name of the SUNW.HAStoragePlus resource which manages the file system for PostgreSQL is postgres-has-rs.
The parameter file will be generated in /global/postgres/pfile.
The null values for ZONE, ZONE_BT, and PROJECT indicate that this is a global zone configuration.
The name of the Solaris user who owns PostgreSQL is postgres.
The PostgreSQL software is installed in /global/postgres/postgresql-8.1.2.
The PostgreSQL data and configuration files are installed under /global/postgres/data.
The PostgreSQL database server listens on port 5432. The probe connects by using the UNIX domain socket in the /tmp directory.
The log file for the database server is /global/postgres/logs/scinstance1.
The library search path LD_LIBRARY_PATH is set to /usr/sfw/lib:/usr/local/lib:/usr/lib:.
Additional PostgreSQL variables are set in /global/postgres/variables.ksh.
The database that will be monitored is testdb.
The user for the database monitoring is testusr.
The table testtbl will be modified to probe the condition of the database.
The password for the user testusr is testpwd.
If a connection to the database testdb fails, the probe returns with return code 10.
RS=postgres-rs
RG=postgres-rg
PORT=80
LH=postgres-lh
HAS_RS=postgres-has-rs
PFILE=/global/postgres/pfile
ZONE=
ZONE_BT=
PROJECT=
USER=postgres
PGROOT=/global/postgres/postgresql-8.1.2
PGDATA=/global/postgres/data
PGPORT=5432
PGHOST=
PGLOGFILE=/global/postgres/logs/scinstance1
LD_LIBRARY_PATH=/usr/sfw/lib:/usr/local/lib:/usr/lib:
ENVSCRIPT=/global/postgres/variables.ksh
SCDB=testdb
SCUSER=testusr
SCTABLE=testtbl
SCPASS=testpwd
NOCONRET=10
This example shows a pgs_config file in which configuration parameters are set as follows:
The name of the PostgreSQL resource is postgres-zrs.
The name of the resource group for the PostgreSQL resource is postgres-rg.
The values for the PORT variable, LH variable, and HAS_RS variable are not set.
The parameter file will be generated in /postgres/pfile.
The PostgreSQL database server will be started in zone pgs-zone.
The boot component resource for the zone pgs-zone is named pgs-zone-rs.
The PostgreSQL database server will be started under the project pgs-project.
The name of the Solaris user who owns PostgreSQL is zpostgr.
The PostgreSQL software is installed in /postgres/postgresql-8.1.2.
The PostgreSQL data and configuration files are installed in /postgres/data.
The PostgreSQL database server listens on port 5432. The probe connects using the UNIX domain socket in /tmp.
The log file for the database server is /postgres/logs/scinstance1.
The library search path LD_LIBRARY_PATH is set to /usr/sfw/lib:/usr/local/lib:/usr/lib:.
Additional PostgreSQL variables are set in /postgres/variables.ksh.
The database that will be monitored is testdb.
The user for the database monitoring is testusr.
The table testtbl will be modified to probe the condition of the database.
The password for the user testusr is testpwd.
If a connection to the database testdb fails, the probe returns with return code 10.
RS=postgres-zrs
RG=postgres-rg
PORT=
LH=
HAS_RS=
PFILE=/postgres/pfile
ZONE=pgs-zone
ZONE_BT=pgs-zone-rs
PROJECT=pgs-project
USER=zpostgr
PGROOT=/postgres/postgresql-8.1.2
PGDATA=/postgres/data
PGPORT=5432
PGHOST=
PGLOGFILE=/postgres/logs/scinstance1
LD_LIBRARY_PATH=/usr/sfw/lib:/usr/local/lib:/usr/lib:
ENVSCRIPT=/postgres/variables.ksh
SCDB=testdb
SCUSER=testusr
SCTABLE=testtbl
SCPASS=testpwd
NOCONRET=10
Sun Cluster HA for PostgreSQL provides a script that automates the process of configuring the PostgreSQL Rolechanger resource. This script obtains configuration parameters from the rolechg_config file. A template for this file is in the /opt/SUNWscPostgreSQL/rolechg/util directory. To specify configuration parameters for the PostgreSQL resource, copy the rolechg_config file to another directory and edit the file.
Each configuration parameter in the rolechg_config file is defined as a keyword-value pair. The rolechg_config file already contains the required keywords and equals signs. For more information, see the Listing of rolechg_config. When you edit the /myplace/rolechg_config file, add the required value to each keyword.
The keyword-value pairs in the rolechg_config file are as follows:
RS=Rolechanger-resource-name
RG=Rolechanger-resource-group
PORT=80
LH=Rolechanger-logical-host
HAS_RS=Rolechanger-dependency-list
STDBY_RS=PostgreSQL-standby-resource-name
PRI_RS=PostgreSQL-primary-resource-name
STDBY_HOST=PostgreSQL-standby-hostname
STDBY_PFILE=PostgreSQL-standby-parameter-file
TRIGGER=PostgreSQL-pg_standby-trigger-file
WAIT=Seconds-before-trigger
The permitted values of the keywords in the rolechg_config file and their explanations are as follows:
Specifies the name assigned to the Rolechanger resource. You must specify a value for this keyword.
Specifies the name assigned to the Rolechanger resource group. You must specify a value for this keyword.
In a global zone configuration, specifies the value of a dummy port only if you specified the LH value for the Rolechanger resource. This variable is used only during registration.
In a global zone configuration, specifies the name of the SUNW.LogicalHostName resource for the Rolechanger resource.
Specifies the dependency list for the Rolechanger resource. If you have only the Rolechanger resource and the logical host in your resource group, omit this value.
Specifies the name assigned to the PostgreSQL standby resource. You must specify a value for this keyword.
Specifies the name of the PostgreSQL primary resource. You must specify a value for this keyword.
Specifies the name of the host running the PostgreSQL standby resource group. You must specify a value for this keyword.
Specifies the name of the PostgreSQL standby resource parameter file. You must specify a value for this keyword on the primary if you configure WAL file shipping as a replacement for shared storage.
Specifies the trigger file for the PostgreSQL pg_standby utility. The trigger file must be an absolute path to a file name. You must specify a value for this keyword.
Specifies the number of seconds to wait before touching the trigger file, which starts the conversion from a standby to a primary. You must specify a value for this keyword.
The Rolechanger component of the PostgreSQL agent delivers two resilver scripts in the /opt/SUNWscPostgreSQL/rolechg/util directory. The scripts are called resilver-step1 and resilver-step2. The PostgreSQL user needs to copy, modify, and execute these scripts. The purpose of these scripts is to automate an exact copy from the standby to the primary after a failover. The scripts are designed to incur minimal downtime while providing maximum guidance.
The scripts rely on certain assumptions for the PostgreSQL configuration to work. You need to prepare your PostgreSQL installation according to the following assumptions:
The file postgresql.conf is linked to a directory other than PGDATA, for example: postgresql.conf -> ../conf/postgresql.conf.
The file recovery.conf/recovery.done is linked to a directory other than PGDATA, for example: recovery.conf -> ../conf/recovery.conf.
Every other configuration file in PGDATA that must differ between the designated primary and the designated standby is linked to a directory other than PGDATA.
The PostgreSQL users on the primary and on the standby are identical and trust each other for ssh login without a password prompt.
Each PostgreSQL installation is configured with an appropriate archive command and recovery.conf/done file.
When a recovery.conf file exists in the PGDATA directory, PostgreSQL executes the command specified in this file to obtain the WAL logs for its recovery. After finishing the recovery, PostgreSQL renames the file recovery.conf to recovery.done. To make the WAL file shipping and resilver scripts work properly, and for any other type of resilvering you might implement, create a link named recovery.conf on the designated standby and a link named recovery.done on the designated primary, each pointing from your PGDATA directory to ../conf/recovery.conf.
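Assuming a conf directory beside PGDATA as described above, the linking can be sketched as follows; the paths are scratch stand-ins used only for illustration:

```shell
# Sketch only: scratch paths stand in for the real PGDATA and conf directories.
BASE=/tmp/pgs-demo
mkdir -p "$BASE/data" "$BASE/conf"
touch "$BASE/conf/recovery.conf"

# On the designated standby, PostgreSQL must find recovery.conf in PGDATA:
ln -s ../conf/recovery.conf "$BASE/data/recovery.conf"

# On the designated primary, the link is named recovery.done instead:
# ln -s ../conf/recovery.conf "$BASE/data/recovery.done"
```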
The following examples show the different PostgreSQL configurations on the designated primary and standby servers. The designated primary and standby servers have different archive and recovery commands. In Example 4, the resilvering scripts are also explained in detail.
This example shows the required archive and recovery configuration for the designated primary server.
The archive command in postgresql.conf:
archive_command = '/usr/local/bin/rsync -arv %p \
standby:/pgs/82_walarchives/%f</dev/null'
The contents of recovery.conf/done:
restore_command = 'cp /pgs/82_walarchives/%f %p'
This example shows the required archive, recovery, and resilver configuration for the designated standby server.
The archive command in postgresql.conf:
archive_command = '/usr/local/bin/rsync -arv %p standby:/pgs/ \
82_walarchives/%f</dev/null'
The contents of recovery.conf/done:
restore_command = '/pgs/postgres-8.2.5/bin/pg_standby -k 10 -t \
/pgs/data/failover /pgs/82_walarchives %f %p'
The two scripts have several variables that must be customized. The key-value pairs and explanations for the two scripts are as follows:
Explanation for the script resilver-step1
Specifies the PGDATA directory of the current node. For normal use, it would be the one on the designated standby node.
Specifies the PGDATA of the target node. For normal use, it would be the one on the designated primary node.
Specifies the name of the target node. For normal use, it would be the name of the designated primary node.
Specifies the PostgreSQL base directory, where the PostgreSQL binaries are located.
Specifies the resource group, which contains the cluster resource of the designated primary.
Specifies the resource group, which contains the cluster resource of the designated standby.
Specifies the resource name of the designated standby.
Specifies the database port.
Specifies the resource group, which contains the Rolechanger resource.
Specifies the absolute path to the RSYNC command including the necessary options.
Specifies whether your ssh key is secured by a passphrase.
Explanation for the script resilver-step2
Specifies the name of the source node. For normal use, it would be the name of the designated standby node.
Specifies the PGDATA directory of the current node. For normal use, this is the one on the designated standby node.
Specifies the PGDATA of the target node. For normal use, this is the one on the designated primary node.
Specifies the name of the target node. For normal use, it would be the name of the designated primary node.
Specifies the PostgreSQL base directory, where the PostgreSQL binaries are located.
Specifies the resource group, which contains the cluster resource of the designated primary.
Specifies the resource group, which contains the cluster resource of the designated standby.
Specifies the resource name of the designated standby resource. This name should be unique on your standby. The script resilver-step2 requires the file that the script resilver-step1 generates under /var/tmp/${STDBY_RS}-resilver.
Specifies the resource group, which contains the Rolechanger resource.
Specifies the node name or zone name of the designated primary host or zone.
Specifies the absolute path to the RSYNC command, including the necessary options.
Specifies whether your ssh key is secured by a passphrase.
You need three configuration files:
A file for the PostgreSQL primary resource
A file for the PostgreSQL standby resource
A file for the Rolechanger resource for WAL file shipping without shared storage configuration
In addition to these requirements, you also need to customize copies of resilver-step1 and resilver-step2.
The configuration files are as follows:
pgs_primary_config for the primary resource
pgs_standby_config for the standby resource
rolechg_config for the Rolechanger resource
Modified copy of the resilver-step1 script
Modified copy of the resilver-step2 script
This example shows a pgs_primary_config file, a pgs_standby_config file, and a rolechg_config file in which configuration parameters are set.
The key-value pairs and explanations for a sample pgs_primary_config file are as follows:
The name of the PostgreSQL resource is postgres-prim-rs.
The name of the resource group for the PostgreSQL resource is postgres-prim-rg.
The value for the dummy port for the PostgreSQL resource is 80.
The SUNW.LogicalHostname resource is not present in postgres-prim-rg.
The SUNW.HAStoragePlus resource is not present in postgres-prim-rg.
The parameter file is generated in /postgres/pfile.
Specifies a global zone configuration.
Specifies a global zone configuration.
Specifies a global zone configuration.
The name of the Solaris user who owns PostgreSQL is pgs.
The PostgreSQL software is installed in /postgres/postgresql-8.3.1.
The PostgreSQL data and configuration files are installed under /postgres/data.
The PostgreSQL database server listens on port 5432.
The log file for the database server is /postgres/logs/scinstance1.
The library search path LD_LIBRARY_PATH is set to /usr/sfw/lib:/usr/local/lib:/usr/lib.
Additional PostgreSQL variables are set in /global/postgres/variables.ksh.
The monitored database is testdb.
The user for the database monitoring is testusr.
The table testtbl is modified to probe the condition of the database.
The password for the user testusr is testpwd.
If a connection to the database testdb fails, the probe returns with return code 10.
The resource name of the PostgreSQL standby resource is postgres-sta-rs.
The resource group name of the PostgreSQL standby resource group is postgres-sta-rg.
The user who owns the PostgreSQL standby database is pgs.
The name of the standby host is phys-node2.
The parameter file of the PostgreSQL standby resource is /postgres/pfile.
The Rolechanger resource name has a null value because it is not needed on the primary.
The SSH_PASSDIR has a null value to indicate that the ssh keys are not protected by a passphrase.
The key-value pairs and explanation for a pgs_standby_config file are as follows:
The name of the PostgreSQL resource is postgres-sta-rs.
The name of the resource group for the PostgreSQL resource is postgres-sta-rg.
The value for the dummy port for the PostgreSQL resource is 80.
The SUNW.LogicalHostname resource is not present in postgres-sta-rg.
The SUNW.HAStoragePlus resource is not present in postgres-sta-rg.
The parameter file is generated in /postgres/pfile.
The null value indicates that it is a global zone configuration.
The null value indicates that it is a global zone configuration.
The null value indicates that it is a global zone configuration.
The name of the Solaris user who owns PostgreSQL is pgs.
The PostgreSQL software is installed in /postgres/postgresql-8.3.1.
The PostgreSQL data and configuration files are installed under /postgres/data.
The PostgreSQL database server listens on port 5432.
The log file for the database server is /postgres/logs/scinstance1.
The library search path LD_LIBRARY_PATH is set to /usr/sfw/lib:/usr/local/lib:/usr/lib:.
Additional PostgreSQL variables are set in /global/postgres/variables.ksh.
The monitored database is testdb.
The user for the database monitoring is testusr.
The table testtbl is modified to probe the condition of the database.
The password for the user testusr is testpwd.
If a connection to the database testdb fails, the probe returns with return code 10.
The value for the STDBY_RS is not required in a standby configuration.
The value for STDBY_RG is not required in a standby configuration.
The value for STDBY_USER is not required in a standby configuration.
The value for STDBY_HOST is not required in a standby configuration.
The value for STDBY_PARFILE is not required in a standby configuration.
The Rolechanger resource is rolechg-rs.
The SSH_PASSDIR has a null value, which means that the ssh keys are not protected by a passphrase.
The key-value pairs and explanations for the configuration file rolechg_config are as follows:
The name of the Rolechanger resource is rolechg-rs.
The name of the resource group for the PostgreSQL resource is rolechg-rg.
The value of the dummy port for the PostgreSQL resource is 5432.
The resource name for the SUNW.LogicalHostname resource is pgs-lh-1.
No SUNW.HAStoragePlus resource or other dependencies are present.
The name of the PostgreSQL standby resource is postgres-sta-rs.
The name of the PostgreSQL primary resource is postgres-prim-rs.
The physical node name of the standby is phys-node2.
The parameter file on the standby is /postgres/pfile.
The trigger file on which the pg_standby utility reacts is phys-node2.
After the resource is started, Rolechanger waits for 30 seconds until it touches the trigger file.
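Assembled from the values listed above, the rolechg_config file for this example might read as follows. The TRIGGER value is a hypothetical placeholder, because the trigger file must be an absolute path to a file; adjust it to your environment:

```shell
# Values taken from the example above; TRIGGER is a hypothetical path.
RS=rolechg-rs
RG=rolechg-rg
PORT=5432
LH=pgs-lh-1
HAS_RS=
STDBY_RS=postgres-sta-rs
PRI_RS=postgres-prim-rs
STDBY_HOST=phys-node2
STDBY_PFILE=/postgres/pfile
TRIGGER=/postgres/data/failover
WAIT=30
```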
Modifications in a copy of resilver-step1
PGDATA of the standby is in /postgres/data.
PGDATA of the primary is in /postgres/data.
Specifies the name of the target node. The usual name is the name of the designated primary node.
Specifies the PostgreSQL base directory, where the PostgreSQL binaries are located.
Specifies the resource group that contains the cluster resource of the designated primary.
Specifies the resource group that contains the cluster resource of the designated standby.
Specifies the database port.
Specifies the resource group that contains the Rolechanger resource.
Specifies the absolute path to the RSYNC command, including the necessary options.
Specifies whether your ssh key is secured by a passphrase.
Modifications in a copy of resilver-step2
The source node is phys-node2.
PGDATA of the standby is in /postgres/data.
PGDATA of the primary is in /postgres/data.
The target node is phys-node1.
Specifies the PostgreSQL base directory, where the PostgreSQL binaries are located.
Specifies the resource group which contains the cluster resource of the designated primary.
The resource group for the standby resource is postgres-sta-rg.
Specifies the resource group which contains the cluster resource of the designated standby.
Specifies the database port.
The resource group for the Rolechanger resource is rolechg-rg.
The primary node is the global zone of phys-node1.
Specifies the absolute path to the RSYNC command including the necessary options.
Specifies whether your ssh key is secured by a passphrase.
To prepare your PostgreSQL installation for cluster control, you create a database, a user, and a table to be monitored by the PostgreSQL resource. Because you need to differentiate between a global zone and an HA container, two procedures are provided.
Ensure that you have edited the pgs_config file to specify configuration parameters for the Sun Cluster HA for PostgreSQL data service. For more information, see Specifying Configuration Parameters for the PostgreSQL Resource.
As superuser, change the permissions of the configuration file so that it is accessible to your PostgreSQL user.
# chmod 755 /myplace/pgs_config
Switch to your PostgreSQL user.
# su - postgres
If the login shell is not the Korn shell, switch to ksh.
% ksh
Set the necessary variables.
$ . /myplace/pgs_config
$ export PGDATA PGPORT LD_LIBRARY_PATH
If your PostgreSQL is not already running, start the PostgreSQL server.
$ $PGROOT/bin/pg_ctl -l $PGLOGFILE start
Prepare the database.
$ /opt/SUNWscPostgreSQL/util/pgs_db_prep -f /myplace/pgs_config
(Optional) Configure your PostgreSQL instance to listen on the logical host's TCP/IP name.
If you want your PostgreSQL databases to listen on more than localhost, configure the listen_addresses parameter in the file postgresql.conf. Use a plain text editor such as vi, and set listen_addresses to an appropriate value.
The PostgreSQL instance must listen on localhost. For additional information, see http://www.postgresql.org.
listen_addresses = 'localhost,myhost'
Set the security policy for the test database.
Use a plain text editor such as vi to add the following line to the file pg_hba.conf.
local testdb all password
For additional information about the pg_hba.conf file, see http://www.postgresql.org.
Stop the PostgreSQL database server.
$ $PGROOT/bin/pg_ctl stop
Ensure that you have edited the pgs_config file to specify configuration parameters for the Sun Cluster HA for PostgreSQL data service. For more information, see Specifying Configuration Parameters for the PostgreSQL Resource. Also make sure that the package directory of Sun Cluster HA for PostgreSQL, /opt/SUNWscPostgreSQL, is available in the target zone.
As superuser, change the permissions of the configuration file so that it is accessible to your PostgreSQL user.
Ensure that your pgs_config file is accessible from your zone. Otherwise, transfer the file to your zone by using appropriate methods.
# chmod 755 /myplace/pgs_config
Switch to the target zone.
# zlogin pgsql-zone
Switch to the PostgreSQL user.
# su - zpostgr
If the login shell is not the Korn shell, switch to ksh.
% ksh
Set the necessary variables.
$ . /myplace/pgs_config
$ export PGDATA PGPORT LD_LIBRARY_PATH
If your PostgreSQL is not already running, start the PostgreSQL server.
$ $PGROOT/bin/pg_ctl -l $PGLOGFILE start
Prepare the database.
$ /opt/SUNWscPostgreSQL/util/pgs_db_prep -f /myplace/pgs_config
(Optional) Configure your PostgreSQL instance to listen on the logical host's TCP/IP name.
If you want your PostgreSQL databases to listen on more than localhost, configure the listen_addresses parameter in the file postgresql.conf. Use a plain text editor such as vi, and set listen_addresses to an appropriate value.
The PostgreSQL instance must listen on localhost. For additional information, see http://www.postgresql.org.
listen_addresses = 'localhost,myhost'
Set the security policy for the test database.
Use a plain text editor such as vi to add the following line to the pg_hba.conf file.
local testdb all password
For additional information, see http://www.postgresql.org.
Stop the PostgreSQL database server.
$ $PGROOT/bin/pg_ctl stop
Leave the target zone and return to the global zone.
Ensure that you have edited the pgs_config file to specify configuration parameters for the Sun Cluster HA for PostgreSQL data service. For more information, see Specifying Configuration Parameters for the PostgreSQL Resource.
Become superuser on one of the nodes in the cluster that will host PostgreSQL.
Go to the directory that contains the script for creating the Sun Cluster HA for PostgreSQL resource.
# cd /opt/SUNWscPostgreSQL/util
Run the script that creates the PostgreSQL resource.
# ksh ./pgs_register -f /myplace/pgs_config
If you omit the -f option, the file /opt/SUNWscPostgreSQL/util/pgs_config is used.
Bring the PostgreSQL resource online.
# clresource enable postgres-rs
Perform this task to change parameters in the Sun Cluster HA for PostgreSQL manifest and to validate the parameters in the HA container. Parameters for the Sun Cluster HA for PostgreSQL manifest are stored as properties of the SMF service. To modify parameters in the manifest, change the related properties in the SMF service, and then validate the parameter changes.
Become superuser or assume a role that provides solaris.cluster.modify and solaris.cluster.admin RBAC authorizations on the zone's console.
Change the Solaris Service Management Facility (SMF) properties for the Sun Cluster HA for PostgreSQL manifest.
# svccfg svc:/application/sczone-agents:resource
For more information, see the svccfg(1M) man page.
Validate the parameter changes.
# /opt/SUNWscPostgreSQL/bin/control_pgs validate resource
Messages for this command are stored in the /var/adm/messages file of the HA container.
Disconnect from the HA container's console.
Become superuser or assume a role that provides solaris.cluster.modify and solaris.cluster.admin RBAC authorizations.
Disable and remove the resource that is used by the Sun Cluster HA for PostgreSQL data service.
# clresource disable resource
# clresource delete resource
Log in as superuser to the HA container's console.
Unregister Sun Cluster HA for PostgreSQL from the Solaris Service Management Facility (SMF) service.
# /opt/SUNWscPostgreSQL/util/pgs_smf_remove -f filename
Specifies the name of the configuration file that you used to register Sun Cluster HA for PostgreSQL with the SMF service.
If you no longer have the configuration file that you used to register Sun Cluster HA for PostgreSQL with the SMF service, create a replacement configuration file:
Make a copy of the default file, /opt/SUNWscPostgreSQL/util/pgs_config.
Set the ZONE and RS parameters with the values that are used by the data service.
Run the pgs_smf_remove command and use the -f option to specify this configuration file.
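The replacement steps above can be sketched as follows. The sed-based edit is one possible way to set the parameters; the scratch paths and the RS and ZONE values are illustrative stand-ins, and the final pgs_smf_remove call is shown only as a comment because it must run in the HA container:

```shell
# Scratch stand-in for the default file /opt/SUNWscPostgreSQL/util/pgs_config.
mkdir -p /tmp/smf-demo
printf 'RS=\nZONE=\n' > /tmp/smf-demo/pgs_config_default

# Step 1: make a copy of the default file.
cp /tmp/smf-demo/pgs_config_default /tmp/smf-demo/pgs_config

# Step 2: set the ZONE and RS parameters to the values the data service uses.
sed -e 's/^RS=.*/RS=postgres-zrs/' -e 's/^ZONE=.*/ZONE=pgs-zone/' \
    /tmp/smf-demo/pgs_config > /tmp/smf-demo/pgs_config.tmp
mv /tmp/smf-demo/pgs_config.tmp /tmp/smf-demo/pgs_config

# Step 3: run the removal command with the -f option:
# /opt/SUNWscPostgreSQL/util/pgs_smf_remove -f /tmp/smf-demo/pgs_config
```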
Disconnect from the HA container's console.
Ensure that you have edited the rolechg_config file to specify configuration parameters for the Sun Cluster HA for PostgreSQL Rolechanger data service. For more information, see the Listing of rolechg_config.
Become superuser on one of the nodes in the cluster that hosts PostgreSQL.
Go to the directory that contains the script for creating the Sun Cluster HA for PostgreSQL Rolechanger resource.
# cd /opt/SUNWscPostgreSQL/util
Run the script that creates the PostgreSQL resource.
# ksh ./rolechg_register -f /myplace/rolechg_config
If you omit the -f option, the file /opt/SUNWscPostgreSQL/rolechg/util/rolechg_config is used.
Bring the PostgreSQL Rolechanger resource online.
# clresource enable rolechg-rs
After you install, register, and configure Sun Cluster HA for PostgreSQL, verify this installation and configuration to determine whether the Sun Cluster HA for PostgreSQL data service makes your PostgreSQL database highly available.
Become superuser on a cluster node that is to host the PostgreSQL component.
Ensure that all the PostgreSQL resources are online.
For each resource, perform the following steps:
Switch the resource group to another cluster node, such as node2.
# clresourcegroup switch -h node2 postgres-rg
Confirm that the resource is now online on node2.
# clresource status postgres-rs
Become superuser on a cluster node that is to host the PostgreSQL component.
Ensure that all the PostgreSQL resources are online.
For each resource, perform the following steps:
Determine whether the resource is online.
# clresource status postgres-prim-rs
# clresource status postgres-sta-rs
# clresource status rolechg-rs
If the resource is not online, bring the resource online.
# clresource enable postgres-sta-rs
# clresource enable postgres-prim-rs
# clresource enable rolechg-rs
Reboot the primary node where the primary database runs.
# reboot
Confirm that the resource is now online on node2.
# clresource status rolechg-rs
For a failback, log in as the Postgres user on the standby node and resilver the primary node by executing scripts resilver-step1 and resilver-step2.
Ensure that you follow the configuration steps for the scripts.
The Sun Cluster HA for PostgreSQL fault monitor verifies that the data service is running in a healthy condition.
A Sun Cluster HA for PostgreSQL fault monitor is contained in each resource that represents the PostgreSQL instance. You created these resources when you registered and configured Sun Cluster HA for PostgreSQL. For more information, see Registering and Configuring Sun Cluster HA for PostgreSQL.
System properties and extension properties of the PostgreSQL resources control the behavior of the fault monitor. The default values of these properties determine the preset behavior of the fault monitor. Because the preset behavior should be suitable for most Sun Cluster installations, tune the Sun Cluster HA for PostgreSQL fault monitor only if you need to modify this preset behavior.
Tuning the Sun Cluster HA for PostgreSQL fault monitor involves the following tasks:
Setting the return value for failed PostgreSQL monitor connections
Setting the interval between fault monitor probes
Setting the time-out for fault monitor probes
Defining the criteria for persistent faults
Specifying the failover behavior of a resource
The Sun Cluster HA for PostgreSQL fault monitor differentiates between connection problems and definitive application failures. The value of NOCONRET in the PostgreSQL parameter file specifies the return code for connection problems. Consecutive failed probes are ignored as long as each of them returns the value of NOCONRET. The first successful probe resets this failed-probe counter to zero. The maximum number of ignored failed probes is calculated as 100 / NOCONRET. A definitive application failure results in an immediate restart or failover.
The definition of the return value NOCONRET defines one of two behaviors for failed database connections of a PostgreSQL resource.
Retry the connection to the test database several times before considering the PostgreSQL resource as failed and triggering a restart or failover.
Complain at every probe that the connection to the test database failed. No restart or failover will be triggered.
To achieve either of these behaviors, you need to consider the standard resource properties retry_interval and thorough_probe_interval.
A “just complaining” probe is achieved as soon as the following inequality is true:
retry_interval < thorough_probe_interval * 100 / NOCONRET
As soon as this inequality is false, the PostgreSQL resource restarts or fails over after 100 / NOCONRET consecutive probe failures.
The value 100/NOCONRET defines the maximum number of retries for the probe in the case of a failed connection.
Assume that the following resource parameters are set:
thorough_probe_interval=60
retry_interval=900
NOCONRET=10
If you encounter, for example, a shortage of available database sessions for 7 minutes, you will see 7 complaints in /var/adm/messages but no resource restart. If the shortage lasts 10 minutes, the PostgreSQL resource restarts after the 10th probe.
If you do not want a resource restart in the previous example, set NOCONRET to 5 or less.
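The arithmetic in this example can be checked with a few lines of shell. This is a sketch of the calculation described above; the fault monitor performs the equivalent evaluation internally.

```shell
# Sketch: classify the probe behavior from the three tuning values
# discussed above (thorough_probe_interval, retry_interval, NOCONRET).
probe_behavior() {
    # $1 = thorough_probe_interval, $2 = retry_interval, $3 = NOCONRET
    max_probes=$((100 / $3))                 # maximum ignored failed probes
    if [ "$2" -lt $(($1 * max_probes)) ]; then
        echo "complain-only"                 # inequality true: no restart
    else
        echo "restart-after-$max_probes"     # restart/failover after max_probes failures
    fi
}

probe_behavior 60 900 10   # prints restart-after-10 (900 is not < 60 * 10)
probe_behavior 60 900 5    # prints complain-only (900 < 60 * 20)
```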
For more information, see Tuning Fault Monitors for Sun Cluster Data Services in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
The Sun Cluster HA for PostgreSQL resources use a parameter file to pass parameters to the start, stop, and probe commands. Changes to these parameters take effect, at the latest, whenever the resource is restarted or is disabled and re-enabled.
Changing one of the following parameters takes effect at the next probe of the PostgreSQL resource:
USER
PGROOT
PGPORT
PGHOST
LD_LIBRARY_PATH
SCDB
SCUSER
SCTABLE
SCPASS
NOCONRET
An incorrect change to these parameters while the PostgreSQL resource is enabled might result in an unplanned service outage. Therefore, disable the PostgreSQL resource first, make the change, and then re-enable the resource.
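The disable-edit-enable cycle can be sketched as follows. The parameter file path /myplace/pgs_config and the `update_param` helper are hypothetical, and the clresource calls are shown as comments because they require a running cluster.

```shell
# Hypothetical helper: replace NAME=value in a KEY=VALUE parameter file.
update_param() {
    # $1 = parameter file, $2 = parameter name, $3 = new value
    sed "s|^$2=.*|$2=$3|" "$1" > "$1.new" && mv "$1.new" "$1"
}

# clresource disable postgres-rs               # 1. take the resource offline
# update_param /myplace/pgs_config NOCONRET 5  # 2. change the parameter
# clresource enable postgres-rs                # 3. bring the resource back online
```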
The fault monitor for Sun Cluster HA for PostgreSQL verifies that the following requirements for PostgreSQL to run are met:
The Sun Cluster HA for PostgreSQL main postmaster process is running.
If this process is not running, the fault monitor restarts the PostgreSQL database server. If the fault persists, the fault monitor fails over the resource group that contains the PostgreSQL resource.
Connections to the PostgreSQL database server are possible, and the database catalog is accessible.
If the connection fails, the probe exits with the connection failed return code NOCONRET. If the database catalog is not accessible, the fault monitor restarts the PostgreSQL resource.
The test database is healthy.
If the test table in the test database can be manipulated, the database server is considered healthy. If the table manipulation fails, the fault monitor differentiates whether the problem was a connection error or whether the manipulation was unsuccessful for any other reason.
If the connection was impossible, the probe exits with the connection-failed return code NOCONRET. If the table manipulation itself was unsuccessful, the fault monitor triggers a restart or a failover of the PostgreSQL database server resource.
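The probe outcomes described above map to return codes roughly as follows. This is a sketch: only the NOCONRET convention is documented, and the other return codes used here are assumptions.

```shell
# Sketch of the probe's return-code convention (illustrative). A connection
# problem returns NOCONRET and is tolerated up to 100/NOCONRET consecutive
# times; any other failure triggers an immediate restart or failover.
NOCONRET=10   # example value from the parameter file

probe_result() {
    # $1 = outcome: ok | connection-failed | manipulation-failed
    case $1 in
        ok)                return 0 ;;            # healthy, counter resets
        connection-failed) return "$NOCONRET" ;;  # tolerated, counter increments
        *)                 return 100 ;;          # assumed code: definitive failure
    esac
}

probe_result connection-failed
echo "probe exit code: $?"   # prints: probe exit code: 10
```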
Sun Cluster HA for PostgreSQL has a file named config that enables you to activate debugging for PostgreSQL resources. This file is in the /opt/SUNWscPostgreSQL/etc directory.
Determine whether you are in a global zone or in an HA container configuration.
If your operating system is Solaris 10 and your PostgreSQL resource depends on a Solaris Container boot component resource, you are in an HA container configuration. In any other case, especially on a Solaris 9 system, you are in a global zone configuration.
Determine whether debugging for Sun Cluster HA for PostgreSQL is active.
# grep daemon /etc/syslog.conf
*.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
*.alert;kern.err;daemon.err                     operator
#
If debugging is inactive, daemon.notice is set in the file /etc/syslog.conf of the appropriate zone.
If debugging is inactive, edit the /etc/syslog.conf file in the appropriate zone to change daemon.notice to daemon.debug.
Confirm that debugging for Sun Cluster HA for PostgreSQL is active.
If debugging is active, daemon.debug is set in the file /etc/syslog.conf.
# grep daemon /etc/syslog.conf
*.err;kern.debug;daemon.debug;mail.crit         /var/adm/messages
*.alert;kern.err;daemon.err                     operator
#
Restart the syslogd daemon in the appropriate zone.
If your operating system is Solaris 9, type:
# pkill -1 syslogd
If your operating system is Solaris 10, type:
# svcadm refresh svc:/system/system-log:default
Edit the /opt/SUNWscPostgreSQL/etc/config file to change the DEBUG= variable according to one of the examples:
DEBUG=ALL
DEBUG=resource name
DEBUG=resource name,resource name, ...
# cat /opt/SUNWscPostgreSQL/etc/config
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# Usage:
#       DEBUG=<RESOURCE_NAME> or ALL
#
DEBUG=ALL
#
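The way a DEBUG value selects resources can be sketched as follows. The exact matching performed by the agent is not published, so this logic is an assumption based on the usage comment in the config file.

```shell
# Sketch (assumed matching logic): decide whether debug logging applies to
# a resource, given the DEBUG value ("", ALL, or a comma-separated list of
# resource names) from /opt/SUNWscPostgreSQL/etc/config.
debug_enabled_for() {
    # $1 = DEBUG value, $2 = resource name
    case ",$1," in
        *,ALL,*|*",$2,"*) return 0 ;;   # ALL, or the name appears in the list
        *)                return 1 ;;   # empty or not listed
    esac
}

debug_enabled_for "postgres-rs,rolechg-rs" postgres-rs && echo "debugging active"
```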
To deactivate debugging, repeat Steps 1 through 6, changing daemon.debug to daemon.notice and changing the DEBUG variable to DEBUG=.