Before you perform the procedures in this section, ensure that the Sun Cluster HA for PostgreSQL data service packages are installed.
The configuration and registration file in the /opt/SUNWscPostgreSQL/util directory exists to register the Sun Cluster HA for PostgreSQL resources. This file defines the dependencies that are required between the Sun Cluster HA for PostgreSQL component and other resources. For information about these dependencies, see Dependencies Between Sun Cluster HA for PostgreSQL Components.
This section covers the following main topics:
Sun Cluster HA for PostgreSQL provides a script that automates the process of configuring the PostgreSQL resource. This script obtains configuration parameters from the pgs_config file. A template for this file is in the /opt/SUNWscPostgreSQL/util directory. To specify configuration parameters for the PostgreSQL resource, copy the pgs_config file to another directory and edit the copy.
This configuration file must be accessible from the zone where PostgreSQL is installed.
Each configuration parameter in the pgs_config file is defined as a keyword-value pair. The pgs_config file already contains the required keywords and equals signs. For more information, see Listing of pgs_config. When you edit the /myplace/pgs_config file, add the required value to each keyword.
The keyword-value pairs in the pgs_config file are as follows:
RS=PostgreSQL-resource
RG=PostgreSQL-resource-group
PORT=80
LH=PostgreSQL-logical-hostname-resource-name
HAS_RS=PostgreSQL-has-resource
PFILE=pgsql-parameter-file
ZONE=pgsql-zone
ZONE_BT=pgsql-zone-rs
PROJECT=pgsql-zone-project
USER=pgsql-user
PGROOT=pgsql-root-directory
PGDATA=pgsql-data-directory
PGPORT=pgsql-port
PGHOST=pgsql-host
PGLOGFILE=pgsql-log-file
LD_LIBRARY_PATH=pgsql-ld-library-path
ENVSCRIPT=pgsql-environment-script
SCDB=pgsql-mon-db
SCUSER=pgsql-mon-user
SCTABLE=pgsql-mon-table
SCPASS=pgsql-mon-pwd
NOCONRET=pgsql-noconn-rtcode
STDBY_RS=PostgreSQL-standby-resource
STDBY_RG=PostgreSQL-standby-resource-group
STDBY_USER=PostgreSQL-standby-user
STDBY_HOST=PostgreSQL-standby-host
STDBY_PARFILE=PostgreSQL-standby-parameter-file
STDBY_PING=Number-of-packets
ROLECHG_RS=PostgreSQL-rolechanger-resource
SSH_PASSDIR=PostgreSQL-user-passphrase-directory
The meaning and permitted values of the keywords in the pgs_config file are as follows:
Specifies the name that you are assigning to the PostgreSQL resource. You must specify a value for this keyword.
Specifies the name of the resource group where the PostgreSQL resource will reside. You must specify a value for this keyword.
In a global zone configuration, specifies the value of a dummy port. Specify this value only if you specified the LH value for the PostgreSQL resource; it is used only at registration time. If you will not specify an LH, omit this value.
In an HA container configuration, omit this value.
In a global zone configuration, specifies the name of the SUNW.LogicalHostname resource for the PostgreSQL resource. This name must be the SUNW.LogicalHostname resource name that you assigned when you created the resource in How to Enable a Zone to Run PostgreSQL in an HA Container Configuration. If you did not register a SUNW.LogicalHostname resource, omit this value.
In an HA container configuration, and in a WAL file shipping without shared storage configuration, omit this value.
Specifies the names of resources on which your PostgreSQL resource depends, for example, the SUNW.HAStoragePlus resource. This name must be the SUNW.HAStoragePlus resource name that you assigned when you created the resource in How to Enable a PostgreSQL Database to Run in a Global Zone Configuration. You can specify dependencies on additional resources here, separated by commas. In a WAL file shipping without shared storage configuration, omit this value.
Specifies the name of the parameter file where the PostgreSQL-specific parameters of the PostgreSQL resource are stored. This file is automatically created at registration time. You must specify a value for this keyword.
Specifies the name of the HA container to host the PostgreSQL database. Omit this value if you configure a global zone environment.
Specifies the name of the zone boot resource in an HA container configuration. Omit this value if you configure a global zone environment.
Specifies the name of the resource management project in the HA container. Omitting this value in an HA container configuration results in the default project for USER. Leave the value blank for a global zone configuration.
Specifies the name of the Solaris user who owns the PostgreSQL database. You must specify a value for this keyword.
Specifies the name of the directory in which PostgreSQL is installed. For example, if PostgreSQL version 8.1.2 is installed in /global/postgres/postgresql-8.1.2, set the PGROOT variable to /global/postgres/postgresql-8.1.2. A valid PGROOT directory contains the file pg_ctl in its bin subdirectory. You must specify a value for this keyword.
Examples for PGROOT:
/usr
    Root path for PostgreSQL shipped with the Solaris OS.
/usr/local/psql
    Root path for a PostgreSQL build without a prefix.
/your-path
    Fully customized root path for PostgreSQL. This is where to place the binaries on the shared storage. A common convention is /path/postgresql-x.y.z.
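A candidate PGROOT can be checked before registration. The following sketch is not part of the shipped tooling; it simply verifies that pg_ctl exists in the bin subdirectory of a candidate path, as the rule above requires:

```shell
# Sketch: verify a candidate PGROOT before entering it in pgs_config.
# A valid PGROOT contains the pg_ctl utility in its bin subdirectory.
check_pgroot() {
  if [ -x "$1/bin/pg_ctl" ]; then
    echo "valid PGROOT: $1"
  else
    echo "invalid PGROOT: $1/bin/pg_ctl not found" >&2
    return 1
  fi
}

# Example call; the path is illustrative:
# check_pgroot /global/postgres/postgresql-8.1.2
```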
Specifies the name of the directory where the PostgreSQL database cluster is initialized. This directory is where the data directories and at least the postgresql.conf file are located. You must specify a value for this keyword.
Specifies the port on which the PostgreSQL server will listen.
Specifies the hostname or directory that is used by the probe. If PGHOST is a hostname, the probe uses it to connect to the database. If PGHOST is a directory, the probe expects the UNIX domain socket in this directory to establish its connection. The PGHOST variable is referenced only by the probe, and the database must be configured to match this setting.
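The distinction can be pictured with a small sketch. This is an illustration of the rule above, not the agent's actual probe code:

```shell
# Sketch: how a PGHOST value is interpreted. A directory means the
# probe uses the UNIX domain socket in that directory; anything else
# is treated as a hostname for a TCP/IP connection.
pghost_style() {
  if [ -d "$1" ]; then
    echo "socket: probe uses the UNIX domain socket in $1"
  else
    echo "tcp: probe connects to host $1"
  fi
}

pghost_style /tmp     # a directory
pghost_style myhost   # a hostname (example name)
```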
Specifies the name of the log file of PostgreSQL. All server messages will be found in this file. You must specify a value for this keyword.
Specifies the libraries needed to start the PostgreSQL server and utilities. This parameter is optional.
Specifies the name of a script that sources PostgreSQL-specific environment variables. In a global zone configuration, the script type is either C shell or Korn shell, according to the login shell of the PostgreSQL user. In an HA container configuration, the script type must be a valid Korn shell script.
This parameter is optional.
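For illustration, a hypothetical ENVSCRIPT might look like the following Korn shell fragment; every name and value here is an example, not a required setting:

```shell
# Hypothetical ENVSCRIPT (for example, /global/postgres/variables.ksh).
# The agent sources this file, so in an HA container configuration it
# must be valid Korn shell syntax. Values are examples only.
TZ=US/Pacific
PGCONNECT_TIMEOUT=10
export TZ PGCONNECT_TIMEOUT
```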
Specifies the name of the PostgreSQL database that will be monitored. You must specify a value for this keyword.
Specifies the name of the PostgreSQL database user, which is needed to monitor the condition of the database. This user will be created during the installation process. You must specify a value for this keyword.
Specifies the name of the table that will be modified to monitor the health of the PostgreSQL application. This table will be created during the installation process. You must specify a value for this keyword.
Specifies the password for SCUSER. If no password is specified, the user set by SCUSER needs to be allowed to log in from the localhost without a password challenge.
This parameter is optional.
Specifies the return code, which must be a value below 100, that the probe returns for failed database connections. For more information, see Tuning the Sun Cluster HA for PostgreSQL Fault Monitor.
Specifies the name you assigned to the PostgreSQL standby resource. You must specify a value for this keyword on the primary if you configure WAL file shipping as a replacement for shared storage.
Specifies the name of the resource group where the PostgreSQL standby resource resides. You must specify a value for this keyword on the primary if you configure WAL file shipping as a replacement for shared storage.
Specifies the name of the Solaris user who owns the PostgreSQL standby database. You must specify a value for this keyword on the primary if you configure WAL file shipping as a replacement for shared storage.
Specifies the name of the cluster node that hosts the designated standby database. You must specify a value for this keyword on the primary if you configure WAL file shipping as a replacement for shared storage.
Specifies the name of the parameter file of the PostgreSQL standby resource. You must specify a value for this keyword on the primary if you configure WAL file shipping as a replacement for shared storage.
Specifies the number of packets that the primary uses to ping the standby host. This value is optional; the default is five packets.
Specifies the name of the PostgreSQL Rolechanger resource. You must specify a value for this keyword on the standby host if you configure WAL file shipping as a replacement for shared storage.
Specifies the directory where an ssh passphrase is stored at registration time. This parameter is optional.
For illustration purposes, two examples of the pgs_config file are provided. The first example shows the pgs_config file for a global zone configuration, and the second example shows the pgs_config file for an HA container configuration.
This example shows a pgs_config file in which configuration parameters are set as follows:
The name of the PostgreSQL resource is postgres-rs.
The name of the resource group for the PostgreSQL resource is postgres-rg.
The value of the dummy port for the PostgreSQL resource is 80.
The name of the SUNW.LogicalHostname resource is postgres-lh.
The name of the SUNW.HAStoragePlus resource that manages the file system for PostgreSQL is postgres-has-rs.
The parameter file will be generated in /global/postgres/pfile.
The null value for ZONE, ZONE_BT, and PROJECT indicates that it is a global zone configuration.
The name of the Solaris user who owns PostgreSQL is postgres.
The PostgreSQL software is installed in /global/postgres/postgresql-8.1.2.
The PostgreSQL data and configuration files are installed under /global/postgres/data.
The PostgreSQL database server listens on port 5432. The probe connects by using the UNIX domain socket in the /tmp directory.
The log file for the database server is /global/postgres/logs/scinstance1.
The LD_LIBRARY_PATH for the PostgreSQL server is /usr/sfw/lib:/usr/local/lib:/usr/lib:.
Additional PostgreSQL variables are set in /global/postgres/variables.ksh.
The database that will be monitored is testdb.
The user for the database monitoring is testusr.
The table testtbl will be modified to probe the condition of the database.
The password for the user testusr is testpwd.
If a connection to the database testdb fails, the probe returns with return code 10.
RS=postgres-rs
RG=postgres-rg
PORT=80
LH=postgres-lh
HAS_RS=postgres-has-rs
PFILE=/global/postgres/pfile
ZONE=
ZONE_BT=
PROJECT=
USER=postgres
PGROOT=/global/postgres/postgresql-8.1.2
PGDATA=/global/postgres/data
PGPORT=5432
PGHOST=
PGLOGFILE=/global/postgres/logs/scinstance1
LD_LIBRARY_PATH=/usr/sfw/lib:/usr/local/lib:/usr/lib:
ENVSCRIPT=/global/postgres/variables.ksh
SCDB=testdb
SCUSER=testusr
SCTABLE=testtbl
SCPASS=testpwd
NOCONRET=10
This example shows a pgs_config file in which configuration parameters are set as follows:
The name of the PostgreSQL resource is postgres-zrs.
The name of the resource group for the PostgreSQL resource is postgres-rg.
The values for the PORT variable, LH variable, and HAS_RS variable are not set.
The parameter file will be generated in /postgres/pfile.
The PostgreSQL database server will be started in zone pgs-zone.
The boot component resource for the zone pgs-zone is named pgs-zone-rs.
The PostgreSQL database server will be started under the project pgs-project.
The name of the Solaris user who owns PostgreSQL is zpostgr.
The PostgreSQL software is installed in /postgres/postgresql-8.1.2.
The PostgreSQL data and configuration files are installed in /postgres/data.
The PostgreSQL database server listens on port 5432. The probe connects using the UNIX domain socket in /tmp.
The log file for the database server is /postgres/logs/scinstance1.
The LD_LIBRARY_PATH for the PostgreSQL server is /usr/sfw/lib:/usr/local/lib:/usr/lib:.
Additional PostgreSQL variables are set in /postgres/variables.ksh.
The database that will be monitored is testdb.
The user for the database monitoring is testusr.
The table testtbl will be modified to probe the condition of the database.
The password for the user testusr is testpwd.
If a connection to the database testdb fails, the probe returns with return code 10.
RS=postgres-zrs
RG=postgres-rg
PORT=
LH=
HAS_RS=
PFILE=/postgres/pfile
ZONE=pgs-zone
ZONE_BT=pgs-zone-rs
PROJECT=pgs-project
USER=zpostgr
PGROOT=/postgres/postgresql-8.1.2
PGDATA=/postgres/data
PGPORT=5432
PGHOST=
PGLOGFILE=/postgres/logs/scinstance1
LD_LIBRARY_PATH=/usr/sfw/lib:/usr/local/lib:/usr/lib:
ENVSCRIPT=/postgres/variables.ksh
SCDB=testdb
SCUSER=testusr
SCTABLE=testtbl
SCPASS=testpwd
NOCONRET=10
Sun Cluster HA for PostgreSQL provides a script that automates the process of configuring the PostgreSQL Rolechanger resource. This script obtains configuration parameters from the rolechg_config file. A template for this file is in the /opt/SUNWscPostgreSQL/rolechg/util directory. To specify configuration parameters for the Rolechanger resource, copy the rolechg_config file to another directory and edit the copy.
Each configuration parameter in the rolechg_config file is defined as a keyword-value pair. The rolechg_config file already contains the required keywords and equals signs. For more information, see the Listing of rolechg_config. When you edit the /myplace/rolechg_config file, add the required value to each keyword.
The keyword-value pairs in the rolechg_config file are as follows:
RS=Rolechanger-resource-name
RG=Rolechanger-resource-group
PORT=80
LH=Rolechanger-logical-host
HAS_RS=Rolechanger-dependency-list
STDBY_RS=PostgreSQL-standby-resource-name
PRI_RS=PostgreSQL-primary-resource-name
STDBY_HOST=PostgreSQL-standby-hostname
STDBY_PFILE=PostgreSQL-standby-parameter-file
TRIGGER=PostgreSQL-pg_standby-trigger-file
WAIT=Seconds-before-trigger
The meaning and permitted values of the keywords in the rolechg_config file are as follows:
Specifies the name assigned to the Rolechanger resource. You must specify a value for this keyword.
Specifies the name assigned to the Rolechanger resource group. You must specify a value for this keyword.
In a global zone configuration, specifies the value of a dummy port only if you specified the LH value for the Rolechanger resource. This variable is used only during registration.
In a global zone configuration, specifies the name of the SUNW.LogicalHostname resource for the Rolechanger resource.
Specifies the dependency list for the Rolechanger resource. If you have only the Rolechanger resource and the logical host in your resource group, omit this value.
Specifies the name assigned to the PostgreSQL standby resource. You must specify a value for this keyword.
Specifies the name of the PostgreSQL primary resource. You must specify a value for this keyword.
Specifies the name of the host running the PostgreSQL standby resource group. You must specify a value for this keyword.
Specifies the name of the PostgreSQL standby resource parameter file. You must specify a value for this keyword on the primary if you configure WAL file shipping as a replacement for shared storage.
Specifies the trigger file for the PostgreSQL pg_standby utility. The trigger file must be an absolute path to a file name. You must specify a value for this keyword.
Specifies the number of seconds to wait before touching the trigger file, which starts the conversion from a standby to a primary. You must specify a value for this keyword.
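Mechanically, the role change comes down to touching the TRIGGER file after WAIT seconds. The following sketch shows that behavior only; it is not the Rolechanger's actual implementation, and the path and values are examples:

```shell
# Sketch of the trigger mechanism. TRIGGER and WAIT mirror the
# rolechg_config keywords; the path and values are examples.
TRIGGER=/var/tmp/pgs-failover-trigger
WAIT=1                 # a production value might be 30
sleep "$WAIT"          # allow time before starting the conversion
touch "$TRIGGER"       # pg_standby sees the file, ends recovery, and the
                       # standby becomes a primary
```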
The Rolechanger component of the PostgreSQL agent delivers two resilver scripts in the /opt/SUNWscPostgreSQL/rolechg/util directory. The scripts are called resilver-step1 and resilver-step2. The PostgreSQL user needs to copy, modify, and execute these scripts. These scripts automate making an exact copy from the standby to the primary after a failover, incurring minimal downtime while providing as much guidance as possible.
The scripts rely on certain assumptions for the PostgreSQL configuration to work. You need to prepare your PostgreSQL installation according to the following assumptions:
The postgresql.conf file in PGDATA is a symbolic link to a file in a directory other than PGDATA, for example: postgresql.conf -> ../conf/postgresql.conf.
The recovery.conf/recovery.done file in PGDATA is a symbolic link to a file in a directory other than PGDATA, for example: recovery.conf -> ../conf/recovery.conf.
Every other configuration file in PGDATA that must differ between the designated primary and the designated standby is a symbolic link to a file in a directory other than PGDATA.
The PostgreSQL users on the primary and on the standby are identical and trust each other for ssh login without a password prompt.
Each PostgreSQL installation is configured with an appropriate archive command and recovery.conf/done file.
When a recovery.conf file exists in the PGDATA directory, PostgreSQL executes the command specified in this file to obtain the WAL logs for its recovery. After finishing the recovery, PostgreSQL renames the file recovery.conf to recovery.done. To make the WAL file shipping and resilver scripts work properly, and for any other type of resilvering you might implement, create a recovery.conf link on the designated standby and a recovery.done link on the designated primary, each pointing from your PGDATA directory to ../conf/recovery.conf.
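Under these assumptions, the link layout could be created as in the following sketch; a temporary directory stands in for the real base directory so that the sketch is self-contained, and all paths are illustrative:

```shell
# Sketch: keep node-specific configuration files outside PGDATA and
# link them in, so resilvering can copy PGDATA safely.
BASE=$(mktemp -d)            # stands in for a real base such as /pgs
PGDATA=$BASE/data
mkdir -p "$PGDATA" "$BASE/conf"
: > "$BASE/conf/postgresql.conf"
: > "$BASE/conf/recovery.conf"
ln -s ../conf/postgresql.conf "$PGDATA/postgresql.conf"
# On the designated standby:
ln -s ../conf/recovery.conf "$PGDATA/recovery.conf"
# On the designated primary, the link is named recovery.done instead:
# ln -s ../conf/recovery.conf "$PGDATA/recovery.done"
```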
The following examples show the different PostgreSQL configurations on the designated primary and standby servers. The designated primary and standby servers have different archive and recovery commands. In Example 4, the resilvering scripts are also explained in detail.
This example shows the required archive and recovery configuration for the designated primary server.
The archive command in postgresql.conf:
archive_command = '/usr/local/bin/rsync -arv %p standby:/pgs/82_walarchives/%f </dev/null'
The contents of recovery.conf/done:
restore_command = 'cp /pgs/82_walarchives/%f %p'
This example shows the required archive, recovery, and resilver configuration for the designated standby server.
The archive command in postgresql.conf:
archive_command = '/usr/local/bin/rsync -arv %p standby:/pgs/82_walarchives/%f </dev/null'
The contents of recovery.conf/done:
restore_command = '/pgs/postgres-8.2.5/bin/pg_standby -k 10 -t /pgs/data/failover /pgs/82_walarchives %f %p'
The two scripts have various variables that need to be customized. The key-value pairs and explanations for the two scripts are as follows:
Explanation for the script resilver-step1
Specifies the PGDATA directory of the current node. For normal use, this is the PGDATA directory on the designated standby node.
Specifies the PGDATA directory of the target node. For normal use, this is the PGDATA directory on the designated primary node.
Specifies the name of the target node. For normal use, this is the name of the designated primary node.
Specifies the PostgreSQL base directory, where the PostgreSQL binaries are located.
Specifies the resource group that contains the cluster resource of the designated primary.
Specifies the resource group that contains the cluster resource of the designated standby.
Specifies the resource name of the designated standby.
Specifies the database port.
Specifies the resource group that contains the Rolechanger resource.
Specifies the absolute path to the RSYNC command including the necessary options.
Specifies whether your ssh key is secured by a passphrase.
Explanation for the script resilver-step2
Specifies the name of the source node. For normal use, this is the name of the designated standby node.
Specifies the PGDATA directory of the current node. For normal use, this is the PGDATA directory on the designated standby node.
Specifies the PGDATA directory of the target node. For normal use, this is the PGDATA directory on the designated primary node.
Specifies the name of the target node. For normal use, this is the name of the designated primary node.
Specifies the PostgreSQL base directory, where the PostgreSQL binaries are located.
Specifies the resource group that contains the cluster resource of the designated primary.
Specifies the resource group that contains the cluster resource of the designated standby.
Specifies the resource name of the designated standby resource. This name should be unique on your standby. The script resilver-step2 requires the file that resilver-step1 generates under /var/tmp/${STDBY_RS}-resilver.
Specifies the resource group that contains the Rolechanger resource.
Specifies the node name or zone name of the designated primary host or zone.
Specifies the absolute path to the rsync command, including the necessary options.
Specifies whether your ssh key is secured by a passphrase or not.
You need three configuration files:
A file for the PostgreSQL primary resource
A file for the PostgreSQL standby resource
A file for the Rolechanger resource for WAL file shipping without shared storage configuration
In addition to these requirements, you also need to customize copies of resilver-step1 and resilver-step2.
The configuration files are as follows:
pgs_primary_config for the primary resource
pgs_standby_config for the standby resource
rolechg_config for the Rolechanger resource
Modified copy of the resilver-step1 script
Modified copy of the resilver-step2 script
This example shows a pgs_primary_config file, a pgs_standby_config file, and a rolechg_config file in which configuration parameters are set.
The key-value pairs and explanations for a sample pgs_primary_config file are as follows:
The name of the PostgreSQL resource is postgres-prim-rs.
The name of the resource group for the PostgreSQL resource is postgres-prim-rg.
The value for the dummy port for the PostgreSQL resource is 80.
A SUNW.LogicalHostname resource is not present in postgres-prim-rg.
A SUNW.HAStoragePlus resource is not present in postgres-prim-rg.
The parameter file is generated in /postgres/pfile.
The null value indicates that it is a global zone configuration.
The null value indicates that it is a global zone configuration.
The null value indicates that it is a global zone configuration.
The name of the Solaris user who owns PostgreSQL is pgs.
The PostgreSQL software is installed in /postgres/postgresql-8.3.1.
The PostgreSQL data and configuration files are installed under /postgres/data.
The PostgreSQL database server listens on port 5432.
The log file for the database server is /postgres/logs/scinstance1.
The LD_LIBRARY_PATH for the PostgreSQL server is /usr/sfw/lib:/usr/local/lib:/usr/lib.
Additional PostgreSQL variables are set in /global/postgres/variables.ksh.
The monitored database is testdb.
The user for the database monitoring is testusr.
The table testtbl is modified to probe the condition of the database.
The password for the user testusr is testpwd.
If a connection to the database testdb fails, the probe returns with return code 10.
The resource name of the PostgreSQL standby resource is postgres-sta-rs.
The resource group name of the PostgreSQL standby resource group is postgres-sta-rg.
The user who owns the PostgreSQL standby database is pgs.
The name of the standby host is phys-node2.
The parameter file of the PostgreSQL standby resource is /postgres/pfile.
The Rolechanger resource name has a null value because it is not needed on the primary.
The SSH_PASSDIR has a null value to indicate that the ssh keys are not protected by a passphrase.
The key-value pairs and explanations for a pgs_standby_config file are as follows:
The name of the PostgreSQL resource is postgres-sta-rs.
The name of the resource group for the PostgreSQL resource is postgres-sta-rg.
The value for the dummy port for the PostgreSQL resource is 80.
A SUNW.LogicalHostname resource is not present in postgres-sta-rg.
A SUNW.HAStoragePlus resource is not present in postgres-sta-rg.
The parameter file is generated in /postgres/pfile.
The null value indicates that it is a global zone configuration.
The null value indicates that it is a global zone configuration.
The null value indicates that it is a global zone configuration.
The name of the Solaris user who owns PostgreSQL is pgs.
The PostgreSQL software is installed in /postgres/postgresql-8.3.1.
The PostgreSQL data and configuration files are installed under /postgres/data.
The PostgreSQL database server listens on port 5432.
The log file for the database server is /postgres/logs/scinstance1.
The LD_LIBRARY_PATH for the PostgreSQL server is /usr/sfw/lib:/usr/local/lib:/usr/lib:.
Additional PostgreSQL variables are set in /global/postgres/variables.ksh.
The monitored database is testdb.
The user for the database monitoring is testusr.
The table testtbl is modified to probe the condition of the database.
The password for the user testusr is testpwd.
If a connection to the database testdb fails, the probe returns with return code 10.
The value for the STDBY_RS is not required in a standby configuration.
The value for STDBY_RG is not required in a standby configuration.
The value for STDBY_USER is not required in a standby configuration.
The value for STDBY_HOST is not required in a standby configuration.
The value for STDBY_PARFILE is not required in a standby configuration.
The Rolechanger resource is rolechg-rs.
The SSH_PASSDIR has a null value, which means that the ssh keys are not protected by a passphrase.
The key-value pairs and explanations for the configuration file rolechg_config are as follows:
The name of the Rolechanger resource is rolechg-rs.
The name of the resource group for the Rolechanger resource is rolechg-rg.
The value of the dummy port for the Rolechanger resource is 5432.
The resource name for the SUNW.LogicalHostname resource is pgs-lh-1.
Neither a SUNW.HAStoragePlus resource nor other dependencies are present.
The name of the PostgreSQL standby resource is postgres-sta-rs.
The name of the PostgreSQL primary resource is postgres-prim-rs.
The physical node name of the standby is phys-node2.
The parameter file on the standby is /postgres/pfile.
The trigger file to which the pg_standby utility reacts is phys-node2.
After the resource is started, the Rolechanger waits 30 seconds before it touches the trigger file.
Modifications in a copy of resilver-step1
PGDATA of the standby is in /postgres/data.
PGDATA of the primary is in /postgres/data.
Specifies the name of the target node. The usual name is the name of the designated primary node.
Specifies the PostgreSQL base directory, where the PostgreSQL binaries are located.
Specifies the resource group that contains the cluster resource of the designated primary.
Specifies the resource group that contains the cluster resource of the designated standby.
Specifies the database port.
Specifies the resource group that contains the Rolechanger resource.
Specifies the absolute path to the RSYNC command, including the necessary options.
Specifies whether your ssh key is secured by a passphrase.
Modifications in a copy of resilver-step2
The source node is phys-node2.
PGDATA of the standby is in /postgres/data.
PGDATA of the primary is in /postgres/data.
The target node is phys-node1.
Specifies the PostgreSQL base directory, where the PostgreSQL binaries are located.
Specifies the resource group that contains the cluster resource of the designated primary.
The resource group for the standby resource is postgres-sta-rg.
Specifies the resource group that contains the cluster resource of the designated standby.
Specifies the database port.
The resource group for the Rolechanger resource is rolechg-rg.
The primary node is the global zone of phys-node1.
Specifies the absolute path to the rsync command, including the necessary options.
Specifies whether your ssh key is secured by a passphrase.
To prepare your PostgreSQL installation for cluster control, you create a database, a user, and a table to be monitored by the PostgreSQL resource. Because you need to differentiate between a global zone and an HA container, two procedures are provided.
Ensure that you have edited the pgs_config file to specify configuration parameters for the Sun Cluster HA for PostgreSQL data service. For more information, see Specifying Configuration Parameters for the PostgreSQL Resource.
As superuser, change the permissions of the configuration file so that it is accessible to your PostgreSQL user.
# chmod 755 /myplace/pgs_config
Switch to your PostgreSQL user.
# su - postgres
If the login shell is not the Korn shell, switch to ksh.
% ksh
Set the necessary variables.
$ . /myplace/pgs_config
$ export PGDATA PGPORT LD_LIBRARY_PATH
If your PostgreSQL is not already running, start the PostgreSQL server.
$ $PGROOT/bin/pg_ctl -l $PGLOGFILE start
Prepare the database.
$ /opt/SUNWscPostgreSQL/util/pgs_db_prep -f /myplace/pgs_config
(Optional) Configure your PostgreSQL instance to listen on the logical host's TCP/IP name.
If you want your PostgreSQL databases to listen on more than localhost, configure the listen_addresses parameter in the file postgresql.conf. Use a plain text editor such as vi, and set the value of listen_addresses to an appropriate value.
The PostgreSQL instance must listen on localhost. For additional information, see http://www.postgresql.org.
listen_addresses = 'localhost,myhost'
Set the security policy for the test database.
Use a plain text editor such as vi to add the following line to the file pg_hba.conf.
local testdb all password
For additional information about the pg_hba.conf file, see http://www.postgresql.org.
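The edit above can also be made non-interactively, as in this sketch; a temporary directory stands in for the real PGDATA so the sketch is self-contained:

```shell
# Sketch: append the access rule for the monitored test database
# to pg_hba.conf. PGDATA is a stand-in; use your real data directory.
PGDATA=$(mktemp -d)
: > "$PGDATA/pg_hba.conf"
echo "local testdb all password" >> "$PGDATA/pg_hba.conf"
```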
Stop the PostgreSQL database server.
$ $PGROOT/bin/pg_ctl stop
Ensure that you have edited the pgs_config file to specify configuration parameters for the Sun Cluster HA for PostgreSQL data service. For more information, see Specifying Configuration Parameters for the PostgreSQL Resource. Also make sure that the package directory of Sun Cluster HA for PostgreSQL, /opt/SUNWscPostgreSQL, is available in the target zone.
As superuser, change the permissions of the configuration file so that it is accessible to your PostgreSQL user.
Ensure that your pgs_config file is accessible from your zone. Otherwise, transfer the file to your zone by an appropriate method.
# chmod 755 /myplace/pgs_config
Switch to the target zone.
# zlogin pgsql-zone
Switch to the PostgreSQL user.
# su - zpostgr
If the login shell is not the Korn shell, switch to ksh.
% ksh
Set the necessary variables.
$ . /myplace/pgs_config
$ export PGDATA PGPORT LD_LIBRARY_PATH
If your PostgreSQL is not already running, start the PostgreSQL server.
$ $PGROOT/bin/pg_ctl -l $PGLOGFILE start
Prepare the database.
$ /opt/SUNWscPostgreSQL/util/pgs_db_prep -f /myplace/pgs_config
(Optional) Configure your PostgreSQL instance to listen on the logical host's TCP/IP name.
If you want your PostgreSQL databases to listen on more than localhost, configure the listen_addresses parameter in the file postgresql.conf. Use a plain text editor such as vi, and set the value of listen_addresses to an appropriate value.
The PostgreSQL instance must listen on localhost. For additional information, see http://www.postgresql.org.
listen_addresses = 'localhost,myhost'
Set the security policy for the test database.
Use a plain text editor such as vi to add the following line to the pg_hba.conf file.
local testdb all password
For additional information, see http://www.postgresql.org.
Stop the PostgreSQL database server.
$ $PGROOT/bin/pg_ctl stop
Leave the target zone and return to the global zone.
Ensure that you have edited the pgs_config file to specify configuration parameters for the Sun Cluster HA for PostgreSQL data service. For more information, see Specifying Configuration Parameters for the PostgreSQL Resource.
Become superuser on one of the nodes in the cluster that will host PostgreSQL.
Go to the directory that contains the script for creating the Sun Cluster HA for PostgreSQL resource.
# cd /opt/SUNWscPostgreSQL/util
Run the script that creates the PostgreSQL resource.
# ksh ./pgs_register -f /myplace/pgs_config
If you omit the -f option, the file /opt/SUNWscPostgreSQL/util/pgs_config is used.
Bring the PostgreSQL resource online.
# clresource enable postgres-rs
Perform this task to change parameters in the Sun Cluster HA for PostgreSQL manifest and to validate the parameters in the HA container. Parameters for the Sun Cluster HA for PostgreSQL manifest are stored as properties of the SMF service. To modify parameters in the manifest, change the related properties in the SMF service, and then validate the parameter changes.
Become superuser or assume a role that provides solaris.cluster.modify and solaris.cluster.admin RBAC authorizations on the zone's console.
Change the Solaris Service Management Facility (SMF) properties for the Sun Cluster HA for PostgreSQL manifest.
# svccfg -s svc:/application/sczone-agents:resource
For more information, see the svccfg(1M) man page.
Validate the parameter changes.
# /opt/SUNWscPostgreSQL/bin/control_pgs validate resource
Messages for this command are stored in the /var/adm/messages file of the HA container.
Disconnect from the HA container's console.
Become superuser or assume a role that provides solaris.cluster.modify and solaris.cluster.admin RBAC authorizations.
Disable and remove the resource that is used by the Sun Cluster HA for PostgreSQL data service.
# clresource disable resource
# clresource delete resource
Log in as superuser to the HA container's console.
Unregister Sun Cluster HA for PostgreSQL from the Solaris Service Management Facility (SMF) service.
# /opt/SUNWscPostgreSQL/util/pgs_smf_remove -f filename
Specifies the configuration file name: the name of the configuration file that you used to register Sun Cluster HA for PostgreSQL with the SMF service.
If you no longer have the configuration file that you used to register Sun Cluster HA for PostgreSQL with the SMF service, create a replacement configuration file:
Make a copy of the default file, /opt/SUNWscPostgreSQL/util/pgs_config.
Set the ZONE and RS parameters with the values that are used by the data service.
Run the pgs_smf_remove command and use the -f option to specify this configuration file.
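These steps can be sketched as follows; the sed expressions and the ZONE and RS values are assumptions taken from this document's examples, and a stand-in template is created so the sketch is self-contained:

```shell
# Sketch: build a replacement configuration file for pgs_smf_remove.
# In practice, copy /opt/SUNWscPostgreSQL/util/pgs_config instead of
# creating the stand-in template used here.
cfg=$(mktemp)
printf 'RS=\nRG=\nZONE=\n' > "$cfg"
sed -e 's/^ZONE=.*/ZONE=pgsql-zone/' \
    -e 's/^RS=.*/RS=postgres-zrs/' "$cfg" > "$cfg.new"
mv "$cfg.new" "$cfg"
# Then: /opt/SUNWscPostgreSQL/util/pgs_smf_remove -f "$cfg"
```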
Disconnect from the HA container's console.
Ensure that you have edited the rolechg_config file to specify configuration parameters for the Sun Cluster HA for PostgreSQL Rolechanger data service. For more information, see Specifying Configuration Parameters for the PostgreSQL Rolechanger Resource.
Become superuser on one of the nodes in the cluster that hosts PostgreSQL.
Go to the directory that contains the script for creating the Sun Cluster HA for PostgreSQL Rolechanger resource.
# cd /opt/SUNWscPostgreSQL/util
Run the script that creates the PostgreSQL resource.
# ksh ./rolechg_register -f /myplace/rolechg_config
If you omit the -f option, the file /opt/SUNWscPostgreSQL/rolechg/util/rolechg_config is used.
Bring the PostgreSQL Rolechanger resource online.
# clresource enable rolechg-rs