Administering an Oracle® Solaris Cluster 4.4 Configuration

Updated: November 2019

List of Cluster Puppet Modules and Description

# puppet describe ha_cluster_resourcetype

ha_cluster_resourcetype
=======================
Oracle Solaris Cluster Resource Type Management


Parameters
----------

- **ensure**
    The basic property that the resource should be in.
    Valid values are `present`, `absent`.

- **name**
    Resource Type name. When using zone cluster, specify as <zc>:<rtname>.

- **rtrfilepath**
    The full path to an RTR file or a directory that contains RTR files for
    the Resource Type.

Providers
---------
    ha_cluster_resourcetype

Below is a sample manifest for ha_cluster_resourcetype. In this example we register the SUNW.ScalMountPoint resource type, whose RTR file is located at /usr/cluster/lib/rgm/rtreg/SUNW.ScalMountPoint.

ha_cluster_resourcetype { "SUNW.ScalMountPoint":
        ensure => 'present',
        rtrfilepath => "/usr/cluster/lib/rgm/rtreg/SUNW.ScalMountPoint",
}
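
To register a resource type in a zone cluster, specify the name in the <zc>:<rtname> form. The following is a sketch only; the zone cluster name zc1 is a placeholder used for illustration.

ha_cluster_resourcetype { "zc1:SUNW.ScalMountPoint":
        ensure => 'present',
}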

# puppet describe ha_cluster_resourcegroup

ha_cluster_resourcegroup
========================
Oracle Solaris Cluster Resource Group Management

Parameters
----------

- **ensure**
    The basic property that the resource should be in.
    Valid values are `present`, `absent`.

- **manage**
    Change the Resource Group managed state.
    Valid values are `true`, `false`.

- **name**
    Resource Group name. When using zone cluster, specify as <zc>:<rgname>.

- **nodes**
    Specify the cluster nodes that host the Resource Group.

- **online**
    Change the Resource Group online state.
    Valid values are `true`, `false`.

- **rgproperty**
    Resource Group properties to set at create time. Specify as Hashes.

- **scalable**
    Specify if the Resource Group is scalable or not.
    Valid values are `true`, `false`.

Providers
---------
    ha_cluster_resourcegroup
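
In addition to the scalable example shown below, a failover resource group can be created by listing the nodes that can host it. The following is a sketch only; the group name rg1 matches the group used by the ha_cluster_resource example later in this section, and the node names and RG_description value are assumptions.

ha_cluster_resourcegroup { "rg1":
        ensure => 'present',
        nodes => ['schost1.example.com', 'schost2.example.com'],
        rgproperty => {"RG_description" => "Failover resource group"},
}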

Below is a sample manifest for ha_cluster_resourcegroup that creates a scalable resource group named RGScal.

ha_cluster_resourcegroup { "RGScal":
        scalable => true,
        ensure => 'present',
}

# puppet describe ha_cluster_resource


ha_cluster_resource
===================
Oracle Solaris Cluster Resource Management


Parameters
----------

- **enable**
    Change the Oracle Solaris resource enabled state.
    Valid values are `true`, `false`.

- **ensure**
    The basic property that the resource should be in.
    Valid values are `present`, `absent`.

- **monitor**
    Change the Oracle Solaris resource monitored state.
    Valid values are `true`, `false`.

- **name**
    Oracle Solaris Cluster resource name

- **rgname**
    Oracle Solaris Cluster resource group to contain this resource.

- **rsproperty**
    Oracle Solaris Cluster resource props to set at create time.

- **rstype**
    Oracle Solaris Cluster resource type to instantiate.

- **zonecluster**
    Oracle Solaris Zone Cluster to manage the resource.

Providers
---------
    ha_cluster_resource

Below is a sample manifest for ha_cluster_resource that creates a resource hasp_res of resource type SUNW.HAStoragePlus in the resource group rg1. The resource is created with the zpool zfs1.

ha_cluster_resource { "hasp_res":
        ensure => 'present',
        rstype => 'SUNW.HAStoragePlus',
        rgname => 'rg1',
        name => 'hasp_res',
        rsproperty => {"zpools" => "zfs1",
                       "failover_mode" => "hard"},
        enable => 'true',
        require => [Ha_cluster_resourcegroup['rg1'],
                    Ha_cluster_resourcetype['SUNW.HAStoragePlus']],
}


# puppet describe ha_cluster_quorum

ha_cluster_quorum
=================
Oracle Solaris Cluster Quorum Devices Management


Parameters
----------

- **ensure**
    The basic property that the resource should be in.
    Valid values are `present`, `absent`.

- **lu_suri**
    Logical Unit storage URL to be used as quorum device.

- **name**
    Oracle Solaris Cluster quorum device name

- **qproperty**
    Oracle Solaris Cluster quorum devices properties.

- **qtype**
    Oracle Solaris Cluster quorum device type.
    Valid values are `shared_disk`, `quorum_server`.

Providers
---------
    ha_cluster_quorum

Sample manifest for ha_cluster_quorum. This snippet depicts the addition of a quorum disk d1.

ha_cluster_quorum { "d1":
        ensure => 'present',
}

This snippet depicts the addition of a quorum server qshost.example.com with IP address 10.12.13.264, running on port 9000.

ha_cluster_quorum { "qshost.example.com":
        ensure => 'present',
        qproperty => {qshost => "10.12.13.264",
                port => '9000'
        },
        qtype => 'quorum_server',
}


# puppet describe ha_cluster_nas

ha_cluster_nas
==============
Oracle Solaris Cluster NAS Devices Management


Parameters
----------

- **ensure**
    The basic property that the resource should be in.
    Valid values are `present`, `absent`.

- **name**
    NAS device name. When using zone cluster, specify as <zc>:<nasname>.

- **nasdirectories**
    NAS device directories to add. Specify multiple directories as an array.

- **nasproperty**
    Specify the nodeIPs property for the nodes that use an IP other than the
    node IP. Specify as Hashes.

- **nastype**
    NAS device type.

- **passwd**
    The password for the userid to access the NAS device.

- **userid**
    The userid to access the NAS device.

Providers
---------
    ha_cluster_nas

This sample manifest depicts adding the directories pool-0/test/test20, pool-0/test/test21, and pool-0/test/test22 from the ZFS SA device nas-stor to the cluster configuration.

ha_cluster_nas { "nas-stor":
        ensure => 'present',
        nastype => 'sun_uss',
        userid => 'osc_agent',
        passwd => 'abc123',
        nasdirectories => ["pool-0/test/test20", "pool-0/test/test21", "pool-0/test/test22"],
}

# puppet describe ha_cluster_devicegroup

ha_cluster_devicegroup
======================
Oracle Solaris Cluster Zpool Device Group Management


Parameters
----------

- **dgproperty**
    Specify the properties for the zpool Device Group.

- **ensure**
    The basic property that the resource should be in.
    Valid values are `present`, `absent`.

- **name**
    Oracle Solaris Cluster zpool Device Group name

- **nodes**
    Specify the nodes that host the Device Group.

- **online**
    Change the Oracle Solaris zpool Device Group online state.
    Valid values are `true`, `false`.

Providers
---------
    ha_cluster_devicegroup

This is a sample manifest for creating and bringing online a zpool device group gpool with the poolaccess property set to global. schost1.example.com and schost2.example.com are cluster nodes.

ha_cluster_devicegroup { 'gpool':
  ensure => 'present',
  nodes => ['schost1.example.com', 'schost2.example.com'],
  dgproperty => {"poolaccess" => "global"},
  online => 'true',
}

# puppet describe ha_cluster_logicalhost

ha_cluster_logicalhost
======================
Oracle Solaris Cluster Logicalhostname Resource Management


Parameters
----------

- **enable**
    Change the Logicalhostname resource enabled state.
    Valid values are `true`, `false`.

- **ensure**
    The basic property that the resource should be in.
    Valid values are `present`, `absent`.

- **logicalhosts**
    A list of logical hostnames for this resource.

- **monitor**
    Change the Logicalhostname resource monitored state.
    Valid values are `true`, `false`.

- **name**
    Logicalhostname resource name. When using zone cluster, specify as
    <zc>:<rsname>.

- **netiflist**
    The network interfaces to host the logical hostnames for this resource.

- **rgname**
    Resource Group to contain this LogicalHostname resource.

- **rsproperty**
    Logicalhostname resource properties to set at create time.

Providers
---------
    ha_cluster_logicalhost

This sample manifest depicts creating a logical hostname resource lhtest in the rg1 resource group. addr-1 and addr-2 are the logical hostnames.

ha_cluster_resourcetype { "SUNW.LogicalHostname":
        ensure => 'present',
}
ha_cluster_logicalhost { "lhtest":
        ensure => 'present',
        rgname => 'rg1',
        logicalhosts => ['addr-1', 'addr-2',],
        enable => 'true',
}

# puppet describe ha_cluster_sharedaddress

ha_cluster_sharedaddress
========================
Oracle Solaris Cluster SharedAddress Resource Management


Parameters
----------

- **auxnodelist**
    The nodes that can host the logical hostnames but cannot serve as the
    primary node during failover.

- **enable**
    Change the SharedAddress resource enabled state.
    Valid values are `true`, `false`.

- **ensure**
    The basic property that the resource should be in.
    Valid values are `present`, `absent`.

- **logicalhosts**
    A list of logical hostnames for this resource.

- **monitor**
    Change the SharedAddress resource monitored state.
    Valid values are `true`, `false`.

- **name**
    SharedAddress resource name. When using zone cluster, specify as
    <zc>:<rsname>.

- **netiflist**
    The network interfaces to host the logical hostnames for this resource.

- **rgname**
    Resource Group to contain this resource.

- **rsproperty**
    SharedAddress resource properties to set at create time.

Providers
---------
    ha_cluster_sharedaddress

This sample depicts a manifest for creating a shared address resource satest that is part of the scal-rg resource group. sa-host1 and sa-host2 are the hostnames managed by this shared address resource.

ha_cluster_sharedaddress { "satest":
        ensure => 'present',
        rgname => 'scal-rg',
        logicalhosts => ['sa-host1', 'sa-host2'],
        auxnodelist => ['ptria2', 'ptria1'],
        enable => 'true',
}

# puppet describe ha_cluster_mysql

ha_cluster_mysql
================
Oracle Solaris Cluster HA-MySQL Resource Management


Parameters
----------

- **admin_passwd**
    The administrator user password.

- **admin_user**
    MySQL admin user for localhost.

- **disable_mysql_fmri**
    SMF service to disable MySQL.

- **enable**
    Change the Oracle.mysql Resource enabled state.
    Valid values are `true`, `false`.

- **ensure**
    The basic property that the resource should be in.
    Valid values are `present`, `absent`.

- **fmpass**
    The password for the MySQL fault monitor user.

- **fmuser**
    User name for the MySQL fault monitor user.

- **monitor**
    Change the Oracle.mysql Resource monitored state.
    Valid values are `true`, `false`.

- **mysql_basedir**
    MySQL base directory.

- **mysql_datadir**
    MySQL Database directory.

- **mysql_host**
    MySQL logical hostname.

- **mysql_nic_hostname**
    The physical hostnames that the logical hostname belongs to, one for
    every cluster node that hosts the MySQL resource group.

- **mysql_sock**
    Socket name for MySQL daemon. If not specified, use
    /tmp/<mysql_host>.sock.

- **name**
    Oracle.mysql resource name. When using zone cluster, specify as
    <zc>:<rsname>.

- **rgname**
    Resource Group to contain the HA MySQL resource.

- **rsproperty**
    Oracle.mysql Resource Properties to set at create time.

Providers
---------
    ha_cluster_mysql
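
Below is a minimal sketch of an ha_cluster_mysql manifest using the parameters listed above. All values (the resource, group, and host names, the MySQL directories, and the user names and passwords) are placeholders, not defaults, and the mysql-rg resource group and mysql-lh logical hostname are assumed to exist already.

ha_cluster_mysql { "mysql-rs":
        ensure => 'present',
        rgname => 'mysql-rg',
        mysql_host => 'mysql-lh',
        mysql_basedir => '/usr/mysql',
        mysql_datadir => '/global/mysql/data',
        admin_user => 'root',
        admin_passwd => 'admin123',
        fmuser => 'fmuser',
        fmpass => 'fmuser123',
        enable => 'true',
}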


# puppet describe ha_cluster_zonecluster

ha_cluster_zonecluster
======================
Oracle Solaris Cluster Zone Cluster Management


Parameters
----------

- **cmd_file**
    Oracle Solaris Cluster Zone Cluster configuration command file.

- **config_profile**
    Oracle Solaris Cluster Zone Cluster sysconfig profile for installation.

- **ensure**
    The basic property that the resource should be in.
    Valid values are `present`, `absent`.

- **manifest_file**
    Oracle Solaris Cluster Zone Cluster manifest file for installation.

- **name**
    Oracle Solaris Cluster Zone Cluster name

- **zc_nodes**
    Oracle Solaris Cluster Zone Cluster nodes.

- **zc_status**
    Boot or reboot the zone cluster to be in cluster mode or non-cluster
    mode.
    Valid values are `offline`, `online`.

Providers
---------
    ha_cluster_zonecluster

This sample depicts a manifest for creating a zone cluster zc1. cmdfile is the zone cluster configuration command file, and zc_config.xml and zc_manifest.xml are the sysconfig profile and manifest file used to install the zone cluster. For more information about installing zone clusters, refer to Chapter 6, Creating Zone Clusters, in Installing and Configuring an Oracle Solaris Cluster 4.4 Environment.

ha_cluster_zonecluster { "zc1":
        ensure => 'present',
        cmd_file => "/net/sharehost/cmdfile",
        config_profile => "/net/sharehost/zc_config.xml",
        manifest_file => "/net/sharehost/zc_manifest.xml",
        zc_status => "online"
}