The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
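For example, the clzonecluster command used later in this procedure has the short form clzc, so the following two commands are equivalent:

phys-schost# clzonecluster show
phys-schost# clzc show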
You can also view the cluster configuration through the Oracle Solaris Cluster Manager GUI. For GUI log-in instructions, see How to Access Oracle Solaris Cluster Manager.
Before You Begin
Users other than the root role must have solaris.cluster.read RBAC authorization to use the show subcommand.
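As a minimal sketch, an administrator could grant this authorization with the Oracle Solaris usermod command; this assumes the Oracle Solaris 11 syntax for adding an authorization, and jsmith is a placeholder user name:

phys-schost# usermod -A +solaris.cluster.read jsmith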
Perform all steps of this procedure from a node of the global cluster.

% cluster show
Running the cluster show command from a global-cluster node displays detailed configuration information about the cluster, as well as information about any zone clusters that are configured.
You can also use the clzonecluster show command to view the configuration information for only the zone cluster. The properties of a zone cluster include its name, IP type, autoboot setting, and zone path. The show subcommand runs inside a zone cluster and applies only to that particular zone cluster. Running clzonecluster show from a zone-cluster node retrieves the status only of the objects that are visible to that zone cluster.
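For example, if you run the command from a global-cluster node instead, you could pass a zone-cluster name to limit the output to that cluster; here sczone is the zone cluster shown in the example later in this section, and -v requests verbose detail:

phys-schost# clzonecluster show -v sczone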
To display more information about the cluster command, use its verbose options. See the cluster(1CL) man page for details. For more information about the clzonecluster command, see the clzonecluster(1CL) man page.
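For example, the following invocations request verbose output and output restricted to a single object type; the -v and -t options are described in the cluster(1CL) man page:

% cluster show -v
% cluster show -t global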
The following example lists configuration information for the global cluster. If a zone cluster is configured, its information is also listed.
phys-schost# cluster show
=== Cluster ===

Cluster Name: cluster-1
  clusterid: 0x4DA2C888
  installmode: disabled
  heartbeat_timeout: 10000
  heartbeat_quantum: 1000
  private_netaddr: 172.16.0.0
  private_netmask: 255.255.248.0
  max_nodes: 64
  max_privatenets: 10
  num_zoneclusters: 12
  udp_session_timeout: 480
  concentrate_load: False
  global_fencing: prefer3
  Node List: phys-schost-1
  Node Zones: phys-schost-2:za

=== Host Access Control ===

Cluster name: cluster-1
  Allowed hosts: phys-schost-1, phys-schost-2:za
  Authentication Protocol: sys

=== Cluster Nodes ===

Node Name: phys-schost-1
  Node ID: 1
  Enabled: yes
  privatehostname: clusternode1-priv
  reboot_on_path_failure: disabled
  globalzoneshares: 3
  defaultpsetmin: 1
  quorum_vote: 1
  quorum_defaultvote: 1
  quorum_resv_key: 0x43CB1E1800000001
  Transport Adapter List: net1, net3

  --- Transport Adapters for phys-schost-1 ---

  Transport Adapter: net1
    Adapter State: Enabled
    Adapter Transport Type: dlpi
    Adapter Property(device_name): net
    Adapter Property(device_instance): 1
    Adapter Property(lazy_free): 1
    Adapter Property(dlpi_heartbeat_timeout): 10000
    Adapter Property(dlpi_heartbeat_quantum): 1000
    Adapter Property(nw_bandwidth): 80
    Adapter Property(bandwidth): 10
    Adapter Property(ip_address): 172.16.1.1
    Adapter Property(netmask): 255.255.255.128
    Adapter Port Names: 0
    Adapter Port State(0): Enabled

  Transport Adapter: net3
    Adapter State: Enabled
    Adapter Transport Type: dlpi
    Adapter Property(device_name): net
    Adapter Property(device_instance): 3
    Adapter Property(lazy_free): 0
    Adapter Property(dlpi_heartbeat_timeout): 10000
    Adapter Property(dlpi_heartbeat_quantum): 1000
    Adapter Property(nw_bandwidth): 80
    Adapter Property(bandwidth): 10
    Adapter Property(ip_address): 172.16.0.129
    Adapter Property(netmask): 255.255.255.128
    Adapter Port Names: 0
    Adapter Port State(0): Enabled

  --- SNMP MIB Configuration on phys-schost-1 ---

  SNMP MIB Name: Event
    State: Disabled
    Protocol: SNMPv2

  --- SNMP Host Configuration on phys-schost-1 ---

  --- SNMP User Configuration on phys-schost-1 ---

  SNMP User Name: foo
    Authentication Protocol: MD5
    Default User: No

Node Name: phys-schost-2:za
  Node ID: 2
  Type: cluster
  Enabled: yes
  privatehostname: clusternode2-priv
  reboot_on_path_failure: disabled
  globalzoneshares: 1
  defaultpsetmin: 2
  quorum_vote: 1
  quorum_defaultvote: 1
  quorum_resv_key: 0x43CB1E1800000002
  Transport Adapter List: e1000g1, nge1

  --- Transport Adapters for phys-schost-2 ---

  Transport Adapter: e1000g1
    Adapter State: Enabled
    Adapter Transport Type: dlpi
    Adapter Property(device_name): e1000g
    Adapter Property(device_instance): 2
    Adapter Property(lazy_free): 0
    Adapter Property(dlpi_heartbeat_timeout): 10000
    Adapter Property(dlpi_heartbeat_quantum): 1000
    Adapter Property(nw_bandwidth): 80
    Adapter Property(bandwidth): 10
    Adapter Property(ip_address): 172.16.0.130
    Adapter Property(netmask): 255.255.255.128
    Adapter Port Names: 0
    Adapter Port State(0): Enabled

  Transport Adapter: nge1
    Adapter State: Enabled
    Adapter Transport Type: dlpi
    Adapter Property(device_name): nge
    Adapter Property(device_instance): 3
    Adapter Property(lazy_free): 1
    Adapter Property(dlpi_heartbeat_timeout): 10000
    Adapter Property(dlpi_heartbeat_quantum): 1000
    Adapter Property(nw_bandwidth): 80
    Adapter Property(bandwidth): 10
    Adapter Property(ip_address): 172.16.1.2
    Adapter Property(netmask): 255.255.255.128
    Adapter Port Names: 0
    Adapter Port State(0): Enabled

  --- SNMP MIB Configuration on phys-schost-2 ---

  SNMP MIB Name: Event
    State: Disabled
    Protocol: SNMPv2

  --- SNMP Host Configuration on phys-schost-2 ---

  --- SNMP User Configuration on phys-schost-2 ---

=== Transport Cables ===

Transport Cable: phys-schost-1:e1000g1,switch2@1
  Cable Endpoint1: phys-schost-1:e1000g1
  Cable Endpoint2: switch2@1
  Cable State: Enabled

Transport Cable: phys-schost-1:nge1,switch1@1
  Cable Endpoint1: phys-schost-1:nge1
  Cable Endpoint2: switch1@1
  Cable State: Enabled

Transport Cable: phys-schost-2:nge1,switch1@2
  Cable Endpoint1: phys-schost-2:nge1
  Cable Endpoint2: switch1@2
  Cable State: Enabled

Transport Cable: phys-schost-2:e1000g1,switch2@2
  Cable Endpoint1: phys-schost-2:e1000g1
  Cable Endpoint2: switch2@2
  Cable State: Enabled

=== Transport Switches ===

Transport Switch: switch2
  Switch State: Enabled
  Switch Type: switch
  Switch Port Names: 1 2
  Switch Port State(1): Enabled
  Switch Port State(2): Enabled

Transport Switch: switch1
  Switch State: Enabled
  Switch Type: switch
  Switch Port Names: 1 2
  Switch Port State(1): Enabled
  Switch Port State(2): Enabled

=== Quorum Devices ===

Quorum Device Name: d3
  Enabled: yes
  Votes: 1
  Global Name: /dev/did/rdsk/d3s2
  Type: shared_disk
  Access Mode: scsi3
  Hosts (enabled): phys-schost-1, phys-schost-2

Quorum Device Name: qs1
  Enabled: yes
  Votes: 1
  Global Name: qs1
  Type: quorum_server
  Hosts (enabled): phys-schost-1, phys-schost-2
  Quorum Server Host: 10.11.114.83
  Port: 9000

=== Device Groups ===

Device Group Name: testdg3
  Type: SVM
  failback: no
  Node List: phys-schost-1, phys-schost-2
  preferenced: yes
  numsecondaries: 1
  diskset name: testdg3

=== Registered Resource Types ===

Resource Type: SUNW.LogicalHostname:2
  RT_description: Logical Hostname Resource Type
  RT_version: 4
  API_version: 2
  RT_basedir: /usr/cluster/lib/rgm/rt/hafoip
  Single_instance: False
  Proxy: False
  Init_nodes: All potential masters
  Installed_nodes: <All>
  Failover: True
  Pkglist: <NULL>
  RT_system: True
  Global_zone: True

Resource Type: SUNW.SharedAddress:2
  RT_description: HA Shared Address Resource Type
  RT_version: 2
  API_version: 2
  RT_basedir: /usr/cluster/lib/rgm/rt/hascip
  Single_instance: False
  Proxy: False
  Init_nodes: <Unknown>
  Installed_nodes: <All>
  Failover: True
  Pkglist: <NULL>
  RT_system: True
  Global_zone: True

Resource Type: SUNW.HAStoragePlus:4
  RT_description: HA Storage Plus
  RT_version: 4
  API_version: 2
  RT_basedir: /usr/cluster/lib/rgm/rt/hastorageplus
  Single_instance: False
  Proxy: False
  Init_nodes: All potential masters
  Installed_nodes: <All>
  Failover: False
  Pkglist: <NULL>
  RT_system: True
  Global_zone: True

Resource Type: SUNW.haderby
  RT_description: haderby server for Oracle Solaris Cluster
  RT_version: 1
  API_version: 7
  RT_basedir: /usr/cluster/lib/rgm/rt/haderby
  Single_instance: False
  Proxy: False
  Init_nodes: All potential masters
  Installed_nodes: <All>
  Failover: False
  Pkglist: <NULL>
  RT_system: True
  Global_zone: True

Resource Type: SUNW.sctelemetry
  RT_description: sctelemetry service for Oracle Solaris Cluster
  RT_version: 1
  API_version: 7
  RT_basedir: /usr/cluster/lib/rgm/rt/sctelemetry
  Single_instance: True
  Proxy: False
  Init_nodes: All potential masters
  Installed_nodes: <All>
  Failover: False
  Pkglist: <NULL>
  RT_system: True
  Global_zone: True

=== Resource Groups and Resources ===

Resource Group: HA_RG
  RG_description: <Null>
  RG_mode: Failover
  RG_state: Managed
  Failback: False
  Nodelist: phys-schost-1 phys-schost-2

  --- Resources for Group HA_RG ---

  Resource: HA_R
    Type: SUNW.HAStoragePlus:4
    Type_version: 4
    Group: HA_RG
    R_description:
    Resource_project_name: SCSLM_HA_RG
    Enabled{phys-schost-1}: True
    Enabled{phys-schost-2}: True
    Monitored{phys-schost-1}: True
    Monitored{phys-schost-2}: True

Resource Group: cl-db-rg
  RG_description: <Null>
  RG_mode: Failover
  RG_state: Managed
  Failback: False
  Nodelist: phys-schost-1 phys-schost-2

  --- Resources for Group cl-db-rg ---

  Resource: cl-db-rs
    Type: SUNW.haderby
    Type_version: 1
    Group: cl-db-rg
    R_description:
    Resource_project_name: default
    Enabled{phys-schost-1}: True
    Enabled{phys-schost-2}: True
    Monitored{phys-schost-1}: True
    Monitored{phys-schost-2}: True

Resource Group: cl-tlmtry-rg
  RG_description: <Null>
  RG_mode: Scalable
  RG_state: Managed
  Failback: False
  Nodelist: phys-schost-1 phys-schost-2

  --- Resources for Group cl-tlmtry-rg ---

  Resource: cl-tlmtry-rs
    Type: SUNW.sctelemetry
    Type_version: 1
    Group: cl-tlmtry-rg
    R_description:
    Resource_project_name: default
    Enabled{phys-schost-1}: True
    Enabled{phys-schost-2}: True
    Monitored{phys-schost-1}: True
    Monitored{phys-schost-2}: True

=== DID Device Instances ===

DID Device Name: /dev/did/rdsk/d1
  Full Device Path: phys-schost-1:/dev/rdsk/c0t2d0
  Replication: none
  default_fencing: global

DID Device Name: /dev/did/rdsk/d2
  Full Device Path: phys-schost-1:/dev/rdsk/c1t0d0
  Replication: none
  default_fencing: global

DID Device Name: /dev/did/rdsk/d3
  Full Device Path: phys-schost-2:/dev/rdsk/c2t1d0
  Full Device Path: phys-schost-1:/dev/rdsk/c2t1d0
  Replication: none
  default_fencing: global

DID Device Name: /dev/did/rdsk/d4
  Full Device Path: phys-schost-2:/dev/rdsk/c2t2d0
  Full Device Path: phys-schost-1:/dev/rdsk/c2t2d0
  Replication: none
  default_fencing: global

DID Device Name: /dev/did/rdsk/d5
  Full Device Path: phys-schost-2:/dev/rdsk/c0t2d0
  Replication: none
  default_fencing: global

DID Device Name: /dev/did/rdsk/d6
  Full Device Path: phys-schost-2:/dev/rdsk/c1t0d0
  Replication: none
  default_fencing: global

=== NAS Devices ===

Nas Device: nas_filer1
  Type: sun_uss
  nodeIPs{phys-schost-2}: 10.134.112.112
  nodeIPs{phys-schost-1}: 10.134.112.113
  User ID: root

Example 1-6  Displaying the Zone Cluster Configuration
The following example lists the properties of the configuration of a zone cluster with RAC.
% clzonecluster show

=== Zone Clusters ===

Zone Cluster Name: sczone
  zonename: sczone
  zonepath: /zones/sczone
  autoboot: TRUE
  ip-type: shared
  enable_priv_net: TRUE

  --- Solaris Resources for sczone ---

  Resource Name: net
    address: 172.16.0.1
    physical: auto

  Resource Name: net
    address: 172.16.0.2
    physical: auto

  Resource Name: fs
    dir: /local/ufs-1
    special: /dev/md/ds1/dsk/d0
    raw: /dev/md/ds1/rdsk/d0
    type: ufs
    options: [logging]

  Resource Name: fs
    dir: /gz/db_qfs/CrsHome
    special: CrsHome
    raw:
    type: samfs
    options: []

  Resource Name: fs
    dir: /gz/db_qfs/CrsData
    special: CrsData
    raw:
    type: samfs
    options: []

  Resource Name: fs
    dir: /gz/db_qfs/OraHome
    special: OraHome
    raw:
    type: samfs
    options: []

  Resource Name: fs
    dir: /gz/db_qfs/OraData
    special: OraData
    raw:
    type: samfs
    options: []

  --- Zone Cluster Nodes for sczone ---

  Node Name: sczone-1
    physical-host: sczone-1
    hostname: lzzone-1

  Node Name: sczone-2
    physical-host: sczone-2
    hostname: lzzone-2
You can also view the NAS devices that are configured for global clusters or zone clusters by using the clnasdevice show subcommand or Oracle Solaris Cluster Manager. For more information, see the clnasdevice(1CL) man page.
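For example, to list only the NAS devices that are configured for the sczone zone cluster from a global-cluster node, you could use the -Z option, which is described in the clnasdevice(1CL) man page:

phys-schost# clnasdevice show -Z sczone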