The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
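For example, clzc is the documented short form of the clzonecluster command that appears later in this procedure, so the following two commands are equivalent:

% clzonecluster show
% clzc show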
Before You Begin
Users other than the root role require the solaris.cluster.read RBAC authorization to use the show subcommand.
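For example, one quick way to check whether your account already carries this authorization is to search the output of the Oracle Solaris auths command (a minimal check; how the authorization is actually granted depends on the rights profiles defined at your site):

% auths | grep solaris.cluster.read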
% cluster show
Perform all steps of this procedure from a node of the global cluster.
Running the cluster show command from a global-cluster node displays detailed configuration information about the cluster, as well as information about zone clusters, if they are configured.
You can also use the clzonecluster show command to view the configuration information for zone clusters only. The properties of a zone cluster include its name, IP type, autoboot setting, and zone path. The show subcommand runs inside a zone cluster and applies only to that particular zone cluster. Running the clzonecluster show command from a zone-cluster node retrieves the status only of the objects that are visible to that particular zone cluster.
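For example, from a global-cluster node you can limit the output to a single zone cluster by passing its name as an operand (this sketch assumes a zone cluster named sczone, the same name used in Example 6 below):

phys-schost# clzonecluster show sczone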
To display more information about the cluster command, use the verbose options. See the cluster(1CL) man page for details. See the clzonecluster(1CL) man page for more information about clzonecluster.
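For instance, the following invocation adds the -v (verbose) option to the basic command; verify the exact option syntax against the cluster(1CL) man page for your release:

% cluster show -v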
The following example provides configuration information about the global cluster. If a zone cluster is configured, the output also includes that information.
phys-schost# cluster show
=== Cluster ===

Cluster Name: cluster-1
clusterid: 0x4DA2C888
installmode: disabled
heartbeat_timeout: 10000
heartbeat_quantum: 1000
private_netaddr: 172.11.0.0
private_netmask: 255.255.248.0
max_nodes: 64
max_privatenets: 10
num_zoneclusters: 12
udp_session_timeout: 480
concentrate_load: False
global_fencing: prefer3
Node List: phys-schost-1
Node Zones: phys-schost-2:za

=== Host Access Control ===

Cluster name: cluster-1
Allowed hosts: phys-schost-1, phys-schost-2:za
Authentication Protocol: sys

=== Cluster Nodes ===

Node Name: phys-schost-1
  Node ID: 1
  Enabled: yes
  privatehostname: clusternode1-priv
  reboot_on_path_failure: disabled
  globalzoneshares: 3
  defaultpsetmin: 1
  quorum_vote: 1
  quorum_defaultvote: 1
  quorum_resv_key: 0x43CB1E1800000001
  Transport Adapter List: net1, net3

--- Transport Adapters for phys-schost-1 ---

Transport Adapter: net1
  Adapter State: Enabled
  Adapter Transport Type: dlpi
  Adapter Property(device_name): net
  Adapter Property(device_instance): 1
  Adapter Property(lazy_free): 1
  Adapter Property(dlpi_heartbeat_timeout): 10000
  Adapter Property(dlpi_heartbeat_quantum): 1000
  Adapter Property(nw_bandwidth): 80
  Adapter Property(bandwidth): 10
  Adapter Property(ip_address): 172.16.1.1
  Adapter Property(netmask): 255.255.255.128
  Adapter Port Names: 0
  Adapter Port State(0): Enabled

Transport Adapter: net3
  Adapter State: Enabled
  Adapter Transport Type: dlpi
  Adapter Property(device_name): net
  Adapter Property(device_instance): 3
  Adapter Property(lazy_free): 0
  Adapter Property(dlpi_heartbeat_timeout): 10000
  Adapter Property(dlpi_heartbeat_quantum): 1000
  Adapter Property(nw_bandwidth): 80
  Adapter Property(bandwidth): 10
  Adapter Property(ip_address): 172.16.0.129
  Adapter Property(netmask): 255.255.255.128
  Adapter Port Names: 0
  Adapter Port State(0): Enabled

--- SNMP MIB Configuration on phys-schost-1 ---

SNMP MIB Name: Event
  State: Disabled
  Protocol: SNMPv2

--- SNMP Host Configuration on phys-schost-1 ---

--- SNMP User Configuration on phys-schost-1 ---

SNMP User Name: foo
  Authentication Protocol: MD5
  Default User: No

Node Name: phys-schost-2:za
  Node ID: 2
  Type: cluster
  Enabled: yes
  privatehostname: clusternode2-priv
  reboot_on_path_failure: disabled
  globalzoneshares: 1
  defaultpsetmin: 2
  quorum_vote: 1
  quorum_defaultvote: 1
  quorum_resv_key: 0x43CB1E1800000002
  Transport Adapter List: e1000g1, nge1

--- Transport Adapters for phys-schost-2 ---

Transport Adapter: e1000g1
  Adapter State: Enabled
  Adapter Transport Type: dlpi
  Adapter Property(device_name): e1000g
  Adapter Property(device_instance): 2
  Adapter Property(lazy_free): 0
  Adapter Property(dlpi_heartbeat_timeout): 10000
  Adapter Property(dlpi_heartbeat_quantum): 1000
  Adapter Property(nw_bandwidth): 80
  Adapter Property(bandwidth): 10
  Adapter Property(ip_address): 172.16.0.130
  Adapter Property(netmask): 255.255.255.128
  Adapter Port Names: 0
  Adapter Port State(0): Enabled

Transport Adapter: nge1
  Adapter State: Enabled
  Adapter Transport Type: dlpi
  Adapter Property(device_name): nge
  Adapter Property(device_instance): 3
  Adapter Property(lazy_free): 1
  Adapter Property(dlpi_heartbeat_timeout): 10000
  Adapter Property(dlpi_heartbeat_quantum): 1000
  Adapter Property(nw_bandwidth): 80
  Adapter Property(bandwidth): 10
  Adapter Property(ip_address): 172.16.1.2
  Adapter Property(netmask): 255.255.255.128
  Adapter Port Names: 0
  Adapter Port State(0): Enabled

--- SNMP MIB Configuration on phys-schost-2 ---

SNMP MIB Name: Event
  State: Disabled
  Protocol: SNMPv2

--- SNMP Host Configuration on phys-schost-2 ---

--- SNMP User Configuration on phys-schost-2 ---

=== Transport Cables ===

Transport Cable: phys-schost-1:e1000g1,switch2@1
  Cable Endpoint1: phys-schost-1:e1000g1
  Cable Endpoint2: switch2@1
  Cable State: Enabled

Transport Cable: phys-schost-1:nge1,switch1@1
  Cable Endpoint1: phys-schost-1:nge1
  Cable Endpoint2: switch1@1
  Cable State: Enabled

Transport Cable: phys-schost-2:nge1,switch1@2
  Cable Endpoint1: phys-schost-2:nge1
  Cable Endpoint2: switch1@2
  Cable State: Enabled

Transport Cable: phys-schost-2:e1000g1,switch2@2
  Cable Endpoint1: phys-schost-2:e1000g1
  Cable Endpoint2: switch2@2
  Cable State: Enabled

=== Transport Switches ===

Transport Switch: switch2
  Switch State: Enabled
  Switch Type: switch
  Switch Port Names: 1 2
  Switch Port State(1): Enabled
  Switch Port State(2): Enabled

Transport Switch: switch1
  Switch State: Enabled
  Switch Type: switch
  Switch Port Names: 1 2
  Switch Port State(1): Enabled
  Switch Port State(2): Enabled

=== Quorum Devices ===

Quorum Device Name: d3
  Enabled: yes
  Votes: 1
  Global Name: /dev/did/rdsk/d3s2
  Type: shared_disk
  Access Mode: scsi3
  Hosts (enabled): phys-schost-1, phys-schost-2

Quorum Device Name: qs1
  Enabled: yes
  Votes: 1
  Global Name: qs1
  Type: quorum_server
  Hosts (enabled): phys-schost-1, phys-schost-2
  Quorum Server Host: 10.11.114.83
  Port: 9000

=== Device Groups ===

Device Group Name: testdg3
  Type: SVM
  failback: no
  Node List: phys-schost-1, phys-schost-2
  preferenced: yes
  numsecondaries: 1
  diskset name: testdg3

=== Registered Resource Types ===

Resource Type: SUNW.LogicalHostname:2
  RT_description: Logical Hostname Resource Type
  RT_version: 4
  API_version: 2
  RT_basedir: /usr/cluster/lib/rgm/rt/hafoip
  Single_instance: False
  Proxy: False
  Init_nodes: All potential masters
  Installed_nodes: <All>
  Failover: True
  Pkglist: <NULL>
  RT_system: True
  Global_zone: True

Resource Type: SUNW.SharedAddress:2
  RT_description: HA Shared Address Resource Type
  RT_version: 2
  API_version: 2
  RT_basedir: /usr/cluster/lib/rgm/rt/hascip
  Single_instance: False
  Proxy: False
  Init_nodes: <Unknown>
  Installed_nodes: <All>
  Failover: True
  Pkglist: <NULL>
  RT_system: True
  Global_zone: True

Resource Type: SUNW.HAStoragePlus:4
  RT_description: HA Storage Plus
  RT_version: 4
  API_version: 2
  RT_basedir: /usr/cluster/lib/rgm/rt/hastorageplus
  Single_instance: False
  Proxy: False
  Init_nodes: All potential masters
  Installed_nodes: <All>
  Failover: False
  Pkglist: <NULL>
  RT_system: True
  Global_zone: True

Resource Type: SUNW.haderby
  RT_description: haderby server for Oracle Solaris Cluster
  RT_version: 1
  API_version: 7
  RT_basedir: /usr/cluster/lib/rgm/rt/haderby
  Single_instance: False
  Proxy: False
  Init_nodes: All potential masters
  Installed_nodes: <All>
  Failover: False
  Pkglist: <NULL>
  RT_system: True
  Global_zone: True

Resource Type: SUNW.sctelemetry
  RT_description: sctelemetry service for Oracle Solaris Cluster
  RT_version: 1
  API_version: 7
  RT_basedir: /usr/cluster/lib/rgm/rt/sctelemetry
  Single_instance: True
  Proxy: False
  Init_nodes: All potential masters
  Installed_nodes: <All>
  Failover: False
  Pkglist: <NULL>
  RT_system: True
  Global_zone: True

=== Resource Groups and Resources ===

Resource Group: HA_RG
  RG_description: <Null>
  RG_mode: Failover
  RG_state: Managed
  Failback: False
  Nodelist: phys-schost-1 phys-schost-2

--- Resources for Group HA_RG ---

Resource: HA_R
  Type: SUNW.HAStoragePlus:4
  Type_version: 4
  Group: HA_RG
  R_description:
  Resource_project_name: SCSLM_HA_RG
  Enabled{phys-schost-1}: True
  Enabled{phys-schost-2}: True
  Monitored{phys-schost-1}: True
  Monitored{phys-schost-2}: True

Resource Group: cl-db-rg
  RG_description: <Null>
  RG_mode: Failover
  RG_state: Managed
  Failback: False
  Nodelist: phys-schost-1 phys-schost-2

--- Resources for Group cl-db-rg ---

Resource: cl-db-rs
  Type: SUNW.haderby
  Type_version: 1
  Group: cl-db-rg
  R_description:
  Resource_project_name: default
  Enabled{phys-schost-1}: True
  Enabled{phys-schost-2}: True
  Monitored{phys-schost-1}: True
  Monitored{phys-schost-2}: True

Resource Group: cl-tlmtry-rg
  RG_description: <Null>
  RG_mode: Scalable
  RG_state: Managed
  Failback: False
  Nodelist: phys-schost-1 phys-schost-2

--- Resources for Group cl-tlmtry-rg ---

Resource: cl-tlmtry-rs
  Type: SUNW.sctelemetry
  Type_version: 1
  Group: cl-tlmtry-rg
  R_description:
  Resource_project_name: default
  Enabled{phys-schost-1}: True
  Enabled{phys-schost-2}: True
  Monitored{phys-schost-1}: True
  Monitored{phys-schost-2}: True

=== DID Device Instances ===

DID Device Name: /dev/did/rdsk/d1
  Full Device Path: phys-schost-1:/dev/rdsk/c0t2d0
  Replication: none
  default_fencing: global

DID Device Name: /dev/did/rdsk/d2
  Full Device Path: phys-schost-1:/dev/rdsk/c1t0d0
  Replication: none
  default_fencing: global

DID Device Name: /dev/did/rdsk/d3
  Full Device Path: phys-schost-2:/dev/rdsk/c2t1d0
  Full Device Path: phys-schost-1:/dev/rdsk/c2t1d0
  Replication: none
  default_fencing: global

DID Device Name: /dev/did/rdsk/d4
  Full Device Path: phys-schost-2:/dev/rdsk/c2t2d0
  Full Device Path: phys-schost-1:/dev/rdsk/c2t2d0
  Replication: none
  default_fencing: global

DID Device Name: /dev/did/rdsk/d5
  Full Device Path: phys-schost-2:/dev/rdsk/c0t2d0
  Replication: none
  default_fencing: global

DID Device Name: /dev/did/rdsk/d6
  Full Device Path: phys-schost-2:/dev/rdsk/c1t0d0
  Replication: none
  default_fencing: global

=== NAS Devices ===

Nas Device: nas_filer1
  Type: sun_uss
  nodeIPs{phys-schost-2}: 10.134.112.112
  nodeIPs{phys-schost-1}: 10.134.112.113
  User ID: root

Example 6: Viewing Zone Cluster Information
The following example displays the properties of a zone cluster configuration with RAC.
% clzonecluster show

=== Zone Clusters ===

Zone Cluster Name: sczone
  zonename: sczone
  zonepath: /zones/sczone
  autoboot: TRUE
  ip-type: shared
  enable_priv_net: TRUE

--- Solaris Resources for sczone ---

Resource Name: net
  address: 172.16.0.1
  physical: auto

Resource Name: net
  address: 172.16.0.2
  physical: auto

Resource Name: fs
  dir: /local/ufs-1
  special: /dev/md/ds1/dsk/d0
  raw: /dev/md/ds1/rdsk/d0
  type: ufs
  options: [logging]

Resource Name: fs
  dir: /gz/db_qfs/CrsHome
  special: CrsHome
  raw:
  type: samfs
  options: []

Resource Name: fs
  dir: /gz/db_qfs/CrsData
  special: CrsData
  raw:
  type: samfs
  options: []

Resource Name: fs
  dir: /gz/db_qfs/OraHome
  special: OraHome
  raw:
  type: samfs
  options: []

Resource Name: fs
  dir: /gz/db_qfs/OraData
  special: OraData
  raw:
  type: samfs
  options: []

--- Zone Cluster Nodes for sczone ---

Node Name: sczone-1
  physical-host: sczone-1
  hostname: lzzone-1

Node Name: sczone-2
  physical-host: sczone-2
  hostname: lzzone-2
You can also view the NAS devices that are configured for global clusters or zone clusters by using the clnasdevice show subcommand or the Oracle Solaris Cluster Manager. For more information, see the clnasdevice(1CL) man page.
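For example, the following invocations would display verbose NAS device information for the global cluster and, with the -Z option, for a specific zone cluster (the sczone name is taken from Example 6; confirm the available options in the clnasdevice(1CL) man page for your release):

phys-schost# clnasdevice show -v
phys-schost# clnasdevice show -Z sczone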