About This Documentation (PDF and HTML)
Overview of the Sun Blade Storage Module M2
Installing the Storage Module Into the Chassis
Assigning and Managing Storage
Overview of the Sun Blade Storage Module M2 Product Notes
Supported Firmware, Hardware and Software
Solaris Operating System Issues
Performing Maintenance and Hot Plug Actions
Importing Existing Virtual Drives to a Replacement REM
Storage Module Sensors and Indicators
Viewing the CMM ILOM Event Log
Sun Blade Zone Manager Not Ready
Missing SAS-2 Components Error
Server Module Not SAS-2 Capable Error
Newly Inserted NEM is Not Discovered
Storage Module Becomes Inaccessible at Host and ILOM /CH/BLx/fault_state is "Faulted"
Storage Module Fault LED is On
SAS Path Disappears and ILOM /CH/NEMx/fault_state is "Faulted"
NEM STATE (/CH/NEMx/STATE) is Not "Running"
NEM /CH/NEMx/OK Indicator is in Standby Blink
Introduction to the Sun Blade Storage Module M2
Maintaining the Sun Blade Storage Module M2
The storage module can be in a degraded condition if it powers off.
What to look for:
The following table provides examples of how the web interface and the CLI might appear when the storage module is in a degraded state.
Things to check:
Is the storage module powered off? Confirm this either by looking at the storage module front panel LEDs or by checking whether storage module components are listed in ILOM. For example, in the ILOM CLI example in the table above, if HDDs are listed, the storage module has power.
If the storage module is powered on, is the fault state expander-related? Confirm this by viewing the fault itself. In the CMM ILOM CLI, enter the following commands:
Log into the CMM with administrator privileges.
Enter the command:
-> cd /CMM/faultmgmt
Find the faulted target device by entering the command:
-> ls
Output might look like:
 /CMM/faultmgmt
    Targets:
        shell
        0 (/CH/BL2)
    Properties:
    Commands:
        cd
        show
View logged faults by entering the command:
-> show /CMM/faultmgmt/0/faults
Where 0 is the target device that is experiencing the fault, and faults is the directory that contains the logged faults.
Look for:
 /CMM/faultmgmt/0/faults
    Targets:
        0 (fault.chassis.sas.comm.fail)
    Properties:
    Commands:
        cd
        show
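Fault listings like the one above can also be checked programmatically, for example when monitoring several chassis. The following Python sketch is an illustration only (it is not part of the official ILOM tooling); it parses the text printed by the show command to extract any logged fault classes.

```python
import re

# Sample "show /CMM/faultmgmt/0/faults" output, taken from this procedure.
SAMPLE_FAULT_OUTPUT = """\
 /CMM/faultmgmt/0/faults
    Targets:
        0 (fault.chassis.sas.comm.fail)
    Properties:
    Commands:
        cd
        show
"""

def logged_faults(show_output):
    """Return the fault classes listed under Targets, e.g. fault.chassis.sas.comm.fail."""
    return re.findall(r"\((fault\.[\w.]+)\)", show_output)

print(logged_faults(SAMPLE_FAULT_OUTPUT))  # ['fault.chassis.sas.comm.fail']
```

If the list contains fault.chassis.sas.comm.fail, the fault is expander-related and the reset procedure below applies.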
Actions to perform:
If the storage module is powered off, there might have been a hardware failure or an over-temperature event. Check that the chassis is being properly cooled (air-conditioning is functioning and all slot fillers are in place), then reinsert the blade after all cooling conditions are repaired. If the storage module does not power back on after insertion into the chassis, contact Oracle service.
If the storage module is still powered on, use CMM ILOM to perform a "reset" of the storage module, as follows:
Log into the CMM with administrator privileges.
Enter the command:
-> cd /CH/BLx
Where x is the number of the blade slot for the storage module.
Then, enter the command:
-> reset
Wait for at least 2 minutes, then check the state of the storage module:
-> show /CH/BLx/STATE
Where output might look like:
 /CH/BL2/STATE
    Targets:
    Properties:
        type = Module
        ipmi_name = BL2/STATE
        class = Discrete Sensor
        value = Running
        alarm_status = cleared
    Commands:
        cd
        show
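When checking the STATE sensor from a script, the value property can be pulled out of the show output. This is a minimal sketch under the assumption that the output follows the property layout shown in this procedure; the helper name is hypothetical.

```python
# Sample "show /CH/BLx/STATE" output, taken from this procedure.
SAMPLE_STATE_OUTPUT = """\
 /CH/BL2/STATE
    Targets:
    Properties:
        type = Module
        ipmi_name = BL2/STATE
        class = Discrete Sensor
        value = Running
        alarm_status = cleared
    Commands:
        cd
        show
"""

def state_value(show_output):
    """Return the value property from a STATE sensor listing, or '' if absent."""
    for line in show_output.splitlines():
        stripped = line.strip()
        if stripped.startswith("value ="):
            return stripped.split("=", 1)[1].strip()
    return ""

print(state_value(SAMPLE_STATE_OUTPUT))  # Running
```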
If the reset does not change the STATE sensor to "Running", remove the storage module and reinsert it into the same chassis slot. If this still does not change the storage module STATE to "Running", contact Oracle service.
If the STATE sensor returns to "Running" but the storage module is still unresponsive after a reset, there might be an issue with how the CMM interprets the state of the storage module SAS expander. Clear the fault by issuing the following CLI commands:
Log into the CMM with administrator privileges.
Enter the command:
-> cd /CH/BLx
Where x is the number of the blade slot for the storage module.
Then, enter the command:
-> set clear_fault_state=true
Then perform a CMM reset. Enter the command:
-> cd /CMM
Then, enter the command:
-> reset
This should clear the fault and restore the storage module state.
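The fault-clear sequence above can be assembled ahead of time when scripting against several blade slots. The sketch below is a hypothetical helper, not official Oracle tooling; it only builds the command strings used in this procedure, and running them against the CMM (for example over ssh to the management network) is left to the operator.

```python
def clear_fault_and_reset_cmm(blade_slot):
    """Build the CMM ILOM commands that clear a blade's fault state and reset the CMM.

    blade_slot is the number of the blade slot for the storage module.
    """
    return [
        "cd /CH/BL%d" % blade_slot,
        "set clear_fault_state=true",
        "cd /CMM",
        "reset",
    ]

print(clear_fault_and_reset_cmm(2))
# ['cd /CH/BL2', 'set clear_fault_state=true', 'cd /CMM', 'reset']
```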