13 Replacing a Server Disk

This chapter contains the following sections:

Note:

The failure of a disk is never catastrophic on Oracle Big Data Appliance. No user data is lost. Data stored in HDFS or Oracle NoSQL Database is automatically replicated.

Repairing the physical disks does not require shutting down Oracle Big Data Appliance. However, individual servers might be taken out of the cluster temporarily, which requires downtime for those servers.

See Also:

My Oracle Support Doc ID 1581331.1

13.1 Verifying the Server Configuration

The 12 disk drives in each Oracle Big Data Appliance server are controlled by an LSI MegaRAID SAS 9261-8i disk controller. Oracle recommends verifying the status of the RAID devices to avoid possible performance degradation or an outage. The effect on the server of validating the RAID devices is minimal. The corrective actions may affect operation of the server and can range from simple reconfiguration to an outage, depending on the specific issue uncovered.

13.1.1 Verifying Disk Controller Configuration

Enter this command to verify the disk controller configuration:

# MegaCli64 -AdpAllInfo -a0 | grep "Device Present" -A 8

The following is an example of the output from the command. There should be 12 virtual drives, no degraded or offline drives, and 14 physical devices. The 14 devices are the controllers and the 12 disk drives.

                Device Present
                ================
Virtual Drives    : 12 
  Degraded        : 0 
  Offline         : 0 
Physical Devices  : 14 
  Disks           : 12 
  Critical Disks  : 0 
  Failed Disks    : 0 

If the output is different, then investigate and correct the problem.
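
If you check many servers, you can wrap the same test in a short script. The following is a minimal sketch (not one of the standard Oracle Big Data Appliance utilities) that parses the Device Present section shown above and flags any degraded, offline, critical, or failed drives:

# MegaCli64 -AdpAllInfo -a0 | grep "Device Present" -A 8 | awk -F: '
      /Degraded/       {deg=$2+0}
      /Offline/        {off=$2+0}
      /Critical Disks/ {crit=$2+0}
      /Failed Disks/   {fail=$2+0}
      END {
        if (deg + off + crit + fail == 0) print "Controller check: OK"
        else printf "Controller check: ATTENTION (degraded=%d offline=%d critical=%d failed=%d)\n", deg, off, crit, fail
      }'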

13.1.2 Verifying Virtual Drive Configuration

Enter this command to verify the virtual drive configuration:

# MegaCli64 -LDInfo -lAll -a0

The following is an example of the output for Virtual Drive 0. Ensure that State is Optimal.

Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-0, Secondary-0, RAID Level Qualifier-0
Size                : 1.817 TB
Parity Size         : 0
State               : Optimal
Strip Size          : 64 KB
Number Of Drives    : 1
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAheadNone, Cached, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAheadNone, Cached, No Write Cache if Bad BBU
Access Policy       : Read/Write
Disk Cache Policy   : Disk's Default
Encryption Type     : None
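
To confirm all virtual drives in one step, you can count how many report an Optimal state. On a correctly configured server, the count matches the number of virtual drives (12). This is a quick sketch rather than a formal check:

# MegaCli64 -LDInfo -lAll -a0 | grep -c "State *: Optimal"
12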

13.1.3 Verifying Physical Drive Configuration

Use the following command to verify the physical drive configuration:

# MegaCli64 -PDList -a0 | grep Firmware

The following is an example of the output from the command. The 12 drives should be Online, Spun Up. If the output is different, then investigate and correct the problem.

Firmware state: Online, Spun Up
Device Firmware Level: 061A
Firmware state: Online, Spun Up
Device Firmware Level: 061A
Firmware state: Online, Spun Up
Device Firmware Level: 061A
     .
     .
     .
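
You can also count the drives in this state; on a healthy server the count is 12:

# MegaCli64 -PDList -a0 | grep -c "Firmware state: Online, Spun Up"
12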

13.2 About Disk Drive Identifiers

The Oracle Big Data Appliance servers contain a disk enclosure cage that is controlled by the host bus adapter (HBA). The enclosure holds 12 disk drives that are identified by slot numbers 0 to 11. The drives can be dedicated to specific functions, as shown in Table 13-1.

Oracle Big Data Appliance uses symbolic links, which are defined in /dev/disk/by-hba-slot, to identify the slot number of a disk. The links have the form snpm, where n is the slot number and m is the partition number. For example, /dev/disk/by-hba-slot/s0p1 initially corresponds to /dev/sda1.

When a disk is hot swapped, the operating system cannot reuse the kernel device name. Instead, it allocates a new device name. For example, if you hot swap /dev/sda, then the disk corresponding to /dev/disk/by-hba-slot/s0 might link to /dev/sdn instead of /dev/sda. Therefore, the links in /dev/disk/by-hba-slot/ are automatically updated when devices are added or removed.

Command output typically shows kernel device names rather than symbolic link names. Thus, /dev/disk/by-hba-slot/s0 might be identified as /dev/sda in the output of a command.
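
To see which kernel device a slot currently maps to, resolve the symbolic link. The output shown here assumes the initial mapping; the device names on your system may differ:

# readlink -f /dev/disk/by-hba-slot/s0
/dev/sda

# for s in /dev/disk/by-hba-slot/s{0..11}; do echo "$s -> $(readlink -f $s)"; done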

13.2.1 Standard Disk Drive Mappings

Table 13-1 shows typical initial mappings between the RAID logical drives and the operating system identifiers. Nonetheless, you must use the mappings that exist for your system, which might be different from the ones listed here. The table also identifies the dedicated function of each drive in an Oracle Big Data Appliance server. The server with the failed drive is part of either a CDH cluster (HDFS) or an Oracle NoSQL Database cluster.

Table 13-1 Disk Drive Identifiers

Symbolic Link to Physical Slot    Typical Initial Kernel Device Name    Dedicated Function

/dev/disk/by-hba-slot/s0          /dev/sda                              Operating system
/dev/disk/by-hba-slot/s1          /dev/sdb                              Operating system
/dev/disk/by-hba-slot/s2          /dev/sdc                              HDFS or Oracle NoSQL Database
/dev/disk/by-hba-slot/s3          /dev/sdd                              HDFS or Oracle NoSQL Database
/dev/disk/by-hba-slot/s4          /dev/sde                              HDFS or Oracle NoSQL Database
/dev/disk/by-hba-slot/s5          /dev/sdf                              HDFS or Oracle NoSQL Database
/dev/disk/by-hba-slot/s6          /dev/sdg                              HDFS or Oracle NoSQL Database
/dev/disk/by-hba-slot/s7          /dev/sdh                              HDFS or Oracle NoSQL Database
/dev/disk/by-hba-slot/s8          /dev/sdi                              HDFS or Oracle NoSQL Database
/dev/disk/by-hba-slot/s9          /dev/sdj                              HDFS or Oracle NoSQL Database
/dev/disk/by-hba-slot/s10         /dev/sdk                              HDFS or Oracle NoSQL Database
/dev/disk/by-hba-slot/s11         /dev/sdl                              HDFS or Oracle NoSQL Database

13.2.2 Standard Mount Points

Table 13-2 shows the mappings between HDFS partitions and mount points.

Table 13-2 Mount Points

Symbolic Link to Physical Slot and Partition    HDFS Partition    Mount Point

/dev/disk/by-hba-slot/s0p4                      /dev/sda4         /u01
/dev/disk/by-hba-slot/s1p4                      /dev/sdb4         /u02
/dev/disk/by-hba-slot/s2p1                      /dev/sdc1         /u03
/dev/disk/by-hba-slot/s3p1                      /dev/sdd1         /u04
/dev/disk/by-hba-slot/s4p1                      /dev/sde1         /u05
/dev/disk/by-hba-slot/s5p1                      /dev/sdf1         /u06
/dev/disk/by-hba-slot/s6p1                      /dev/sdg1         /u07
/dev/disk/by-hba-slot/s7p1                      /dev/sdh1         /u08
/dev/disk/by-hba-slot/s8p1                      /dev/sdi1         /u09
/dev/disk/by-hba-slot/s9p1                      /dev/sdj1         /u10
/dev/disk/by-hba-slot/s10p1                     /dev/sdk1         /u11
/dev/disk/by-hba-slot/s11p1                     /dev/sdl1         /u12
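
Because kernel device names can change after a hot swap, it is safer to cross-check each mounted partition against its slot-based name rather than against a /dev/sdX name. The following is a minimal sketch that resolves each partition link and reports where it is mounted:

# for link in /dev/disk/by-hba-slot/*p*; do
    dev=$(readlink -f "$link")
    awk -v d="$dev" -v l="$link" '$1 == d {print l " (" d ") mounted on " $2}' /proc/mounts
  done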

13.2.3 Obtaining the Physical Slot Number of a Disk Drive

Use the following MegaCli64 command to verify the mapping of virtual drive numbers to physical slot numbers. See "Replacing a Disk Drive."

# MegaCli64 LdPdInfo a0 | more 
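
To see only the mapping itself, you can filter the output for the virtual drive and slot number lines. This sketch assumes that LdPdInfo reports the Virtual Drive and Slot Number labels shown in the examples elsewhere in this chapter:

# MegaCli64 LdPdInfo a0 | grep -E "Virtual Drive:|Slot Number:"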

13.3 Overview of the Disk Replacement Process

  1. Replace the failed disk drive.
  2. Run the bdaconfiguredisk utility to configure the new disk drive.

The bdaconfiguredisk utility automates the rest of the process. It performs the following operations:

  • Performs the basic configuration steps for the new disk.
  • Identifies the dedicated function of the failed disk: an operating system disk, or an HDFS or Oracle NoSQL Database disk.
  • Configures the disk for its dedicated function.
  • Verifies that the configuration is correct.
  • Provisions the disk (installs the Oracle Big Data Appliance software).

See Also:

"Servicing Storage Drives" in the Oracle Server X7-2L Service Manual at

http://docs.oracle.com/cd/E62172_01/html/E62184/z400001c165586.html#scrolltoc

"Servicing Storage Drives and Rear Drives" in the Oracle Server X6-2L Service Manual at

http://docs.oracle.com/cd/E62172_01/html/E62184/z400001c165586.html#scrolltoc

"Servicing Storage Drives and Rear Drives" in the Oracle Server X5-2L Service Manual at

http://docs.oracle.com/cd/E41033_01/html/E48325/cnpsm.z40000091011460.html#scrolltoc

"Servicing Storage Drives and Boot Drives" in the Sun Fire X4270M2 Server Service Manual at

http://docs.oracle.com/cd/E19245-01/E21671/hotswap.html#50503714_61628

13.4 What If a Server Fails to Restart?

The server may restart during the disk replacement procedures, either because you issued a reboot command or because you made an error in a MegaCli64 command. In most cases, the server restarts successfully and you can continue working. However, if an error prevents you from reconnecting using ssh, then you must complete the reboot using Oracle ILOM.

To restart a server using Oracle ILOM:

  1. Use your browser to open a connection to the server using Oracle ILOM. For example:

    http://bda1node12-c.example.com

    Note:

    Your browser must have a JDK plug-in installed. If you do not see the Java coffee cup on the log-in page, then you must install the plug-in before continuing.

  2. Log in using your Oracle ILOM credentials.

  3. Select the Remote Control tab.

  4. Click the Launch Remote Console button.

  5. Enter Ctrl+d to continue rebooting.

  6. If the reboot fails, then enter the server root password at the prompt and attempt to fix the problem.

  7. After the server restarts successfully, open the Redirection menu and choose Quit to close the console window.

See Also:

Oracle Integrated Lights Out Manager (ILOM) 3.0 documentation at

http://docs.oracle.com/cd/E19860-01/

13.5 Prerequisites for Replacing a Failing Disk

To replace an HDFS disk or an operating system disk that is in a state of predictive failure, you must first dismount the HDFS partitions. You must also turn off swapping before replacing an operating system disk.

Note:

Only dismount HDFS partitions. For an operating system disk, ensure that you do not dismount operating system partitions. Only partition 4 (sda4 or sdb4) of an operating system disk is used for HDFS.
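
After you run bdaswapoff (Step 2 of the following procedure), you can confirm that swapping is off. The output should list no active swap devices:

# swapon -s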

To dismount HDFS partitions:

  1. Log in to the server with the failing drive.

  2. If the failing drive supported the operating system, then turn off swapping:

    # bdaswapoff
    

    Removing a disk with active swapping crashes the kernel.

  3. List the mounted HDFS partitions:

    # mount -l
    
    /dev/md2 on / type ext4 (rw,noatime)
    proc on /proc type proc (rw)
    sysfs on /sys type sysfs (rw)
    devpts on /dev/pts type devpts (rw,gid=5,mode=620)
    /dev/md0 on /boot type ext4 (rw)
    tmpfs on /dev/shm type tmpfs (rw)
    /dev/sda4 on /u01 type ext4 (rw,nodev,noatime) [/u01]
    /dev/sdb4 on /u02 type ext4 (rw,nodev,noatime) [/u02]
    /dev/sdc1 on /u03 type ext4 (rw,nodev,noatime) [/u03]
    /dev/sdd1 on /u04 type ext4 (rw,nodev,noatime) [/u04]
         .
         .
         .
    
  4. Check the list of mounted partitions for the failing disk. If the disk has no partitions listed, then proceed to "Replacing a Disk Drive." Otherwise, continue to the next step. (A scripted sketch of Steps 3 through 5 appears after this procedure.)

    Caution:

    For operating system disks, look for partition 4 (sda4 or sdb4). Do not dismount an operating system partition.

  5. Dismount the HDFS mount points for the failed disk:

    # umount mountpoint
    

    For example, umount /u11 removes the mount point for partition /dev/sdk1.

    If the umount commands succeed, then proceed to "Replacing a Disk Drive." If a umount command fails with a device busy message, then the partition is still in use. Continue to the next step.

  6. Open a browser window to Cloudera Manager. For example:

    http://bda1node03.example.com:7180

  7. Complete these steps in Cloudera Manager:

    Note:

    If you remove mount points in Cloudera Manager as described in the following steps, then you must restore these mount points in Cloudera Manager after finishing all other configuration procedures.

    1. Log in as admin.

    2. On the Services page, click hdfs.

    3. Click the Instances subtab.

    4. In the Host column, locate the server with the failing disk. Then click the service in the Name column, such as datanode, to open its page.

    5. Click the Configuration subtab.

    6. Remove the mount point from the Directory field.

    7. Click Save Changes.

    8. From the Actions list, choose Restart this DataNode.

  8. In Cloudera Manager, remove the mount point from NodeManager Local Directories:

    1. On the Services page, click Yarn.

    2. In the Status Summary, click NodeManager.

    3. From the list, click to select the NodeManager that is on the host with the failed disk.

    4. Click the Configuration subtab.

    5. Remove the mount point from the NodeManager Local Directories field.

    6. Click Save Changes.

    7. Restart the NodeManager.

  9. If you have added any other roles that store data on the same HDFS mount point (such as HBase Region Server), then remove and restore the mount points for these roles in the same way.

  10. Return to your session on the server with the failed drive.

  11. Reissue the umount command:

    # umount mountpoint
    

    If the umount still fails, run lsof to list open files under the HDFS mount point and the processes that opened them. This may help you to identify the process that is preventing the unmount. For example:

    # lsof | grep /u11
  12. Bring the disk offline:

    # MegaCli64 PDoffline "physdrv[enclosure:slot]" a0
    

    For example, "physdrv[20:10]" identifies disk s11, which is located in slot 10 of enclosure 20.

  13. Delete the disk from the controller configuration table:

    # MegaCli64 CfgLdDel Lslot a0
    

    For example, L10 identifies slot 10.

  14. Complete the steps in "Replacing a Disk Drive."
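
The checks in Steps 3 through 5 can also be scripted against the slot-based device names. The following is a minimal sketch for the example disk in slot 10 (mounted at /u11); substitute the slot name and mount point of your failing disk, and remember that on the operating system disks (s0 and s1) only partition 4 is an HDFS partition:

# dev=$(readlink -f /dev/disk/by-hba-slot/s10)
# mount -l | grep "^$dev"
# umount /u11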

13.6 Replacing a Disk Drive

Complete this procedure to replace a failed or failing disk drive.

  1. Before replacing a failing disk, see "Prerequisites for Replacing a Failing Disk."

  2. Replace the failed disk drive.

    See "Parts for Oracle Big Data Appliance Servers".

  3. Power on the server if you powered it off to replace the failed disk.

  4. Connect to the server as root using either the KVM or an SSL connection to a laptop.

  5. Store the physical drive information in a file:

    # MegaCli64 pdlist a0 > pdinfo.tmp
    

    Note: This command redirects the output to a file so that you can perform several searches using a text editor. If you prefer, you can pipe the output through the more or grep commands.

    The utility returns the following information for each slot. This example shows a Firmware State of Unconfigured(good), Spun Up.

    Enclosure Device ID: 20
    Slot Number: 8
    Drive's postion: DiskGroup: 8, Span: 0, Arm: 0
    Enclosure position: 0
    Device Id: 11
    WWN: 5000C5003487075C
    Sequence Number: 2
    Media Error Count: 0
    Other Error Count: 0
    Predictive Failure Count: 0
    Last Predictive Failure Event Seq Number: 0
    PD Type: SAS
    Raw Size: 1.819 TB [0xe8e088b0 Sectors]
    Non Coerced Size: 1.818 TB [0xe8d088b0 Sectors]
    Coerced Size: 1.817 TB [0xe8b6d000 Sectors]
    Firmware state: Unconfigured(good), Spun Up
    Is Commissioned Spare : NO
    Device Firmware Level: 061A
    Shield Counter: 0
    Successful diagnostics completion on :  N/A
    SAS Address(0): 0x5000c5003487075d
    SAS Address(1): 0x0
    Connected Port Number: 0(path0)
    Inquiry Data: SEAGATE ST32000SSSUN2.0T061A1126L6M3WX
    FDE Enable: Disable
    Secured: Unsecured
    Locked: Unlocked
    Needs EKM Attention: No
    Foreign State: None
    Device Speed: 6.0Gb/s
    Link Speed: 6.0Gb/s
    Media Type: Hard Disk Device
    .
    .
    .
    
  6. Open the file you created in Step 5 in a text editor and search for the following:

    • Disks that have a Foreign State of Foreign

    • Disks that have a Firmware State of Unconfigured

  7. For disks that have a Foreign State of Foreign, clear that status:

    # MegaCli64 CfgForeign clear a0
    

    A foreign disk is one that the controller saw previously, such as a reinserted disk.

  8. For disks that have a Firmware State of Unconfigured (Bad), complete these steps:

    1. Note the enclosure device ID number and the slot number.

    2. Enter a command in this format:

      # MegaCli64 pdmakegood physdrv[enclosure:slot] a0
      

      For example, [20:10] repairs the disk identified by enclosure 20 in slot 10.

    3. Check the current status of Foreign State again:

      # MegaCli64 pdlist a0 | grep -i foreign
      
    4. If the Foreign State is still Foreign, then repeat the clear command:

      # MegaCli64 CfgForeign clear a0
      
  9. For disks that have a Firmware State of Unconfigured (Good), use the following command. If multiple disks are unconfigured, then configure them in order from the lowest to the highest slot number (a scripted sketch of this loop appears after this procedure):

    # MegaCli64 CfgLdAdd r0[enclosure:slot] a0
     
    Adapter 0: Created VD 1
     
    Adapter 0: Configured the Adapter!!
     
    Exit Code: 0x00
    

    For example, r0[20:5] configures the disk in slot 5 of enclosure 20 as a single-drive RAID 0 virtual drive.

  10. If the CfgLdAdd command in Step 9 fails because of cached data, then clear the cache:

    # MegaCli64 discardpreservedcache l1 a0 
    
  11. Verify that the disk is recognized by the operating system:

    # lsscsi
    

    The disk may appear with its original device name (such as /dev/sdc) or under a new device name (such as /dev/sdn). If the operating system does not recognize the disk, then the disk is missing from the list generated by the lsscsi command.

    The lsscsi output might not show the correct order, but you can continue with the configuration. The physical-to-logical disk mapping must be preserved, but the kernel disk-to-device mapping does not matter, because the disk configuration is based on the /dev/disk/by-hba-slot device names.

    This example output shows two disks with new device names: /dev/sdn in slot 5, and /dev/sdo in slot 10.

    [0:0:20:0]   enclosu ORACLE  CONCORD14        0960  -
    [0:2:0:0]    disk    LSI      MR9261-8i        2.12  /dev/sda
    [0:2:1:0]    disk    LSI      MR9261-8i        2.12  /dev/sdb
    [0:2:2:0]    disk    LSI      MR9261-8i        2.12  /dev/sdc
    [0:2:3:0]    disk    LSI      MR9261-8i        2.12  /dev/sdd
    [0:2:4:0]    disk    LSI      MR9261-8i        2.12  /dev/sde
    [0:2:5:0]    disk    LSI      MR9261-8i        2.12  /dev/sdn
    [0:2:6:0]    disk    LSI      MR9261-8i        2.12  /dev/sdg
    [0:2:7:0]    disk    LSI      MR9261-8i        2.12  /dev/sdh
    [0:2:8:0]    disk    LSI      MR9261-8i        2.12  /dev/sdi
    [0:2:9:0]    disk    LSI      MR9261-8i        2.12  /dev/sdj
    [0:2:10:0]   disk    LSI      MR9261-8i        2.12  /dev/sdo
    [0:2:11:0]   disk    LSI      MR9261-8i        2.12  /dev/sdl
    [7:0:0:0]    disk    ORACLE   UNIGEN-UFD       PMAP  /dev/sdm
  12. Check the hardware profile of the server, and correct any errors:

    # bdacheckhw
    
  13. Check the software profile of the server, and correct any errors:

    # bdachecksw
    

    If you see a "Wrong mounted partitions" error and the device is missing from the list, then you can ignore the error and continue. However, if you see a "Duplicate mount points" error or the slot numbers are switched, then see "Correcting a Mounted Partitions Error".

  14. Identify the function of the drive, so you configure it properly. See "Identifying the Function of a Disk Drive".
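
When several replacement disks show a Firmware State of Unconfigured(good), the loop described in Step 9 can be scripted. The following is a minimal sketch, run as root; it assumes the single enclosure and the pdlist output format shown in Step 5, and adds the drives in ascending slot order:

enc=$(MegaCli64 pdlist a0 | awk -F: '/Enclosure Device ID/ {print $2+0; exit}')
MegaCli64 pdlist a0 | awk -F: '
    /Slot Number/                          {slot=$2+0}
    /Firmware state: Unconfigured\(good\)/ {print slot}
' | sort -n | while read slot; do
    MegaCli64 CfgLdAdd "r0[$enc:$slot]" a0
done

The quotation marks around r0[...] prevent the shell from interpreting the brackets as a file name pattern.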

13.7 Correcting a Mounted Partitions Error

When the bdachecksw utility finds a problem, it typically concerns the mounted partitions.

An old mount point might appear in the mount command output, so that the same mount point, such as /u03, appears twice.
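
To locate duplicated mount points quickly, you can list any mount point that appears more than once in the mount output; this simple check prints nothing when there are no duplicates:

# mount -l | awk '{print $3}' | sort | uniq -d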

To fix duplicate mount points:

  1. Dismount both mount points by using the umount command twice. This example dismounts two instances of /u03:

    # umount /u03
    # umount /u03
    
  2. Remount the mount point. This example remounts /u03:

    # mount /u03
    

If a disk is mapped to the wrong slot (that is, to the wrong virtual drive number), then you can switch the two drives.

To switch slots:

  1. Remove the mappings for both drives. This example removes the drives from slots 4 and 10:

    # MegaCli64 cfgLdDel L4 a0
    # MegaCli64 cfgLdDel L10 a0
    
  2. Add the drives back in the order you want them to appear; the first command is assigned the lowest available virtual drive number:

    # MegaCli64 cfgLdAdd r0[20:4] a0
    # MegaCli64 cfgLdAdd r0[20:10] a0
    
  3. If mount errors persist even when the slot numbers are correct, then you can restart the server.

13.8 Configuring a Disk Drive

Use the bdaconfiguredisk utility to configure or reconfigure disk drives on Oracle Big Data Appliance servers.

The bdaconfiguredisk utility provides a fully automated process for configuring disks. The utility works with both operating system and data disks on Hadoop and Oracle NoSQL Database systems.

See Also: