Managing ZFS File Systems in Oracle® Solaris 11.2

Updated: December 2014

General System Practices

  • Keep the system up to date with the latest Solaris updates and releases

  • Confirm that your controller honors cache flush commands so that you know your data is safely written, which is important before changing the pool's devices or splitting a mirrored storage pool. This is generally not a problem on Oracle/Sun hardware, but it is good practice to confirm that your hardware's cache flushing setting is enabled.

  • Size memory requirements to actual system workload

    • With a known application memory footprint, such as for a database application, you might cap the ARC size so that the application will not need to reclaim its necessary memory from the ZFS cache.
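
      For example, one common way to cap the ARC is to set the zfs_arc_max parameter in /etc/system and reboot. The value below (4 GB) is illustrative only; choose a limit that leaves enough memory for your application:

      set zfs:zfs_arc_max = 0x100000000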

    • Consider deduplication memory requirements
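
      As a rough sizing aid, you can simulate deduplication on an existing pool before enabling it; zdb prints a deduplication table histogram and an estimated dedup ratio. The pool name tank is a placeholder:

      # zdb -S tank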

    • Identify ZFS memory usage with the following command:

      # mdb -k
      > ::memstat
      Page Summary                Pages                MB  %Tot
      ------------     ----------------  ----------------  ----
      Kernel                     388117              1516   19%
      ZFS File Data               81321               317    4%
      Anon                        29928               116    1%
      Exec and libs                1359                 5    0%
      Page cache                   4890                19    0%
      Free (cachelist)             6030                23    0%
      Free (freelist)           1581183              6176   76%
      
      Total                     2092828              8175
      Physical                  2092827              8175
      > $q
    • Consider using ECC memory to protect against memory corruption. Silent memory corruption can potentially damage your data.

  • Perform regular backups – Although a pool that is created with ZFS redundancy can help reduce downtime due to hardware failures, it is not immune to hardware failures, power failures, or disconnected cables. Make sure you back up your data on a regular basis; if your data is important, it should be backed up. Different ways to provide copies of your data are:

    • Regular or daily ZFS snapshots
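
      For example, a recursive snapshot of a pool named tank and all of its descendant file systems (the pool and snapshot names are placeholders):

      # zfs snapshot -r tank@daily-2014-12-01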

    • Weekly backups of ZFS pool data. You can use the zpool split command to create an exact duplicate of a ZFS mirrored storage pool.
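
      For example, assuming a mirrored pool named tank, the following detaches one side of each mirror into a new pool named tank2, which can then be imported on another system or kept as a copy (both pool names are placeholders):

      # zpool split tank tank2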

    • Monthly backups by using an enterprise-level backup product

  • Hardware RAID

    • Consider using JBOD mode for storage arrays rather than hardware RAID so that ZFS can manage the storage and the redundancy.

    • Use hardware RAID, ZFS redundancy, or both

    • Using ZFS redundancy has many benefits – For production environments, configure ZFS so that it can repair data inconsistencies. Use ZFS redundancy, such as RAID-Z, RAID-Z2, RAID-Z3, or mirror, regardless of the RAID level implemented on the underlying storage device. With such redundancy, faults in the underlying storage device or its connections to the host can be discovered and repaired by ZFS.
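
      For example, a sketch of creating a double-parity RAID-Z2 pool from six disks or LUNs (the pool and device names are placeholders):

      # zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0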

    • If you are confident in the redundancy of your hardware RAID solution, then consider using ZFS without ZFS redundancy with your hardware RAID array. However, follow these recommendations to help ensure data integrity.

      • Size the LUNs and the ZFS storage pool according to your comfort level, keeping in mind that ZFS will not be able to resolve data inconsistencies if the hardware RAID array experiences a failure.

      • Create RAID5 LUNs with global hot spares.

      • Monitor both the ZFS storage pool by using zpool status and the underlying LUNs by using your hardware RAID monitoring tools.
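
        For example, the following reports only pools that have errors or are otherwise unhealthy:

        # zpool status -x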

      • Promptly replace any failed devices.
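
        For example, if a disk or LUN backing the pool has been physically replaced, a sketch of replacing it at the ZFS level (the pool and device names are placeholders):

        # zpool replace tank c1t1d0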

      • Scrub your ZFS storage pools routinely, such as monthly, if you are using datacenter-quality drives.
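
        For example, to start a scrub of a pool named tank (placeholder name) and check its progress:

        # zpool scrub tank
        # zpool status tank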

      • Always have good, recent backups of your important data.

    See also Pool Creation Practices on Local or Network Attached Storage Arrays.

  • Crash dumps can consume considerable disk space, generally in the range of 1/2 to 3/4 the size of physical memory.
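
    For example, you can review the current dump configuration with dumpadm and, on a system that dumps to a ZFS volume, check the volume's size (rpool/dump is the typical name, but yours might differ):

      # dumpadm
      # zfs get volsize rpool/dump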