Updated on 03/08/2021

Due, in part, to the differing views and opinions regarding the usage of hot spare disks expressed in response to our previous post, we’ve decided to add an update for clarification.

The Problematic Aspects of Using a Hot Spare Disk

In theory, using a hot spare disk with ZFS, Solaris FMA, or any other data storage environment is a good solution: it reacts automatically to damage in a Redundant Array of Independent Disks (RAID), and a hot spare does indeed help minimize the duration of a degraded array state.

That being said, the goal of creating a RAID is to continue operation and not lose data in the event of a disk failure. Anything that increases the risk of data loss is therefore suspect. Let’s have a look at some of these problematic aspects of hot spare disks.

Hot Spare Disks Add Stress to Vulnerable Systems

The main problem with hot spare disks is that they allow the rebuilding (resilvering) of a system that is still actively being used as a production server. This means that, while the resilvering process is taking place, the system will also still be occupied with the usual production data reads and writes. 

Resilvering needs a lot of server resources, so when it is executed while the server is still in use, it has to compete with the production load. And since it is a low-priority task, the entire resilvering process can take a very long time (even up to a few weeks). This results in the server working at its maximum achievable throughput for weeks, which can have dire consequences for the disks (especially HDDs).
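You can watch this contention happen from the command line. Below is a minimal sketch, assuming an OpenZFS-on-Linux system and a pool named tank (both the pool name and the tunable shown are examples; parameter names and defaults vary between ZFS releases):

    # Watch pool health and resilver progress ("tank" is an example name).
    zpool status -v tank

    # On OpenZFS on Linux, this tunable sets how many milliseconds of each
    # transaction group interval resilver I/O is allowed to consume.
    cat /sys/module/zfs/parameters/zfs_resilver_min_time_ms

    # Raising it lets the resilver finish sooner at the cost of competing
    # harder with production I/O; lowering it does the opposite.
    echo 5000 > /sys/module/zfs/parameters/zfs_resilver_min_time_ms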

With decades’ worth of experience, we’ve learned that the use of hot spare disks in complex enterprise systems increases the probability of additional disks failing, as the resilvering process puts more and more stress on the existing disks and the system itself.

Problems in Overall Hot Spare Disk Design

The next flaw of a hot spare disk is that it degrades over time. From the moment it is connected to the system, it keeps on working. And when it is eventually needed as a damaged disk’s replacement, the hot spare itself may simply no longer be in good enough condition to actually replace the damaged disk.
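For this reason, it’s worth verifying the health of an idle spare periodically instead of trusting it blindly; note that a zpool scrub only exercises disks that hold data, so it will not touch a standby spare. A minimal sketch using smartmontools (the device path /dev/sdd is an example):

    # Overall SMART verdict, then the raw attribute table of the spare.
    smartctl -H /dev/sdd
    smartctl -A /dev/sdd

    # Run a short self-test and read the result from the self-test log.
    smartctl -t short /dev/sdd
    smartctl -l selftest /dev/sdd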

Another problematic aspect of hot spare disks is that they are engaged automatically as soon as a disk failure is detected, while the corrupted disk might still be connected to the system. The failing disk can keep trying to reconnect and start working again while the hot spare disk is taking over its role, thus adding even more stress to the system. This is yet another factor that can affect the system’s overall performance and could potentially lead to data loss.
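For context, this is how spares are typically wired up in ZFS and how the same replacement can be performed by hand instead of being left to the automation (on OpenZFS, automatic activation is handled by the ZFS event daemon, zed; on Solaris, by FMA). Pool and device names below are examples:

    # Attach a disk to the pool as a hot spare.
    zpool add tank spare /dev/sdd

    # An idle spare can be taken back out at any time.
    zpool remove tank /dev/sdd

    # The manual alternative: replace the failed disk yourself, at a moment
    # you choose (e.g., after failing over and backing up).
    zpool replace tank /dev/sdb /dev/sdd

    # Once the failed slot holds a healthy disk again, return the spare to standby.
    zpool detach tank /dev/sdd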

Hot Spare Disks Create a Single Point of Failure

If you’re looking to create a system with no single point of failure, a hot spare disk will not provide much confidence: the process of automatically replacing a failed disk has been known to occasionally fail, either partially or fully, and to result in data loss.

Having spent decades providing customers with data storage solutions, we’ve seen many cases where a hot spare disk was the cause of an entire server failure, and even of data loss. Automation here is risky because it can start a domino effect, especially when the data storage infrastructure has been running for years and the hardware is worn out.

Our Solution

These problematic aspects of hot spare disks are why we advise against relying on hot spare disks in complex data storage architectures, and in favor of other business continuity solutions such as High Availability (HA) clusters, backups, and On- & Off-site Data Protection (ideally all of the aforementioned).

Using the ZFS file system, it’s much easier to monitor the system and create a proper backup; that gives you the ability to retrieve data from a damaged disk and write it onto a new one. In addition, when using an HA cluster, you have the option of manually switching production from the affected node to the second one so that you can perform maintenance on the affected node.
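As a rough sketch of the backup part: ZFS snapshots can be streamed to another pool or host before you touch the degraded array (the dataset names and the host backuphost are examples):

    # Freeze a consistent, point-in-time view of the production data.
    zfs snapshot tank/data@pre-rebuild

    # Stream it to a second pool on another machine over ssh.
    zfs send tank/data@pre-rebuild | ssh backuphost zfs receive backuppool/data

    # Confirm the snapshot arrived on the backup side.
    ssh backuphost zfs list -t snapshot -r backuppool/data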

We’d advise following this procedure once the array reports a degraded state as a result of a disk failure (a command-level ZFS sketch follows the list):

  1. Move resources to the second node in your HA cluster if possible.
  2. Run a full data backup.
  3. Verify the backed-up data for consistency, and verify whether the data restore mechanism works.
  4. Identify the problem source, i.e., find the faulty hard disk. If possible, shut down the server and make sure the serial number of the hard disk matches the one reported by the event viewer or system logs.
  5. Replace the hard disk identified as bad with a new, unused one. If the replacement hard disk has already been used within another RAID array, make sure that any residual RAID metadata on it has been deleted via the original RAID controller.
  6. Start a rebuild of the system.
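On a ZFS system, steps 4 to 6 map roughly onto the commands below (a sketch only; pool and device names are examples, and on a hardware RAID the metadata cleanup in step 5 would be done via the original controller instead):

    # Step 4: identify the failed disk and match its serial number against
    # what the system logs report.
    zpool status -v tank
    smartctl -i /dev/sdb | grep -i serial

    # Step 5: if the replacement disk was ever part of another pool, wipe
    # its leftover ZFS labels (destructive: double-check the device path!).
    zpool labelclear -f /dev/sdd

    # Step 6: start the rebuild and watch the resilver run.
    zpool replace tank /dev/sdb /dev/sdd
    zpool status tank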

So, with this approach, the rebuild consists of six steps! With a hot spare disk, your RAID will skip the first four significant steps and automatically run steps 5 and 6. The rebuild will thus be completed before you can take these other critical steps; steps that could be the difference between your data being safe and being lost.

In the end, it’s still completely up to you how to build a proper system. However, we’d suggest not relying on hot spare disks in a ZFS RAID array due to the potential data loss they can cause.

 

49 Comments

  • Hard Drives External /

    14, 09 2010 09:31:35

    I found your resource via Google on Tuesday while searching for hard drives, and your post “Why a HOT-SPARE Hard Disk is a bad idea? | Open-E Blog” looked very interesting to me. I just wanted to write to say that you have a great site and a wonderful resource for all to share.

    • Janusz /

      17, 09 2010 05:39:06

      We try our best. Thank you!

  • José Rocha /

    16, 10 2010 10:27:34

    Had never really thought about the possibility of failure during a rebuild. Excellent approach. Thanks for the tip.

  • Joe McDoaks /

    10, 01 2011 03:52:21

    Had never really thought of this approach, but if you get a failure during the rebuild, you would also get a failure during the backup, as this stresses the disks just as much. Trying to understand, but I’m not sure how you are further ahead?

    • Data Daddy /

      15, 02 2011 09:13:29

      Well, rebuilding the RAID goes over every byte on the disk, and that requires a lot of reading and writing on that same disk, while a backup only requires reads, since the writes can go to another disk or medium.

      That makes a backup less stressful than a rebuild, in the same way that a quick format of an HDD is less stressful than a full format: the full format keeps writing, while the quick one only erases the file allocation table (the disk’s table of contents).

      • linc /

        16, 07 2013 01:04:23

        Not all RAID arrays rebuild byte by byte; most rebuild at the block level. Additionally, hot spares deliver a fully redundant disk. If the physical disk fails (note: the most common disk failure is physical, not data corruption), then a hot spare should rebuild successfully. It’s all about playing the law of averages, and not having a hot spare is just downright stupid in light of this.

  • Matthias /

    13, 02 2011 07:07:49

    Well… I often think you’re posting really good things, but here I strongly disagree!
    If somebody does not care about full backups, checking restore mechanisms, and so on, it makes no difference whether there is a spare or not.
    And by Murphy’s law, there is a risk that another disk will fail sooner when you have no spare. That’s my long experience; I have been happy to have spares in place many times in my life.

    Sure, RAID is no backup… and if somebody mixes that up… well… noobs are always there 😉

