Updated on 03/08/2021

Due in part to the different views and opinions regarding the use of hot spare disks in our previous post, we’ve decided to add an update for clarification.

The Problematic Aspects of Using a Hot Spare Disk

In theory, using a hot spare disk in ZFS, Solaris FMA, or any other data storage environment is a good solution, as it automatically responds to malfunctions in a Redundant Array of Independent Disks (RAID) and helps minimize the duration of a degraded array state.

That being said, the primary goal of creating a RAID is to ensure continuous operation and prevent data loss in the event of a disk failure. Therefore, anything that increases the risk of data loss could be considered a bad idea. Let’s take a closer look at some of the problematic aspects of using hot spare disks.

Hot Spare Disks Add Stress to Vulnerable Systems

The primary issue with hot spare disks is that they trigger the rebuilding (resilvering) of a system that is still in active use as a production server. While the resilvering process is taking place, the system will also continue to process the usual production data reads and writes.

Resilvering is a process that consumes significant server resources. When executed while the server is still in use, it must compete with production workloads. Because it is treated as a low-priority task, the resilvering process can take an extended amount of time – sometimes even several weeks. This prolonged operation at maximum throughput can put considerable strain on the disks, especially HDDs, and may lead to serious wear or potential failures.
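
To get a feel for how long a resilver is competing with production I/O, you can simply poll the pool status. Below is a minimal sketch in Python, assuming OpenZFS command-line tools on the host and a pool named tank (the pool name and the polling interval are placeholders, not part of any specific setup).

```python
import subprocess
import time

POOL = "tank"  # placeholder pool name


def resilver_status(pool):
    """Return the 'scan:' line from `zpool status`, or None if it is absent."""
    out = subprocess.run(
        ["zpool", "status", pool], capture_output=True, text=True, check=True
    ).stdout
    for line in out.splitlines():
        if line.strip().startswith("scan:"):
            return line.strip()
    return None


if __name__ == "__main__":
    while True:
        status = resilver_status(POOL)
        print(status or "no scan information reported")
        if not status or "in progress" not in status:
            break  # no resilver/scrub running, or it has finished
        time.sleep(600)  # re-check every 10 minutes; a resilver can run for days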

With decades’ worth of experience, we’ve seen that using hot spare disks in complex enterprise systems increases the probability of additional disks failing, as the resilvering process puts more and more stress on the remaining disks and the system itself.

Problems in Overall Hot Spare Disk Design

Another flaw of a hot spare disk is that it degrades over time. From the moment it is connected to the system, it keeps on working. When the time eventually comes for it to replace a damaged disk, the hot spare itself may no longer be in good enough condition to do so.

Another issue with hot spare disks is that they are activated automatically when a disk failure is detected, even if the failed disk is still connected to the system. The faulty disk might attempt to reconnect and operate again while the hot spare is taking over its role, creating additional stress on the system. This can impact overall performance and, in some cases, increase the risk of data loss.
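
One option is to keep the spare out of the pool configuration entirely, so that bringing in a replacement remains a deliberate, operator-driven step rather than an automatic one. The sketch below assumes a pool named tank with a configured spare sdf (both names are placeholders): it shows the current layout, removes the spare, and leaves the disk available for a later manual `zpool replace`.

```python
import subprocess

POOL = "tank"   # placeholder pool name
SPARE = "sdf"   # placeholder device name of the configured hot spare


def run(*cmd):
    """Run a command and return its stdout, raising on failure."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout


# Show the current layout, including any 'spares' section.
print(run("zpool", "status", POOL))

# Remove the hot spare from the pool so it is no longer pulled in automatically;
# the disk stays in the chassis and can be attached later with `zpool replace`.
run("zpool", "remove", POOL, SPARE)
print(run("zpool", "status", POOL))
```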

Hot Spare Disks Create a Single Point of Failure

If you’re looking to create a system with no single point of failure, a hot spare disk will not provide much confidence, given that the process of automatically replacing a failed disk has been known to occasionally fail, either partially or fully, and result in data loss.

Having spent decades providing customers with data storage solutions, we’ve seen many cases where a hot spare disk was the cause of an entire server failure and even data loss. Automation here is risky because it can start a domino effect, especially when the data storage infrastructure has been running for years and the hardware is worn out.

Our Solution

These problematic aspects of hot spare disks are why we advise not relying on hot spare disks in complex data storage architectures and using other business continuity solutions instead, such as High Availability (HA) clusters, backups, and On- & Off-site Data Protection (ideally all of the above).

With the ZFS file system, it’s much easier to monitor the system and create a proper backup, which gives you the ability to retrieve data from a damaged disk and write it onto a new one. In addition, when using an HA cluster, you can manually switch production from the affected node to the second node so that you can perform maintenance on the affected one.
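
As an illustration of the backup side, a snapshot of the affected dataset can be streamed to a separate backup pool (or piped over SSH to an off-site host) before any rebuild is started. This is a minimal sketch, assuming a dataset tank/data and a backup target backup/data-pre-rebuild (all names are placeholders):

```python
import subprocess
from datetime import datetime, timezone

DATASET = "tank/data"                # placeholder dataset on the degraded pool
BACKUP = "backup/data-pre-rebuild"   # placeholder target on the backup pool

snap = f"{DATASET}@pre-rebuild-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"

# Take a point-in-time snapshot of the dataset on the degraded pool.
subprocess.run(["zfs", "snapshot", snap], check=True)

# Stream the snapshot into the backup pool; `zfs recv -u` keeps the received
# dataset unmounted. The same stream could be piped through ssh for an
# off-site copy.
send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
subprocess.run(["zfs", "recv", "-u", BACKUP], stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```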

We’d advise following this procedure once the array reports a degraded state as a result of a disk failure:

  1. Move resources to the second node in your HA cluster if possible.
  2. Run a full data backup.
  3. Verify the backed-up data for consistency and confirm that the data restore mechanism works.
  4. Identify the problem source, i.e., find the faulty hard disk. If possible, shut down the server and make sure the serial number of the hard disk matches the one reported by the event viewer or system logs (see the sketch after this list).
  5. Replace the hard disk identified as bad with a new, unused one. If the replacement hard disk has already been used in another RAID array, make sure that any residual RAID metadata on it has been deleted via the original RAID controller.
  6. Start a rebuild of the system.
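
Steps 4–6 can be scripted so that the serial-number check is never skipped. Below is a minimal sketch, assuming a pool named tank, a faulted disk /dev/sdc, a fresh replacement /dev/sdf, and an expected serial number taken from the system logs (all of these are hypothetical placeholders):

```python
import subprocess

POOL = "tank"
FAILED_DEV = "/dev/sdc"          # device reported as FAULTED by `zpool status`
NEW_DEV = "/dev/sdf"             # fresh, unused replacement disk
EXPECTED_SERIAL = "WD-XYZ12345"  # serial number taken from the system logs


def run(*cmd):
    """Run a command and return its stdout, raising on failure."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout


# Step 4: verify the serial number before pulling anything out of the chassis.
serial = run("lsblk", "-ndo", "SERIAL", FAILED_DEV).strip()
if serial != EXPECTED_SERIAL:
    raise SystemExit(f"Serial mismatch: {serial!r} != {EXPECTED_SERIAL!r}, aborting")

# Steps 5-6: take the failed disk offline, then replace it with the new disk;
# `zpool replace` starts the resilver onto NEW_DEV.
run("zpool", "offline", POOL, FAILED_DEV)
run("zpool", "replace", POOL, FAILED_DEV, NEW_DEV)
print(run("zpool", "status", POOL))
```

If the pool was built with /dev/disk/by-id paths, use those same identifiers in the zpool commands rather than the short /dev/sdX names.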

So, using this approach, the recovery consists of six steps. With a hot spare disk, your RAID skips the first four significant steps and automatically runs steps 5 and 6. The rebuild will therefore be completed before you can perform those other critical steps – steps that could be the difference between your data being safe and being lost.

Ultimately, it’s still completely up to you how to build a proper system. However, we’d suggest not relying on hot spare disks in a ZFS RAID array due to the potential data loss they can cause.

45 Comments

  • Tristyn Russelo / 03/08/2019 02:07:48

    This is flawed logic.
    The procedure in this article tells you that rebuilding is hard on the remaining drives, true, but then tells you to back up all data, verify the backups, and then rebuild.
    This procedure puts 3x the workload on the drives. The drives will be spun up and the heads moving for 3x longer.
    Backing up is just as hard on the drives as a rebuild. Verifying is another complete read of all the data. Then rebuilding.
    It will also be 3x longer before your system is back up to normal operating condition.

  • Koe / 05/11/2019 02:22:49

    I don’t get this. Sure, a rebuild stresses the controller and the hard drives.
    But if you make a backup from a degraded array, won’t it stress your hard drives just the same?
    So why not rebuild it as fast as possible?
    A backup should exist before an array gets degraded.

    Maybe you can explain a little bit more about the logic behind this.

  • Boyan / 27/12/2019 04:28:42

    Many of the replies don’t get why using a Hot-Spare Drive is a bad idea, because… it isn’t, and the opinion expressed here is, imho, false. Wait, so you stay away from high-stress events – YES, totally correct – yet you recommend TWO of them instead of ONE?

    Run a full backup – the highest-stress event possible – then swap the bad HD and let it rebuild – the second-highest-stress event possible – instead of letting the hot spare rebuild and ending up with ONE and only ONE high-stress event on the books? Disagree?

    • Michael / 21/02/2021 08:31:09

      Boyan, I completely agree with you! Also note, all of you, that modern servers today allow 2-disk fault tolerance, which means that if one disk fails, a disk may be inserted for a rebuild with minimal stress. Even if a 2nd drive dies during the rebuild in the above example, the rebuild will finish and will report that you still need to insert a 2nd drive to finish the rebuild after it reaches 100%.

      Have a good week.

  • Michael / 21/02/2021 08:17:36

    Back in 2010 when this article was posted, I would definitely have agreed that having a hot spare is a bad idea. However, today, disk failure during a rebuild is much less likely, and I have been using a hot spare for the last 2 years, since 2019, and it’s worked miracles for me. I recommend using 1, or even 2, hot spares in your free slots.

