The Four Myths of Backup Storage

While vendors often present them as “requirements,” four myths about backup storage targets need to be exposed so that IT professionals understand how to design an infrastructure that meets the needs of modern backup software.

The Four Myths of Backup Storage Are:

  1. All Backup Storage Targets Must Have Deduplication
  2. All-Flash Backup Storage Is the Answer to RAID Rebuilds
  3. System Availability Isn’t Important
  4. Backup Storage Doesn’t Play a Role in Recovery Performance

Exposing Backup Storage Myth #1 – The Deduplication Question

Deduplication is the feature that launched the disk backup appliance market. Vendors from this area continue to cling to this feature even though most major backup software solutions have replaced it with block-level incremental backups and deduplication within the software. There is no value in deduplicating data twice. Using the software to perform the heavy lifting of deduplication means the backup storage target can focus its resources in other areas, like using 90% of available capacity without performance impact.
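The point about deduplicating twice can be shown with a toy model (illustrative only, not any vendor's actual dedupe engine): once a stream has been reduced to unique chunks by content hash, running the same reduction again finds nothing new.

```python
import hashlib

def dedupe(chunks):
    """Keep one copy of each unique chunk, keyed by its content hash."""
    store = {}
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
    return list(store.values())

# A backup stream in which the software already sends only changed
# blocks may still contain some repeats across files or VMs.
stream = [b"block-A", b"block-B", b"block-A", b"block-C", b"block-B"]

first_pass = dedupe(stream)        # 3 unique chunks remain
second_pass = dedupe(first_pass)   # a second pass saves nothing

print(len(stream), len(first_pass), len(second_pass))  # 5 3 3
```

Deduplication is idempotent: if the backup software has already done it, the appliance burning CPU and memory to do it again yields no additional capacity savings.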

Exposing Backup Storage Myth #2 – The Pain of RAID Rebuilds

Backup storage targets are made up entirely, or at least mostly, of hard disk drives (HDDs). As the density of these drives continues to increase, so does the time to recover from a drive failure via a RAID rebuild. Most backup storage targets take multiple days to recover from the failure of an 8TB drive, and many IT teams are hesitant to move to 16TB or 18TB drives for fear of facing week-long recoveries.

While this rebuild process is going on, backup and recovery jobs are slow and, in some cases, can’t complete. Features like instant recovery become unusable.
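The scale of the problem follows from simple arithmetic. A conventional rebuild must rewrite every byte of the failed drive onto a single spare, so the lower bound on rebuild time is capacity divided by the sustained rebuild rate. The rate below is an assumption for illustration; real systems often throttle rebuilds further so backup jobs can keep running, making actual times worse.

```python
def rebuild_hours(capacity_tb, rebuild_mb_per_s):
    """Lower-bound rebuild time: every byte of the failed drive is rewritten."""
    bytes_total = capacity_tb * 1e12
    seconds = bytes_total / (rebuild_mb_per_s * 1e6)
    return seconds / 3600

# Assumed sustained rate of 50 MB/s to the single replacement drive.
for tb in (8, 16, 18):
    print(f"{tb}TB drive: ~{rebuild_hours(tb, 50):.1f} hours")
# 8TB ≈ 44.4 h; 18TB ≈ 100 h — multiple days, before any throttling
```

The key observation is that the bottleneck is the single replacement drive; raw capacity grows far faster than per-drive write throughput.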


Some vendors propose a QLC-based all-flash array (AFA) as the backup target to get around the rebuild issue. While an AFA will, for now, rebuild faster than an HDD-based system, using one for backup makes the backup infrastructure very expensive. And as flash drive capacities continue to increase, the time required for an AFA to complete a rebuild will also increase.

The answer is to fix the software. RAID is a legacy protection strategy. Modern backup storage targets need a new mechanism to protect against drive failure, one that protects the data rather than the drives and delivers sub-two-hour rebuilds even when using high-density 18TB HDDs.
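To see why a data-level scheme can hit sub-two-hour rebuilds where classic RAID cannot, extend the earlier arithmetic. In declustered or erasure-coded layouts (a general technique, sketched here with assumed numbers rather than any specific vendor's implementation), a rebuild reads from and writes to many surviving drives at once, so aggregate rebuild rate scales with drive count instead of being capped by one spare.

```python
def parallel_rebuild_hours(data_tb, drives, per_drive_mb_per_s):
    """Data-level rebuild: only stored data is re-protected, and the work
    is spread across many drives, so aggregate rate scales with the pool."""
    aggregate_rate = drives * per_drive_mb_per_s * 1e6  # bytes/second
    return data_tb * 1e12 / aggregate_rate / 3600

# 18TB of affected data, rebuilt across 50 drives at 100 MB/s each.
print(f"~{parallel_rebuild_hours(18, 50, 100):.1f} hours")  # ~1.0 hours
```

Two effects compound: protecting data instead of drives means an half-empty 18TB drive needs only its written data re-protected, and the parallel layout removes the single-spare bottleneck.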

Exposing Backup Storage Myth #3 – How critical is Availability?

The conventional wisdom is that because backup storage holds a second copy of data, it doesn’t need to be as available as primary storage. In reality, as with RAID rebuilds, loss of the backup storage target means that all backup jobs stop, and there is nothing to answer data restoration requests.

More problematic is the popular feature, instant recovery. When IT professionals use this feature to get an application back online, it instantiates the application’s data volume on the backup appliance. If the production environment uses AFAs or even hybrid systems, users will not be happy going back to HDD-only performance. The backup storage target must have a small flash tier for this use case.

The moment IT triggers instant recovery, the backup storage target becomes production storage. The application is now counting on the backup storage target to remain available and provide features like snapshots and replication.

A modern backup storage target needs to provide high availability with redundant nodes, high-performance media failure protection, and guarantees to persistent media, not RAM caches.

Exposing Backup Storage Myth #4 – Recovery Performance Matters

Recovery is when the backup infrastructure, both software and hardware, proves the investment was worthwhile. Recovery performance has two aspects that IT needs to examine. The first is how long the recovery will take: the time to move data from backup storage to production storage. Instant recovery only eliminates the network transfer; the backup software still needs to position the data in the recovery area.
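Restore time is bounded by how fast the backup target can read, so the read-throughput side of the equation translates directly into hours of downtime. The throughput figures below are assumptions for illustration:

```python
def restore_hours(data_tb, read_mb_per_s):
    """Time to stream a restore, bounded by backup-target read throughput."""
    return data_tb * 1e12 / (read_mb_per_s * 1e6) / 3600

# Restoring 20TB: a target that reads at 500 MB/s vs one at 2 GB/s.
slow = restore_hours(20, 500)
fast = restore_hours(20, 2000)
print(f"{slow:.1f}h vs {fast:.1f}h")  # ~11.1h vs ~2.8h
```

A 4x difference in read throughput is the difference between an outage that lasts most of a business day and one that fits inside a maintenance window.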

Legacy backup storage targets are limited in functionality, so they tend to focus on ingest performance: how fast they can receive backup jobs. They are not optimized for read performance, which is critical for recovery. These legacy products have no control over the low-level software that performs the reads and writes, so they can’t optimize it; they are at the mercy of the “community” to perform this work.

A modern backup storage target optimizes for both high ingest performance and optimal read performance. It does this by fixing the underlying read performance issues common in many legacy storage products. It also leverages the flash tier, waiting until backups are complete before moving data to the HDD tier. Using flash as the ingest point enables a modern backup storage target to write data to the HDD tier sequentially, which is optimal for rapid recovery performance.
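The flash-staging idea can be sketched as a toy model (an illustration of the general technique, not StorONE's actual data path): chunks from concurrent jobs arrive interleaved on the fast tier, and each job is flushed to the HDD tier as one contiguous run only once it completes, so a later restore of that job is a long sequential read instead of random seeks.

```python
from collections import defaultdict

class FlashStagedTarget:
    """Toy ingest model: land chunks on a fast tier, then flush each
    completed backup job to the HDD tier as one sequential run."""
    def __init__(self):
        self.flash = defaultdict(list)  # job_id -> chunks in arrival order
        self.hdd = []                   # append-only sequential log

    def ingest(self, job_id, chunk):
        self.flash[job_id].append(chunk)

    def complete(self, job_id):
        # The job's chunks are written contiguously, so restoring this
        # job later is one sequential read rather than scattered seeks.
        run = self.flash.pop(job_id)
        start = len(self.hdd)
        self.hdd.extend(run)
        return (start, len(run))  # extent of the job on the HDD tier

t = FlashStagedTarget()
for job, chunk in [("vm1", b"a"), ("vm2", b"x"), ("vm1", b"b")]:
    t.ingest(job, chunk)
extent = t.complete("vm1")
print(extent)  # (0, 2): vm1's chunks land contiguously despite interleaved arrival
```

Even though vm1 and vm2 interleaved on ingest, vm1 occupies one contiguous extent on the HDD tier, which is what makes the subsequent restore sequential.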

Time – The Gift of Modern Backup Storage

Administrators use recovery, especially instant recovery, in response to something going wrong, whether at the application level or because of a server or storage failure. Legacy backup storage targets don’t give IT the time to diagnose what went wrong and prevent the event from recurring, which costs them even more time. A modern backup storage target that busts the above myths with advanced performance and availability features gives IT the gift of time: time to diagnose what is wrong in the environment so it doesn’t happen again.

Learn More

S1:Backup can save you time and money while improving your resiliency. A modern backup storage target can also play a vital role in consolidating your backup storage infrastructure. To learn more about consolidating backup storage, join us for Best Practices for Consolidating Backups, live on October 14th at 11:30 am ET / 8:30 am PT.

Want More Content from StorONE?

Every day, we share unique content on our LinkedIn page including storage tips, industry updates, and new product announcements.

George Crump

George has over 25 years of experience in the storage industry, holding executive sales and engineering positions. Before joining StorONE, he was the founder and lead analyst at Storage Switzerland.

