How to Maximize Backup and Recovery Operations

To contend with shrinking backup and recovery windows while also reducing backup infrastructure costs, IT professionals must understand how to maximize backup and recovery operations. Doing so requires increasing the efficiency of the backup infrastructure. While most organizations focus on the backup software, they must also consider the backup storage target. If the backup storage hardware can’t keep pace with backup software innovations, much of the promise of those advancements will go unrealized.

The Bottleneck to Maximizing Backup Operations

Most organizations protect their data with backup software that backs up their primary storage arrays, which are usually all-flash arrays (AFAs), and sends the backup files to backup targets, which are usually hard disk drive (HDD) based arrays. HDD arrays allow the organization to store very large quantities of data more cost-effectively than all-flash arrays. However, the performance delta between the two technologies can limit an organization’s ability to fully benefit from its software investment.

Backup Software Innovation vs Backup Hardware Stagnation

Backup software has seen incredible innovation over the last few years. Many backup software solutions now provide block-level incremental backups and Instant Recovery features, which significantly speed up both the backup and recovery processes. Many also provide deduplication and compression capabilities.
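To make the block-level incremental idea concrete, here is a minimal sketch (not any vendor’s actual implementation) of how such software can hash fixed-size blocks and send only the blocks that changed since the last backup; the block size and function names are illustrative assumptions:

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size for this sketch


def changed_blocks(volume: bytes, known_hashes: dict) -> dict:
    """Return only the blocks whose content hash differs from the last backup.

    known_hashes maps block offset -> sha256 hex digest from the prior run;
    it is updated in place so the next call sees the new state.
    """
    changed = {}
    for offset in range(0, len(volume), BLOCK_SIZE):
        block = volume[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if known_hashes.get(offset) != digest:
            changed[offset] = block        # block is new or modified
            known_hashes[offset] = digest
    return changed


# The first pass copies every block; the incremental pass copies only what changed.
volume = bytearray(b"A" * BLOCK_SIZE * 4)
hashes = {}
full = changed_blocks(bytes(volume), hashes)          # all 4 blocks
volume[BLOCK_SIZE:BLOCK_SIZE * 2] = b"B" * BLOCK_SIZE  # modify one block
incremental = changed_blocks(bytes(volume), hashes)    # just that 1 block
```

The same hash comparison is also the seed of deduplication: if two blocks produce the same digest, only one copy needs to land on the backup target.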

Maximizing Backup Operations for AFAs

When protecting all-flash arrays, modern backup software sends backup data at far greater speeds than legacy storage systems can ingest, and the backup process quickly bottlenecks. Hard drives, however, are not the primary culprit. Backup storage targets have shown little advancement since purpose-built backup appliances (PBBAs) first hit the market 15 to 20 years ago. These legacy systems still rely on storage system software that is more than 20 years old. These “solutions” essentially layer deduplication code on top of that legacy stack and claim to have created a new technology.

The backup storage target bottleneck slows down the backup process and degrades the performance of production applications while backups run. As a result, organizations are forced to back up less frequently, making it more difficult to meet ever-tightening recovery point objectives (RPOs).

These legacy backup storage targets also start showing performance degradation when the drives reach approximately 55% of capacity. Using higher-density hard drives in these systems is also a concern because of very long RAID rebuild times in the event of a drive failure. Depending on the size of the drive, these rebuilds can take days.
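A back-of-the-envelope calculation shows why rebuild times balloon with drive density. The throughput figures below are illustrative assumptions, not measurements of any specific product: a traditional one-drive-at-a-time rebuild often sustains on the order of 50 MB/s under production load, while a rebuild parallelized across many drives can sustain far more.

```python
def rebuild_hours(drive_tb: float, rebuild_mb_per_s: float) -> float:
    """Hours needed to rewrite a full drive at a sustained rebuild rate."""
    total_bytes = drive_tb * 1e12               # decimal TB, as drives are sold
    seconds = total_bytes / (rebuild_mb_per_s * 1e6)
    return seconds / 3600


# 18 TB drive, serial rebuild at ~50 MB/s: roughly 100 hours, i.e. days.
legacy = rebuild_hours(18, 50)

# The same drive rebuilt in parallel at ~2,500 MB/s aggregate: about 2 hours.
parallel = rebuild_hours(18, 2500)
```

The arithmetic makes the trade-off plain: at legacy rebuild rates, every jump in drive density adds days of exposure to a second failure, which is why high-density drives demand a fundamentally faster rebuild approach.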

Maximizing Recovery Operations for AFAs

On the recovery side, modern backup software with an Instant Recovery feature will instantiate a VM and its application data directly on the backup storage target. As discussed in the article “Overcoming the Recovery-In-Place Challenge,” Instant Recovery changes everything we expect from backup storage.

The problem Instant Recovery creates for most legacy backup storage targets is that they can’t deliver anything close to the performance of the AFA that hosted the production VMs, resulting in performance so poor that the application or service is unusable. Most organizations have had an all-flash array in production for years, so most applications and environments like VMware are scaled with a dependency on flash performance.

As we discussed in our blog “Backup Storage Needs High-Availability,” instantiating a VM on the backup storage target means that, for that moment, it is acting as a production storage system and needs AFA performance and enterprise-class data availability, which most legacy systems sorely lack.

Maximizing Backup Operations Requires Modern Backup Storage Targets

Successfully addressing these problems and shortfalls requires a next-generation backup storage target that provides the following capabilities:

  • Rapid ingest of both large sequential and smaller, semi-sequential backup streams
  • Production-worthy performance and availability while hosting instantly recovered servers
  • Full enablement of 18TB+ hard disk drives with sub-two-hour RAID rebuilds
  • Sufficiently large flash tier to handle backups and Instant Recovery
  • Support for all protocols: iSCSI, NFS, NVMe-oF, SMB, S3 Object and Fibre Channel

Where legacy backup storage targets fail in these areas, StorONE S1:Backup excels. Instead of relying on legacy code and open-source libraries, StorONE spent its first eight years rewriting the old storage system algorithms from the ground up and flattening the old storage I/O stack to produce a modern, highly efficient storage engine.

Maximizing Backup Operations with StorONE

This engine powers S1:Backup and enables you to benefit more fully from the capabilities of modern backup software, leading to faster backups, more usable instant recoveries, and dramatically lower prices.

For a closer look at why backup targets need to change, watch our on-demand virtual whiteboard session, “How to Back Up All-Flash Arrays,” where we discuss how legacy backup hardware keeps you from meeting the expectations of protecting all-flash arrays and recovering from their failures.

Want More Content from StorONE?

Every day, we share unique content on our LinkedIn page including storage tips, industry updates, and new product announcements.

Joseph Ortiz

Joseph is a Technical Writer with StorONE, Inc. and an IT veteran with over 40 years of experience in the high-tech industry. He has held senior technical positions with several major OEMs, VARs, and system integrators, providing technical pre- and post-sales support for a wide variety of data protection and storage solutions. He designed, implemented, and supported backup, recovery, and encryption solutions, and provided disaster recovery planning, testing, and data-loss risk assessments in distributed UNIX and Windows computing environments for various OEMs, VARs, and system integrators. He also recently served as an analyst and provided editing services and technical content for Storage Switzerland up to the time of its acquisition by StorONE.

