
Understanding Instant Recovery Performance

While many vendors offer an instant recovery type of feature, understanding instant recovery performance is critical to elevating backup into business continuity. The truth is that none of the backup-server software vendors offering a recovery-in-place feature provides truly instant recovery. The recovery is fast, but it is seldom instant, and the backup-storage software makes it worse.


The Recovered Performance Problem

IT needs to be aware of the performance of a recovered virtual machine (VM) or application after the instant recovery occurs. The typical target for the instant recovery is either the backup server’s storage or the backup-storage target, neither of which is optimized to run production applications. In fact, the performance of the instantly recovered VM is often so poor that it is almost unusable, and IT is under immense pressure to move the data back to production storage as soon as possible.

If, during normal production, the VM or application being recovered runs on flash or flash-assisted storage, then in most cases users will be very disappointed with the result. IT needs to be aware of the performance of the VM or application in this state and set expectations accordingly. We call this the Recovered Performance Expectation (RPE).

Improving Recovered Performance

Improving the performance of a VM or application while it is in its recovered state is essential to enabling backup to consolidate the business continuity process, which will not only lower costs but also democratize high availability across all of your mission-critical and business-critical workloads. However, improving recovered performance involves more than just using a flash tier or an all-flash backup target. It requires the backup-optimized use of flash, so that your backup storage remains in the backup price band while still delivering the performance you need.

First, as we discussed in our last blog, “Elevate Backup into Business Continuity,” the backup storage target needs a flash tier large enough to handle frequent block-level incremental backup jobs and still have room for instantly recovered VMs or applications. A modern backup storage target should be able to use 15.3TB high-density flash drives without compromising performance or availability.

Second, you need to extract the maximum amount of performance from the minimum number of drives. As we discuss in our white paper, Reducing Storage Costs with Maximum Drive Performance, legacy storage software, especially backup-storage software, can extract only about 10% of the raw performance of today’s flash drives. To get acceptable, production-class performance, most backup storage targets would need 24 to 36 flash drives, which would price them out of the backup storage market. A modern backup storage target needs to deliver hundreds of thousands of IOPS from eight to twelve drives.
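To make the drive-count math concrete, here is a minimal arithmetic sketch. The raw-IOPS rating, performance target, and efficiency figures are illustrative assumptions for the sake of the example, not measurements of any specific product:

```python
import math

def drives_needed(target_iops: int, raw_iops_per_drive: int, efficiency: float) -> int:
    """Drives required to reach target_iops when the storage software can
    extract only `efficiency` (0.0-1.0) of each drive's raw performance."""
    usable_per_drive = raw_iops_per_drive * efficiency
    return math.ceil(target_iops / usable_per_drive)

# Assume a 300,000 IOPS production-class target and flash drives rated at
# 100,000 raw IOPS each (both figures hypothetical).
legacy = drives_needed(300_000, 100_000, 0.10)  # ~10% extraction
modern = drives_needed(300_000, 100_000, 0.35)  # more efficient software
print(legacy, modern)  # 30 9
```

Under these assumed numbers, 10% extraction puts the drive count in the 24-to-36 range described above, while more efficient software reaches the same target from the eight-to-twelve-drive range.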

Third, you need to make sure that in the recovered state the VM or application not only performs to expectation but also that it is not vulnerable to a drive or system failure. After all, the performance of a storage system that is down is zero. During an instant recovery, the backup storage is production storage! A modern backup storage target should deliver not only production-class performance but also production-class availability and enterprise features.

All-Flash Backup? Not Yet

A few vendors are trying to address the problem by adding flash to their existing storage solutions. The problem is that these solutions are not optimized for flash and can’t deliver effective per-drive performance. Typically, the flash area is either too small, effectively a cache, or too expensive, moving the vendor’s product out of the backup storage category. Other vendors are offering an all-flash, often QLC-based, solution, claiming their QLC-based system has somehow reached price parity with hard disk drives. These vendors ignore the fact that hard disk vendors continue to innovate, and vendors that pay attention to those innovations can expect significant gains in capacity and cost per terabyte.

Hard disk still has a significant role in backup storage. The reality is that hard disk drives provide a 10X price advantage over flash drives today. It is safe to assume that this advantage will continue for at least the rest of this decade as hard disk drive vendors continue to innovate. IT professionals can expect 20TB hard disk drives early next year, 50TB hard disk drives within four years and 100TB drives before the end of the decade.

Production-Class Performance at Backup-Storage Prices

A backup-storage target that can intelligently blend flash storage and hard disk storage can easily maintain a price point within the backup-storage price band. If the solution does more than “just throw flash at the problem” and instead optimizes for maximum drive performance and capacity utilization, it can do more than just solve backup problems; it can enable IT to tap into the full potential of modern backup software. The result elevates backup storage to standby storage, democratizing business continuity and high-availability services across all workloads in the data center.
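A simple capacity-weighted model shows why the blend stays in the backup price band. The $/TB figures below are placeholders chosen only to reflect the roughly 10X flash-versus-disk price gap discussed above, not real market prices:

```python
def blended_cost_per_tb(flash_fraction: float, flash_cost: float, hdd_cost: float) -> float:
    """Capacity-weighted $/TB for a system whose usable capacity is
    flash_fraction flash and (1 - flash_fraction) hard disk."""
    return flash_fraction * flash_cost + (1 - flash_fraction) * hdd_cost

FLASH = 200.0  # hypothetical $/TB for flash capacity
HDD = 20.0     # hypothetical $/TB for hard disk (~10X cheaper)

all_flash = blended_cost_per_tb(1.0, FLASH, HDD)   # 200.0 $/TB
hybrid = blended_cost_per_tb(0.25, FLASH, HDD)     # 25% flash tier -> 65.0 $/TB
print(all_flash, hybrid)  # 200.0 65.0
```

Even with a generous 25% flash tier in this sketch, the blended price sits far closer to hard disk pricing than to an all-flash build.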


George Crump

George has over 25 years of experience in the storage industry, holding executive sales and engineering positions. Before joining StorONE, he was the founder and lead analyst at Storage Switzerland.
