To contend with shrinking backup and recovery windows while also reducing backup infrastructure costs, IT professionals must understand how to maximize backup and recovery operations. Doing so requires increasing the efficiency of the backup infrastructure. While most organizations focus on the backup software, they must also consider the backup storage target. If the backup storage hardware can’t keep pace with backup software innovations, many of the promises of those advancements will go unrealized.
The Bottleneck to Maximizing Backup Operations
Most organizations protect their data with backup software that backs up the data on their primary storage arrays, which are usually all-flash arrays (AFAs), and sends the backup files to backup targets, which are usually hard disk drive (HDD)-based arrays. HDD arrays allow the organization to store very large quantities of data more cost-effectively than all-flash arrays. However, the performance delta between the two technologies can limit an organization’s ability to fully benefit from its software investment.
Backup Software Innovation vs Backup Hardware Stagnation
Backup software has seen incredible innovation over the last few years. Many backup software solutions now provide block-level incremental backups and Instant Recovery features, which significantly speed up the backup and recovery processes. Many also provide deduplication and compression capabilities.
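The core idea behind block-level incremental backups is simple: hash each fixed-size block of a volume, compare against the hashes from the previous backup, and send only the blocks that changed. The sketch below is a minimal illustration of that concept, not any vendor's implementation; the 4 KB block size and SHA-256 fingerprinting are assumptions for the example.

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical block size; real products vary


def block_hashes(image: bytes) -> list[str]:
    """Split a volume image into fixed-size blocks and fingerprint each one."""
    return [
        hashlib.sha256(image[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(image), BLOCK_SIZE)
    ]


def changed_blocks(previous: list[str], current: list[str]) -> list[int]:
    """Return indices of blocks whose fingerprints differ since the last backup."""
    return [i for i, (old, new) in enumerate(zip(previous, current)) if old != new]


# Toy example: a four-block volume where only block 2 changed since the last backup.
old_image = b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE + b"C" * BLOCK_SIZE + b"D" * BLOCK_SIZE
new_image = b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE + b"X" * BLOCK_SIZE + b"D" * BLOCK_SIZE

delta = changed_blocks(block_hashes(old_image), block_hashes(new_image))
print(delta)  # only block index 2 needs to be sent to the backup target
```

Because only the changed blocks cross the wire, the incremental backup is a fraction of a full backup, which is why these features shift the bottleneck toward how fast the target can ingest and reassemble the data.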
Maximizing Backup Operations for AFAs
Modern backup software, when protecting all-flash arrays, sends backup data at far greater speeds than legacy storage systems can ingest, so the backup process quickly bottlenecks. Hard drives are not the primary culprit, however. Backup storage targets have shown little advancement since purpose-built backup appliances (PBBAs) first hit the market 15-20 years ago. These legacy storage systems still run on obsolete, 20+ year old storage system software. These “solutions” essentially layer deduplication code on top of that legacy stack and claim to have created a new technology.
The backup storage target bottleneck slows down the backup process and degrades the performance of production applications while backups run. As a result, organizations are forced to back up less frequently, making it more difficult to meet ever-tightening recovery point objectives (RPOs).
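Some back-of-the-envelope arithmetic shows why the target's ingest rate, not the source, sets the backup window. The throughput figures below are illustrative assumptions, not measured numbers for any particular product.

```python
def backup_window_hours(data_tb: float, ingest_gbps: float) -> float:
    """Time to land a full backup, gated by the target's ingest rate in GB/s."""
    data_gb = data_tb * 1000
    return data_gb / ingest_gbps / 3600


# Hypothetical numbers: an AFA might source backup data at ~5 GB/s,
# while a legacy HDD-based target might only ingest ~1 GB/s.
data_tb = 100
print(f"Window at source speed (5 GB/s): {backup_window_hours(data_tb, 5.0):.1f} h")
print(f"Window at target speed (1 GB/s): {backup_window_hours(data_tb, 1.0):.1f} h")
```

Under these assumptions the same 100 TB backup that could finish in under six hours stretches to more than a full day, which is exactly the kind of window blowout that forces less frequent backups.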
These legacy backup storage targets also start showing performance degradation when the drives reach approximately 55% of capacity. There is also a concern with using higher-density hard drives in these systems due to very long RAID rebuild times in the event of a drive failure; these rebuilds can take days depending on the size of the drive.
Maximizing Recovery Operations for AFAs
On the recovery side, modern backup software with an Instant Recovery feature instantiates the VM and its application data directly on the backup storage target. As discussed in the article “Overcoming the Recovery-In-Place Challenge,” Instant Recovery changes everything we expect from backup storage.
The problem Instant Recovery creates for most legacy backup storage targets is that they can’t deliver anything close to the performance of the AFA that hosted the production VMs, resulting in performance so poor that the application or service is unusable. Most organizations have had an all-flash array in production for years; as a result, most applications and environments like VMware are scaled with a dependency on flash performance.
As we discussed in our blog “Backup Storage Needs High-Availability,” instantiating a VM on the backup storage target means that, for that moment, it is acting as a production storage system and needs AFA performance and enterprise-class data availability, both of which most legacy systems sorely lack.
Maximizing Backup Operations Requires Modern Backup Storage Targets
What is needed to address these problems and shortfalls is a next-generation backup storage target that provides the following capabilities:
- Rapid ingest of both large sequential and smaller, less sequential backup streams
- Production-worthy performance and availability while hosting instantly recovered servers
- Full enablement of 18TB+ hard disk drives with sub-two-hour RAID rebuilds
- Sufficiently large flash tier to handle backups and Instant Recovery
- Support for all protocols: iSCSI, NFS, NVMe-oF, SMB, S3 Object and Fibre Channel
Where legacy backup storage targets fail in these areas, StorONE S1:Backup excels. Instead of relying on legacy code and open-source libraries, StorONE spent its first eight years rewriting the old storage system algorithms from the ground up and flattening the old storage I/O stack to produce a modern, highly efficient storage engine.
Maximizing Backup Operations with StorONE
This engine powers S1:Backup and enables you to benefit more fully from the capabilities of modern backup software, which leads to faster backups, more usable instant recoveries, and dramatically lower costs.
For a closer look at the need for backup targets to change, see our on-demand virtual whiteboard “How to Back Up All-Flash Arrays,” where we discuss how legacy backup hardware is keeping you from meeting the expectations of protecting and recovering all-flash arrays.