Simplifying Backup Storage Scaling

Backup storage's never-ending growth makes simplifying its scaling a critical requirement for IT. Today, backup storage is typically 5X the size of production storage. As organizations need to recover faster, retain data longer, and repurpose backup data, that multiple will grow to 10X production capacity. IT has to find a better way to address this growth to keep costs down.

The Complexity of Backup Storage Capacity Scaling

Legacy vendors point to scale-out backup storage as the solution for scaling backup storage. These architectures add capacity by adding nodes to an existing cluster. When the backup infrastructure needs more capacity, you “just add a node.” While adding a node to the cluster may sound simple, this approach has severe logistical challenges.

Most nodes require 2U or 4U of rack space. Finding room for these additional nodes becomes a problem as the backup environment scales. When IT racks the new node, it must also supply network connectivity, typically four connections. The node also needs power and additional cooling. Then, depending on the cluster design, IT may still have to reroute backups to that node.
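The per-node overhead described above compounds quickly as a cluster grows. A minimal sketch of that arithmetic, assuming 2U of rack space and four network connections per node (the figures cited above; the helper function and the 42U rack size are illustrative assumptions, not vendor specifications):

```python
# Illustrative sketch of the logistical overhead of a scale-out cluster.
# Assumes 2U of rack space and four network connections per node, as
# described above; a standard 42U rack is assumed for context.

def cluster_overhead(nodes, rack_units_per_node=2, ports_per_node=4):
    """Return total rack units and network ports consumed by the cluster."""
    return nodes * rack_units_per_node, nodes * ports_per_node

units, ports = cluster_overhead(32)
print(f"32 nodes consume {units}U of rack space "
      f"(about {units / 42:.1f} full 42U racks) and {ports} network ports")
```

Even with the smaller 2U nodes, a 32-node cluster consumes more than a full rack and well over a hundred network ports before storing a single backup.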

Scale-out backup storage, in most cases, requires a fixed capacity and a fixed device type per node. A typical configuration is limited to 8TB hard disk drives within each node. The 5X backup data multiple means that even small customers often need 1PB or more of backup capacity, and StorONE’s S1:Backup customers commonly ask for 3PB or more. Scaling to 3PB may take 32 or more nodes in a typical scale-out configuration. As a result, most vendors limit the total capacity of the cluster to less than 5PB.

An alternative is a scale-up solution, but these backup appliances use the same inefficient legacy storage software as most scale-out solutions. They typically scale to only 500TB before suffering a significant drop in performance, so meeting a 3PB requirement takes six independently managed storage systems.
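The node and system counts in the two paragraphs above follow from simple division. A quick sketch, assuming roughly 94TB of usable capacity per scale-out node (3PB spread across 32 nodes, consistent with the figures above) and the 500TB scale-up ceiling just cited:

```python
import math

PB = 1000  # capacities expressed in TB; 1PB = 1,000TB

target_tb = 3 * PB        # the 3PB backup requirement from the text
node_capacity_tb = 94     # assumed usable TB per scale-out node (~3PB / 32)
scale_up_limit_tb = 500   # scale-up performance ceiling cited above

scale_out_nodes = math.ceil(target_tb / node_capacity_tb)
scale_up_systems = math.ceil(target_tb / scale_up_limit_tb)

print(f"Scale-out: {scale_out_nodes} nodes")                           # 32 nodes
print(f"Scale-up:  {scale_up_systems} independently managed systems")  # 6 systems
```

Either way, the administrator ends up managing dozens of nodes or a half-dozen separate systems to reach the same 3PB.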

Simplifying Backup Storage Capacity Scaling

StorONE builds S1:Backup on our enterprise storage platform, which uses the efficient StorONE IO Engine to deliver massive scale and performance from a minimal amount of hardware. StorONE can scale to 15PB of capacity, without performance impact, in a single two-node cluster. With S1:Backup, IT can continue scaling the initial nodes to 15PB. Instead of inflexible upgrades, IT can add higher-density drives as they become available, mixing them into the same system without creating new volumes. StorONE’s S1:Backup fully supports 20TB drives, including RAID rebuilds in less than three hours.

The Complexity of Backup Storage Performance Scaling

The backup data set scales as production, application, and user data grow. More data puts more pressure on backup storage targets to ingest it. Sensitivity to data loss and the ever-increasing ransomware threat motivate customers to back up multiple times per day to lower recovery point objectives (RPO). Scale-out backup storage vendors claim to keep RPO consistent as you add production capacity, but as you add nodes to a scale-out solution, you will in most cases have to select a group of backup jobs and redirect them to the new node. Adding nodes increases complexity, as does retargeting backup jobs. And because the scale-out architecture runs on a legacy storage engine that cannot drive each node to its maximum efficiency potential, it forces you to pay for more and more hardware.

Simplifying Backup Storage Performance Scaling

StorONE’s S1:Backup delivers maximum performance from the existing networking and storage interfaces and has plenty of bandwidth to handle more data transfers. Our Flash-First architecture ensures that the physical storage keeps pace with network transfers. The StorONE IO Engine is so efficient that it has excess CPU, memory, and storage connectivity bandwidth available to receive more data. Still, if more ingest performance is required, the StorONE design means IT only has to add a network card instead of paying for an entire node (server, CPU, memory, power supply, etc.).

The Complexity of Use Case Scaling

Most backup storage systems solve only one problem: storing backup data. The challenge is that there are other storage needs both within the backup infrastructure and beyond it. Within the backup infrastructure, IT needs a solution to ingest backups rapidly, store backup data long term, and securely store backup metadata, and there is an emerging need for a sterile, production-class recovery area.


Beyond backup, use cases like archive and network-attached storage (NAS) are obvious next steps for a more flexible solution. Again, most backup storage solutions don’t fully address even the backup use cases’ requirements, and they certainly can’t address production use cases like VMware, Hyper-V, KVM, and bare-metal database servers.

Simplifying Use Case Scaling

No backup solution other than S1:Backup meets all the backup use case needs. It eliminates the need to create multiple storage silos within the backup infrastructure, silos that compound storage system sprawl throughout the data center. With S1:Backup, you gain confidence in the solution. As the legacy storage systems supporting other use cases “age out,” you can integrate those use cases into the existing StorONE solution, leading you on a path to complete storage consolidation.

Conclusion

The traditional approaches to backup scaling are scaling out by adding nodes or scaling up with additional independent storage systems. The fundamental mistake in both approaches is not the technique; it is the underlying core storage architecture. These systems count on legacy storage IO stacks that don’t deliver the full potential of the individual hardware components. Cost-effectively scaling backup infrastructure requires a new storage IO engine that extracts the full potential from each hardware resource. The result is a more affordable solution that is easier to manage, support, and upgrade.

Learn More:

Join StorONE on February 24th at 11:30 am ET / 8:30 am PT for our live webinar, “Fixing the Three Inefficiencies of Scale-Out Backup Storage,” to learn how to overcome:

  • Inefficient Use of Data Center Floor Space
  • Inefficient Use of Hard Disk and Flash Innovations
  • Inefficient Upgrades and Future Readiness
Register Now!

Want More Content from StorONE?

Every day, we share unique content on our LinkedIn page, including storage tips, industry updates, and new product announcements.


George Crump

George has over 25 years of experience in the storage industry, holding executive sales and engineering positions. Before joining StorONE, he was the founder and lead analyst at Storage Switzerland.

