Key Requirements for Affordable DR

Many vendors claim to offer data protection and disaster recovery (DR) features, specifically snapshot and replication capabilities, but do these features meet the key requirements for affordable DR? A DR plan must strike a practical balance between rapid recovery from any disaster and the realities of the IT budget.

Key Requirements an Affordable DR Solution Should Provide:

  • Granularity – Set snapshot and replication policies per volume
  • Tiering – Automated movement of DR data to the least expensive storage
  • Integration – Snapshots and replication included, with no additional software needed
  • Flexibility – Replicate from an all-flash array to a hard disk array at the DR site
  • Cloud – Avoid the cost of a DR site by replicating data to the cloud

Not All Data is Created Equal

The first step in achieving affordable DR is providing an organization with the ability to assign the appropriate level of data protection to specific workloads based on the criticality of the data they contain, ideally per volume. Organizations also need a means to automatically identify and move “cold” data to less expensive storage tiers like hard disk drives (HDDs). Additionally, for DR purposes, they need the ability to replicate data between dissimilar storage systems.
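To make the per-volume idea concrete, here is a minimal sketch of what a per-volume protection policy plus cold-data detection could look like. All names, fields, and thresholds are hypothetical illustrations of the concept, not StorONE's actual interface.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical per-volume protection policy; the class, field names, and
# values are illustrative only, not an actual StorONE API.
@dataclass
class VolumePolicy:
    volume: str
    snapshot_interval_min: int   # minutes between snapshots
    replicate: bool              # replicate this volume to the DR site?
    cold_after_days: int         # age at which data is demoted to the HDD tier

policies = {
    "db-prod":   VolumePolicy("db-prod", 5, True, 30),     # critical: frequent snapshots
    "home-dirs": VolumePolicy("home-dirs", 60, True, 14),  # less critical
    "scratch":   VolumePolicy("scratch", 1440, False, 7),  # no DR copy needed
}

def is_cold(last_access: datetime, policy: VolumePolicy, now: datetime) -> bool:
    """Flag data as 'cold' (eligible for the HDD tier) by last-access age."""
    return now - last_access > timedelta(days=policy.cold_after_days)

now = datetime(2021, 9, 1)
print(is_cold(datetime(2021, 7, 1), policies["db-prod"], now))   # True: idle > 30 days
print(is_cold(datetime(2021, 8, 25), policies["db-prod"], now))  # False: recently used
```

The point of the sketch is the granularity: each volume carries its own snapshot cadence, replication flag, and cold-data threshold, rather than one system-wide setting.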

Data Placement is Key to Effective Protection and Cost Containment

Most modern organizations use all-flash arrays (AFAs) and solid-state drives (SSDs) as their primary storage tier to ensure maximum performance for their various application workloads. However, a flash-only strategy raises the TCO of primary storage while also dramatically raising the cost of the DR infrastructure, since the organization now needs an AFA at the DR site.


Storing aging, “cold” data on flash storage wastes precious premium capacity and forces organizations to prematurely purchase additional flash media for new data. In the overwhelming majority of data centers, more than 80% of the data set falls into this “cold” category. Storing “cold” data on HDDs can reduce primary and DR site storage costs by 10X.
The challenge is that most AFA vendors do not include any data tiering capabilities, not even between their performance and capacity solutions. Customers are left to identify and move this “cold” data to less expensive flash- or HDD-based systems themselves. This is a difficult, time-consuming manual task, and the tools to automate it are expensive and integrate poorly with existing storage systems. The lack of affordable, integrated data tiering and automation leads most customers to assume that all-flash is their only answer. For a closer look at why AFA vendors avoid tiering, see our article, “Are All-Flash-Vendors Afraid of Tiering?”

The lack of tiering obviously drives up the cost of primary storage, but it also dramatically increases the cost of DR storage, since almost all of that data is “cold” until a disaster occurs.

Data Protection Limitations

The main problem organizations face is how to effectively protect their data in a way that allows them to implement a comprehensive DR plan, one that can recover any and all data after a ransomware attack, a hardware or system failure, or a natural disaster. Backup software, which typically runs only once every 24 hours, is no longer sufficient for data protection. To meet new requirements for 100% data availability 24/7, organizations are trying to augment their backups with two important tools that the storage system usually provides: snapshots and replication. The problem is that most legacy snapshot and replication solutions fall woefully short and force IT to continue making compromises.

Legacy Replication Weaknesses

While replication is included “free” with modern storage solutions, many organizations are unable to use it because of the cost, complexity, and inflexibility of legacy storage arrays. The problem is that most storage system vendors require that the replication target have the exact same configuration as the primary storage system. This forces organizations to purchase expensive AFAs as replication and/or DR targets, which sit idle until a disaster strikes.

While third-party replication software lets organizations avoid these limitations by providing the flexibility to copy any type of data to any type of storage media, these external solutions are not integrated with existing storage and are expensive. They are a separate process that must be learned and managed independently. To get a better understanding of the true cost of legacy replication, see our blog, “The Total Cost of Storage Replication”.

Third-party replication vendors also tend to be acquired by larger storage companies, which is essentially an admission that the acquirer’s own replication capabilities were inadequate. A recent example is HPE’s acquisition of Zerto. Zerto customers who are not also HPE customers now must wonder about the future of a critical element of their DR strategy.

From a technology perspective, most storage vendors count on their snapshot technology to fuel their replication solution. As we’ve discussed before, the problem with this approach is twofold: first, their snapshot technology is limited; second, relying on snapshots means that replication only occurs at specific intervals rather than continuously. Disasters don’t let you know they’re coming, and the lack of continuous replication to the secondary site leaves your organization exposed.
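The exposure from interval-based replication is easy to quantify: in the worst case, a disaster strikes just before the next snapshot ships, so everything written since the last replicated snapshot is lost. A simple sketch (the function and its inputs are illustrative, not a vendor formula):

```python
# Worst-case recovery point objective (RPO) for snapshot-driven replication:
# you can lose up to one full snapshot interval, plus however long the
# snapshot delta takes to transfer to the DR site.
def worst_case_rpo_minutes(snapshot_interval_min: float,
                           transfer_min: float) -> float:
    """Minutes of data at risk if disaster hits just before the next snapshot lands."""
    return snapshot_interval_min + transfer_min

print(worst_case_rpo_minutes(60, 10))   # hourly snapshots: up to 70 min of data lost
print(worst_case_rpo_minutes(5, 2))     # 5-min snapshots: still up to 7 min lost
print(worst_case_rpo_minutes(0, 0))     # continuous replication: window approaches 0
```

This is why continuous replication matters: the loss window shrinks toward zero only when data is shipped as it is written, not on a schedule.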

Overcoming these weaknesses requires a new approach, one not based on obsolete software, that can properly leverage the latest storage and networking technology to deliver a minimal TCO for the entire DR process.


A New Approach to an Old Problem

StorONE, founded in 2011, spent its first eight years rewriting the obsolete storage algorithms and flattening the legacy I/O stack used by our competitors to create a new single-layered, efficient storage engine. StorONE’s modern Enterprise Storage Platform enables customers to minimize storage TCO while maximizing data protection. This platform-based solution includes critical new features, like advanced snapshots (S1:Snap), advanced replication (S1:Replicate), and integrated auto-tiering, among others. These new features effectively overcome the shortcomings of legacy storage systems.


The key to maximizing data protection is ensuring the organization can place data in multiple locations without increasing complexity or storage costs. An effective DR strategy means that data needs to reside in multiple locations and across dissimilar media types. StorONE’s S1:Replicate feature enables an organization to achieve 100% uptime by continuously replicating data from any type of storage system to another, dissimilar storage system with a DR failover plan in place. It provides high-performance synchronous replication for mission-critical applications, and asynchronous replication for less critical workloads and for remote secondary sites, including the cloud, where latency or limited connectivity makes synchronous replication impractical. StorONE also includes cascading replication, allowing organizations to create multi-site (more than two) replication topologies in either many-to-one or one-to-many configurations.
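The topologies described above can be pictured as a directed graph of sites. The sketch below models a one-to-many fan-out that also cascades to a third site; the site names and mode labels are hypothetical illustrations of the concept, not StorONE configuration syntax.

```python
# Replication topology modeled as a directed graph: each source site maps
# to a list of (target, mode) pairs. Names and modes are illustrative only.
topology = {
    # one-to-many: the primary fans out to a local DR array and the cloud
    "primary-afa": [("dr-hdd-array", "sync"), ("cloud", "async")],
    # cascading: the DR site forwards a copy onward to a third site
    "dr-hdd-array": [("remote-site", "async")],
}

def replication_paths(topology, source):
    """Enumerate every downstream replication path reachable from `source`."""
    paths = []
    stack = [(source, [])]
    while stack:
        site, path = stack.pop()
        for target, mode in topology.get(site, []):
            hop = path + [(site, target, mode)]
            paths.append(hop)
            stack.append((target, hop))
    return paths

for p in replication_paths(topology, "primary-afa"):
    print(" -> ".join(f"{src}-[{mode}]->{dst}" for src, dst, mode in p))
```

Walking the graph from the primary yields three copies of the data (DR array, cloud, and remote site), which is the "multiple locations, dissimilar media" posture the DR strategy calls for.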

Conclusion

StorONE’s S1:Replicate feature solves one of the biggest problems of using primary storage for DR: the cost. It does so while increasing, not compromising, data protection quality. When a disaster occurs, the DR site has instant access to production data in its original format, and recovery consists of connecting applications to the volumes on the DR storage system and promoting them to production. S1:Replicate, along with the other new features of the StorONE Enterprise Storage Platform, maximizes data protection while minimizing TCO.

Want More Content from StorONE?

Every day, we share unique content on our LinkedIn page including storage tips, industry updates, and new product announcements.

Joseph Ortiz

Joseph is a Technical Writer with StorONE, Inc. and an IT veteran with over 40 years of experience in the high-tech industries. He has held senior technical positions with several major OEMs, VARs, and system integrators, providing them with technical pre- and post-sales support for a wide variety of data protection and storage solutions. As part of his duties, he designed, implemented, and supported backup, recovery, and encryption solutions, in addition to providing disaster recovery planning, testing, and data loss risk assessments in distributed computing environments on UNIX and Windows platforms. He also served as an analyst and provided editing services as well as technical content for Storage Switzerland up to the time of its acquisition by StorONE.
