Legacy storage systems are carrying their obsolete, inefficient code into the new frontier of cloud infrastructure, and in doing so they are creating the cloud storage TCO problem. The complexity of understanding total cost of ownership (TCO) in the cloud makes it difficult for providers to convince customers to move applications and data to their infrastructure, let alone to stay in the cloud and grow their footprint. For potential cloud customers, the total cost of ownership of cloud storage may force them to slow their migration plans or even stay on-premises.
What Causes the Cloud Storage TCO Problem?
Legacy, on-premises storage vendors are racing to market with “cloud” versions of their software, but their attachment to legacy storage algorithms and an IO stack that are over twenty years old means these vendors must force the past to fit the modern cloud. Because of this legacy baggage, efficiency suffers and TCO increases dramatically. It also shackles customers, locking them into a single vendor or cloud, or forcing them to deploy multiple storage solutions in the cloud, just as they must on-premises, to cover wide-ranging applications and use cases. The legacy approach increases operational costs and complexity, and in the cloud you pay the penalty for your storage vendor’s inefficiency every month.
How to Fix the Cloud Storage TCO Problem
The key to curing the cloud storage TCO problem is the StorONE S1:Enterprise Storage Platform. Legacy storage systems lack the source-code ownership required for the flexibility, efficiency, and performance to cover all use cases without compromising performance or data protection. StorONE recognized this problem and chose a platform approach instead of a system approach to address the limitations of obsolete legacy storage software. We spent our first eight years rewriting the old storage algorithms from scratch and flattening the old Linux IO stack, creating in the process a new, powerful storage engine, S1, which was cloud-ready from its inception. The S1 engine powers the S1:Enterprise Storage Platform and enables a true hybrid cloud strategy, lowering TCO both on-premises and in public cloud infrastructure.
Having deployed many cloud-based storage solutions in my career, I have watched them all stutter and stumble because the vendor did not do the laborious work of rewriting the old storage algorithms and rebuilding the stack from scratch. Rather than design a platform, they repurposed bulky, obsolete on-premises code, which caused TCO to balloon in the public cloud.
The Requirements for Fixing the Cloud Storage TCO Problem
There are three critical areas in which the legacy stack’s inefficiency forces total costs to grow out of control. These requirements apply both on-premises and in the cloud, but it is the cloud business model, which rewards efficiency, that makes the lack of a platform all the more painful.
1. Aligning the Cloud Compute Instance with Real Performance Needs:
Beyond acquiring cloud storage with low $/TB and $/IOPS, you need storage that intelligently leverages cloud compute infrastructure. With other cloud solutions, you always need the highest cloud compute tier, because you are saddled with a compute-intensive, multi-layered legacy IO stack. Many of these legacy storage vendors also force you to pay for “ultra” storage tiers to hide their extreme inefficiency. What you really want is a lean, intelligent, single-layered storage IO stack that can use lower-performance compute and storage tiers, or, even better, automatically move your data from one tier to the next as the data ages and access patterns change.
Critically, cloud compute instances cost you money at every tier, so you want a storage solution that can spin instances up or down as required, saving across all tiers. You also want it to run on the lowest compute tier that meets a specific workload’s performance requirements, which demands an efficient, single-layer code base. Running on the optimal compute instance saves money every billing cycle, and intelligently executing safe power-downs saves you even more.
Many storage vendors have taken to physically moving their storage hardware into the public cloud providers’ data centers, and disingenuously calling this “running natively.” But the fact remains, this is only necessary because they are still using their old, obsolete IO stack, which is bulky and inflexible. Consequently, their so-called “cloud solution” winds up with a higher TCO, with much less flexibility and performance than a true native stack designed to run in the cloud.
If you want a low cloud storage TCO, the first thing you need is to use the minimum amount of compute required to deliver the desired results, and that takes very flexible and efficient code.
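The right-sizing idea above can be sketched as a simple cost fit: given a workload’s performance requirements, pick the cheapest compute instance that satisfies them. The instance names, limits, and prices below are entirely hypothetical, for illustration only; they are not actual Azure instance types or rates.

```python
# Illustrative sketch: choose the cheapest compute instance that meets a
# workload's performance needs. All catalog values are made up.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Instance:
    name: str
    max_iops: int
    max_throughput_mbps: int
    hourly_cost: float  # USD, illustrative only


CATALOG = [
    Instance("small", 20_000, 500, 0.20),
    Instance("medium", 80_000, 1_200, 0.75),
    Instance("large", 160_000, 2_000, 1.60),
]


def cheapest_fit(required_iops: int, required_mbps: int) -> Optional[Instance]:
    """Return the lowest-cost instance meeting both requirements, or None."""
    candidates = [
        i for i in CATALOG
        if i.max_iops >= required_iops
        and i.max_throughput_mbps >= required_mbps
    ]
    return min(candidates, key=lambda i: i.hourly_cost, default=None)


if __name__ == "__main__":
    # A 50K IOPS / 800 MB/s workload fits on "medium"; paying for "large"
    # would waste money every billing cycle.
    print(cheapest_fit(required_iops=50_000, required_mbps=800).name)
```

The same comparison can be rerun as a workload’s demands change, so a powered-down or resized instance always reflects the current requirement rather than a worst-case guess.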
2. It’s All About the Tiers:
Let’s focus on Microsoft Azure for a moment. Azure supports a number of cloud storage tiers, each built on a different class of media. Azure Managed Disks today has four tiers, with a fifth on the way. Performance and cost can vary significantly, and the quickest way to blow your TCO targets is for data to become trapped on one tier. Unless you have highly predictable workloads, it becomes less necessary to keep some data on a premium tier as it ages. It is far more cost-efficient to have a cloud storage platform automatically move these variable workloads to more cost-effective tiers, promoting or demoting them based on the application’s needs.
If you want to lower TCO, the second thing you need is efficient use of the proper cloud tiers. This, too, requires very flexible and efficient code.
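As a sketch of the tiering idea, a policy can map time-since-last-access to a target tier and move any volume whose current placement no longer matches. The tier names and thresholds below are assumptions for illustration, not StorONE’s or Azure’s actual policy.

```python
# Illustrative age/activity-based tiering policy. Tiers are ordered from
# fastest and most expensive to slowest and cheapest; thresholds are made up.

TIERS = ["premium_ssd", "standard_ssd", "standard_hdd", "archive"]


def target_tier(days_since_last_access: int) -> str:
    """Demote data through the tiers as it goes cold."""
    if days_since_last_access < 7:
        return "premium_ssd"
    if days_since_last_access < 30:
        return "standard_ssd"
    if days_since_last_access < 180:
        return "standard_hdd"
    return "archive"


def plan_moves(volumes):
    """volumes: iterable of (name, current_tier, days_since_last_access).

    Returns (name, from_tier, to_tier) for every volume on the wrong tier;
    this covers both demotion of cold data and promotion of hot data.
    """
    moves = []
    for name, current, idle_days in volumes:
        target = target_tier(idle_days)
        if target != current:
            moves.append((name, current, target))
    return moves
```

Running such a plan on a schedule is what keeps variable workloads from being trapped on a premium tier after they go cold.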
3. Consolidation of Workloads and Extension of Services:
A storage platform provides the flexibility to run applications natively on-premises and in the cloud, delivering a much broader range of enterprise services than most public cloud vendors could possibly offer, while you retain control to turn those services on or off without incurring additional, a-la-carte costs.
A storage platform enables you to migrate your current applications to the cloud, offering iSCSI, NFS, or SMB so that the lift and shift is painless. There is no need to rewrite or change the applications you have counted on for years; they interact with cloud storage and compute as though they were native and on-premises.
Whether you run a hybrid cloud or a purely native cloud storage architecture, all of the storage platform’s on-premises services must also be available in the cloud, including replication, snapshots, and media-failure protection, without compromising performance, data integrity, durability, or extensibility.
A platform approach allows the cloud to act as a DR or archive tier for an on-premises S1 instance. It seamlessly sends only changed blocks, and it allows an application to run fully in the cloud during a planned or unplanned on-premises system or site failure (a DR failover). IT can use Azure as a repository for hundreds of thousands of snapshots, as an asynchronous replication target, or as the final cascade point across multiple on-premises sites.
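To illustrate the changed-block idea, a replicator can hash fixed-size blocks and ship only those whose hash differs from the last replicated copy. This is a minimal sketch of the general technique, not StorONE’s implementation; the block size and hashing scheme are assumptions.

```python
# Minimal changed-block replication sketch: compare per-block hashes
# against the remote copy and transfer only the blocks that differ.

import hashlib

BLOCK_SIZE = 4096  # bytes; illustrative, not a real product setting


def block_hashes(data: bytes) -> list:
    """Hash each fixed-size block of a volume image."""
    return [
        hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    ]


def changed_blocks(local: bytes, remote_hashes: list):
    """Return (index, block) pairs that differ from the remote copy.

    Blocks beyond the remote's length are treated as new and always sent.
    """
    out = []
    for i, h in enumerate(block_hashes(local)):
        if i >= len(remote_hashes) or h != remote_hashes[i]:
            out.append((i, local[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]))
    return out
```

Because only differing blocks cross the wire, the ongoing replication cost scales with the change rate rather than with the total data set size, which is what makes the cloud practical as a DR or archive tier.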
A Platform Approach to Cloud Storage
Our new S1:Azure option is integrated into the complete StorONE S1:Enterprise Storage Platform. An on-premises instance of S1 can seamlessly connect to an S1:Azure cloud instance and replicate to it for DR or for cloud-bursting of peak workloads. It can even use S1:Azure as a backup and archive target, replacing the on-premises process and lowering data center TCO. Once data is in the cloud, S1:Azure supports all workloads and use cases, providing block, file, and object support. Our single platform can deliver high-performance database and application support via iSCSI while also serving user data and other unstructured data use cases.
If you want to minimize your cloud storage TCO while maximizing your data protection in the cloud, your storage platform must understand and optimize cloud tiers. To do so, the code must be flexible and efficient enough to allow lower-cost compute instances, and intelligent enough to automatically move data among tiers based on key factors such as performance, mission criticality, and activity. To truly leverage storage in the public cloud, your storage platform must support all of the capabilities and services it would on-premises, and it must add value beyond the cloud provider’s menu of services. Powerful replication, snapshots, and data protection (erasure coding) support a vast number of use cases, lowering TCO further by consolidating and simplifying administrative tasks. To learn more, please watch our S1:Azure webinar and read our white paper. Our customers are using the S1:Enterprise Storage Platform to build the most efficient hybrid cloud solutions possible. Let us show you how…