Innovation Over Integration Yields Unprecedented Storage Efficiency
We live in an age of tremendous storage hardware innovation.
Solid-state drives (SSDs) capable of delivering more than 100,000 input/output operations per second (IOPS) of raw performance have hit the market. The reality, though, is that customers are not getting the full benefit of these innovations. They obtain only a fraction of that performance from their storage arrays, because the arrays are bogged down by wildly inefficient legacy storage software algorithms.
Most storage vendors take 12 to 36 months to bring a new product to market because they simply integrate their inefficient legacy storage software code base with a couple of new features and faster hardware. This does not fix the storage hardware utilization problem, because it does not address its root cause. True innovation that rewrites the core storage algorithms is required to tackle this issue.
StorONE has taken the innovation approach rather than the integration approach. At StorONE, we have spent six years rewriting the storage software stack from the ground up so that customers can enjoy the full potential of modern hardware. Through high-performance erasure coding and other algorithmic techniques, we created our Unified Enterprise Storage (UES) platform, S1. S1 unlocks previously unobtainable levels of storage system efficiency, something we call Total Resource Utilization (TRU). By changing the efficiency equation, S1 enables you to utilize significantly more of your hardware's capabilities and to achieve the same results with far less hardware.
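To make the erasure-coding idea more concrete, below is a minimal, hypothetical sketch of single-parity erasure coding (XOR parity across data chunks). It illustrates the general technique only; it is not StorONE's actual, proprietary algorithms, and the function names and parameters are invented for the example.

```python
# Minimal illustration of single-parity erasure coding (XOR parity).
# Teaching sketch only -- not StorONE's proprietary implementation.

def encode(data: bytes, k: int) -> list[bytes]:
    """Split data into k equal-size chunks and append one XOR parity chunk."""
    chunk_size = -(-len(data) // k)                  # ceiling division
    padded = data.ljust(chunk_size * k, b"\x00")     # pad so chunks align
    chunks = [padded[i * chunk_size:(i + 1) * chunk_size] for i in range(k)]
    parity = bytes(chunk_size)                       # start with all zeros
    for chunk in chunks:
        parity = bytes(p ^ c for p, c in zip(parity, chunk))
    return chunks + [parity]                         # k data chunks + 1 parity

def recover(chunks: list[bytes | None]) -> list[bytes]:
    """Rebuild a single missing chunk by XOR-ing all surviving chunks."""
    missing = chunks.index(None)
    size = len(next(c for c in chunks if c is not None))
    rebuilt = bytes(size)
    for c in chunks:
        if c is not None:
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, c))
    chunks[missing] = rebuilt
    return chunks

# Usage: lose any one chunk (data or parity) and reconstruct it.
stripes = encode(b"hello, erasure coding", k=4)
stripes[2] = None                                    # simulate a failed drive
assert b"".join(recover(stripes)[:4]).rstrip(b"\x00") == b"hello, erasure coding"
```

Production systems use multi-parity schemes such as Reed-Solomon codes and heavily tuned implementations; the point of the sketch is simply that data chunks plus parity allow a lost chunk to be rebuilt without storing a full copy.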
StorONE's innovation-first culture has led to 33 patents granted in only six years, with dozens of additional patent applications pending. Our large patent portfolio reflects our heavy investment in research and development, as well as our focus on re-architecting the storage stack for levels of efficiency that are, quite simply, transformative for our customers' businesses.
Most recently, in the first quarter of 2019, StorONE was granted two new patents. Patent No. 10,198,321, entitled "System and method for continuous data protection," recognizes our groundbreaking approach to integrated data retention without compromising performance, and Patent No. 10,169,021, entitled "System and method for deploying a data-path-related plug-in for a logical storage entity of a storage system," addresses creating, verifying, and executing tasks that ensure availability of data in distributed storage systems.
The new patents reflect the core focus of StorONE and of S1: enabling customers to obtain high-performance storage without sacrificing enterprise-class data protection services, at a price point that is in fact lower than that of both legacy storage architectures and cloud storage services.
At the core of how we remove legacy constraints is the fact that we have designed our data services to be highly computationally efficient, so they do not hog valuable CPU cycles that should be spent serving the application itself. Customers get more value out of each core and out of each gigabyte of storage capacity, because they do not need to overprovision one or the other to obtain the levels of performance they need. S1 supports the full range of high-performance and high-capacity use cases, including all-flash and secondary storage, with a fully flexible, mix-and-match approach that gives you the freedom to quickly integrate the newest innovations and to customize your infrastructure to your unique application needs.
How to Reduce the Cost of Storage Operations
Storage managers have always been pressured to do more with less.
That pressure intensifies as the volume of data explodes, as the number of performance-hungry workloads grows, and as faster but more expensive storage technologies such as solid-state drives (SSDs) and non-volatile memory express (NVMe) drives enter the equation. Delivering the throughput, processing power, and storage capacity required by today's workload ecosystem without breaking the bank requires levels of hardware utilization that are not possible with legacy storage software.
Amazing innovations have occurred in storage media over the past five to ten years; for example, there are enterprise SSDs on the market today capable of achieving hundreds of thousands of Input/Output Operations Per Second (IOPS). However, most storage software stacks have not been rewritten to exploit these drives' capabilities, resulting in wild inefficiencies that the customer ends up paying for.
In the era of Moore's Law and hard-disk drives (HDDs), storage software programmers did not need to worry about writing efficient code. Central processing unit (CPU) performance was accelerating at a rate with which storage media simply could not keep pace, so bloated storage software was masked by significantly lagging HDD performance. Programmers prioritized getting their software to market as quickly as possible rather than taking the extra time needed to write more efficient code.
Today, the tables have turned: CPU performance gains have become incremental, while storage media performance and density have increased drastically. The end result is storage arrays that deliver only 20% or less of the IOPS the storage media is capable of, forcing customers to dramatically overbuy to meet their storage performance or capacity needs.
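To put that 20% figure in perspective, here is a back-of-the-envelope calculation. The workload size, per-drive IOPS, and the 90% efficiency figure are illustrative assumptions for the example, not measured numbers or vendor claims.

```python
import math

# Hypothetical figures for illustration; real drive specs and efficiency vary.
workload_iops = 500_000          # IOPS the application needs
raw_iops_per_ssd = 100_000       # raw capability of one enterprise SSD

def drives_needed(software_efficiency: float) -> int:
    """Drives required when software exposes only a fraction of raw IOPS."""
    usable_iops = raw_iops_per_ssd * software_efficiency
    return math.ceil(workload_iops / usable_iops)

print(drives_needed(0.20))  # legacy stack at 20% utilization -> 25 drives
print(drives_needed(0.90))  # efficient stack at 90% utilization -> 6 drives
```

Under these assumptions, the same workload needs 25 drives at 20% utilization but only 6 drives at 90%, before capacity requirements even enter the picture.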
Extracting as much functionality and value as possible from every CPU cycle requires a rethink and a ground-up rewrite of the storage controller to serve as a consolidation engine.
Consolidating to a single interface for the storage operating system and services streamlines deployment and management of the underlying storage infrastructure.
- Storage managers can more easily make changes. For example, they do not need to worry about dealing with complex RAID configurations when a capacity expansion is required.
- Furthermore, consolidation reduces the impact of notoriously CPU- and memory-intensive storage software services such as snapshots, thus freeing up IOPS and throughput for the application itself (a brief sketch of why snapshots are so expensive follows this list).
- Lowering the storage controller's CPU consumption enables the system to use fewer drives and lower-priced CPUs, all without sacrificing performance or storage services.
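As referenced in the list above, here is a toy sketch of why snapshots have traditionally been CPU- and memory-intensive. The copy-on-write model below is deliberately simplified and hypothetical (it is not how any particular product implements snapshots); it simply shows that every overwrite after a snapshot incurs extra copying and metadata bookkeeping that competes with application I/O for controller resources.

```python
# Toy copy-on-write snapshot: every write after a snapshot copies the old
# block and updates metadata -- extra CPU and memory spent per application I/O.

class Volume:
    def __init__(self, blocks: int):
        self.data = [b"\x00" * 4096 for _ in range(blocks)]
        self.snapshots = []                      # list of {block: old contents}

    def snapshot(self):
        self.snapshots.append({})                # new, initially empty snapshot

    def write(self, block: int, payload: bytes):
        for snap in self.snapshots:
            if block not in snap:                # first overwrite since snapshot:
                snap[block] = self.data[block]   # copy old block (CPU + memory)
        self.data[block] = payload

    def read_snapshot(self, index: int, block: int) -> bytes:
        return self.snapshots[index].get(block, self.data[block])

vol = Volume(blocks=8)
vol.write(0, b"v1".ljust(4096, b"\x00"))
vol.snapshot()
vol.write(0, b"v2".ljust(4096, b"\x00"))         # triggers a copy-on-write
assert vol.read_snapshot(0, 0).startswith(b"v1")
assert vol.data[0].startswith(b"v2")
```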