How to Reduce the Cost of Storage Operations

Storage managers have always been pressured to do more with less.

That pressure intensifies as the volume of data explodes, as the number of performance-hungry workloads grows, and as faster-performing but also more expensive storage technologies such as solid-state drives (SSDs) and non-volatile memory express drives (NVMe) enter the equation. Delivering the throughput, processing power, and storage capacity required by today’s workload ecosystem without breaking the bank necessitates new levels of hardware utilization that are not possible with legacy storage software.
The past five to ten years have seen remarkable innovation in storage media; for example, there are enterprise SSDs on the market today capable of achieving hundreds of thousands of Input/Output Operations Per Second (IOPS). However, most storage software stacks have not been rewritten to exploit these drives' capabilities, resulting in gross inefficiencies that the customer ultimately pays for.
In the era of Moore’s Law and hard-disk drives (HDDs), storage software programmers did not need to worry about writing efficient code. Central processing unit (CPU) performance was accelerating at a rate with which storage media simply could not keep pace, so bloated storage software was masked by significantly lagging HDD performance. Programmers prioritized getting their software to market as quickly as possible, rather than taking the extra time needed to write more efficient code.
Today, the tables have turned: CPU performance gains have become incremental, while storage media performance and density have increased drastically. The result is storage arrays that deliver only 20% or less of the IOPS the storage media is capable of, forcing customers to dramatically overbuy to meet storage performance or capacity needs.
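The overbuying effect of software inefficiency can be sketched with simple arithmetic. The function and the figures below are illustrative assumptions, not StorONE benchmarks; they show how a 20% software efficiency translates into roughly 5x the media purchase for the same performance target.

```python
import math

def drives_needed(target_iops: int, drive_iops: int, software_efficiency: float) -> int:
    """Drives required to hit a performance target once software
    overhead is accounted for. software_efficiency is the fraction
    of each drive's raw IOPS the storage stack actually delivers."""
    effective_iops_per_drive = drive_iops * software_efficiency
    return math.ceil(target_iops / effective_iops_per_drive)

# Hypothetical example: a 500K IOPS target using 100K-IOPS enterprise SSDs.
print(drives_needed(500_000, 100_000, 1.0))  # ideal software: 5 drives
print(drives_needed(500_000, 100_000, 0.2))  # 20% efficiency: 25 drives
```

In this hypothetical scenario, the customer pays for 25 drives to get the performance 5 drives could deliver; the other 20 drives are the cost of inefficient software.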
Extracting as much functionality and value as possible from every CPU cycle requires a rethink and a ground-up rewrite of the storage controller to serve as a consolidation engine.
Consolidating to a single interface for the storage operating system and services streamlines deployment and management of the underlying storage infrastructure.
  • Storage managers can more easily make changes. For example, they do not need to deal with complex RAID reconfiguration when a capacity expansion is required.
  • Furthermore, consolidation reduces the impact of notoriously CPU- and memory-intensive storage software services such as snapshots, freeing up IOPS and throughput for the application itself.
  • Reducing the storage controller's CPU consumption enables the system to meet requirements with fewer drives and lower-priced CPUs, all without sacrificing performance or storage services.
StorONE has invested eight years in development, re-writing core storage algorithms (culminating in more than 50 patents) to facilitate a Unified Enterprise Storage (UES) system with its S1 storage controller.
S1 enables customers to achieve total resource utilization (TRU) via not only highly efficient storage services such as snapshots, but also through the ability to consolidate storage protocols, including block, file, and object, as well as cloud storage services. As a result, StorONE’s S1 solution is flexible enough to accommodate all storage use cases, ranging from performance-intensive all flash to capacity-oriented secondary storage deployments.
Meanwhile, because S1 is truly disaggregated from the underlying hardware, it offers customers the flexibility to utilize whatever storage media or cloud services they choose. S1 enables storage media and services to be mixed and matched, and to be procured according to the price point, specifications, and timing of the customer’s choosing. Not only can customers extract maximum and immediate ROI from their underlying infrastructure through efficient storage services, but they can also use S1 to tap into the newest innovation in storage media, and the technologies that best meet their unique workload requirements.
Gal Naor

Gal introduced the first real-time enterprise storage compression technology in 2004 at StorWize (acquired by IBM) and changed the utilization and pricing paradigm of commercial storage drives.

