Storage Software Limitations

IT cannot benefit fully from storage hardware innovation because of storage software limitations. Every storage system is software powering hardware to create the storage infrastructure, and that software has not kept pace with a decade of remarkable hardware advances, such as a 20X increase in per-drive performance and a 4X increase in drive density. Because of these software limitations, customers enjoy only a small fraction of what the hardware can deliver, and storage OPEX and CAPEX continue to escalate out of control.

The Source of Storage Software Limitations

Instead of investing development effort in efficient storage software, vendors continue to use twenty-year-old storage software components and IO stacks. The algorithms in these obsolete libraries cannot exploit the full capabilities and capacities of today's storage hardware, so each new hardware innovation is used less effectively than the last.

Why CPUs Can No Longer Hide Storage Software Limitations

Storage vendors can no longer over-compensate for inefficient software by hiding it behind ever-increasing CPU power. Moore's Law, at least as measured by clock speed, no longer applies: CPUs are becoming more powerful through higher core counts, not faster clocks, and storage IO is a notoriously difficult workload to thread efficiently across multiple cores. All of these factors result in a steadily worsening storage system total cost of ownership (TCO).

The Impact of Storage Software Limitations

Vendors are leading IT professionals to believe that they must live with poor utilization of storage resources. As a result, they assume that small percentages of performance or TCO improvement are all they can expect. IT professionals also believe they must choose between high-performance storage and data protection features. These storage software limitations impair a vendor's ability to deliver high data integrity, rapid RAID rebuild performance, massive snapshot frequency and depth, and replication flexibility.

Because of their storage software limitations, vendors convince IT professionals that it takes 48 or more flash drives to deliver 500K IOPS, even though most drives on the market promise over 250K IOPS each, and several promise more than 500K!
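The arithmetic is worth spelling out. The sketch below is illustrative only; the per-drive figures are the spec-sheet promises cited above, not benchmark results.

```python
import math

def min_drives_for_iops(target_iops: int, iops_per_drive: int) -> int:
    """Minimum drive count needed if software added no overhead at all."""
    return math.ceil(target_iops / iops_per_drive)

# Spec-sheet promise: 250K IOPS per drive; target: 500K IOPS.
print(min_drives_for_iops(500_000, 250_000))   # 2 drives, in theory

# What a 48-drive system delivering 500K IOPS actually extracts per drive:
print(round(500_000 / 48))                     # ~10417 IOPS per drive
```

In other words, the software is extracting roughly 4% of each drive's promised performance.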

Storage software limitations also force vendors to discourage IT professionals from adopting high-density drives, even though those drives promise a lower cost per GB and a smaller physical storage footprint. These vendors need to avoid high-density drives because of slow RAID rebuild times, and they go so far as to blame the drive manufacturers (HDD and flash) instead of looking at their own inefficient software.
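A back-of-the-envelope estimate shows why rebuild time, not the drives themselves, is the real obstacle. The numbers below are my own illustrative assumptions; actual rebuild rates vary widely by implementation.

```python
def rebuild_hours(capacity_tb: float, rebuild_mb_per_s: float) -> float:
    """Naive full-drive rebuild time: capacity divided by sustained rebuild rate."""
    capacity_mb = capacity_tb * 1_000_000  # decimal TB -> MB
    return capacity_mb / rebuild_mb_per_s / 3600

# A 20 TB drive rebuilt at a throttled 50 MB/s (legacy RAID under production load):
print(round(rebuild_hours(20, 50), 1))    # 111.1 hours, i.e. days of exposure

# The same drive rebuilt at 500 MB/s, closer to what the hardware can sustain:
print(round(rebuild_hours(20, 500), 1))   # 11.1 hours
```

The drive capacity is the same in both cases; only the software's ability to drive the rebuild changes the exposure window by an order of magnitude.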

New Ways to Present Performance Gains

Storage software limitations also require vendors to adopt unique ways of marketing their alleged performance gains. A common practice is to compare their newest storage system to last year's model, where some improvement is expected from the hardware alone. These vendors also never show a numerical IOPS or bandwidth comparison between generations, nor do they disclose which new CPU or drive models they are using, so it is difficult to measure what percentage of the potential gain they are actually delivering.

While it is true that IOPS and bandwidth tests are not the only criteria IT should use in making decisions, they are a valid starting point. If every vendor published a traditional "four corners" test for each configuration, comparing capabilities would be much easier for IT. Since vendors are apparently unaware of this way of presenting performance data, I'll explain: a four corners test marks the boundaries of what a storage system's performance may achieve. The four corners are the measured rates for small block reads, small block writes, large block reads, and large block writes. Together, these points show the range of possible performance levels, with real-world workloads falling somewhere within these boundaries.
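A published four corners result could be as simple as the following summary. All values here are placeholders I made up for illustration, not any vendor's actual results.

```python
# Hypothetical four corners result: the four workload extremes that bound
# a storage system's performance envelope. Small-block corners are reported
# in IOPS, large-block corners in throughput. All values are placeholders.
four_corners = {
    ("read",  "small block (4 KiB)"): ("IOPS", 1_000_000),
    ("write", "small block (4 KiB)"): ("IOPS", 400_000),
    ("read",  "large block (1 MiB)"): ("GB/s", 12.0),
    ("write", "large block (1 MiB)"): ("GB/s", 6.0),
}

for (op, pattern), (metric, value) in four_corners.items():
    print(f"{op:>5} {pattern}: {value:,} {metric}")
```

With one such table per configuration, IT could place any expected workload inside the envelope and compare systems directly.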

Hiding Storage Software Limitations Behind All-Flash

Storage software limitations have another impact: the continual assertion that All-Flash Arrays are the only way forward for the infrastructure. Vendors want to hide their inefficiency behind the high performance of flash media, even though they can't fully utilize it, while convincing you that hard disk drives are more expensive than flash drives. While there are certainly use cases for all-flash, many data centers will find that a single hybrid solution meets more of their use case requirements without impacting application performance, but only if that solution leverages a new storage software strategy that taps into the full potential of storage hardware innovation.

Setting Hardware Innovation Free

The reality is that most data centers can meet all their performance requirements with as few as eight flash drives. They can also meet all of their capacity demands, more cost-effectively, with hard disk drives. The combination of modern storage software and current hardware innovation allows IT to establish a storage platform that meets the demands of all use cases while increasing the quality of data integrity and protection.

Overcoming Storage Software Limitations

StorONE was founded in 2011, and we spent our first eight years rewriting and flattening the storage stack before coming to market with the S1 Engine. The S1 Engine flattens the classic storage IO stack into a single translation layer, allowing you to enjoy the maximum benefit of every hardware innovation. It also powers our S1:Enterprise Storage Platform, enabling IT to take a platform approach to storage infrastructure. This approach allows them to start small and address their most pressing storage challenges while laying the groundwork for a software-defined storage consolidation strategy.

A single storage platform can simplify storage operations, lower the total cost of storage ownership, and dramatically increase data protection and resiliency. To learn more, join our CEO, Gal Naor, and me for a live webinar, "Fixing Storage – Three Foundational Shifts Required to Reduce Storage TCO," in which we discuss how to fix storage so you can simplify the infrastructure and improve data protection.

George Crump

George has over 25 years of experience in the storage industry, holding executive sales and engineer positions. Before joining StorONE, he was the founder and lead analyst at Storage Switzerland.

