How to Reduce the Cost of Storage Operations

Storage managers have always been pressured to do more with less.

April 29, 2019

By Gal Naor
CEO, Co-Founder

That pressure intensifies as the volume of data explodes, as the number of performance-hungry workloads grows, and as faster but also more expensive storage technologies such as solid-state drives (SSDs) and non-volatile memory express (NVMe) drives enter the equation. Delivering the throughput, processing power, and storage capacity required by today’s workload ecosystem without breaking the bank necessitates new levels of hardware utilization that are not possible with legacy storage software.

Amazing innovations have occurred over the past five to ten years in storage media; for example, there are enterprise SSDs on the market today capable of achieving hundreds of thousands of Input/Output Operations Per Second (IOPS). However, most storage software stacks have not been rewritten to exploit these drives’ capabilities, resulting in wild inefficiencies that the customer ends up paying for.

In the era of Moore’s Law and hard-disk drives (HDDs), storage software programmers did not need to worry about writing efficient code. Central processing unit (CPU) performance was accelerating at a rate with which storage media simply could not keep pace, so bloated storage software could be masked by significantly lagging HDD performance. Programmers prioritized getting their software to market as quickly as possible rather than taking the extra time needed to write more efficient code.

Today, the tables have turned: CPU performance gains have become incremental, while storage media performance and density have increased dramatically. The result is storage arrays that deliver 20% or less of the IOPS the underlying media is capable of, forcing customers to dramatically overbuy to meet their performance or capacity needs.
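
To make the overbuying effect concrete, here is a minimal back-of-the-envelope sketch in Python. The IOPS figures and the drives_needed helper are illustrative assumptions, not measurements of any particular array; the point is simply that a stack exposing only 20% of raw drive performance forces roughly five times the hardware purchase for the same performance target.

```python
import math

def drives_needed(target_iops: int, raw_iops_per_drive: int, software_efficiency: float) -> int:
    """Drives required when the software stack exposes only a fraction of each drive's raw IOPS."""
    effective_iops = raw_iops_per_drive * software_efficiency
    return math.ceil(target_iops / effective_iops)

target = 1_000_000   # hypothetical workload requirement (assumption)
raw = 200_000        # assumed raw IOPS for an enterprise SSD ("hundreds of thousands")

print(drives_needed(target, raw, 1.00))  # efficient stack:      5 drives
print(drives_needed(target, raw, 0.20))  # 20%-efficient stack: 25 drives (5x overbuy)
```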

Extracting as much functionality and value as possible from every CPU cycle requires rethinking and rewriting the storage controller from the ground up to serve as a consolidation engine.

Consolidating to a single interface for the storage operating system and services streamlines deployment and management of the underlying storage infrastructure.

  • Storage managers can make changes more easily. For example, they do not need to wrestle with complex RAID configurations when a capacity expansion is required.
  • Consolidation also reduces the impact of notoriously CPU- and memory-intensive storage services such as snapshots, freeing up IOPS and throughput for the applications themselves.
  • Lowering the storage controller’s CPU consumption enables the system to use fewer drives and lower-priced CPUs, all without sacrificing performance or storage services.

StorONE has invested eight years in development, rewriting core storage algorithms (culminating in more than 50 patents) to facilitate a Unified Enterprise Storage (UES) system with its S1 storage controller.

S1 enables customers to achieve total resource utilization (TRU) not only through highly efficient storage services such as snapshots, but also through the ability to consolidate storage protocols, including block, file, and object, alongside cloud storage services. As a result, StorONE’s S1 solution is flexible enough to accommodate all storage use cases, from performance-intensive all-flash to capacity-oriented secondary storage deployments.

Meanwhile, because S1 is truly disaggregated from the underlying hardware, it offers customers the flexibility to use whatever storage media or cloud services they choose. S1 enables storage media and services to be mixed and matched, and to be procured according to the price point, specifications, and timing of the customer’s choosing. Not only can customers extract maximum and immediate ROI from their underlying infrastructure through efficient storage services, but they can also use S1 to tap into the newest innovations in storage media and the technologies that best meet their unique workload requirements.
