Why Does IT Buy New Storage?

Posted by Gal Naor on March 18, 2019

Storage environments have been historically siloed

Infrastructure is dedicated to specific use cases – production, backup and archive – and each environment or workload often receives its own dedicated storage system. This fragmentation is exacerbated as hyperconverged infrastructure, cloud storage services, and a new tier of performance-hungry workloads enter the equation. IT organizations quickly face significant storage cost and complexity – unacceptable when the digital business demands agility, simplicity and new levels of cost efficiency.

One key factor contributing to storage sprawl is the fact that businesses are adding new workloads more quickly than ever. Applications are the vehicle for employee productivity and end customer engagement, and new low-code, no-code and DevOps approaches are enabling the development of these applications at an unprecedented pace. As a result, more storage systems are added to meet growing capacity and performance demands.

These new applications do not look like the legacy set that storage managers are used to dealing with. They generate data at a lightning-fast pace, demand near-zero latency, and require new levels of throughput. As a result, new technologies such as solid-state drives (SSDs) and NVMe access protocols, as well as new vendors, are entering the storage environment.

Meanwhile, secondary storage demands are also increasing. More data must be backed up than ever before. The desire to harness more data for competitive advantage via analytics, and the arrival of new compliance regulations, vastly increase retention requirements. Backups must be high-speed to avoid slowing the business down. In the event of an outage, backups must be restored nearly instantaneously and with minimal data loss – recovering to a point just before the incident occurred – especially for the growing number of mission-critical workloads.


To simplify their storage environments, many IT shops are seeking to consolidate onto fewer pieces of hardware. However, this strategy does not allow infrastructure to be optimized for workload-specific requirements. Nor does it maximize utilization of the CPU, or of storage capacity and performance.

Hardware consolidation does not work in large part because recent drive and network access protocol innovations, including SSDs and NVMe, mean that storage media is no longer the storage bottleneck. Today, the bottleneck is inefficient, legacy storage software that has not been rewritten to take advantage of hardware-level innovation. We see this when we evaluate the IOPS specifications of all-flash storage arrays, which are typically 20% or less of what the storage media is actually capable of. As a result, a storage buyer may need to purchase five times the storage capacity (and the associated overhead, such as CPU and software licenses) to obtain the raw performance the drives are capable of. It also encourages vendors to sacrifice in areas such as storage services, and to require proprietary drivers to better optimize performance and capacity. The result is a low return on investment (ROI) for the storage buyer.
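The overprovisioning math above can be sketched in a few lines. This is a back-of-the-envelope illustration only: the per-drive IOPS figure and workload target below are hypothetical, and the 20% software efficiency is the figure cited above, not a measurement of any specific array.

```python
import math

def drives_needed(target_iops: int, raw_drive_iops: int,
                  software_efficiency: float) -> int:
    """Drives required when the storage software stack exposes only a
    fraction of each drive's raw IOPS capability."""
    effective_iops = raw_drive_iops * software_efficiency
    return math.ceil(target_iops / effective_iops)

# Hypothetical example: an NVMe SSD rated at 500K raw IOPS, and a
# workload that needs 1M IOPS.
target = 1_000_000
raw_per_drive = 500_000

legacy = drives_needed(target, raw_per_drive, 0.20)      # 20% efficiency
efficient = drives_needed(target, raw_per_drive, 1.00)   # full efficiency

print(legacy, efficient, legacy / efficient)  # 10 2 5.0
```

At 20% efficiency the buyer purchases five times as many drives (plus the CPU and licensing overhead that scales with them) to reach the same performance target, which is the 5x penalty described above.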

To address this pain point, StorOne has created its Unified Enterprise Storage (UES) platform. UES consolidates at the software level, providing single-pane-of-glass control for all types of storage infrastructure and protocols. This includes all-flash and hybrid (spinning and solid-state disk) media, hyperconverged, backup and scale-out NAS infrastructure, and cloud storage services. In addition to providing infrastructure-level flexibility to meet all workload needs, UES maximizes performance and capacity from each platform – enabling the greatest value to be extracted from each gigabyte.