STORAGE ENVIRONMENTS HAVE BEEN HISTORICALLY SILOED
Infrastructure is typically dedicated to production, backup, and archive use cases, with each environment or workload often receiving its own storage infrastructure. This fragmentation is exacerbated as hyperconverged infrastructure, cloud storage services, and a new tier of performance-hungry workloads enter the equation. IT organizations quickly find themselves facing significant storage cost and complexity, which is unacceptable given digital business requirements for agility, simplicity, and new levels of cost efficiency.
One key factor contributing to storage sprawl is that businesses are adding new workloads more quickly than ever. Applications are the vehicle for employee productivity and customer engagement, and new low-code, no-code, and DevOps approaches are enabling their development at an unprecedented pace. As a result, more storage systems are added to meet growing capacity and performance demands.
These new applications do not look like the legacy set that storage managers are used to dealing with. They generate data at a lightning-fast pace, demand near-zero latency, and require new levels of throughput. As a result, new technologies such as solid-state drives (SSDs) and the NVMe access protocol, as well as new vendors, are entering the storage environment.
Meanwhile, secondary storage demands are also increasing. More data must be backed up than ever before, and the desire to harness data for competitive advantage via analytics, along with the arrival of new compliance regulations, vastly increases retention requirements. Backups must run at high speed to avoid slowing the business down. In the event of an outage, backups must be restored nearly instantaneously with minimal data loss (recovered as quickly as possible, to a point just before the incident occurred) – especially for the growing number of mission-critical workloads.
A STORAGE OPERATING SYSTEM FOR DEALING WITH STORAGE HARDWARE SPRAWL
To simplify their storage environments, many IT shops are seeking to consolidate onto fewer pieces of hardware. However, this strategy does not allow infrastructure to be optimized for workload-specific requirements, nor does it maximize utilization of CPU, storage capacity, or storage performance.
Hardware consolidation falls short in large part because recent drive and access protocol innovations, including SSDs and NVMe, mean that storage media is no longer the storage bottleneck. Today, the bottleneck is inefficient legacy storage software that has not been rewritten to take advantage of hardware-level innovation. We see this when we evaluate the IOPS specifications of all-flash storage arrays, which typically deliver 20% or less of what the storage media is actually capable of. This creates a situation where a storage buyer may need to purchase five times the storage capacity (along with overhead such as CPU and software licenses) to obtain the raw performance a single drive is capable of. It also encourages vendors to sacrifice capabilities such as storage services, and to require proprietary drivers, in an effort to recover performance and capacity. The result is a low return on investment (ROI) for the storage buyer.
To address this pain point, StorOne has created its Unified Enterprise Storage (UES) platform. UES consolidates at the software level, providing single-pane-of-glass control over all types of storage infrastructure and protocols. This includes all-flash and hybrid (hard disk and solid-state) media; hyperconverged, backup, and scale-out NAS infrastructure; and cloud storage services. In addition to providing infrastructure-level flexibility to meet all workload needs, UES maximizes the performance and capacity of each platform, enabling the greatest value to be extracted from every gigabyte.
Today’s information era places a premium on storage performance and capacity while further squeezing budgets. Data is growing at an exponential rate, and businesses are turning to artificial intelligence, machine learning, and analytics workloads to harness this information for new advantages. These requirements also necessitate faster performance from traditional workloads such as Oracle and Microsoft SQL Server. This demanding workload ecosystem requires unprecedented levels of utilization of storage capacity, storage memory, and storage IO performance.
The world’s largest public cloud service providers are commonly perceived as masters of storage efficiency because of their massive scale-out architectures. As a result, many enterprises are working to “shrink” a scale-out architecture and apply it to their own storage infrastructure in order to accelerate workload performance and increase capacity more efficiently.
The reality, however, is that a scale-out architecture is not a model of efficiency. Cloud service providers’ storage arrays are plagued with the same inefficiencies that impact those deployed by a typical enterprise. The “hyperscalers” are simply able to mask these inefficiencies through the massive scale at which they operate. Typical data centers do not have this luxury, and as a result must have a sharper focus on maximizing the utilization of their available resources.
One area of top concern for IT shops today is storage performance. With the advent of SSDs and the NVMe access protocol, new and unprecedented levels of raw drive performance are possible. However, the inefficiencies of legacy storage software cause these drives to lose 80% or more of that performance when they are integrated into a typical storage array. This is increasingly unacceptable as enterprises strive to serve ever more performance-hungry workloads on limited budgets.
Previously, storage software efficiency had little impact on system performance because hard disk drive (HDD) latency dominated; the performance of SSDs radically changes this equation. Storage vendors typically try to compensate for software inefficiencies with faster (and more expensive) processors; the challenge is that the storage software cannot take advantage of the additional cores because its IO path was not written to run in parallel.
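Amdahl's law makes this limitation concrete. As an illustrative sketch (the 20% parallel fraction below is an assumption chosen for the example, not a measurement of any particular product), the maximum speedup achievable with $N$ cores is

$$ S(N) = \frac{1}{(1 - p) + \frac{p}{N}} $$

where $p$ is the fraction of the software IO path that can execute in parallel. With $p = 0.2$, even an unlimited number of cores yields at most $S = 1/(1 - 0.2) = 1.25\times$, which is why adding processors to a largely serial storage stack delivers so little benefit.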
Storage software inefficiency creates a challenge whereby customers must purchase far more storage capacity than their data requires in order to achieve the IOPS their workloads demand. For example, if the IOPS utilization of a 5 terabyte (TB) storage array is 20%, the customer must purchase five times that capacity (25 TB) for the array to deliver the raw performance of a single 5 TB drive.
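The arithmetic behind this example is worth spelling out. With effective performance defined as

$$ \text{effective IOPS} = \text{raw IOPS} \times \text{utilization}, $$

a 20% utilization rate means each drive delivers one fifth of its rated performance, so matching the raw performance of a single drive requires

$$ \frac{1}{0.20} = 5 \text{ drives}, $$

or 25 TB of purchased capacity in the 5 TB example above, along with the servers, CPU, and software licenses that accompany the extra drives.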
This problem is exacerbated as drive density increases and as associated infrastructure and overhead, such as servers, are factored in. Compounding the performance utilization challenge, most arrays are deployed for a single use case (for instance, high-end databases), resulting in a sprawl of multiple systems with low utilization per use case. In addition to adding cost and complexity, storage inefficiency leaves typical enterprises less able to tap recent innovations.
HOW CAN STORAGE MANAGERS MEET INCREASING PERFORMANCE AND CAPACITY REQUIREMENTS WHILE ALSO CUTTING THEIR COST STRUCTURE?
A single, unified storage operating environment is needed to achieve the levels of utilization that today's workloads require. StorOne's Unified Enterprise Storage (UES) platform, S1, provides immediate return on investment by enabling customers to extract maximum performance from the underlying hardware.
S1 enables all applications and workloads to run on a single storage system, normalizing the underlying hardware and supporting all protocols and drive types in a “mix and match” approach. This consolidation, and the ability to tap the latest drive innovations at lower cost, accelerates performance for a lower capex investment. For example, customers can keep more active data on SSD or NVMe media while moving less active data to hard disk drives rather than to slower tape storage. Furthermore, this centralization and flexibility streamline scalability and administration.