How to Reduce the Cost of Performance Storage
The advent of solid-state drive (SSD) media and the Non-Volatile Memory Express (NVMe) protocol makes storage cost optimization more important than ever.
SSDs offer more capacity per unit than hard disk drives (HDDs), and they continue to become denser as the technology matures. Meanwhile, NVMe delivers faster performance and lower latency to this media, and new networking interfaces such as 40 Gigabit Ethernet (40GbE) and 100 Gigabit Ethernet (100GbE) increase bandwidth. All of this innovation comes at a price premium, creating the need to harness all the performance available from the storage media effectively.
Storage media innovations mean that obtaining the levels of performance and capacity that new workloads such as analytics and artificial intelligence require is no longer the headache facing today’s enterprise storage planner. The challenge now is obtaining that performance and capacity at the lowest possible price. Enterprise SSDs can deliver hundreds of thousands of Input/Output Operations Per Second (IOPS) per drive, yet most all-flash storage arrays need dozens of drives to reach that level of performance because of old, inefficient storage software. The result is that storage buyers invest in systems costing mid-six figures that should cost only about $95,000. Developers need to rethink storage software so that the maximum performance of each drive is realized.
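The gap described above is easy to quantify. The numbers below are purely illustrative assumptions (a 500,000-IOPS drive rating and a 24-drive array are hypothetical, not figures from any vendor), but they show how little of the raw drive performance a typical array actually delivers:

```python
# Illustrative arithmetic with hypothetical numbers: how far array-level
# performance can fall short of the raw capability of its drives.
per_drive_iops = 500_000   # assumed enterprise NVMe SSD rating (hypothetical)
drive_count = 24           # assumed number of drives in the array (hypothetical)

raw_iops = per_drive_iops * drive_count   # aggregate raw capability
delivered_iops = 1_500_000                # assumed array-level delivered IOPS

efficiency = delivered_iops / raw_iops    # fraction of raw performance realized
print(f"Raw aggregate:       {raw_iops:,} IOPS")
print(f"Delivered:           {delivered_iops:,} IOPS")
print(f"Software efficiency: {efficiency:.1%}")   # 12.5% here
```

Under these assumptions, seven-eighths of the performance the buyer paid for is consumed by software overhead rather than delivered to applications.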
Most storage software relies on 20-year-old algorithms for critical capabilities such as RAID, snapshots, and volume management. Vendors try to avoid software rewrites by adding more powerful processors, more RAM, and more drives to the storage system to get closer to raw per-drive performance. This approach drives up the cost of the system substantially and still does not dramatically improve storage media performance.
Other vendors, in an effort to deliver closer to raw per-drive performance, create proprietary software drivers loaded on field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs) to run the storage software. This further increases costs while also limiting customers’ flexibility. Furthermore, the more the customer scales out their all-flash architecture, the more complexity, network latency, and overall resource inefficiency are introduced. For instance, a single node may become a bottleneck depending on how I/O is distributed.
StorONE wrote its S1 software platform to let customers enjoy the latest all-flash, NVMe-flash, and other storage media innovations with greater flexibility and lower cost by opening the storage software choke point. Notably, StorONE rewrote core storage algorithms to make them more efficient and multi-threaded. Multi-threading lets a central processing unit (CPU) run multiple threads concurrently, spreading the processing load more evenly across cores; as a result, the system can start with a less powerful (read: less expensive) processor and requires fewer CPUs overall. The result is greater utilization of drive performance: more IOPS with less hardware.
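The multi-threaded pattern described above can be sketched in a few lines. This is a simplified illustration of the general technique, not StorONE's actual code: a pool of worker threads drains a shared I/O queue, so incoming requests are balanced across threads rather than funneled through a single one:

```python
# Simplified sketch of multi-threaded I/O processing (illustrative only):
# several worker threads pull requests from one shared queue in parallel.
import queue
import threading

def io_worker(requests: queue.Queue, results: list, lock: threading.Lock):
    """Pull I/O requests off the shared queue until a None sentinel arrives."""
    while True:
        req = requests.get()
        if req is None:               # sentinel: no more work for this thread
            break
        # Placeholder for real work: checksum, RAID parity, snapshot lookup...
        processed = f"done:{req}"
        with lock:                    # results list is shared, so serialize appends
            results.append(processed)

requests = queue.Queue()
results = []
lock = threading.Lock()
num_threads = 4                       # assumed core count, for illustration

workers = [threading.Thread(target=io_worker, args=(requests, results, lock))
           for _ in range(num_threads)]
for w in workers:
    w.start()

for i in range(100):                  # enqueue 100 simulated I/O requests
    requests.put(f"io-{i}")
for _ in workers:                     # one shutdown sentinel per worker
    requests.put(None)
for w in workers:
    w.join()

print(len(results))                   # prints 100: all requests processed
```

Because any idle worker picks up the next request, load stays balanced across cores automatically, which is the property that lets a modestly powered CPU keep many fast drives busy.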
With S1, customers can obtain up to 1.5 million IOPS, or 15 gigabits per second (Gbps) of throughput, with up to 368 terabytes (TB) of physical capacity in a 2U all-flash JBOD configuration. Buyers can start small with a few drives and scale to extremely high performance without adding expensive hardware, such as custom drivers and extra CPUs, that they do not need.
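Software efficiency translates directly into drive count. As a back-of-the-envelope sizing exercise (the per-drive rating and both efficiency figures below are illustrative assumptions, not vendor data), compare how many drives are needed to hit a 1.5-million-IOPS target when the software delivers 10% versus 90% of each drive's raw performance:

```python
# Back-of-the-envelope sizing with assumed numbers: drives needed to reach
# a fixed IOPS target at two different levels of software efficiency.
import math

target_iops = 1_500_000
per_drive_iops = 500_000            # assumed raw per-drive NVMe rating (hypothetical)

for efficiency in (0.10, 0.90):     # inefficient vs. efficient storage software
    delivered_per_drive = per_drive_iops * efficiency
    drives_needed = math.ceil(target_iops / delivered_per_drive)
    print(f"At {efficiency:.0%} efficiency: {drives_needed} drives")
# At 10% efficiency: 30 drives
# At 90% efficiency: 4 drives
```

Under these assumptions, the efficient software reaches the same target with a fraction of the drives, which is why starting small and scaling up is possible without over-buying hardware.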
Meanwhile, they retain the benefits of enterprise storage features, such as snapshots and per-volume redundancy, that are important for maximizing usable storage capacity and ensuring data protection.