If storage software can take full advantage of today's high-performance flash media, a write-cache is unnecessary. Yet the majority of storage systems still use DRAM to create a write-cache to improve their performance and hide the inefficiency of their storage software. In reality, a write-cache, instead of increasing performance, adds complexity and latency.
Problems Caused by Write-Caching
Using a write-cache to hide the poor performance of the storage software causes problems for storage system designers and puts data at risk of loss. The first problem is that the write-cache acknowledges a successful write operation before the data is written to persistent storage media. The second problem is that in the event of a power failure or server crash, all the data in the RAM used for the write-cache is lost, with no way to recover it.
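The first problem shows up in any ordinary buffered write path. The sketch below (illustrative Python, not StorONE code) demonstrates why an acknowledgment that arrives before the data reaches persistent media is unsafe: the write call returns while the data is still only in volatile memory, and only an explicit fsync forces it onto the media.

```python
import os
import tempfile

# Illustrative sketch, not StorONE code: a buffered write() "succeeds" as soon
# as the data lands in volatile memory. A power failure between write() and
# fsync() would lose the record, even though the caller was told it was safe.
path = os.path.join(tempfile.mkdtemp(), "journal.log")

with open(path, "wb") as f:
    f.write(b"record-1\n")   # acknowledged here, but only buffered in RAM
    f.flush()                # moves Python's buffer into the kernel page cache
    os.fsync(f.fileno())     # only now is the data forced to persistent media

with open(path, "rb") as f:
    print(f.read())          # b'record-1\n' -- durable only after the fsync
```

Any system that acknowledges at the first step, instead of after the fsync, is making a promise it cannot keep through a power loss.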

This forces vendors to add complicated redundancies and processes to avoid data loss. To protect data, vendors use methods such as mirroring RAM contents to another storage node, which delays the write acknowledgment until both controllers confirm they have received the data. This approach also requires an HA configuration, making it more difficult to create an active-active cluster, since the system software must now manage complex issues like cache-coherence and, on failure, “split-brain.”
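A toy model of this mirroring approach (a simplified sketch assuming a two-controller mirror; not any vendor's actual protocol) shows why it delays every write: the host's acknowledgment is held back until both controllers' caches confirm receipt, adding a round-trip to each operation.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy two-controller mirror (an assumption for illustration, not any vendor's
# real protocol). The lists stand in for each controller's RAM write-cache.
local_cache, peer_cache = [], []

def cache_write(cache, data):
    cache.append(data)   # copy the data into this controller's RAM cache
    return True          # confirmation returned to the primary controller

def mirrored_write(data):
    # Send the write to both controllers in parallel; the host's
    # acknowledgment is withheld until BOTH confirmations arrive.
    with ThreadPoolExecutor(max_workers=2) as pool:
        acks = list(pool.map(cache_write, (local_cache, peer_cache), (data, data)))
    return all(acks)

print(mirrored_write(b"record-1"))   # True -- but only after a round-trip
```

Even in this toy form, the acknowledgment still attests only that two copies exist in volatile RAM, not that the data is on persistent media.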
Other vendors use some type of non-volatile RAM, often backed by a battery or capacitor, to hold data in the event of a power loss. However, they still need HA to protect against server failure, so they still must deal with high-speed mirroring and cache consistency. These protected memory solutions are expensive and usually very limited in capacity, which increases the chances of a cache miss and limits how well the software can “organize” writes before pushing them to persistent media. This type of memory also typically delivers lower performance than traditional RAM.
Write-Caching Increases Costs and Latency
These methods significantly increase overall storage costs and management complexity, adding overhead and latency that erode the very performance gains the write-cache was meant to provide. They also make it challenging to maintain cache-coherency and to ensure that the storage software uses the correct version of the cache during a failure.
All of this results from using 20-year-old storage software and IO stacks. These legacy designs are unable to take full advantage of the latest innovations in storage media and networking.
DirectWrite Solves the Write-Caching Problem
Eight years ago, StorONE analyzed existing storage system software and the Linux IO stack used to write to and manage storage media. We then rewrote vital algorithms from the ground up and flattened the IO stack into a single efficient layer, which we call the StorONE Storage Engine. DirectWrite, a key feature of the StorONE Storage Engine, eliminates the need for a write-cache by writing directly to the storage media faster than our competitors can complete their write-cache routine. DirectWrite ensures no data loss in the event of a power failure or storage controller outage while delivering better performance.

The StorONE Storage Engine also provides robust and efficient system software that allows us to write directly to storage without impacting performance. DirectWrite works even while other important storage features, like snapshots and media failure protection (vRAID), are active. It also means that our memory requirements are far lower than those of competing systems; in most cases, 256GB is sufficient for all our operations.
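The write-through idea behind DirectWrite can be approximated with standard OS primitives. This is a hedged sketch on POSIX/Linux — the flag, file name, and sizes are illustrative and say nothing about StorONE's internals: opening a file with O_SYNC makes every write block until the data is on persistent media, so the acknowledgment the caller receives effectively comes from the media itself rather than from a volatile cache.

```python
import os
import tempfile

# Hedged sketch (POSIX/Linux; illustrative only, not StorONE internals):
# with O_SYNC, os.write() does not return until the data, and the metadata
# needed to retrieve it, reach persistent media. The acknowledgment itself
# proves the data is durable, so no separate write-cache is required.
path = os.path.join(tempfile.mkdtemp(), "volume.dat")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
try:
    written = os.write(fd, b"block-0042")  # returns only after persistence
finally:
    os.close(fd)

print(written)   # 10 bytes acknowledged, already durable
```

A naive synchronous path like this is normally slow, which is exactly why vendors reach for a write-cache; the claim in the text is that the StorONE Storage Engine makes the direct path fast enough that the cache is no longer worth its complexity.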
DirectWrite Eliminates Write-Caching HA Complexities
DirectWrite greatly simplifies the StorONE Storage Engine's high-availability implementation, since it does not need to be concerned with cache-coherency or “split-brain” issues. This simplicity stems from the fact that every node in a cluster has direct access to the storage media, so there is no need to synchronize cache memory. Removing this overhead from the S1 design improves performance by eliminating complexity that other vendors' products must manage.
The DirectWrite Performance Advantage
While a write-cache is supposed to increase performance significantly, it is crippled by inefficiencies and complexities in its implementation, coupled with obsolete system software and an aging IO stack. The result is higher system overhead and latency, both of which significantly impact performance. There is also the added cost of RAM and non-volatile RAM, which leads to small caches, increasing the chances of a cache miss and further hurting performance.
However, DirectWrite eliminates all the overhead of managing a write-cache without impacting performance, thanks to the StorONE Storage Engine’s inherent efficiency. DirectWrite adds to the efficiency of the S1 platform, which allows S1 to deliver 80% to 90% of raw drive performance.
Conclusion
DirectWrite is faster than a speeding write-cache because it eliminates the cache itself, along with its costly overhead.
The DirectWrite feature is made possible by the StorONE Storage Engine, the foundation of the S1 platform. It is a crucial feature that delivers maximum write performance while ensuring a very high level of data integrity. With DirectWrite, the application receives the write acknowledgment from the persistent media, not from a cache that must later move the data. Generating the write acknowledgment from persistent media guarantees the highest level of data integrity; without first ensuring data integrity, all other advanced data protection capabilities are essentially worthless.