Moving from an old network-attached storage (NAS) system to a new one is an all-too-frequent project for IT, so solving the pain of NAS migration is a top objective for data center professionals. The problem stems from the fact that NAS systems are now often the primary storage solution in the data center, supporting virtualized environments like VMware and database environments like Oracle or MS SQL.
NAS systems are, of course, still used for unstructured data storage like user files, backup data, and archives of older data. Additionally, unstructured data capacity requirements continue to grow at an alarming rate. This “mission creep” pushes NAS systems that are only two or three years old to the breaking point and sends IT searching for a better NAS solution.
In StorONE’s upcoming webinar, “How to Build a Better NAS,” we help IT professionals who are interested in solving the pain of NAS migration. During the webinar, we will examine the five biggest NAS challenges and provide insight into how to build a NAS solution that solves them. One of the biggest of these pains is NAS migration.
The Source of NAS Migration Pain
The number one NAS challenge that IT professionals face is how to replace or upgrade their current NAS system. The migration issue frequently arises because most NAS solutions can’t scale to keep up with capacity and performance demands. These systems eventually reach a point where further expansion is either impossible or doing so compromises performance and availability.
Hitting “the wall” means that IT must go through a storage migration process, one that is much more difficult than a block storage migration. In most data centers, the NAS system stores the majority of the organization’s data, and these systems may hold millions or even billions of files. Transferring this data set across TCP/IP using legacy SMB and NFS protocols is a time-consuming and error-prone process that only worsens each year as total capacity and file counts continue to increase.
The Cost of NAS Migration Pain
The reason IT teams invest time in solving the pain of NAS migration is the potential payoff, both professional and personal. If IT can create a better NAS solution that eliminates the need to ever perform another migration, the business outcomes are significant because migration costs are so high. First, IT needs to invest time it probably doesn’t have in researching and testing a potential NAS replacement. In today’s “do more with less” IT reality, most organizations don’t have labs and test cages to perform real-world system testing. Second, there is the hard cost of buying a second system and running two NAS systems in parallel. Research shows that the average cut-over time from an old NAS solution to a new one is six to eight months, and for most organizations the parallel operation will last more than a year.
The final concern is the time involved and infrastructure upheaval. IT must monitor the migration process constantly and step in when something interrupts the transfer. IT also needs to account for user profile changes to ensure they are correctly pointed at the new NAS solution. As the last step, IT must ensure that all backup and disaster recovery procedures are updated.
Eliminating these hard and soft costs means tremendous IT budget savings and massive savings of IT hours, enabling them to work on more important business-advancing projects. If IT never needs to manage a data migration project, it also means that data, users, and applications are never disrupted.
Workarounds for NAS Migration Pain
Storage vendors have, of course, tried to come up with workarounds to help IT solve the pain of NAS migrations. The most common is selling another NAS system before the current one hits “the wall.” Adding more NAS systems may forestall migrations, but at the cost of massive complexity. Organizations using this approach often have four or five NAS systems, each of which IT must manage individually. IT professionals must also constantly balance users and applications across the systems.
To simplify their multi-NAS environment, some organizations buy a global file system overlay that locks them into a specific vendor and adds another layer to learn and manage. Additionally, global file system add-ons are costly, sometimes more expensive than the NAS systems they manage.
Another potential workaround is scale-out NAS systems. These systems expand capacity and performance as IT adds nodes to the cluster. The problem is that instead of simply adding media to an inexpensive storage enclosure, IT is now buying compute, networking, and memory each time it needs to add capacity. Often, the nodes need to match each other, so moving to new higher-density or higher-performing media requires creating a new cluster that needs independent management and manual rebalancing of workloads.
Scale-out NAS costs may be predictable, but they are high because the customer never enjoys the full benefit of the upfront investment. The bigger problem is that scale-out systems are massive consumers of network ports, which are in very short supply; most customers tell us that lead times on additional network switches run eight months to a year.
In the end, IT will need to perform a NAS migration because of the limitations of NAS storage software. It is a complex and multi-step project.
How to Eliminate NAS Migrations
Solving the pain of NAS migration is simple: eliminate NAS migrations. The problem is that doing so requires rethinking storage software. The traditional storage software used in most storage systems is based on decades-old code that is not optimized for today’s storage hardware. It can neither extract the maximum performance of memory-based storage like flash drives nor mitigate the side effects of high-density hard disk drives.
These systems are also very rigid. They cannot mix in new technology as it comes to market, which would allow customers to leverage their original investment while taking advantage of innovations that keep pace with the ever-growing demands placed on NAS systems. The problem is that vendors take a shortcut to market by leveraging old code and adding a few features. Eventually, customers pay the price for these shortcuts, buying a separate storage solution for each use case and constantly migrating to new platforms.
StorONE took no shortcuts. It spent its first eight years rewriting storage I/O from the ground up. The result is an efficient storage engine that extracts the full performance of memory-based storage and allows the use of high-density hard disk drives without the downsides. The StorONE engine can scale to 20PB in a single two-node cluster. It can support storage technology of different types, densities, and even manufacturers, and it abstracts the physical hardware from the actual use case thanks to StorONE’s Virtual Storage Container (VSC) technology. VSCs enable use-case-specific quality of service (QoS) for performance and data resiliency. In the NAS use case, customers can create a flash-only VSC or a hybrid VSC using our advanced auto-tiering algorithm. Our vRAID technology enables customers to use 20TB hard drives in hybrid systems without concerns about week-long drive recoveries.
The StorONE Engine and VSCs enable customers to move all their NAS systems to a single StorONE instance and create a better NAS experience. Users and applications will benefit from increased performance and data resiliency, the organization will benefit from reduced costs, and IT will benefit from a return of many hours.
To learn more about building a better NAS, join us on Wednesday, October 12th, at 1:00 PM ET/ 10:00 AM PT for our live webinar, “How to Build a Better NAS.” During the webinar, we will review this challenge and the other four and explain why today’s NAS systems fall short. Then we will introduce you to a better NAS solution that can exceed these requirements and become the long-lasting foundation of the organization’s storage infrastructure.