With all the chatter coming from Redmond about the new storage features in Windows Server 2012, you might get the impression that RAID is dead. If that's true, what will you do with all the storage you already have configured with some version of RAID?
I put the question of the future of hardware RAID to the Microsoft Windows Storage team and got this response through its PR agency:
Windows 8 and Windows Server 2012 platforms fully enable 3rd party (hardware and software) RAID implementations. The Windows platform remains the premier platform enabling 3rd party value-added storage solutions and, therefore, will continue to support such RAID implementations. In fact, Windows Server 2012 includes new support for offloaded data transfers (ODX) with external storage arrays and for a new class of clusterable hardware RAID controllers.
Beginning with Windows 8 and Windows Server 2012, Windows delivers storage virtualization capabilities directly within the Windows software platform, including resiliency to storage component failures. This new platform storage stack includes innovations such as Storage Spaces and ReFS, and this platform will continue to evolve in future releases.
So how do you decide between RAID and JBOD for your new storage? The answer depends on the scenario. If you're planning a clustered configuration with Windows Server 2012 managing the storage, you'll want to go with JBOD and dual-ported SAS. That approach gives you maximum flexibility to architect a system that can implement all the new Windows Server 2012 storage features. If you plan to repurpose storage currently configured for RAID as JBOD, check with your hardware manufacturer first.
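If you're unsure whether existing disks would qualify, the new Server 2012 storage cmdlets report bus type and poolability per disk. Here's a minimal PowerShell sketch; the SAS filter reflects the clustering requirement above, and disks hidden behind a RAID controller typically won't show up as poolable:

    # List each local disk with the attributes that matter for pooling.
    Get-PhysicalDisk |
        Select-Object FriendlyName, BusType, CanPool, Size |
        Format-Table -AutoSize

    # Keep only poolable, SAS-attached disks for a cluster-ready pool.
    $sasDisks = Get-PhysicalDisk -CanPool $true |
        Where-Object BusType -eq 'SAS'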
If you want to take advantage of new features like offloaded data transfers, you'll have to go with a NAS or SAN solution with support for ODX. For workloads where you copy a lot of files, like creating new virtual machines from a baseline image, ODX offers the opportunity for huge performance gains. Most of the major storage vendors either support ODX now or have announced plans to support it.
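One nice thing about ODX is that there's nothing new to invoke: Windows negotiates the offload with the array, and an ordinary copy simply runs faster. As a quick sanity check (a sketch only; the copy paths below are placeholders), you can confirm the host side hasn't been switched off and then copy as usual:

    # 0 (or absent) means offload is enabled; 1 means it has been disabled.
    Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' |
        Select-Object FilterSupportedFeaturesMode

    # Between ODX-capable LUNs, this copy is offloaded transparently: the
    # array moves the blocks while the host exchanges only small tokens.
    Copy-Item -Path 'X:\Library\golden.vhdx' -Destination 'Y:\VMs\vm01.vhdx'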
If you're implementing standalone servers with direct-attached storage, you definitely want to look at Storage Spaces and ReFS. Together they make the most sense for a departmental or branch-office server where everything lives inside a single box. The combination gives you the most bang for the buck in terms of implementing the key Windows Server 2012 storage features, and it should work on most servers purchased in the last few years.
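Standing this up is a short exercise in the new storage cmdlets. Here's a minimal sketch; the pool and disk names are placeholders, not anything Windows requires:

    # Pool the server's spare internal disks into one Storage Spaces pool.
    New-StoragePool -FriendlyName 'BranchPool' `
        -StorageSubSystemFriendlyName 'Storage Spaces*' `
        -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

    # Carve out a mirrored space so a single disk failure is survivable.
    New-VirtualDisk -StoragePoolFriendlyName 'BranchPool' `
        -FriendlyName 'DataSpace' `
        -ResiliencySettingName Mirror -UseMaximumSize

    # Bring the new space online and format it with ReFS.
    Get-VirtualDisk -FriendlyName 'DataSpace' | Get-Disk |
        Initialize-Disk -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem ReFS -NewFileSystemLabel 'Data'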
New hardware purchases should be approached with an eye toward these scenarios. If you plan on building clusterable systems, you definitely want to look at SAS-based storage. The low cost of building a highly available, highly scalable file system on commodity hardware and Windows Server 2012 should make the big storage vendors nervous. It could also be a boon to vendors of the key building blocks: dual-port SAS controllers, 10GbE network cards and switches, and even Fibre Channel adapters.
The bottom line is that you have a lot of options. The rumors of RAID's untimely death are definitely premature. Don't be surprised if RAID hangs around for the foreseeable future.
The only thing dead about RAID is the marketing around it, or the lack thereof, as it has hit the proverbial plateau of productivity (for customers/users) and profitability (for vendors). RAID extends all the way into consumer products, including some DVRs and home appliances, and continues to evolve.
However, as others have indicated, it's more about options, extensibility, and what works best for a given situation.
tinym - Yes, and a friend of mine sold a two-terabyte solution to a government customer. He had a picture of himself standing in front of the racks and racks of equipment it took to put it together, holding a commission check that would have paid for a nice car. In today's market, like so many technologies, that kind of storage has become commoditized. What I see MS doing is creating an embedded tool that's disruptive, aimed more at building a developer business than an end-user business.
It sure seems the issue of storage is getting more complicated by the day. Or is it just me? But I am glad to hear RAID will stick around for a while longer; still trying to remember by heart which configuration is best! :)
I think the core backups of critical data that changes on the fly are where we want to stick with something that has been working for a while, has proven efficiency, and whose quirks our people actually know. I am glad that MS has been foresighted on this issue. It would be nice if we could all afford to post our data to a mobile backup for a weekend and move all new equipment in, but even if we could, there are the people who use all this to consider. Even a genius can only manage a finite number of changes simultaneously, and I know places where the real big brains (scientific think tanks) do not want any grains of sand moved in their sandbox. The more different jobs your people do, the more stable any IT environment must be. Besides, whatever happened to "Don't fix what ain't broke"?