Back in 1940, the US Army needed a vehicle. They put out an RFP, two companies (out of 130 invitees) responded, and the ultimate result was the Jeep. Roughly a million units later, the second world war had ended and a new "car" went from a single set of customers -- military organizations -- to one of the most recognizable multi-purpose vehicles in the world. It's amazing what a functional, low-cost design can do when it's let loose upon the world.
The basic philosophy of the Jeep, washed through the open-source software movement of the last 20 years, has given the industry the Open Compute Project, the process by which Facebook is designing and deploying the servers in its new datacenters and then giving those designs to the rest of the world. Now, the company has announced that it's moving past servers and datacenters into storage design, and the potential for market disruption seems great.
According to an article on Wired.com, the design considerations for the Open Compute Project storage section extend down to the size and placement of screws in hot-swappable storage units. For enterprise customers, that kind of attention to storage, servers, and datacenters is almost certainly a good thing, but it raises very real questions about the impact "open standard" hardware might have on competitive advantage and vendor relationships.
We can admit from the beginning that very few enterprises are big enough to have a significant voice in the design of the hardware they purchase. In most cases, the advantage an enterprise can derive from a particular vendor's hardware boils down to how easily systems can be configured to support its software and surrounding hardware. If significant numbers of vendors begin offering Open Compute Project-derived designs, those basic advantages won't go away. The case can be made, in fact, that configuration, integration, and support will become even more critical in the Open Compute Project world because vendors will be able to devote more time to those aspects of a total system deployment.
When vendors compete with one another on configuration, integration, and support, relationships tend to strengthen rather than weaken, and Open Compute Project designs should help remove some of the "vendor lock-in" fear that weighs on many CIOs. Given that storage is one of the areas in which the basic building blocks are already reasonably well-standardized, Open Compute Project definitions will represent an extension of the current way of doing business, not a wholesale change in the industry business model.
We've already seen a bit of this evolutionary change in open projects like OpenStack, a cloud operating system being developed in an open model. Some vendors, such as Dell, have signed on to both projects (and others) to develop products that meet multiple design specs. For enterprise customers, building new datacenters may become a process of choosing a standard, working with a vendor on systems that meet the specs, then turning internal developers loose on applications and utilities that draw on the accumulated wisdom of the open community's participants. That is, indeed, a new model for datacenter rollout, but it's one that has real promise for reducing costs and increasing creativity and productivity. It's hard to see a lot of losers in the transition.