Back in 1940, the US Army needed a vehicle. It put out an RFP, two companies (out of 130 invitees) responded, and the ultimate result was the Jeep. Roughly a million units later, World War II had ended and a new "car" had gone from serving a single set of customers, military organizations, to being one of the most recognizable multi-purpose vehicles in the world. It's amazing what a functional, low-cost design can do when it's let loose upon the world.
The basic philosophy of the Jeep, filtered through the open-source software movement of the last 20 years, has given the industry the Open Compute Project, Facebook's initiative to design and deploy the servers in its new datacenters and then give those designs to the rest of the world. Now the company has announced that it's moving past servers and datacenters into storage design, and the potential for market disruption seems great.
According to an article on Wired.com, the design considerations for the Open Compute Project's storage effort extend down to the size and placement of screws in hot-swappable storage units. For enterprise customers, this attention to storage, servers, and datacenters is almost certainly a good thing, but very real questions arise about the impact "open standard" hardware might have on competitive advantage and on relationships with vendors.
We can admit from the beginning that very few enterprises are big enough to have a significant voice in the design of the hardware they purchase. In most cases, the advantage an enterprise can derive from a particular vendor's hardware boils down to how easily systems can be configured to best support the surrounding software and hardware.
If significant numbers of vendors begin offering Open Compute Project-derived designs, those basic advantages won't go away. The case can be made, in fact, that configuration, integration, and support will become even more critical in the Open Compute Project world because vendors will be able to devote more time to those aspects of a total system deployment.
When vendors compete against one another on configuration, integration, and support, relationships will tend to strengthen rather than weaken, even as Open Compute Project designs help remove some of the "vendor lock-in" fear that weighs on many CIOs. Given that storage is one of the areas in which the basic building blocks are already reasonably well standardized, Open Compute Project definitions will represent an extension of the current way of doing business, not a wholesale change in the industry's business model.
We've already seen a bit of this evolutionary change in open projects like OpenStack, a cloud operating system being developed in an open model. Some vendors, such as Dell, have signed on to both OpenStack and the Open Compute Project (among others) to develop products that meet multiple design specs.
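To make the lock-in point concrete, here is a minimal sketch of application code written against OpenStack Swift's object-storage API using the python-swiftclient library. The auth URL, credentials, and container name are hypothetical placeholders; the point is that the same calls should work against any vendor's Swift-compatible deployment, so switching suppliers shouldn't mean rewriting the application.

```python
# Minimal sketch: vendor-neutral object storage via OpenStack Swift.
# The auth URL, account, and key below are hypothetical placeholders;
# any Swift-compatible deployment should accept the same calls.
from swiftclient.client import Connection

conn = Connection(
    authurl="https://storage.example.com/auth/v1.0",  # hypothetical endpoint
    user="account:user",
    key="secret-key",
)

conn.put_container("backups")  # create (or reuse) a container
conn.put_object(
    "backups", "report.txt",
    contents=b"quarterly numbers",
    content_type="text/plain",
)

headers, body = conn.get_object("backups", "report.txt")
print(body)  # b'quarterly numbers'
```

Because the storage behind that endpoint could come from any vendor implementing the open spec, code like this survives a change of supplier untouched, which is precisely the kind of flexibility open designs promise.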
For enterprise customers, building new data centers may become a process of choosing a standard, working with a vendor on systems that meet the specs, then turning internal developers loose on applications and utilities that draw on the accumulated wisdom of the open community's participants. That is, indeed, a new model for data center rollout, but it's one that has real promise for reducing costs and increasing creativity and productivity. It's hard to see a lot of losers in the transition.
@Taimoor- I would be very surprised if Facebook started making or selling hardware in the next 10-20 years. They have no intellectual property around chips or hardware of any kind, no engineering team that I'm aware of that would make that possible in the future, and no manufacturing partnerships in place.
They will diversify (perhaps into search, ads off of Facebook, retail, or software development), but I just don't see building that kind of capability being the easiest or most lucrative choice.
I do think that the idea of a few standards in the way we think of datacenters will help you "rent" the Jeep instead of owning it. I look at it like leasing a car. You can have one cheaper if you lease, but you lock yourself into a few things: having to worry about how much you use it, not breaking it, and what you're going to do with it when the lease is up.
Plus, if you own, you can stretch the mileage until you can afford a new one. If you lease, you are locked into a new lease (or walking) in a few years.
I second that as well. I've seen numerous examples in my career where vendors were given preferential treatment based on the personal relationships of the project stakeholders. But normally such a thing is taken pretty lightly within organizations.