A couple of weeks ago, after VMworld, I visited EMC’s headquarters and spent an hour with Dan Hushon, EMC Distinguished Engineer in the office of the CTO. In our discussion Dan showed me a graph that revolutionized my thinking about hybrid cloud service providers: people are going to public and hybrid clouds for the wrong reasons. And they are not realizing the savings that hybrid clouds could offer.
Before I get to that conclusion, let me share Dan’s graph on the changing price of infrastructure.
Compute power and storage capacity have exploded in recent decades. Network bandwidth has increased, but not nearly as quickly. These diverging trends mean that bandwidth optimization is becoming a higher priority in datacenter design.
If this trend continues, data locality (as in, “are my applications as close as possible to the source of their data?”) will dominate the next generation of multi-datacenter design. Applications should first be placed near their data’s source; the costs of compute infrastructure and data storage become secondary.
Today customers flock to EC2 for a perceived decrease in the cost of infrastructure. (Personally, I suspect off-premise application hosting is not as cheap as people believe. Most cost assessments ignore data transfer costs, the cost of outages, the risk of data loss, etc.) But as compute and storage costs continue to drop faster than bandwidth costs, people will no longer say “off-premise is cheaper”. They will say, “closer to the information source is cheaper”.
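To see why this crossover happens, here is a minimal back-of-the-envelope sketch. The starting prices and annual decline rates below are made-up illustrative assumptions, not figures from Dan’s graph; the point is only that when one cost falls faster than another, the slower-falling cost eventually dominates the total.

```python
# Hypothetical cost-trend sketch: compute/storage prices fall faster than
# bandwidth prices, so bandwidth's share of total cost grows over time.
# All starting prices and decline rates are illustrative assumptions.

def bandwidth_share(years, compute0=100.0, bandwidth0=100.0,
                    compute_decline=0.35, bandwidth_decline=0.10):
    """Bandwidth's fraction of total (compute + bandwidth) spend after
    `years`, given annual price-decline rates for each."""
    compute = compute0 * (1 - compute_decline) ** years
    bandwidth = bandwidth0 * (1 - bandwidth_decline) ** years
    return bandwidth / (compute + bandwidth)

for y in (0, 5, 10):
    print(f"year {y:2d}: bandwidth is {bandwidth_share(y):.0%} of total cost")
```

With these assumed rates, bandwidth goes from half of the total cost to the overwhelming majority within a decade, which is exactly the regime where data locality starts to trump raw infrastructure price.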
This means service providers can make money by selling (and reselling and reselling) access to common data. They can further differentiate with unique data management and manipulation services such as analytics. This is a much richer business proposition than the existing, Amazon-inspired “race to the bottom” of costs and margins.
By finding customers that want to use the same data, service providers will orient themselves to industry verticals. For instance, a social media service provider would ingest all of Twitter. It could then allow many customers to run analytics against the entire Twitter feed. Customers pay for the analytics and the reduced WAN access. Because service providers amortize their WAN costs across all their customers, they can charge a lower price for access to the Twitter feed than an individual customer could get. With access to this common data, SPs could provide unique, high-value, high-margin services that customers might not be able to build themselves.
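The amortization argument is simple arithmetic. In this sketch the monthly ingest cost and the customer count are hypothetical numbers I made up for illustration; the structure of the calculation is what matters.

```python
# Hypothetical WAN-amortization sketch: a provider ingests one shared feed
# once and resells access, splitting the fixed ingest cost across
# customers. Dollar figures and customer count are illustrative.

WAN_INGEST_COST = 50_000.0   # assumed monthly cost to pull the full feed
N_CUSTOMERS = 20             # assumed customers sharing the same data

solo_cost = WAN_INGEST_COST                   # each firm pulls its own feed
shared_cost = WAN_INGEST_COST / N_CUSTOMERS   # provider splits one ingest

print(f"per-customer cost, pulling the feed alone:   ${solo_cost:,.0f}")
print(f"per-customer cost, shared via the provider:  ${shared_cost:,.0f}")
```

Even before the provider adds any analytics margin on top, each customer’s access cost drops by the full factor of the customer count, which is the headroom the provider uses to price below what any one firm could achieve alone.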
Conceptually, this changes the hybrid cloud map. In the old map, three companies analyzing Twitter trends would independently pull feeds. They would process and store the data internally and farm out some small portion of their infrastructure to their hybrid cloud service provider.
In a model where bandwidth costs dominate, a service provider would pull the Twitter feed once. The provider’s customers would use its in-house services for analysis and return only the high-value results to the enterprise.
And this model would work across multiple industry verticals, not just social media. There are common data components to oil and gas, finance, retail, and many other verticals.
Bandwidth usage is considerably lower in the second model. The cloud provider has unlimited options for industry-specific service creation. And with that differentiation, service providers can grow a business with high value to customers and large returns to their investors.
The world is waiting for service providers to create these industry vertical “malls”. When that happens I think we will see the real promise of the hybrid cloud ecosystem.