The hyper scale customers are sourcing their core semiconductors directly from large vendors such as Intel, flash vendors, and network processing vendors. These servers are then assembled by large ODMs in Asia based on high density, low power specifications. Given the magnitude of the CAPEX investment needed to put enough compute power in place, the direct sourcing of semiconductors provides very large savings to these firms.
Large systems vendors like HP, Lenovo (following its acquisition of IBM's server business), and Dell on the server side, NetApp, HP, IBM, and EMC on the storage system side, and Cisco on the networking side are faced with a big dilemma: either ignore 50% of the market or accept significantly lower margins for this segment of the market.
The traditional business model of large system vendors, built for the enterprise and in place since the mid-1990s, is being fundamentally challenged. That model consists of sourcing components such as Intel processors, network processors (Cavium or Broadcom), and hard disk and flash storage subsystems (Seagate or Western Digital), and then integrating them with proprietary software. These vendors then sell the integrated product at prices with high gross margins.
But this business model is now being unbundled. Large semiconductor companies are rushing to supply Amazon and Google directly, and large Asian ODMs have developed healthy white box businesses. The question arises: will this be a repeat of the PC business model, where Microsoft and Intel captured all the industry profitability, or another iteration of the smartphone business model, where Apple, Samsung, and Qualcomm control the industry profitability?
A new computing paradigm
A new computing paradigm optimized for cloud and very high workloads is emerging. Dedicated dense servers such as the HP Moonshot server are the new norm. Storage capacity is tied more and more directly to the servers for either caching or analytical workloads. The purchasing power of these hyper scale data centers makes the price of flash storage low enough to replace hard disk based storage, and the dramatic I/O bandwidth of flash storage arrays is a must for efficient big data processing. The very large number of ports to be connected demands a new low cost switching architecture. The storage software stack and associated file system requirements are also fundamentally different from those used by the enterprise: given the massive scale-out requirements, cost and scalability more than reliability will be the design drivers.
Big data processing with Hadoop, which grew out of work Google initially did for its internal needs, is now a norm adopted by the whole industry. And this is only the beginning: new compute inventions developed by hyper scale data centers will form the foundation of the new cloud-computing paradigm.
The established system vendors HP, Lenovo-IBM, Dell, EMC, and Cisco are too slow to embrace this new computing paradigm, as they are held hostage by the hefty gross margins of the business model designed for the enterprise market. But the hyper scale data center market is too large to ignore, especially as white box vendors, storage subsystem vendors, and semiconductor vendors pursue aggressive penetration strategies for this new market.
The level of software innovation is massive at all levels of the stack, spanning storage, networking, and database. Companies such as Google, Facebook, and Amazon are dedicating large software engineering resources to address these needs from the ground up. But will such an approach be sustainable?
Hyper scale data center customers are eager to find companies that understand their needs and design the right products for hyper scalability and low cost. We believe that the disruption in the hyper scale data center creates a very good opportunity for building new technology companies — ones that are purpose built, from both a technology and a business model perspective, to meet the needs of this market. The deep intellectual property created by innovative startups can be shared with the top 20 hyper scale companies and can provide the foundations of the future cloud enterprise software optimized for such applications. This is an ideal opportunity for venture capitalists to leverage Silicon Valley talent to design new architectures that cross the computing, storage, and networking silos. We believe that startups will have a natural advantage given their ability to effectively mix expertise from different domains to create the new computing paradigm for the hyper scale data centers.
Eric Buatois, BGV general partner, shares his perspective on the hyper scale data center disruption.
The hyper scale data center market comprises companies such as Facebook, Google, Amazon, Alibaba, and Tencent as well as large CDN service providers, and represents 50% of the global server market. These firms aspire to influence a larger portion of the value chain, not only in servers but also in storage and networking.
Large cloud service providers (Amazon, Microsoft Azure, Google) are developing their own hardware and software solutions and will represent purchase volumes of greater than 50% of the worldwide storage capacity over the next few years. Each of these firms is buying more than 1 million servers per year and is expected to continue this rapid rate of consumption. The requirements for servers, storage, and networking of such customers are very different from those of the mainstream enterprise market. These customers require low cost, low power consumption, maximum storage capacity with minimum floor space, support for the specific file systems associated with Big Data, and highly efficient networking switches. They are moving rapidly to SSDs for their offline processing — basically Big Data reporting and analytic workloads. For example, eBay has recently decided to purchase only SSD storage, migrating away from hard disk based storage.
The computing industry is facing a silent revolution both at the technology and business model level.