Workload Modeling – Innovation In Infrastructure Performance Validation

Len Rosenthal, Vice President at Load DynamiX, a BGV portfolio company, shares his views on the workload modeling challenges in the modern data center.

The modern data center is evolving from a client-server deployment paradigm to a multi-tenant, virtualized cloud infrastructure managed as a service. By 2017, nearly twice as many applications will be deployed in cloud data centers as in traditional data centers. While the new architectures offer greater flexibility and potentially lower cost, they bring new challenges for performance assurance, especially for I/O-intensive applications.

In traditional data centers, applications are individually matched to a dedicated server/network/storage infrastructure. The I/O workload each application places on its supporting infrastructure is relatively static and well understood. Over-provisioning of infrastructure, especially storage, is rampant as a way to reduce performance risk. In these environments, application behavior itself is the primary performance risk, and IT managers approach performance assurance with solutions such as HP LoadRunner (formerly from Mercury Interactive) or other Application Performance Management tools.

By contrast, in the modern data center a constantly changing portfolio of applications shares a common multi-tenant infrastructure-as-a-service (IaaS), resulting in dynamic, random, and highly concentrated I/O workloads that are likely to push the infrastructure to its limits. As infrastructure expenditures have skyrocketed, over-provisioning is becoming untenable as a method of performance assurance. A poorly understood, increasingly stressed infrastructure becomes a primary performance and business risk. IT professionals need new insight into how their infrastructure behaves under the dynamic loads of their applications; with that insight, they can assure performance, mitigate risk, and optimize technology expenditures.
An innovative new approach, called storage workload modeling, provides this level of infrastructure insight and can be used to validate the performance of any networked storage solution. The process starts with an analysis of the I/O patterns generated as virtualized applications interact with the storage infrastructure. From these patterns, a "fingerprint," or I/O profile, is created and used to build a highly accurate simulation of the installed production workloads. The simulated workload can then be varied for "what-if" analysis and combined with a load generation appliance to find the breaking points of the infrastructure, ensuring that the most cost-effective products are deployed for the performance requirements of the dynamic workloads.

Workload modeling is the new key to infrastructure performance assurance. It provides the critical insight IT organizations need to predict how infrastructure will perform as applications and/or the infrastructure change. BGV portfolio company Load DynamiX, based in Santa Clara, CA, is focused on this problem. Its customers include dozens of G1000 companies that want to accelerate the roll-out of new products and services, eliminate infrastructure performance-related business interruptions, and cut their storage costs by over 50%. In the coming years, we expect every G1000 data center and nearly all cloud service providers to rely on workload modeling as a fundamental new IT process.
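To make the fingerprint-then-simulate idea concrete, here is a minimal sketch of what an I/O profile and a scaled "what-if" replay could look like. All names and parameters (`IOFingerprint`, `synthesize`, the OLTP-like numbers) are illustrative assumptions for this post, not part of any Load DynamiX product, which uses purpose-built load generation appliances rather than a script like this.

```python
import random
from dataclasses import dataclass

@dataclass
class IOFingerprint:
    """A hypothetical I/O profile captured from a production workload."""
    read_pct: float     # fraction of operations that are reads
    random_pct: float   # fraction of random (vs. sequential) accesses
    block_sizes: dict   # block size in bytes -> probability
    iops: int           # observed operations per second

def synthesize(fp: IOFingerprint, scale: float, n_ops: int):
    """Generate a synthetic operation stream from a fingerprint,
    with the target load scaled by `scale` for what-if analysis."""
    sizes, weights = zip(*fp.block_sizes.items())
    ops = [{
        "op": "read" if random.random() < fp.read_pct else "write",
        "pattern": "random" if random.random() < fp.random_pct else "sequential",
        "block_size": random.choices(sizes, weights=weights)[0],
    } for _ in range(n_ops)]
    return {"target_iops": int(fp.iops * scale), "ops": ops}

# Example: an OLTP-like fingerprint (read-heavy, mostly random, small
# blocks) replayed at 2x load to probe for infrastructure breaking points.
fp = IOFingerprint(read_pct=0.7, random_pct=0.8,
                   block_sizes={4096: 0.6, 8192: 0.3, 65536: 0.1},
                   iops=5000)
workload = synthesize(fp, scale=2.0, n_ops=1000)
print(workload["target_iops"])  # 10000
```

In a real deployment the fingerprint would be extracted from traces of the production storage network, and the synthesized stream would drive a hardware load generator against the storage system under test.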