In the last part of a three-part blog, Sonal Puri, CEO of Webscale (a BGV portfolio company) shares the company's vision, how its cloud application delivery platform is differentiated in the market and its move to the broader mid-market.

WEB ONLY, CLOUD FIRST

We have a saying at Webscale – "Deliver, no matter what." It speaks to the laser focus we've had since the company was founded: to deliver an amazing web application user experience to every one of our customers, regardless of the situation. For our core target of mid-market e-commerce, those situations can vary greatly. Maybe they're experiencing a major surge in traffic caused by a successful marketing promotion, or maybe their sudden perceived popularity is not so positive and comes in the form of a DDoS attack designed to take their site down. Whatever the circumstances, our promise to our customers is that we have their back, and their site will showcase the highest performance, availability and security that we can deliver, every day, no matter what.

In addition to this is our commitment to delivering the robust feature set of our cloud-based application delivery platform with a level of simplicity that has previously eluded this segment. What do we mean by simplicity? It's ease of use, first and foremost, and that starts with getting your critical web applications migrated into the cloud, with as little effort as possible, as software-defined infrastructure. This auto-provisioning methodology means there is no need to re-write, saving massive amounts of time and resources, nor is there any need to lift-and-shift and use only a subset of the cloud's capabilities. Once you're deployed in the cloud, Webscale's automated technology stack manages the rest – from predictive auto-scaling in the event of a traffic surge, to content optimization and caching that ensure fast page load times, to a powerful web application firewall that will automatically block malicious attacks and apply rules to prevent any loss of business or corporate reputation. That simplicity continues with easy monthly billing and proactive support that identifies and resolves issues often before they're even known, and certainly before they cause disruption. At the end of the day, it's peace of mind, and it's one of the most important things we bring our customers.

Make no mistake – the mid-market e-commerce segment is no slouch when it comes to its demands on a web application infrastructure. Flash sales, viral events and seasonal fluctuations make sudden changes in traffic commonplace, and when your customer is likely to go to a competitor if your site takes more than three seconds to load, there is zero tolerance for performance or availability issues. For these reasons, e-commerce has been an excellent foundational segment for Webscale to target, and tackling these challenges has contributed to the development of a number of features that we uniquely enable in the application delivery segment. It's one of the reasons that Webscale was recently named a Top Innovator in cloud application delivery by research firm IDC, which cited simplicity as one of our key differentiators in the space. Based on our own experience working on large-scale enterprise network and application deployments, we predict that more than 90% of enterprise applications will be HTTP/S-based by 2020.
With its core expertise built around the delivery of web-based applications, Webscale is in the right place at the right time, with a mature platform designed to address the performance, availability and security issues that web-based applications will face when leveraging the public cloud. From migration to deployment and simple ongoing management, Webscale has become a true partner to businesses wanting to deliver world-class web applications that not only delight their users, but truly use the cloud the way it was meant to be used – as a powerful, utility-style computing platform with infinite resources, not just a static and oversized datacenter.
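To make the idea of predictive auto-scaling concrete, here is a minimal, hypothetical sketch of the kind of decision logic involved – watch recent request rates, extrapolate the short-term trend, and provision capacity before the surge arrives. The class names, thresholds and numbers below are illustrative assumptions, not Webscale's actual implementation.

```python
import math
from collections import deque

# Hypothetical sketch of predictive auto-scaling; all names and numbers are
# illustrative assumptions, not taken from Webscale's product.

REQUESTS_PER_SERVER = 500      # assumed capacity of a single web server (req/min)
SCALE_AHEAD_MINUTES = 10       # provision ahead of the predicted surge

class PredictiveScaler:
    def __init__(self, window: int = 15):
        self.samples = deque(maxlen=window)   # recent requests-per-minute samples

    def record(self, requests_per_minute: float) -> None:
        self.samples.append(requests_per_minute)

    def predicted_load(self) -> float:
        """Linear extrapolation of the recent traffic trend."""
        if len(self.samples) < 2:
            return self.samples[-1] if self.samples else 0.0
        slope = (self.samples[-1] - self.samples[0]) / (len(self.samples) - 1)
        return self.samples[-1] + slope * SCALE_AHEAD_MINUTES

    def desired_servers(self) -> int:
        """How many servers should be running SCALE_AHEAD_MINUTES from now."""
        return max(1, math.ceil(self.predicted_load() / REQUESTS_PER_SERVER))

if __name__ == "__main__":
    scaler = PredictiveScaler()
    for rpm in [800, 950, 1200, 1600, 2100]:   # a marketing promotion ramping up
        scaler.record(rpm)
    print(scaler.predicted_load(), scaler.desired_servers())
```

In a real platform this decision would feed an orchestration layer that actually launches the instances; the point of the sketch is simply that scaling ahead of the curve, rather than reacting to it, is what keeps page load times flat during a surge.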

In part II of a three-part blog on cloud computing, Sonal Puri (CEO, Webscale Networks) and Anik Bose (BGV General Partner) share their perspective on the next big cloud disruption – the application delivery controller market. (Our first blog on this topic can be viewed at http://benhamouglobalventures.com/2016/11/10/cloud-computing-adoption-and-challenges/)

Have you ever wondered why we still hear about websites crashing when too many people try to get in? Stories were rife during the recent 2016 Black Friday and Cyber Monday events, with big names like Old Navy, Macy's and Walmart all experiencing availability issues due to surge traffic, even though these businesses have been around for decades and have adequate budgets to support their needs. Part of the problem is that many companies are still using traditional hosting and networking solutions for the scale, security and management of their websites and web applications, instead of the cloud. The other part of the problem is a lack of expertise in bringing all the disparate pieces of the solution together to solve for the big picture. Lying in many data centers and server rooms today is an appliance we have all heard about (hardware, software, virtual or otherwise): the application delivery controller (ADC). During periods of high demand, like Cyber Monday, an ADC is expected to distribute incoming website visitors across multiple servers until they are at capacity. The ADC space is set to grow into a $2.9 billion global market by 2020 – but what's driving this massive potential is certainly not an appliance.

Traditional ADCs – are they even worth it?

Anyone who's worked in a small-sized business knows that building a traditional server deployment with an old-fashioned ADC is expensive, time-consuming and challenging to manage. When an ADC and its associated functions get converted to a SaaS-based utility, everything changes. The deployment is a service, managed by the software vendor rather than by the company that needs the features but not the associated headache. The need to constantly buy licenses goes away, and scaling out is automated, instant and cost-effective, versus installing more physical servers that may sit dormant for a large percentage of the time when not running at peak.

Old dogs can learn new tricks? Not likely

Traditional ADC vendors such as F5, Radware and Citrix are moving to embrace the cloud and stay relevant in the ADC business, but their entire business model has yet to pivot from the approach of deploying hardware, and their sales and go-to-market models are not suited to the new world. While these companies may feel like a safe bet, and their cloud vision may seem compelling and logical, the ADCs still have to be installed for a cloud deployment and the customer still needs to support them. The parallels with the storage, WAN optimization and caching markets, which have been replaced almost entirely by SaaS services, are difficult to ignore. Layer 4-7 functionality is moving to the cloud quickly.

What the cloud can really do

It is no secret that the cloud offers infinite scalability, but an ADC delivered as a utility-type service, truly built for the cloud, offers much more. Traditional ADCs are missing out on content optimization and the ability to offer analytics for customer insights.
A built-in-the-cloud ADC solution like Webscale can detect changing requirements (sense) and respond to those requirements (control) – monitoring a customer's web traffic and infrastructure and resolving issues before they cause disruption. This can be anything from improving page load times to a full scale-out because of a sudden surge in traffic. There are many stories of Webscale customers that have experienced unplanned scale-out events just like this. As a vendor, the benefit of a service is constant feedback and improvement. The ADC market remains challenged because vendors get limited feedback from their customers. If you don't touch your ADC appliances (either physical or cloud-based) after a customer purchases them, how can you know what your customers are doing with your solution and whether your future enhancements are on track?

Isn't it time to think outside the ADC box?

Despite these trends and ample proof points, some organizations aren't being swayed to adopt the cloud, or services that make the cloud more efficient, even when, in today's rapidly moving, always-on world, any enterprise can be subject to a sudden traffic overload that traditional hosting solutions can't keep up with. Organizations should be asking themselves the following question: is it better to worry about and maintain our own systems and ADCs, or to use that time and money to focus on growing the business? The answer should always favor whatever route facilitates laser focus on one's business and leaves everything else to service models available as a utility. And that is why old-school ADCs are such prime targets for disruption from the cloud.
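As a rough illustration of this "sense and control" loop, the sketch below probes back-end health, routes requests round-robin across the healthy pool, and signals a scale-out when that pool is saturated. Everything here is an assumption made for the example – the backend addresses, the /healthz convention and the thresholds – and it is not Webscale's code.

```python
import urllib.request

# Illustrative sketch of ADC-style "sense and control" in software.
# Backend addresses, health-check path and thresholds are assumptions.

BACKENDS = ["http://10.0.0.11:8080", "http://10.0.0.12:8080", "http://10.0.0.13:8080"]
MAX_INFLIGHT_PER_BACKEND = 100
_rr_index = 0

def is_healthy(backend: str) -> bool:
    """Sense: probe an assumed /healthz endpoint on each backend."""
    try:
        with urllib.request.urlopen(backend + "/healthz", timeout=1) as resp:
            return resp.status == 200
    except OSError:
        return False

def route(requests_in_flight: int):
    """Control: pick a healthy backend round-robin, or ask for more capacity."""
    global _rr_index
    pool = [b for b in BACKENDS if is_healthy(b)]
    if not pool or requests_in_flight > MAX_INFLIGHT_PER_BACKEND * len(pool):
        return None, "scale-out needed"        # hand off to the provisioning layer
    backend = pool[_rr_index % len(pool)]
    _rr_index += 1
    return backend, "ok"
```

A hardware appliance performs the same layer 4-7 functions, but only a service that the vendor operates continuously can close the feedback loop described above.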

In this first part of a three-part blog, Anik Bose (BGV General Partner) shares his perspective on cloud computing and the challenges of moving enterprise applications into the cloud. Cloud computing adoption in enterprises is being driven by the powerful benefits of CAPEX and OPEX reduction. This represents a paradigm shift where IT resources and services are abstracted from the underlying infrastructure and provided on demand, and at scale, in a shared, multi-tenant and elastic environment. The National Institute of Standards and Technology (NIST) definition includes:
  • A pay-as-you-go model with minimal or no initial costs
  • Usage-based pricing, so that costs are based on actual usage
  • Elasticity, so that users can dynamically consume more or less resources
  • Location independence, high availability, and fault tolerance
  • Ubiquitous access to services from any location and any form factor – whether infrastructure as a service (IaaS), an application deployment platform with application services such as databases (platform as a service, PaaS), or subscription-based software applications (software as a service, SaaS)
Challenges

However, even for companies that want to be "all-in" when it comes to cloud adoption, it's not always possible, because legacy applications, security/privacy and many other issues can keep portions of the IT infrastructure and applications on-premise. As a result, some enterprises are choosing to build a private cloud – enterprise IT infrastructure services, managed by the enterprise, with cloud computing qualities. Enterprise IT teams also need to balance performance, compliance, interoperability and compatibility to decide which enterprise applications and workloads make sense in the cloud, which ones should stay local, and when a hybrid cloud or private cloud is the best fit. Sometimes, depending on the type of application (and whether SaaS-based alternatives exist), it is worth considering if a SaaS alternative can meet both business and technical needs. Such a change is no longer an application migration but rather a replacement of the existing application with a SaaS option.

New requirements

The strong momentum of applications migrating to the public cloud and the adoption of SaaS applications, coupled with internal applications becoming increasingly web-enabled, are creating new requirements for how enterprise applications are delivered and managed, including:
  • The need for workloads/applications to be cloud agnostic
  • Ease of manageability for multiple applications across infrastructures
  • Building resiliency within and across disparate clouds
In our next blog we will elaborate on the demise of traditional approaches to managing and delivering application performance in the cloud computing era.

Alon Dulce, BGV Intern (and MBA graduate from the Kellogg School of Management) shares his perspective on the next phase of IT infrastructure innovation around containers. The adoption of containers is poised to become one of the most disruptive trends impacting IT infrastructure since virtualization. Companies are adopting containers at an exponential rate, especially large enterprises where data center costs are a significant portion of the bottom line (see chart below).

Docker

Docker has been the pioneer and poster child for this trend. Some pundits claim that containers will disrupt traditional virtualization software, while others suggest that containers and traditional virtual machines will coexist alongside each other – what is the truth, and what are the implications for the next phase of innovation in this area?

The Basics

A container is a software package that bundles together the entire runtime environment needed to execute an application: the application itself, its dependencies, libraries and other binaries, and the configuration files to run it. By using a container, differences in OS distributions and underlying infrastructure are abstracted away. This has several benefits:
  • More efficient use of resources than VMs – the claim is somewhere between a factor of 4 and 6 (read: fewer servers = less $$ on servers and therefore on data center power costs)
  • Faster deployment times, which lead to lower costs (whether we're talking about cloud or on-premise resources) and an environment in which applications are easy to manage and deploy (read: easy management = less $$ on maintenance)
  • Finally, since containers are easy to use and lightweight, it's very easy for anyone to develop an app, upload it to the cloud in a container, and get instant application portability across devices and operating systems by using ready-to-run containerized applications from the cloud. This may allow the elimination of platform-specific development in the near future (read: less $$ in duplicate development efforts to accommodate several devices/OSs).
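As a deliberately minimal illustration of that workflow, the sketch below drives the standard docker CLI from Python to package an application and run it anywhere a container engine is installed. The image name, port and Dockerfile location are assumptions made for the example, not references to any specific product.

```python
import subprocess

# Minimal sketch: package an app into a container image and run it.
# Assumes Docker is installed locally; "myshop-web" and port 8080 are illustrative.

def build_image(tag: str = "myshop-web:latest", context: str = ".") -> None:
    # The Dockerfile in `context` bundles the app, its libraries and its config.
    subprocess.run(["docker", "build", "-t", tag, context], check=True)

def run_container(tag: str = "myshop-web:latest") -> None:
    # The same image runs unchanged on a laptop, in a data center, or in the cloud.
    subprocess.run(["docker", "run", "-d", "-p", "8080:8080", tag], check=True)

if __name__ == "__main__":
    build_image()
    run_container()
```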
Adoption

We believe that the extent of adoption of containers will be driven by factors such as:
  • Standardization – This is being tackled by Docker (through its sheer popularity and its collaborations with Google, Red Hat and other players on the open source libcontainer, which make it a de facto standard for Linux-based containers). Google is also moving from its own lmctfy library to libcontainer.
  • Management – Container management tools are not yet competitive with VM management tools such as VMware's vCenter or Microsoft's System Center, which can be used to manage virtualized infrastructure. The trend to look for in containers is governance, management and monitoring of container "farms", similar to how multiple VMs are managed in the data center. Elastic load balancing, performance monitoring, failover management and auto-scaling sit as part of this management ecosystem, providing end-to-end deployment and management capabilities to end users.
  • Security is a huge concern for many enterprises – since containers share an OS and many binaries, and many applications and containers run with superuser privileges, a compromised container can spread to the OS and beyond. Gartner recommends running de-privileged containers, or containers inside a VM, where there are security concerns (a short sketch follows this list). We see many companies trying to address security holes, but a lot of organic effort is also underway among container providers themselves. As a result, "security for containers" may not be a viable standalone business, and may instead become part of the management platform.
  • OS – Another limitation of the majority of container solutions is that they are almost all Linux-based, while many enterprises require Windows-based development, data centers, etc. ContainerX just launched the first Windows Container-as-a-Service platform, and there's room to grow there.
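The de-privileged containers mentioned in the security bullet above can be illustrated with standard docker run options (a hedged sketch; the image name is again the hypothetical one from the earlier example):

```python
import subprocess

# Sketch of running a de-privileged container, per the recommendation above.
subprocess.run([
    "docker", "run", "-d",
    "--user", "1000:1000",   # do not run as root inside the container
    "--cap-drop", "ALL",     # drop all Linux capabilities
    "--read-only",           # keep the root filesystem immutable
    "myshop-web:latest",
], check=True)
```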
Furthermore, it is likely that containers will not replace virtual machines in all use cases. Many small and large-scale enterprises are adopting containers, but this doesn't mean they're neglecting VMs. Containers, at least today, have a different scope than VMs – they're specifically tailored to run a single application with fewer resources than a VM. This means that enterprises are using containers in addition to VMs to maximize processing power when running multiple applications or when deploying applications (such as the Google product suite mentioned before). We also see some enterprises deploying containers within VMs.

Next Innovation Opportunity

Containers are here, and the interesting question now is where the next innovation opportunity lies, i.e. which container-related areas make sense for innovation and future VC funding. Looking at the evidence above, two themes emerge. First, containers for different operating systems – with a focus on Windows. In the Linux sphere, there are already several leading startups backed by significant players (Docker with Google, Red Hat, Parallels, etc.). Second, and more important, is the technology that will allow containers to mature and compete with VMs – security, management and analytics tools such as Twistlock, Panamax.io, Containn, Galactic Exchange, Pachyderm.io and others. This is an area where we believe there is immense future innovation potential. In conclusion, while Docker was the pioneer in the container space, we believe there is more innovation still to come, albeit in different shapes and forms from other startups – innovation necessary to drive broader enterprise adoption in production systems beyond DevOps.

Hewlett Packard is very bullish on Network Functions Virtualization, and it acquired Israeli company ConteXtream to accelerate the "journey" of its clients to the cloud. In the past, every time a telecom services provider like a cable company or Internet service provider wanted to upgrade its routers, load balancers or firewalls, it had to purchase new hardware. But in an age of increased network traffic and declining margins, telecom companies need to be more agile. As a result, they are increasingly moving away from hardware to virtual machines in the cloud. On Tuesday, HP announced that it has acquired Israel's ConteXtream with the goal of accelerating telecom companies' "journey" to Network Functions Virtualization (NFV for short). ConteXtream describes itself as a "Subscriber-Aware Carrier-SDN Fabric for Network Function Virtualization." HP plans to integrate the newly acquired company into its Communications, Media and Entertainment Solutions division. Although the acquisition price was not made public, it is estimated to be in the tens of millions of dollars.

Why NFV is a very big deal

In a blog post announcing the acquisition, HP describes NFV as "one of the most significant developments in the communications industry." Just as large enterprises went from dedicated servers and large mainframe machines to cloud-based services, the networking world has been undergoing a similar transition, says HP's senior VP for telecom Saar Gillai. "In the networking world there are countless functions – firewalls, caching, all kinds of activities – and we have all kinds of monolithic hardware boxes to do these things. NFV is about saying, 'Why can't we put these various functions in the cloud? Why does each function need to be on specialized and dedicated hardware?'" The benefit, says Gillai, is that telecom companies could roll out services or service updates weekly as opposed to every few years.

What will happen to ConteXtream after the deal

ConteXtream was founded in 2006 by Sharon Barkai, Nachman Shelef and Eric Benhamou. Barkai previously founded Sheer Networks, which Cisco acquired in 2005, as well as Xeround Systems. Nachman Shelef founded NiceCom, was a vice president at 3Com (which acquired NiceCom), and was a general partner and co-founder of the venture capital fund Benchmark Israel. Eric Benhamou has been the chairman of Palm and 3Com. Since its founding, ConteXtream has raised $23.8 million in three rounds of investment. The company's investors include Norwest Venture Partners, Gemini Israel Ventures, Sofinnova Ventures, as well as founder Eric Benhamou's Benhamou Global Ventures. In 2010, as part of the company's second round of funding, the existing funders were joined by the investment arms of two telcos: Verizon Ventures and Comcast Ventures. Once the deal is complete, Nachman Shelef will continue to lead the ConteXtream group at HP, reporting directly to Saar Gillai, SVP/GM Communications Solutions Business and Global Leader Telco at Hewlett-Packard. According to HP, the NFV market is expected to grow to $11 billion by 2018. Yaneev Avital contributed reporting.

Eric Buatois, BGV General Partner, shares his perspective on the hyper scale data center disruption. The hyper scale data center market comprises companies such as Facebook, Google, Amazon, Alibaba and Tencent, as well as large CDN service providers, and represents 50% of the global server market. These firms aspire to influence a larger portion of the value chain, not only in servers but also in storage and networking. Large cloud service providers (Amazon, Microsoft Azure, Google) are developing their own hardware and software solutions and will represent purchase volumes of greater than 50% of worldwide storage capacity over the next few years. Each of these firms is buying more than 1 million servers per year and is expected to continue this rapid rate of consumption. The requirements for servers, storage and networking for such customers are very different from those of the mainstream enterprise market. These customers require low cost, low power consumption, maximum storage capacity in minimum floor space, support for the specific file systems associated with Big Data, and highly efficient networking switches. Such customers are moving rapidly to SSD for their offline processing – basically Big Data reporting and analytic workloads. For example, eBay has recently decided to purchase only SSD storage, migrating away from hard disk based storage. The computing industry is facing a silent revolution at both the technology and business model level.

Business model

The hyper scale customers are sourcing their core semiconductors directly from large vendors such as Intel, flash vendors and network processing vendors. These servers are then assembled by large ODMs in Asia based on high density, low power specifications. Given the magnitude of the CAPEX investment needed to put enough compute power in place, the direct sourcing of semiconductors provides very large savings to these firms. Large systems vendors like HP, Lenovo (following its acquisition of IBM's server business) and Dell on the server side, NetApp, HP, IBM and EMC on the storage system side, and Cisco on the networking side are faced with a big dilemma: either ignore 50% of the market or accept significantly lower margins for this segment. The traditional business model of large system vendors, built for the enterprise and in place since the mid-1990s, is being fundamentally challenged. That model consists of sourcing components such as Intel processors, network processors (Cavium or Broadcom), and hard disk and flash storage subsystems (Seagate or Western Digital), and then integrating them with proprietary software. These vendors then charge prices with high gross margins to sell the integrated product. But this business model is now being unbundled. Large semiconductor companies are rushing to supply Amazon or Google directly, and large Asian ODMs have developed healthy white box businesses. The question arises: will this be a repeat of the PC business model, where Microsoft and Intel control all the industry profitability, or another iteration of the smartphone business model, where Apple, Samsung and Qualcomm control the industry profitability?

A new computing paradigm

A new computing paradigm optimized for cloud and very high workloads is emerging. Dedicated dense servers such as the HP Moonshot server are the new norm. Storage capacity is tied more and more directly to the servers, for either caching or analytical workloads.
The purchasing power of these hyper scale data centers makes the price of flash storage low enough to replace hard disk based storage, and the dramatic I/O bandwidth of flash storage arrays is a must for efficient big data processing. The very large number of ports to be connected demands a new low cost switching architecture. The storage software stack and associated file system requirements are also fundamentally different from those used by the enterprise: cost and scalability, more than reliability, will be the design drivers given the massive scale-out requirements. Big Data and Hadoop, which grew out of technology originally created by Google for its internal needs, are now a norm adopted by the whole industry. This is only the beginning; new compute inventions developed by hyper scale data centers will form the foundation of the new cloud-computing paradigm. The established system vendors HP, Lenovo-IBM, Dell, EMC and Cisco are too slow to embrace this new computing paradigm, as they are held hostage by the hefty gross margins of the business model designed for the enterprise market. But the hyper scale data center market is too large to ignore, especially as white box vendors, storage subsystem vendors and semiconductor vendors pursue aggressive penetration strategies for this new market. The level of software innovation is massive at all levels of the stack – spanning storage, networking and database. Some companies such as Google, Facebook and Amazon are dedicating large software engineering resources to address these needs from the ground up. But will such an approach be sustainable? Hyper scale data center customers are eager to find companies who understand their needs and design the right products for hyper scalability and low cost. We believe that the disruption in the hyper scale data center creates a very good opportunity for building new technology companies – ones that are purpose built, from both a technology and a business model perspective, to meet the needs of this market. The deep intellectual property created by innovative startups can be shared with the top 20 hyper scale companies and can provide the foundation of the future cloud enterprise software optimized for such applications. This is an ideal opportunity for venture capitalists to leverage Silicon Valley talent to design new architectures crossing the computing, storage and networking silos. We believe that startups will have a natural advantage given their ability to effectively mix expertise from different domains to create the new computing paradigm for the hyper scale data centers.

Our past two blogs focused on challenges and opportunities in the current suite of products targeted towards cloud deployments. In this post, we explore the organizational challenges associated with the broad adoption of cloud-enabled technologies. DevOps has emerged as an area of organizational innovation in response to some of those challenges. DevOps is the intersection of development, technology operations and quality assurance teams. The traditional approach of separate departments (Development, IT Operations and Quality Assurance) with siloed processes and roles/responsibilities is not conducive to the successful deployment of new development methodologies such as agile software development, or to meeting the demand for an increased rate of production of new releases. Consequently, organizations have felt a growing need for an integrated DevOps team, an uncomfortable union between developers and IT operations. The biggest challenge with DevOps is overcoming the cultural barrier. DevOps is a philosophy, not a market. Organizations are used to different processes and have different oversight and control mechanisms, i.e. they speak different languages. The majority of the changes within organizations must be focused on changing culture and processes. Tools, if chosen correctly, can act as a catalyst in influencing the cultural and process changes. Understanding the needs of a DevOps team is essential:
  • IT needs to have sufficient oversight and control
  • IT needs to be able to manage identity and access controls
  • Regulatory and Compliance obligations should be met
  • Developers need to have flexibility and optimal performance
Tools that can open up communication channels between teams, provide clarity and visibility between the parties involved, and that are easy to adopt and use are the ones that can enable teams to embrace the DevOps philosophy. One of our portfolio companies, Qubell, recently announced a partnership with Grid Dynamics, another pioneer of continuous delivery and release automation deployments. The two companies have joined forces to create an integrated technology and services offering aimed at helping enterprises that run on Oracle ATG become more agile. Beyond the ATG environment, Qubell addresses the need for application-level orchestration by DevOps engineers. Its value is particularly relevant in highly complex applications requiring rich service environments. It offers a self-service visual portal which handles the full automation, provisioning and orchestration required by these applications, whether in a private, public or hybrid cloud environment. Companies can make use of the experienced automation engineers at Qubell to supplement their in-house DevOps teams and build a turnkey, ready-to-run agile software factory. Given the rapid rate of innovation and change in cloud-enabled technologies, enterprises are hesitant to get locked into a specific vendor. This creates a twofold challenge for new technology vendors (start-ups): replacing incumbents and, at the same time, avoiding becoming replaceable. Innovative business and deployment models that enable a fast ramp-up/deployment and a pay-as-you-consume model can help replace incumbents by getting a foot in the door. The latter is more difficult. Long integration processes that used to ensure irreplaceability do not work anymore; startups with long integration processes cannot close business opportunities and fail to scale beyond a certain point. We believe a great way to create irreplaceability is by creating customer stickiness instead. Customer stickiness can be created by delivering on specific attributes of a value proposition, such as virality, ease of integration, enhanced productivity, attractive business models and future proofing (see table below):
Virality
  • Is broadly shared and appreciated within CISO/CIO networks, leading to viral usage growth
  • Has frequent (active) use as part of business operations, not just one-time compliance use
Irreplaceability
  • Provides the ability to easily integrate with other products via APIs/known interfaces – creating mini ecosystems
  • Has key algorithms and IP that clearly differentiate it from competitors
Empowerment and Productivity
  • Empowers non-users to use system features, and to understand and make modifications easily, without technical expertise
  • Provides centralized and customizable reporting – with measurable and actionable insights used every day
  • Works in the background with limited user involvement, along with the ability to automate tasks
Attractive Business Models
  • Has low startup costs – i.e. can be on-boarded quickly and with low initial investment
  • Enables businesses to make existing infrastructure more productive
  • Reduces total cost of ownership and maintenance costs
Future Proofing
  • Is scalable – user-based and volume-based extensibility for future growth
  • Is extensible – open platforms that can incorporate future enhancements easily
  • Provides APIs for other tools to integrate with, thereby creating an ecosystem

In the previous post, we covered the challenges of building, managing and scaling cloud architectures from an orchestration and automation point of view. In this post, we explore another challenge in deploying cloud architecture: deploying hybrid clouds.

The business drivers for cloud deployment are strong enough to continue the push towards cloud infrastructure. While it is easy and quick to build infrastructure on public cloud offerings due to quick IT provisioning, multiple drivers create the need for an enterprise to build a hybrid cloud – a combination of public and private cloud infrastructure. First, enterprises in the financial and healthcare verticals tend to be far slower than others to move towards a complete public cloud deployment due to a variety of regulatory and privacy constraints; for enterprises in these sectors, only non-critical frontend infrastructure can be moved to the cloud. Furthermore, most enterprises have invested in infrastructure that represents sunk costs for them, and business managers are under ever-increasing pressure to extract value out of these sunk investments. Finally, while building and scaling on the public cloud is relatively easy, the spending on this public infrastructure can grow very quickly. All of these factors push towards the adoption of hybrid cloud infrastructures.

Building a private cloud has its own set of unique challenges, including the specialized IT resources with the necessary skill sets and the longer times required to provision private cloud services. Software such as Eucalyptus, OpenStack and CloudStack enables companies to build private cloud infrastructure, similar to the public cloud, themselves. Enterprise software vendors facing requests from their customers to build out a private cloud infrastructure can leverage these tools as well.

The growth of hybrid architectures is beneficial to enterprises, since it gives them the flexibility to store and access data in locations they are comfortable with. However, such architectures often create cloud information silos. Cloud computing is more than just fast self-service of virtual infrastructure. It provides an opportunity to centrally analyze data and extract meaningful insights that can a) improve enterprise productivity, b) empower fast and data-backed decision-making and c) decrease costs. But with hybrid cloud architectures, information is now spread across multiple systems, making it harder to have a "single version of truth". While several business intelligence and SIEM tools provide cloud and enterprise offerings, there is a gap in products that seamlessly combine information from both public and private cloud systems and offer a "single version of truth" for the business user. This is an area for innovation. We believe that, while cloud computing technology has matured, it will continue to see exciting innovation along multiple dimensions, with startups providing newer ways for enterprise data management and storage, or exploring newer business models to gain market penetration. An example of product innovation is FormationDS – the company is working on a ground-up architected data services platform that aims to transform the traditional client-server computing model into hyper-scale computing, primarily driven by modern application models that seek to compose capabilities as API-based services. This is a pure object-oriented approach to computing.
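To make the "single version of truth" idea more concrete, here is a hedged sketch of combining usage data from a public-cloud API and a private-cloud system into one consolidated view; both endpoints and the field names are hypothetical, invented for the example.

```python
import json
import urllib.request

# Hypothetical endpoints and field names; neither URL refers to a real product API.
PUBLIC_CLOUD_METRICS = "https://public-cloud.example.com/api/usage"
PRIVATE_CLOUD_METRICS = "https://private-cloud.internal.example/api/usage"

def fetch_metrics(url: str) -> list:
    """Each endpoint is assumed to return a JSON list of usage records."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

def single_view() -> dict:
    """Merge per-application usage from both environments into one report."""
    combined = {}
    for source in (PUBLIC_CLOUD_METRICS, PRIVATE_CLOUD_METRICS):
        for record in fetch_metrics(source):
            app = record["application"]                      # assumed field
            combined[app] = combined.get(app, 0.0) + record["cpu_hours"]
    return combined

if __name__ == "__main__":
    print(json.dumps(single_view(), indent=2))
```

In practice the hard part is not the merge itself but normalizing identities, costs and metric definitions across environments, which is exactly the gap the post identifies.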
On the business model innovation front, we see increasing use of open-sourcing software to tackle market penetration challenges within the cloud management space. OpenStack and Scalr are perfect examples of this type of innovation. When making purchase decisions, enterprises face a big supplier risk, especially when dealing with new software vendors: they are left wondering whether the new business will still be active a few years from now, or whether a larger vendor will acquire the company and force it to align with a particular hardware vendor or architecture. Making the software open source alleviates this risk, since enterprises retain access to the original software under all circumstances, thereby opening up opportunities that would otherwise be closed.

While the business benefits of cloud computing have become increasingly well documented over the past few years, the challenges associated with building, scaling and managing cloud infrastructures are far less understood. To better understand some of these challenges, we sat down with Declan Morris, VP of Cloud and IT Operations at Splunk (and a BGV Technology Advisor). This is the first part of a multi-part blog on this topic, focusing on orchestration and automation.

In a recent research report sponsored by Verizon, a survey by Harvard Business Review Analytic Services revealed that at least 70% of organizations have adopted some form of cloud computing within their companies. This rapid adoption has been driven by the need for business agility without greater capital investment, increased employee productivity and a reduction in business complexity. Not only are more businesses relying on the cloud for their mission-critical applications, enterprises are discovering new ways to use it. Having a cloud-based service offering has become a must for any enterprise software vendor. Companies must combine applications, data and support into a single, high-performing entity offered in the cloud. Invariably, enabling such a cloud service requires a combination of compute, storage, database and network services.

This requirement changes the paradigm for operations teams, who were used to building and managing infrastructure by piecing equipment together. These days, operations teams manage infrastructure by putting together disparate cloud services such as Salesforce, NetSuite, AWS, Box, etc. to provide an end-to-end solution for their customers. They have to build mash-ups of services that best fit the needs of their customers. This has created the challenge for operations teams of automating and orchestrating these disparate cloud services – a dramatic shift from managing equipment to managing API calls to various services. Providing highly available, self-healing and auto-scalable infrastructure has become the norm, and meeting such requirements requires a new breed of automation and orchestration tools. A few years ago, there were very few choices. One approach was "home grown", internally developed integration tools – solutions that present their own set of challenges associated with maintenance and complexity. Fortunately, a new breed of Cloud Management Systems (CMS) companies has begun to emerge to fill the gap – this includes companies like Puppet, Chef, Salt and Ansible. While each of these tools is slightly different from the others, as outlined in this post, each provides a custom Domain Specific Language (DSL) or structured file format that allows the user to define the desired end state for their system without coding the procedures for getting there. However, this level of abstraction is not sufficient to address the challenge at hand: the problem of integrating multiple SaaS providers still remains. Startups such as MuleSoft, Boomi, etc. have emerged to solve this problem by providing integration platforms for connecting SaaS and enterprise applications in the cloud and on-premise, both within and outside the organization. Underneath their offering, these platforms make use of software such as OpenStack, Scalr and Puppet to integrate the disparate SaaS providers, with the objective of providing a fault-tolerant, auto-healing and scalable automation and orchestration platform for the operations team.
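The "declare the end state, not the procedure" idea behind these tools can be sketched in a few lines of Python. This is an illustration of the concept only, not any vendor's actual DSL, and the package names are made up: declare what the system should look like, diff it against reality, and apply only the difference.

```python
# Conceptual sketch of desired-state configuration management: declare what the
# system should look like, diff it against reality, and apply only the difference.

desired_state = {"nginx": "installed", "redis": "installed", "memcached": "absent"}
current_state = {"nginx": "installed", "memcached": "installed"}

def plan(desired: dict, current: dict) -> list:
    """Compute the minimal set of actions needed to reach the desired state."""
    actions = []
    for pkg, state in desired.items():
        if state == "installed" and current.get(pkg) != "installed":
            actions.append(("install", pkg))
        elif state == "absent" and current.get(pkg) == "installed":
            actions.append(("remove", pkg))
    return actions

for action, pkg in plan(desired_state, current_state):
    # A real tool would invoke the platform's package manager or a cloud API here.
    print(f"{action} {pkg}")
```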
In such architectures, multi-tenant implementations used to be a necessity to justify the cost and complexity associated with deployments. A significant advantage of adopting these tools is that enterprise software providers can now offer true single-tenant Virtual Private Cloud solutions to their customers in a cost-efficient way.

Victoria Livschitz, Founder and CEO of Qubell, a BGV portfolio company, shares her perspective on continuous innovation and DevOps.

The remarkable thing about 2014 is how every CTO and CIO seems to have finally gotten the memo that agility in software development and operations is not a buzzword or a fad, but a direct and immediate threat to survival – their own and their company's. Report after report from various consulting and analyst firms places agility and reduced cycle times on new software releases somewhere between #1 and the top 10 of CIO priorities for 2014. In 2013, it didn't make a top-100 list. My favorite report comes from the National Retail Federation, dated February 2014, and opens with "CIOs can summarize both their priorities and their challenges in one word: agility.": https://nrf.com/news/online/cio-priorities

As I spend my days talking to people who are chartered with making their organizations more innovative and agile, it continues to dawn on me just how complex their mission is and how confusing the modern landscape of concepts, technologies and jargon has become. Those who speak of continuous delivery usually just got a basic CI infrastructure barely working. Those who own a "private cloud" still require an IT ticket and weeks of delays, approvals, negotiations and new hardware acquisition to give a developer his requested test sandbox. Those who proudly claim to run agile development often continue to ship new features in giant releases coming months apart. I believe we stand on relatively firm ground with respect to two points:
  • The desire to practice "continuous innovation", where applications, features and content are shaped and formed by customer needs and wants, facilitated by a short development cycle measured in days and a direct feedback loop tracked by analytics tools.
  • The ability to establish "continuous integration" practices, which are by now relatively well understood at the single-team level, leaving continuous cross-team integration out of scope.
Everything in between is a foggy mystery. I read somewhere once that there are problems and there are mysteries. We know how to solve a problem with sufficient time and resources. Mysteries cannot be solved until we understand them well enough to turn them into problems. I like this framework. Team-level continuous integration is a problem. Enterprise-wide continuous innovation is a mystery. Somewhere in between is continuous delivery.

Amazon, Google and Facebook are awesome innovators largely because of their speed and agility. Amazon is known to deploy 3,000 live production changes per day. Here is a pretty good thread that highlights some of Amazon's amazing capabilities: https://news.ycombinator.com/item?id=2971521. While I know of many consumer-oriented companies that have been able to achieve a twice-a-week release cadence, continuous micro-releases remain a distant dream for most enterprises. To help turn a mystery into a problem, I offer this frame of reference: while the quest ends with continuous innovation, the starting point of the journey has to be continuous integration, followed by continuous delivery. Each delivers tremendous value to the organization and provides a compelling ROI in its own right. If a company can agree on a roadmap, it can focus its attention – in business terms, budget and priorities – on the right immediate targets and lift the fog sufficiently to start building a coherent set of capabilities aligned with specific incremental business returns.

Victoria Livschitz – Founder and CEO of Qubell. Founder and executive chair at Grid Dynamics. Working hard to turn good ideas into great products.