recently named a Top Innovator in cloud application delivery by research firm IDC, which cited simplicity as one of our key differentiators in the space. Based on our own experience working on large-scale enterprise network and application deployments, we predict that more than 90% of enterprise applications will be HTTP/S-based by 2020. With its core expertise built around the delivery of web-based applications, Webscale is in the right place at the right time, with a mature platform designed to address the performance, availability and security issues that web-based applications will face when leveraging the public cloud. From migration, to deployment and simple ongoing management, Webscale has become a true partner to businesses wanting to deliver world-class web applications that not only delight their users, but truly use the cloud the way it was meant to be used – as a powerful, utility-style computing platform with effectively infinite resources, not just a static and oversized datacenter.

In the last part of a three-part blog, Sonal Puri, CEO of Webscale (a BGV portfolio company), shares the company's vision, how its cloud application delivery platform is differentiated in the market, and its move to the broader mid-market.

WEB ONLY, CLOUD FIRST

We have a saying at Webscale – "Deliver, no matter what." It speaks to the laser focus we've had since the company was founded: to deliver an amazing web application user experience to every one of our customers, regardless of the situation. For our core target of mid-market e-commerce, those situations can vary greatly. Maybe they're experiencing a major surge in traffic caused by a successful marketing promotion, or maybe their sudden perceived popularity is not so positive and comes in the form of a DDoS attack designed to take their site down.
Whatever the circumstances, our promise to our customers is that we have their back, and their site will showcase the highest performance, availability and security that we can deliver, every day, no matter what. Added to this is our commitment to delivering the robust feature set of our cloud-based application delivery platform with a level of simplicity that has previously eluded this segment. What do we mean by simplicity? Well, it's ease of use, first and foremost, and that starts with getting your critical web applications migrated into the cloud, with as little effort as possible, as software-defined infrastructure. This auto-provisioning methodology means there is no need to re-write, saving massive amounts of time and resources, nor any need to lift-and-shift and use only a subset of the cloud's capabilities. Once you're deployed in the cloud, Webscale's automated technology stack manages the rest – from predictive auto-scaling in the event of a traffic surge, to content optimization and caching to ensure fast page load times, to a powerful web application firewall that will automatically block malicious attacks and apply rules to prevent any loss of business or corporate reputation. That simplicity continues with easy monthly billing and proactive support that identifies and resolves issues often before they're even known, and certainly before they cause disruption. At the end of the day, it's peace of mind, and it's one of the most important things we bring our customers. Make no mistake – the mid-market e-commerce segment is no slouch when it comes to its demands on a web application infrastructure. Flash sales, viral events and seasonal fluctuations make sudden changes in traffic commonplace, and when your customer is likely to go to a competitor if your site takes more than three seconds to load, there is zero tolerance for performance or availability issues.
For these reasons, e-commerce has been an excellent foundational segment for Webscale to target, and tackling these challenges has contributed to the development of a number of features that we uniquely enable in the application delivery segment. It’s one of the reasons that Webscale was
Have you ever wondered why we still hear about websites crashing when too many people try to get in? Stories were rife during the recent 2016 Black Friday and Cyber Monday events, with big names like Old Navy, Macy's and Walmart all experiencing availability issues due to surge traffic, even though these businesses have been around for decades and have adequate budgets to support their needs. Part of the problem is that many companies are still using traditional hosting and networking solutions for the scale, security and management of their websites and web applications, instead of the cloud. The other part of the problem is a lack of expertise in bringing all the disparate pieces of the solution together to solve for the big picture. Lying in many data centers and server rooms today is an appliance we have all heard about (hardware, software, virtual or otherwise): the application delivery controller (ADC). During periods of high demand, like Cyber Monday, an ADC is expected to distribute incoming website visitors across multiple servers until they are at capacity. The ADC space is set to grow into a $2.9 billion global market by 2020 – but what's driving this massive potential is certainly not an appliance.

Traditional ADCs – are they even worth it?

Anyone who's worked in a small business knows that building a traditional server deployment with an old-fashioned ADC is expensive, time consuming and challenging to manage. When an ADC and its associated functions get converted to a SaaS-based utility, everything changes. The deployment is a service, managed by the software vendor instead of the company that needs its features but not its associated headache.
One can eliminate the need to constantly buy licenses, and scaling out is automated, instant and cost effective, versus installing more physical servers, which may then sit dormant for a large percentage of time when not running at peak.

Old dogs can learn new tricks? Not likely

Traditional ADC vendors such as F5, Radware and Citrix are moving to embrace the cloud and stay relevant in the ADC business, but their entire business model has yet to pivot from the approach of deploying hardware, and their sales and go-to-market models are not suited to the new world. While these companies may feel like a safe bet and their cloud vision may seem compelling and logical, the ADCs still have to be installed for a cloud deployment and the customer still needs to support them. The parallels with the storage, WAN optimization and caching markets – replaced almost entirely by SaaS services – are difficult to ignore. Layer 4-7 functionality is moving to the cloud quickly.

What the cloud can really do

It is no secret that the cloud offers infinite scalability, but an ADC delivered as a utility-type service, truly built for the cloud, offers much more. Traditional ADCs are missing out on content optimization and the ability to offer analytics for customer insights. A built-in-the-cloud ADC solution like Webscale can detect changing requirements (sense) and respond to those requirements (control) – thereby monitoring a customer's web traffic and infrastructure and resolving issues before they cause disruption. This can be anything from improved page load times to a full scale-out because of a sudden surge in traffic. There are many stories of Webscale customers that have experienced unplanned scale-out events just like this. As a vendor, the benefit of a service is constant feedback and improvement. The ADC market remains challenged because its vendors get limited feedback from their customers.
If you don't touch your ADC appliances (physical or cloud-based) after a customer purchases them, how can you know what your customers are doing with your solution, and whether your future enhancements are on track?

Isn't it time to think outside the ADC box?

Despite these trends and ample proof points, some organizations aren't being swayed to adopt the cloud or services that make the cloud more efficient, even when, in today's rapidly moving, always-on world, any enterprise can be subject to a sudden traffic overload that traditional hosting solutions can't keep up with. Organizations should be asking themselves the following question: is it better to worry about and maintain our own systems and ADCs, or instead use that time and money to focus on growing the business? The answer should always be to favor whatever route facilitates laser focus on one's business and leave everything else to service models available as a utility. And that is why old-school ADCs are such prime targets for disruption from the cloud.

In part II of a three-part blog on cloud computing, Sonal Puri (CEO, Webscale Networks) and Anik Bose (BGV General Partner) share their perspective on the next big cloud disruption – the application delivery controller market. (Our first blog on this topic can be viewed at http://benhamouglobalventures.com/2016/11/10/cloud-computing-adoption-and-challenges/)
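The sense-and-control loop described above – monitor traffic, decide, act – can be sketched in a few lines of Python. The per-server capacity, headroom policy and thresholds here are illustrative assumptions, not Webscale's actual algorithm:

```python
import math

# Sense-and-control autoscaling sketch. The per-server capacity and
# one-server headroom policy are illustrative assumptions.

def plan_capacity(requests_per_sec, capacity_per_server=100):
    """Sense: compute how many servers the observed load needs,
    plus one spare server of headroom."""
    return max(math.ceil(requests_per_sec / capacity_per_server) + 1, 1)

def autoscale_decision(current_servers, requests_per_sec):
    """Control: compare desired capacity with what is running and
    return the scaling action to take."""
    target = plan_capacity(requests_per_sec)
    if target > current_servers:
        return ("scale_out", target - current_servers)
    if target < current_servers:
        return ("scale_in", current_servers - target)
    return ("hold", 0)
```

For example, a surge to 450 requests/sec against 3 running servers yields `("scale_out", 3)`, while a quiet period of 100 requests/sec against 6 servers yields `("scale_in", 4)`.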
- A pay-as-you-go model with minimal or no initial costs
- Usage-based pricing, so that costs are based on actual usage
- Elasticity, so that users can dynamically consume more or less resources
- Location independence, high availability, and fault tolerance
- Ubiquitous access to services, where users can access services from any location using any form factor
- Delivery at every layer of the stack: infrastructure as a service (IaaS); an application deployment platform with application services such as databases, or platform as a service (PaaS); and subscription-based software applications, or software as a service (SaaS)
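To make the pay-as-you-go and usage-based pricing points above concrete, here is a back-of-the-envelope comparison; the prices and utilization figures are invented for illustration, not real cloud rates:

```python
# Back-of-the-envelope comparison of peak provisioning vs. pay-as-you-go.
# Prices and utilization are invented for illustration.

HOURS_PER_MONTH = 730

def fixed_cost(peak_servers, price_per_server_hour):
    """Peak-provisioned: pay for peak capacity around the clock."""
    return peak_servers * HOURS_PER_MONTH * price_per_server_hour

def usage_cost(server_hours_used, price_per_server_hour):
    """Pay-as-you-go: pay only for server-hours actually consumed."""
    return server_hours_used * price_per_server_hour

# Ten servers provisioned for peak, but average load needs only two.
fixed = fixed_cost(10, 0.25)                     # 10 * 730 * 0.25 = 1825.0
elastic = usage_cost(2 * HOURS_PER_MONTH, 0.25)  # 2 * 730 * 0.25 = 365.0
```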
- More efficient use of resources than VMs – the claim is somewhere between a factor of 4 and 6 (read: fewer servers = less $$ on servers and therefore on data center power costs)
- Faster deployment times, which lead to lower costs (whether we're talking about cloud or on-premise resources), and an easier environment in which to manage and deploy applications (read: easy management = less $$ on maintenance)
- Finally, since containers are easy to use and lightweight, it's very easy for anyone to develop an app, upload it to the cloud in a container, and gain instant application portability across devices and operating systems by using ready-to-run containerized applications from the cloud. This may allow the elimination of platform-specific development in the near future (read: less $$ in duplicate development efforts to accommodate several devices/OSs).
- Standardization – This is being tackled by Docker (through its sheer popularity and its collaborations with Google, Red Hat and other players on the open source libcontainer, which make it a de facto standard for Linux-based containers). Google is also moving its programmers to libcontainer instead of its own lmctfy library.
- Management – Container management tools are not yet competitive with VM management tools such as VMware's vCenter or Microsoft's System Center, which can be used to manage virtualized infrastructure. The trend to watch is the governance, management and monitoring of container "farms" in a way similar to how multiple VMs are managed in the data center. Elastic load balancing, performance monitoring, failover management and auto-scaling all sit as part of this management ecosystem, providing end-to-end deployment and management capabilities to end users.
- Security – A huge concern for many enterprises. Since containers share an OS and many binaries, and many applications and containers run with superuser privileges, a compromised container can spread to the OS and beyond. Gartner recommends running de-privileged containers, or containers inside a VM, where there are security concerns. We see many companies trying to address security holes, and a lot of organic effort is also underway by the container providers themselves. As a result, a standalone "security for containers" product may not be a viable business, and may instead become part of the management platform.
- OS – Another problem with the majority of container solutions is that they are almost all Linux-based, while many enterprises have demands for Windows-based development, data centers and so on. ContainerX just launched the first Windows Container-as-a-Service platform, and there's room to grow there.
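Gartner's advice above – run de-privileged containers – can be illustrated with a small audit sketch. The container records and fields below are hypothetical stand-ins for what a real runtime inspection (e.g. `docker inspect`) would report:

```python
# Audit sketch: flag containers that run as root (UID 0), per the
# advice to run de-privileged containers. The inventory below is
# hypothetical data, not output from a real container runtime.

def audit_containers(containers):
    """Return names of containers running as root (UID 0)."""
    return [c["name"] for c in containers if c.get("uid", 0) == 0]

inventory = [
    {"name": "web", "uid": 1000},   # de-privileged: fine
    {"name": "cache", "uid": 0},    # runs as root: flag it
    {"name": "worker"},             # UID unspecified: defaults to root
]
flagged = audit_containers(inventory)   # ["cache", "worker"]
```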
- IT needs to have sufficient oversight and control
- IT needs to be able to manage identity and access controls
- Regulatory and compliance obligations must be met
- Developers need flexibility and optimal performance
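The identity-and-access requirement above can be made concrete with a minimal role-based check; the role and permission names are invented for illustration, not any particular vendor's model:

```python
# Minimal role-based access control sketch. Role and permission
# names are invented for illustration.

ROLE_PERMISSIONS = {
    "developer": {"deploy:staging", "read:logs"},
    "ops":       {"deploy:staging", "deploy:production", "read:logs"},
    "auditor":   {"read:logs", "read:audit-trail"},
}

def is_allowed(role, action):
    """IT defines roles centrally; developers keep flexibility
    within the permissions those roles grant."""
    return action in ROLE_PERMISSIONS.get(role, set())
```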
Virality
- Is broadly shared and appreciated within CISO/CIO networks, leading to viral usage growth
- Has frequent (active) use as part of business operations, not just one-time compliance use

Irreplaceability
- Provides the ability to easily integrate with other products via APIs/known interfaces – creating mini ecosystems
- Has key algorithms and IP that clearly differentiate it from competitors

Empowerment and Productivity
- Empowers non-technical users to use system features, and to understand and make modifications easily, without deep expertise
- Provides centralized and customizable reporting – with measurable and actionable insights used every day
- Works in the background with limited user involvement, along with the ability to automate tasks

Attractive Business Models
- Has low startup costs – i.e. can be onboarded quickly and with low initial investment
- Enables businesses to make existing infrastructure more productive
- Reduces total cost of ownership and maintenance costs

Future-proof
- Is scalable – user-based and volume-based extensibility for future growth
- Is extensible – an open platform that can incorporate future enhancements easily
- Provides APIs for other tools to integrate with, thereby creating an ecosystem
Eucalyptus, OpenStack and CloudStack enable companies to build private cloud infrastructure, similar to the public cloud, themselves. Enterprise software vendors facing requests from their customers to build out a private cloud infrastructure can leverage these tools to do so as well. The growth of hybrid architectures is beneficial to enterprises, since it provides them the flexibility to store and access data in locations they are comfortable with. However, such architectures often create cloud information silos. Cloud computing is more than just fast self-service of virtual infrastructure. It provides an opportunity to centrally analyze data and extract meaningful insights that can a) improve enterprise productivity, b) empower fast, data-backed decision-making and c) decrease costs. But with hybrid cloud architectures, information is now spread across multiple systems, making it harder to have a "single version of truth". While several business intelligence and SIEM tools provide cloud and enterprise offerings, there is a gap in products that seamlessly combine information from both public and private cloud systems and offer a "single version of truth" for the business user. This is an area for innovation. We believe that, while cloud computing technology has matured, it will continue to see exciting innovation along multiple dimensions, with startups providing newer ways for enterprise data management and storage, or exploring newer business models to gain market penetration. An example of product innovation is FormationDS – the company is working on a ground-up architected data services platform that aims to transform the traditional client-server computing model into hyper-scale computing, primarily driven by modern application models that seek to compose capabilities as API-based services. This is a purely object-oriented approach to computing.
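The "single version of truth" gap described above can be pictured as a merge across cloud silos. This sketch uses an invented record format, not a real BI or SIEM schema:

```python
from collections import defaultdict

# Merge per-application usage records from public- and private-cloud
# silos into one consolidated view. The record format is an
# illustrative assumption, not a real BI or SIEM schema.

def unify(*sources):
    """Combine usage figures per application across cloud silos."""
    totals = defaultdict(float)
    for source in sources:
        for record in source:
            totals[record["app"]] += record["usage_gb"]
    return dict(totals)

public_cloud = [{"app": "crm", "usage_gb": 120.0}, {"app": "erp", "usage_gb": 40.0}]
private_cloud = [{"app": "crm", "usage_gb": 80.0}]
combined = unify(public_cloud, private_cloud)   # {"crm": 200.0, "erp": 40.0}
```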
On the business model innovation front, we see increasing use of open-sourcing software to tackle market penetration challenges within the cloud management space. OpenStack and Scalr are perfect examples of this type of innovation. While making purchase decisions, enterprises face a big supplier risk, especially when dealing with new software vendors, and are left wondering whether these new businesses will still be active a few years from now. A larger vendor could also acquire these companies and force them to align with a particular hardware vendor or architecture. Making the software open source alleviates this risk, since enterprises retain access to the original software under all circumstances, thereby opening up opportunities that would otherwise be closed.

In the previous post, we covered the challenges of building, managing and scaling cloud architectures from an orchestration and automation point of view. In this post, we explore one other challenge in deploying cloud architecture – deploying hybrid clouds. The business drivers for cloud deployment are strong enough to continue the push towards cloud infrastructure. While it is easy and quick to build infrastructure on public cloud offerings due to quick IT provisioning, multiple drivers create the need for an enterprise to build a hybrid cloud – a combination of public and private cloud infrastructure. First, enterprises in the financial and healthcare verticals tend to be far slower than others to move towards a complete public cloud deployment, due to a variety of regulatory and privacy constraints; for enterprises in these sectors, only non-critical front-end infrastructure can be moved to the cloud. Furthermore, most enterprises have invested in infrastructure that represents sunk costs for them. Business managers are under ever-increasing pressure to extract value out of these sunk investments.
Finally, while building and scaling public cloud infrastructure and data centers is relatively easy, spending on this public infrastructure can grow very quickly. All of these factors create a push towards the adoption of hybrid cloud infrastructures. Building a private cloud has its own set of unique challenges, including the specialized IT resources with the necessary skill sets and the longer times required to provision private cloud services. Software such as
post, each provides a custom Domain Specific Language (DSL) or structured file format that allows the user to define the desired end state for their system without coding the procedures for getting there. However, this level of abstraction is not sufficient to address the challenge at hand: the problem of integrating multiple SaaS providers still remains. Startups such as MuleSoft and Boomi have emerged to solve this problem by providing integration platforms for connecting SaaS and enterprise applications in the cloud and on-premise, both within and outside the organization. Underneath their offering, these platforms make use of software such as OpenStack, Scalr and Puppet to integrate the disparate SaaS providers, with the objective of providing a fault-tolerant, auto-healing and scalable automation and orchestration platform for the operations team. In such architectures, multi-tenant implementations used to be a necessity to justify the cost and complexity associated with deployments. A significant advantage of adopting these tools is that enterprise software providers can now offer true single-tenant Virtual Private Cloud solutions to their customers in a cost-efficient way.

While the business benefits of cloud computing have become increasingly well documented over the past few years, the challenges associated with building, scaling and managing cloud infrastructures are far less understood. To better understand some of these challenges, we sat down with Declan Morris, VP of Cloud and IT Operations at Splunk (and a BGV Technology Advisor). This is the first part of a multi-part blog series on this topic, focusing on orchestration and automation. In a recent research report sponsored by Verizon, a survey by Harvard Business Review Analytic Services revealed that at least 70% of organizations have adopted some form of cloud computing within their companies.
This rapid adoption has been driven by the need for business agility without greater capital investment, increased employee productivity and reduced business complexity. Not only are more businesses relying on the cloud for their mission-critical applications; enterprises are also discovering new ways to use it. Having a cloud-based service offering has become a must for any enterprise software vendor. Companies must combine applications, data and support into a single, high-performing entity offered in the cloud. Invariably, enabling such a cloud service requires a combination of compute, storage, database and network services. This requirement changes the paradigm for operations teams, who were used to building and managing infrastructure by piecing equipment together. These days, operations teams manage infrastructure by putting together disparate cloud services such as Salesforce, NetSuite, AWS and Box to provide an end-to-end solution for their customers. They have to build mash-ups of services that best fit the needs of their customers, which creates the challenge of automating and orchestrating these disparate cloud services – a dramatic shift from managing equipment to managing API calls to various services. Providing highly available, self-healing and auto-scalable infrastructure has become the norm. Meeting such requirements requires a new breed of automation and orchestration tools. A few years ago, there were very few choices; one approach was "home-grown", internally developed integration tools – solutions that present their own set of challenges around maintenance and complexity. Fortunately, a new breed of Cloud Management Systems (CMS) companies has begun to emerge to fill the gap – this includes companies like Puppet, Chef, Salt and Ansible. While each of these firms is slightly different from the others, as outlined in this
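The declarative, desired-state idea these tools share can be sketched as a reconciliation step: declare what should exist, and let the tool compute how to get there. This is a simplified illustration with invented package names, not any vendor's actual DSL:

```python
# Desired-state reconciliation, the core idea behind declarative
# configuration tools: declare what should exist, derive the actions.
# Package names are invented; this is not any vendor's DSL.

def reconcile(desired, actual):
    """Compute install/remove actions to move 'actual' to 'desired'."""
    installs = [("install", p) for p in sorted(desired - actual)]
    removals = [("remove", p) for p in sorted(actual - desired)]
    return installs + removals

desired_state = {"nginx", "postgresql"}
current_state = {"apache2", "nginx"}
actions = reconcile(desired_state, current_state)
# [("install", "postgresql"), ("remove", "apache2")]
```

Running the same plan again once it has been applied yields no actions, which is the idempotency property these tools advertise.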
Victoria Livschitz, Founder and CEO of Qubell, a BGV portfolio company, shares her perspective on continuous innovation and DevOps.

The remarkable thing about 2014 is how every CTO and CIO seems to have finally gotten the memo that agility in software development and operations is not a buzzword or a fad, but a direct and immediate threat to survival – their own and their company's. Report after report from various consulting and analyst firms places agility and reduced cycle times on new software releases somewhere between #1 and the top 10 of CIO priorities for 2014. In 2013, it didn't make a top-100 list. My favorite report comes from the National Retail Federation, dated February 2014, and opens with "CIOs can summarize both their priorities and their challenges in one word: agility." (https://nrf.com/news/online/cio-priorities)

As I spend my days talking to people who are chartered with making their organizations more innovative and agile, it continues to dawn on me just how complex their mission is, and how confusing the modern landscape of concepts, technologies and jargon. Those who speak of continuous delivery usually just got a basic CI infrastructure barely working. Those who own a "private cloud" still require an IT ticket and weeks of delays, approvals, negotiations and new hardware acquisition to give a developer his requested test sandbox. Those who proudly claim to run agile development often continue to ship new features in giant releases coming months apart. I believe we stand on relatively firm ground with respect to two points:
- The desire to practice "continuous innovation", where applications, features and content are shaped and formed by customer needs and wants, facilitated by a short development cycle measured in days and a direct feedback loop tracked by analytics tools.
- The ability to establish "continuous integration" practices, which are by now relatively well understood at the single-team level, leaving continuous cross-team integration out of scope.