Victoria Livschitz, Founder and CEO of Qubell, a BGV portfolio company, shares her perspective on Continuous Innovation and DevOps.

The remarkable thing about 2014 is how every CTO and CIO seems to have finally gotten the memo that agility in software development and operations is not a buzzword or a fad, but a direct and immediate threat to survival – their own and their company's. Report after report from various consulting and analyst firms places agility and reduced cycle times on new software releases somewhere between #1 and the top 10 of CIO priorities for 2014. In 2013, it didn't make a top-100 list. My favorite report comes from the National Retail Federation, dated February 2014, and opens with "CIOs can summarize both their priorities and their challenges in one word: agility." (https://nrf.com/news/online/cio-priorities)

As I spend my days talking to people who are chartered with making their organizations more innovative and agile, it continues to dawn on me just how complex their mission is and how confusing the modern landscape of concepts, technologies and jargon has become. Those who speak of continuous delivery usually just got a basic CI infrastructure barely working. Those who own a "private cloud" still require an IT ticket and weeks of delays, approvals, negotiations, and new hardware acquisitions to give a developer the test sandbox he requested. Those who proudly claim to run agile development often continue to ship new features in giant releases coming months apart. I believe we stand on relatively firm ground with respect to two points:
  • The desire to practice "continuous innovation", where applications, features and content are shaped and formed by customer needs and wants, facilitated by a short development cycle measured in days and a direct feedback loop tracked by analytics tools.
  • The ability to establish "continuous integration" practices, which are by now relatively well understood at the single-team level, leaving continuous cross-team integration out of scope.
Everything in between is a foggy mystery. I read somewhere once that there are problems and there are mysteries. We know how to solve a problem given sufficient time and resources. Mysteries cannot be solved until we understand them well enough to turn them into problems. I like this framework. Team-level continuous integration is a problem. Enterprise-wide continuous innovation is a mystery. Somewhere in between is Continuous Delivery.

Amazon, Google and Facebook are awesome innovators largely because of their speed and agility. Amazon is known to deploy 3,000 live production changes per day. Here is a pretty good thread that highlights some of Amazon's amazing capabilities: https://news.ycombinator.com/item?id=2971521. While I know of many consumer-oriented companies that have been able to achieve a twice-a-week release cadence, continuous micro-releases remain a distant dream for most enterprises.

To help turn a mystery into a problem, I offer this frame of reference: while the quest ends with continuous innovation, the starting point of the journey has to be continuous integration, followed by continuous delivery. Each delivers tremendous value to the organization and provides a compelling ROI in its own right. If a company can agree on a roadmap, it can focus its attention – in business terms, budget and priorities – on the right immediate targets and lift the fog sufficiently to start building a coherent set of capabilities aligned with specific incremental business returns.

Victoria Livschitz – Founder and CEO of Qubell. Founder and executive chair at Grid Dynamics. Working hard to turn good ideas into great products.

Nachman Shelef, CEO & Co-founder of ConteXtream, a BGV portfolio company, shares his perspective on Network Virtualization.

Q: What is your definition of network virtualization, and does this definition apply to carriers, enterprises, or both?

A: Network virtualization, like other virtualization transformations, is all about transitioning from dedicated resources per function to shared, pooled, common resources for all functions. For compute and storage virtualization, this meant transitioning from dedicated compute and storage resources per application to shared, pooled, common compute and storage resources, allocated as needed, when needed, to support each application. For network virtualization, this means transitioning from dedicated networking hardware per function (in some cases proprietary hardware) to shared, pooled, common compute resources and basic switching resources, allocated as needed, when needed, to support each network function. Traditional networking makes this transition very difficult, with three specific barriers: a) higher functions are built into the plumbing (hardware) and are applied hop by hop, link by link, port by port and sub-port by sub-port; b) identities are tightly coupled to locations; c) control and forwarding are tightly integrated. Key to enabling this transition is to separate basic connectivity between locations from the higher functions that enable carriers to provide services per flow, based on the context of the particular flow. This is done by separating functions from junctions and separating locations from identities (a minimal code sketch of this identity-to-location decoupling appears after the list of business problems below). By also separating control from forwarding, we complete the disaggregation of networking nodes and allow network virtualization. We believe that while the basic concept behind network virtualization is similar for both the carrier and enterprise market segments, the requirements are quite different. For example, the scaling requirements of carrier network virtualization are very different from those of an enterprise network. Similarly, carrier network virtualization may also require subscriber awareness, which is not needed in the enterprise scenario.

Q: Do you position yourself as a network virtualization company, a software-defined networking company, or a network function virtualization company?

A: We are focused on helping service providers virtualize their networks and improve service agility while decreasing CAPEX and OPEX. We do this with our unique approach to carrier network SDN. There are other ways of virtualizing networks when going the NFV route, but our approach fully leverages the capabilities of SDN while creating a very flexible, scalable and programmable network. Using our solution, carriers can virtualize their network, and it also gives them many design options for scaling their network functions.

Q: What are the top 2-3 business problems that network virtualization solves?

A: Based on our experience with Tier-1 operators, network virtualization attacks the two biggest problems that carriers are facing:
  1. The first big business problem for any carrier is that traffic is growing more rapidly than revenues. Operators have to keep adding capacity to cope with demand, but with existing network design paradigms and products the utilization of resources is poor. Service providers have realized that legacy methods of network design and deployment, and legacy products, have reached their limits. The Internet players and the advent of cloud technologies have shown that virtualization can help achieve better utilization of resources, which is a major need of the day. This is driving the current interest in Network Function Virtualization. Service providers who adopt our solution will find that they can deploy capacity in both small and large chunks, be extremely granular, rapidly move resources around, and achieve higher utilization.
  2. The carriers also need to compete with over-the-top players. They need to defend and increase their revenue stream with innovative services, and they need to do so with greater agility. Network virtualization with SDN creates a programmable network that significantly reduces the time needed to experiment with and roll out new services.
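To make the idea of decoupling identities from locations concrete, here is a minimal Python sketch of a mapping service that resolves a stable subscriber or function identity to its current network location, so that a function can move without its identity changing. This is an illustration only, not ConteXtream's implementation; all class, field and instance names are hypothetical.

```python
# Minimal, illustrative sketch of identity-to-location mapping.
# Not ConteXtream's implementation; names and structures are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class Location:
    datacenter: str
    host: str     # physical or virtual host currently running the function
    port: int


class MappingService:
    """Resolves stable identities (subscribers, VNF instances) to their
    current locations, so identities stay fixed while workloads move."""

    def __init__(self) -> None:
        self._map: dict[str, Location] = {}

    def register(self, identity: str, location: Location) -> None:
        # Called when a function instance is placed or migrated.
        self._map[identity] = location

    def resolve(self, identity: str) -> Location:
        # Called on the data path (or cached at the edge) to steer a flow.
        return self._map[identity]


# Example: a virtual firewall instance moves between hosts; flows keep
# addressing it by identity, and only the mapping changes.
mapper = MappingService()
mapper.register("vFW-subscriber-pool-7", Location("dc-east", "host-12", 4789))
print(mapper.resolve("vFW-subscriber-pool-7"))
mapper.register("vFW-subscriber-pool-7", Location("dc-west", "host-03", 4789))  # migration
print(mapper.resolve("vFW-subscriber-pool-7"))
```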
Q: What are the different approaches to network virtualization, and why did you select your approach?

A: One approach attempts to change as little as possible in transitioning to network virtualization. In this approach, big multi-function proprietary networking boxes (sometimes referred to as god-boxes) are replaced by big multi-function Virtual Network Functions (VNFs) in software. The cables that connected the big boxes are replaced by a simple virtual network that provides virtual cables. In this "fat-VNF & simple-Infrastructure" approach, each VNF needs to itself provide for scaling through virtual machine elasticity, for distribution across locations, for sharing state between VMs, for load balancing, for service chaining of sub-functions, and for subscriber awareness as needed. An alternative approach is to rely on the virtual network infrastructure to provide these capabilities (scaling through elasticity, distribution, state sharing, load balancing, service chaining and subscriber awareness) to all VNFs as needed. This approach enables simpler VNFs, faster development, more innovation, reuse of sub-functions across many VNFs, less vendor lock-in, and best of breed for every function. Putting these common capabilities in the virtualization infrastructure creates a VNF isolation and abstraction layer that enables fast time to market with new functions that are not connected directly inline. Implementing and thoroughly testing these inline common capabilities once, in the infrastructure, is more reliable than implementing them separately in each VNF. In short, though we support both the "fat-VNF & simple-Infrastructure" approach and the "thin-VNF & smart-Infrastructure" approach, we highly recommend the second in order to realize more of the potential benefits of network virtualization.

Q: What are the top 5 criteria to use in evaluating network virtualization solutions?

A: The top 5 criteria we have found for solutions that support network virtualization are:

1) Production deployment experience: Carriers look for solutions that are based on proven technology and want examples of real live deployments. The needs of carriers are different from those of enterprises, and most solutions cater to the enterprise.

2) Overlay solution: To attain the highest utilization of network functions, the network should be able to support a function anywhere while making it available as one logical function. We believe that an overlay is more than support for an encapsulation technique: to provide scalable network virtualization, a solution needs a function-distribution mapping system that allows this location decoupling. In a true virtual network a function can be anywhere, and that should not impact the performance of the network.

3) Subscriber and application awareness in steering: This is a unique requirement of network virtualization for carriers, because carrier services are typically subscriber services, and the best network design is only possible if the network can take the best-of-breed function vendors available, right-size them, and then steer subscribers to these functions based on the end-consumer service requirement. We see this increasingly in both fixed broadband and mobile networks.
4) Application-aware load balancing functionality: The right-sized-VNF approach to network virtualization increases the number of function instances, which greatly increases the complexity of managing load across those instances; the solution therefore needs to support built-in application-aware load balancing.

5) Standards compliance: This is something operators have always required and will need for interoperability. We are committed to this and in fact take a leadership role whenever appropriate.

Q: What are the primary benefits that customers seek when they talk to you about network virtualization – cost/CAPEX reduction, revenue enhancement, other?

A: From our experience, the primary benefit carriers are seeking is the optimization of their network. They are trying to reduce both the CAPEX and the OPEX needed to provide services to end-users. The other benefit they are seeking is programmability of the network. Thus they are seeking ways to reduce operating expense while increasing the speed with which innovation can be delivered; consequently, operators can introduce new services with greater agility.

Q: Which use cases do you see gaining traction, and which ones are still in the definition phase?

A: The use case we see in deployment at mobile carriers is the domain providing value-added services between the Evolved Packet Core and external networks, also called the Gi-LAN. Carriers are motivated to address it to reduce and consolidate the number of VAS middle-boxes and the sprawl of Gi/SGi proxies and purpose-built appliances. There are also monetization opportunities through more customized service chains to process consumer and/or enterprise traffic. Another use case that is increasingly popular is driven by the adoption of Voice-over-LTE, which is driving virtualization of the IP Multimedia Subsystem (IMS) and of the Session Border Controller (SBC). We support all these use cases in a manner where there is no need to rip and replace existing functions with virtualized ones; rather, carriers can virtualize using both physical and virtual instances. Other use cases that we see in the medium term are around virtual EPC, vCustomer Premise Equipment, vRadio Access Network and vContent Delivery Network. These use cases are in the definition phase because either they are very complex or significant changes could cause a catastrophic service disruption. For example, the EPC is a complex, monolithic system with multiple interfaces, and given the growth in traffic, the advent of machine-to-machine communications and so on, there is a definite need to virtualize it and create an elastic EPC that is more programmable. But it remains in the definition phase because failure can cause a serious service disruption, so carriers need an approach that allows them to experiment without taking on too much risk. We expect vEPC to move into field trials and eventually deployment in the next two years.
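Pulling together the thin-VNF approach and the steering and load-balancing criteria above, the sketch below shows, in minimal Python, how a fabric might steer each subscriber's traffic through a chain of right-sized VNF instances while keeping a given subscriber pinned to the same instances. It is illustrative only; the chain definitions, function names, and instance pools are hypothetical and do not represent ConteXtream's product.

```python
# Illustrative sketch of subscriber-aware service chaining with simple
# load balancing across right-sized VNF instances. All names are hypothetical.

import hashlib

# Per-subscriber-class service chains (e.g. derived from the subscriber profile).
SERVICE_CHAINS = {
    "consumer":   ["nat", "parental-control", "video-optimizer"],
    "enterprise": ["nat", "firewall", "dpi"],
}

# Pools of instances for each virtual network function.
VNF_INSTANCES = {
    "nat":              ["nat-1", "nat-2"],
    "firewall":         ["fw-1", "fw-2", "fw-3"],
    "dpi":              ["dpi-1"],
    "parental-control": ["pc-1", "pc-2"],
    "video-optimizer":  ["vo-1", "vo-2"],
}


def pick_instance(function: str, subscriber_id: str) -> str:
    """Hash the subscriber onto one instance so a subscriber's flows stay
    on the same instance (keeps per-subscriber state local)."""
    pool = VNF_INSTANCES[function]
    digest = hashlib.sha256(f"{function}:{subscriber_id}".encode()).digest()
    return pool[digest[0] % len(pool)]


def steer(subscriber_id: str, subscriber_class: str) -> list:
    """Return the ordered list of VNF instances this subscriber's traffic
    should traverse (the realized service chain)."""
    return [pick_instance(fn, subscriber_id) for fn in SERVICE_CHAINS[subscriber_class]]


print(steer("imsi-310150123456789", "consumer"))
print(steer("acme-corp-branch-12", "enterprise"))
```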

Storage performance validation leader takes top honor for helping IT organizations determine which workloads require flash storage

Santa Clara, CA, May 20, 2014 – Load DynamiX, the leader in storage infrastructure performance validation, announced it has been selected as a Red Herring Top 100 North America Winner, a prestigious award honoring the year's most promising private technology ventures from the Americas business region.

"We are honored to receive such a prestigious award as recognition of our unique value to the IT industry," stated Philippe Vincent, CEO of Load DynamiX. "With the storage industry going through one of the biggest disruptions in its history, we are seeing accelerating adoption of our storage performance validation solutions. These products help IT architects ensure a successful transition to flash and hybrid storage systems by aligning purchase decisions to workload performance requirements."

Red Herring's Top 100 North America enlists outstanding entrepreneurs and promising companies. It selects 100 award winners from among the approximately 3,000 tech startups financed each year in the US and Canada. Red Herring's editorial staff evaluated companies on both quantitative and qualitative criteria, such as financial performance, technological innovation and intellectual property, management quality, business model, customer footprint and market penetration.

"In 2014, selecting the top 100 achievers was by no means a small feat," said Alex Vieux, publisher and CEO of Red Herring. "In fact, we had the toughest time in years because so many entrepreneurs had crossed significant milestones so early. But after much thought, rigorous contemplation and discussion, we narrowed our list down from hundreds of candidates from across North America to the top 100 winners. We believe Load DynamiX embodies the vision, drive and innovation that define a successful entrepreneurial venture. Load DynamiX should be proud of its accomplishment, as the competition was very strong."

About Load DynamiX
As the leader in infrastructure performance validation, Load DynamiX empowers IT professionals with the insight needed to make intelligent decisions regarding networked storage. By accurately characterizing and emulating real-world application behavior, Load DynamiX optimizes the overall performance, availability, and cost of storage infrastructure. The combination of advanced workload analytics and modeling software with extreme load-generating appliances gives IT professionals the ability to cost-effectively validate and stress today's most complex physical, virtual and cloud infrastructure to its limits. See more at: http://www.loaddynamix.com/news/load-dynamix-wins-2014-red-herring-top-100-north-america-award/

ContexNet SDN Solution for Service Providers Honored for Innovation

Mountain View, Calif. – May 13, 2014 — ConteXtream Inc., a leading provider of carrier-grade network virtualization solutions, today announced that TMC, a global, integrated media company, has named the company's Software-Defined Networking (SDN) solution for service providers, ContexNet, a 2014 Excellence in SDN Award winner presented by TMC's INTERNET TELEPHONY Magazine and SDN Zone.

"Recognizing leaders advancing SDN technologies, TMC is proud to announce ConteXtream as a recipient of the annual Excellence in SDN Award," said Rich Tehrani, CEO, TMC. "ContexNet has demonstrated innovation and will help shape the face of the quickly evolving industry. It is our pleasure to honor ConteXtream for their inspiring work."

ContexNet is a carrier-grade, distributed SDN fabric that leverages proven virtualization and grid technologies, enabling service providers to take the first step toward achieving Network Functions Virtualization (NFV) as outlined by the European Telecommunications Standards Institute (ETSI). ContexNet enables operators to optimize their network through virtualization and effective utilization of resources while accelerating revenue growth through rapid introduction of new services and features. ContexNet runs on standard, off-the-shelf computing platforms, creating a distributed and scalable SDN domain and dynamically linking network functions by steering the right traffic flows to the various virtual network functions. ContexNet is compliant with industry-standard mechanisms such as OpenFlow, LISP and OpenStack, which enable it to fully separate control from forwarding, location from identity, and orchestration from networking.

"We are honored to be named a recipient of the inaugural Excellence in SDN Awards," said Anshu Agarwal, VP Marketing, ConteXtream. "This award complements the tremendous success we have experienced in customer deployments over the last 18 months, which includes global deployment at several Tier-1 operators, and further validates our position as a leader in SDN for the service provider market."

SDN architecture has already had a profound impact on the IT and telecom industries and, as this technology continues to grow in popularity, the Excellence in SDN Awards recognize the companies that are leading the way in SDN architecture and applications. Winners of the Excellence in SDN Award are published in the May 2014 issue of INTERNET TELEPHONY Magazine.

About ConteXtream
ConteXtream is a privately held software-defined networking (SDN) company that enables carriers to deliver network capacity and functions the same way cloud providers deliver applications, effectively utilizing standard compute and storage. ConteXtream's SDN offering is carrier-grade and enables Network Function Virtualization (NFV) for various solutions deployed on carrier networks. Deployed by Tier-1 operators, ConteXtream's Carrier-SDN dynamically and elastically connects subscribers to services and enables carriers to leverage standard, low-cost server hardware and hypervisors to virtualize functions and services, while replacing costly purpose-built proprietary systems. Headquartered in Mountain View, Calif., ConteXtream is backed by Benhamou Global Ventures, Gemini Israel Funds, Norwest Venture Partners and Sofinnova Ventures as well as Comcast Ventures and Verizon Investments. For additional information, visit www.contextream.com.
About TMC
TMC is a global, integrated media company that supports clients' goals by building communities in print, online, and face to face. TMC publishes multiple magazines, including Cloud Computing, M2M Evolution, Customer, and Internet Telephony. TMCnet is the leading source of news and articles for the communications and technology industries, and is read by as many as 1.5 million unique visitors monthly. TMC produces a variety of trade events, including ITEXPO, the world's leading business technology event, as well as industry events: Asterisk World; AstriCon; ChannelVision (CVx) Expo; Cloud4SMB Expo; Customer Experience (CX) Hot Trends Symposium; DevCon5 – HTML5 & Mobile App Developer Conference; LatinComm Conference and Expo; M2M Evolution Conference & Expo; Mobile Payment Conference; Software Telco Congress; StartupCamp; Super Wi-Fi & Shared Spectrum Summit; SIP Trunking-Unified Communications Seminars; Wearable Tech Conference & Expo; WebRTC Conference & Expo III; and more. For more information about TMC, visit www.tmcnet.com.

Avery Lyford, Chairman of Qubell, shares his perspective on accelerating the enterprise IT software release cycle.

The latest National Retail Federation report opens with "CIOs can summarize their priorities and challenges in one word – Agility." Agility is about capturing opportunities. As IT is used to drive the top line, agility becomes the critical battleground. New technologies are dramatically redefining customer interactions and expectations. Nowhere is this more apparent than in the retail industry, where mobile, social, and big data are transforming the retail business. Over half of all Americans now have a smartphone, and 70% of them used it in a store last year. This environment of rapid change rewards agility.

Digital natives such as Amazon, Google and eBay have invested heavily to build the competencies required to excel in this environment. Continuous delivery and continuous integration are two of those essential competencies, and they use that agility as a competitive weapon. For instance, Amazon uses substantial A/B testing to optimize its web site and deploys new code every 11 seconds. Today the infrastructure can change as quickly as the code. For digital natives, change is constant and expected, not occasional and resisted.

However, most enterprises are not digital natives and do not have the luxury of rebuilding from a blank slate. What do you do if you have a large established business and want to compete effectively? Simply working harder with existing technologies and methodologies will not provide the required 100x speed improvement. The challenge is not simply about development. The entire process from idea to implementation must be addressed. Every stage from concept to development to test to pre-production to production must be compressed. As Michael Cote of the 451 Group put it: "cloud is speed. How fast does it take to deploy a new release, IT service, and patch, provision a new box, and so on with your 'traditional' setup? How fast does cloud allow you to do it?"

To meet this challenge, Qubell (http://www.qubell.com) worked with key customers, such as Kohl's, who are leading the charge to accelerate online businesses. "Rapid delivery of new, omnichannel features to online customers is critical in retail ecommerce, and we want to deliver our innovative ideas to customers more quickly," said Ratnakar Lavu, Kohl's senior vice president of digital innovation. "Qubell provides exactly what we were looking for: an automation platform to accelerate our application development processes and update and deploy our applications in near-real time."

Qubell takes an application-centric approach by treating an application as a collection of services; it provides a unified view across multiple data centers and clouds and supports a wide variety of configurations. Qubell terms this approach the "Agile Software Factory." The Agile Software Factory approach cleanly separates the application from the environment, captures the application's dependencies, packages applications as auto-configurable, and delivers accelerated release cycles. The result is daily pushes of new changes to production, continuous automated testing and automated live upgrades. The business gains dramatically accelerated release cycles and higher reliability while leveraging its existing investments in software platforms.
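As a rough illustration of the separation Qubell describes between an application and its environment, here is a hypothetical Python sketch in which an application is modeled as a collection of services with declared dependencies and is bound to a target environment only at launch time. The manifest structure, class names, and example services are invented for illustration and are not Qubell's actual format or API.

```python
# Hypothetical sketch of an application-centric manifest that separates the
# application (services + dependencies) from the environment it runs in.
# Illustrative only; this is not Qubell's actual manifest format or API.

from dataclasses import dataclass, field


@dataclass
class Service:
    name: str
    image: str
    depends_on: list = field(default_factory=list)


@dataclass
class Application:
    name: str
    services: list

    def launch_order(self) -> list:
        """Resolve dependencies into a start order (simple topological sort)."""
        order, seen = [], set()

        def visit(svc: Service) -> None:
            for dep in svc.depends_on:
                visit(next(s for s in self.services if s.name == dep))
            if svc.name not in seen:
                seen.add(svc.name)
                order.append(svc.name)

        for svc in self.services:
            visit(svc)
        return order


# The same application definition can be deployed to any environment
# (dev sandbox, pre-production, production) supplied at launch time.
storefront = Application("storefront", [
    Service("db", "postgres:9.3"),
    Service("api", "storefront-api:1.4", depends_on=["db"]),
    Service("web", "storefront-web:1.4", depends_on=["api"]),
])

environment = {"cloud": "aws-us-east", "size": "small"}   # bound at launch, not baked in
print(f"Deploying to {environment['cloud']}: {storefront.launch_order()}")
```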

Len Rosenthal, Vice President at Load DynamiX, a BGV portfolio company, shares his views on the workload modeling challenges in the modern data center.

The modern data center is evolving from a client-server deployment paradigm to a multi-tenant, virtualized, cloud infrastructure managed as a service. By 2017, nearly twice as many applications will be deployed in cloud data centers as in traditional data centers. While the new architectures offer greater flexibility and potentially lower cost, they bring new challenges for performance assurance, especially for I/O-intensive applications.

In traditional data centers, applications are individually matched to a dedicated server/network/storage infrastructure. The I/O workload demands created by each application on its supporting infrastructure are relatively static and well understood. Infrastructure over-provisioning, especially of storage, is rampant as a way to reduce performance risk. In these environments, application behavior itself is the primary performance risk, and IT managers approach performance assurance with solutions such as HP LoadRunner (formerly Mercury Interactive) or other Application Performance Management tools.

By contrast, in the modern data center a constantly changing portfolio of applications shares a common multi-tenant infrastructure-as-a-service (IaaS), resulting in dynamic, random and highly concentrated I/O workloads that are highly likely to push the infrastructure to its limits. As infrastructure expenditures have skyrocketed, over-provisioning is becoming untenable as a method of performance assurance. A poorly understood, increasingly stressed infrastructure becomes a primary performance and business risk. IT professionals require new insight into infrastructure behavior under the dynamic loads of their applications. With such insight they can assure performance, mitigate risk and optimize technology expenditures.

An innovative new approach, called storage workload modeling, provides a new level of infrastructure insight and can be used to validate the performance of any networked storage solution. The process starts with an analysis of the I/O patterns associated with how virtualized applications interact with the storage infrastructure. A "fingerprint," or I/O profile, is created that can then be used to build a highly accurate simulation of the installed production workloads. This simulation of the workload can then be varied for "what-if" analysis and combined with a load-generation appliance to find the breaking points of the infrastructure. This ensures the most cost-effective products are deployed based on the performance requirements of the dynamic workloads.

Workload modeling is the new key to infrastructure performance assurance. It provides critical insight that enables IT organizations to predict how infrastructure will perform as applications and/or the infrastructure change. BGV portfolio company Load DynamiX, based in Santa Clara, CA, is focused on this problem. Its customers include dozens of G1000 companies that are trying to accelerate the roll-out of new products and services, eliminate infrastructure performance-related business interruptions, and cut their storage costs by over 50%. Over the coming years, we see every G1000 data center and nearly all cloud service providers relying on workload modeling as a fundamental new IT process.
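To make the workload-modeling process concrete, the short Python sketch below turns a captured I/O trace into a simple profile (the "fingerprint") and then scales it for a what-if scenario, the kind of model that would then drive a load-generation appliance. This is an illustrative sketch only; the data structures and field names are hypothetical and are not Load DynamiX's product formats or APIs.

```python
# Illustrative sketch of storage workload modeling: summarize an I/O trace
# into a profile, then scale it for "what-if" analysis. Not Load DynamiX's
# actual product, formats, or APIs; all names are hypothetical.

from collections import Counter
from dataclasses import dataclass


@dataclass
class IOEvent:
    op: str          # "read" or "write"
    size_kb: int     # request size
    random: bool     # random vs. sequential access


def fingerprint(trace: list) -> dict:
    """Summarize a captured production I/O trace into a workload profile."""
    total = len(trace)
    reads = sum(1 for e in trace if e.op == "read")
    return {
        "read_pct": 100 * reads / total,
        "random_pct": 100 * sum(1 for e in trace if e.random) / total,
        "block_size_mix": dict(Counter(e.size_kb for e in trace)),
        "iops_baseline": total,          # events per captured interval
    }


def what_if(profile: dict, load_multiplier: float) -> dict:
    """Scale the modeled workload to explore breaking points (e.g. 3x load)."""
    scaled = dict(profile)
    scaled["iops_baseline"] = int(profile["iops_baseline"] * load_multiplier)
    return scaled


trace = [IOEvent("read", 8, True), IOEvent("write", 64, False), IOEvent("read", 8, True)]
profile = fingerprint(trace)
print(profile)
print(what_if(profile, load_multiplier=3.0))   # model handed to a load generator
```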

Sharon Barkai, Co-Founder of ConteXtream, a BGV portfolio company, shares his views on what 2014 holds in regards to virtualization (article reprinted from Virtual Strategy Magazine).

When looking ahead to what 2014 holds in regards to virtualization, it is perhaps wise to anchor the outlook on some of the natural trends we witnessed during 2013. Mobile carriers connect our personal devices (including smartphones, tablets, cars, watches, etc.) to the cloud content we can't get enough of (Facebook, Netflix, Amazon, etc.). As the proliferation of these devices continues and our content choices climb, the focus on virtualization of mobile and content service provider networks will remain a hot and relevant topic. This is the most competitive, dynamic and demanding networking environment today, and since telcos will require innovative solutions to ensure survivability and success both now and in the future, we are sure to see a strong showing of virtualization activity in 2014.

One underlying key assumption is that network providers such as Verizon, AT&T and Comcast will eventually have to structure their infrastructure in a manner similar to that of Internet giants like Google, Amazon and eBay. This is inevitable because carriers operate their networks between the proverbial "rock" of users' insatiable demand for mobile apps and the "hard place" of abundant, low-cost mobile Internet supply. If Internet giants can scale cloud capacity based on cheap, commercial off-the-shelf (COTS) hardware, and can quickly roll out functionality based on easy-to-scale, core map-reduce software, then carriers have to be able to do exactly the same for the public network functions they provide. That requires virtualizing their network. But the questions are: how much of this will actually materialize during 2014, where exactly do we stand in terms of ramp-up, and what is the wavelength of this trend?

We went into 2013 as an industry with multiple carrier network virtualization trials and proofs of concept (POCs) already under our belt, and at least one large-scale U.S. Tier 1 deployment of mobile functions virtualization, aka Gi LAN or CloudGi, with a few more well underway to production. It's difficult to pinpoint how many other large-scale mobile network virtualization production rollouts were completed during 2013, but one thing is certain: there are only more ahead of us. What's more, we spent, as an industry, a full year on European Telecommunications Standards Institute (ETSI) Network Function Virtualization (NFV) discussions following the release of the initial multi-carrier NFV white paper. We also witnessed major progress around carrier-scale network virtualization achieved in the various Internet Engineering Task Force (IETF) network virtualization overlay working groups.

However, industry-wide architectural discussions and initial network virtualization deployments are one thing; a comprehensive transformation of carrier cost structure and network operations to software models is quite another. The technological gist of the software-defined, in-network map-reduce needed to enable carriers to shift to global software cloud and cost models is more or less clear. These carrier Software-Defined Networking (SDN) technologies work, despite the fact that, unlike Internet providers, carriers do not typically write their own apps and are extremely geo-distributed in nature.
The question is when these will actually make their destined profound impact and transform telecom into standard IT. The software point innovations that can drive this structural shift in telecom forward are already out there, ranging from advanced optimizations of chatty connections and of the content itself, to smart monetization, custom enterprise tenant applications, and joint marketing that enables revenue sharing with over-the-top apps. So how much of this vision can we expect to come to fruition during 2014? Quite a bit, I suspect, or rather, hope. I expect at least one major production project of this kind at each of the Tier 1 carriers. This can be attributed to the competitive pressures between carriers and over-the-top (OTT) providers. If there is an optimization or monetization innovation that one carrier can "weave" into the network more quickly or efficiently than its competitors, those competitors may find themselves knocked out of the race.

One specific mobile virtualization transformation that generates a lot of curiosity and buzz is the virtual Evolved Packet Core, or vEPC. These specific carrier functions help connect the radio frequency (RF) network to the Internet by making moving objects seem stationary to the rest of the world. If IP is the heart of a mobile carrier, then the EPC is the brain. In this area, many industry players expect lots of trials and also some limited rollouts in 2014. However, this specific transition is a bit of a catch-22 that will take a few iterations to get out of lock-step. This has to do with the essence of mobility itself, in which any user may be hooked to both virtualized and non-virtualized EPC components. A migration of a given region to vEPC cannot contain today's on-the-go consumers. It will likely take a few backward-compatibility bootstrap moves over the next couple of years, but we will eventually get there.

To sum it up, I predict that when we look back at this period of evolution among carrier networks, we will be quite amazed at the monumental shift that took place and the speed at which it occurred. We will be as impressed by the evaporation of the last resort of turn-key "mainframes," triggered by network virtualization, as we were by the major IP, broadband, and smartphone revolutions. Right now we all remain students of virtualization, with big ideas about the future and its potential. While we're no longer "freshmen" and our majors as specific players are decided, there is still much left to discover. "Graduation day" is not 2014, but it may be closer than we think. For more information, read ConteXtream Predicts SDN & NFV Market Tipping Point for Telcos in 2014 at http://www.contextream.com/news/pr20140122.html

In 1994, a long time before Israel became known as Start-Up Nation, I met a young company named NiceCom, an early technology leader in the emerging field of ATM switching. The company was then a subsidiary of Nice and was led by Nachman Shelef; Sharon Barkai was one of his most creative engineers. At the time, I was running 3Com, and we decided to acquire NiceCom that year. This was the transaction that ignited interest in Israeli high tech and opened the floodgates for a wave of successful start-up exits in the field. Twelve years later, I met up with Nachman and Sharon in the lobby of the Tel Aviv Hilton. This time, we discussed their latest ideas about re-injecting innovation into the dormant, Cisco-dominated networking industry. The concepts they described to me had true breakthrough potential, and I decided to invest in their budding company ConteXtream on the spot. This is how BGV became ConteXtream's first seed investor. I talked to Sharon again this past week to reflect upon the path traveled since our initial meeting.

ConteXtream Seeded by BGV

A conversation with Sharon Barkai, founder and CTO

1- When you and Nachman Shelef first started ConteXtream, what did you think was the most critical problem you wanted to address? What was the breakthrough idea that got you started?

One of the key challenges for ConteXtream was actually that the company was established based on observations about the possibilities of new networking architectures, rather than any specific problem in good old IP/Ethernet bridging and routing. These observations were about tasking (distributed) software with the job of building a network, or connecting things. The immediate implications of this idea were: a) we need a basic IP infrastructure in place so we can distribute software-defined anything, and b) a software-defined network mapping function has extraordinary identity-based qualities and abilities that eliminate traditional networking complexities. It was only later on that the true applications for software-defined networking presented themselves. These had a lot to do with the separation of sequential client hardware (e.g. the mobile smartphone) from (multi-core) concurrent server hardware, the formation of elastic clouds, the mobility of virtualized environments, and cloud networking.

2- How has your original vision evolved over the past 5 or 6 years? Which part of your original vision has remained the same, and which part has changed?

The basic vision has remained mostly intact and, while it has begun to materialize, has not yet fulfilled its full potential. But we had to constantly evolve how we communicated it, both externally and internally, and connect the dots between the current state of the market and where we wanted to go. For example, I recall one of our early marketing offsite meetings, moderated by an iconic Silicon Valley communicator, which led us to refocus our pitch to investors entirely on network virtualization. Needless to say, the reaction was not very enthusiastic. It was premature: virtual machines were not yet a well-known concept, and neither were vMotion, elasticity, or hosting, and carrier network function virtualization was definitely beyond the "after we retire" horizon. Naturally, all of that evolved after other innovative players coined the terms SDN and NFV and started articulating related notions.

3- SDN started off as a way to open network innovation in research environments, but it has fast become a buzzword and a hot investment area. ConteXtream deserves credit for being a pioneering inventor of its fundamental concepts. Can you separate the actual reality from the fiction? Which types of large-scale SDN deployments are possible today? What do you expect over the next couple of years?

The emergence of the SDN discussion in the industry definitely made our lives simpler in terms of communicating the kind of networking architectural innovation we wanted to show. Before that milestone, there was a sense that growing the IP transport infrastructure had to be the end-all play, given the success of broadband and the Internet. Yet, compared with the early SDN "marketechture" descriptions, we still looked a bit different. The original SDN proposals portrayed an out-of-band network software controller as if it were a smart robot controlling a Detroit auto-assembly line. It is only in the past 12 months or so that we have witnessed a growing realization that SDN needs to extend, not replace, the IP transport infrastructure, and that federated, emergent architectures such as ConteXtream's distributed flow-mapping design are the way to go about it.
4- Recently, there has been growing support among carriers for Network Function Virtualization (NFV), notably within ETSI. What are your views of the principal goals and benefits of NFV? How does NFV's focus differ from or complement SDN?

To us, the NFV trend completes the picture of what we started on day one. It is not only about the ability to network and connect identities using distributed software and distributed flow-mapping, but also about making every function carriers want to apply to their traffic, from the access to the core and back, instantiable the same way as a Google search. In other words, it enables these giant communication service providers to operate just like Internet companies! I am referring to their capex efficiency, their service velocity, their time to market, even though they don't write their own apps and they operate using a much more geo-distributed set of points of presence.

5- Why should mobile carriers embrace SDN today?

Mobile carriers connect the things that matter most to consumers, their personal devices (phone, tablet, car, watch), to the cloud. This is the most competitive, dynamic, and demanding networking environment today. Innovation is needed in every aspect of this business to ensure survivability and success. If there is a service optimization or monetization opportunity that your competitor can "weave" into the network more quickly or more efficiently than you, then you may not live to play another round. In essence, this is the capability that SDN and network function virtualization bring to the table. They function in the direct line of traffic and complement the virtual compute-storage environment introduced in the past decade. It is a win-win-win for carriers, end customers, and technology innovation vendors. Already today, ConteXtream is able to offer its production customers the ability to "chain in" a menu of functions, and to efficiently load balance and service-match a 50-million-subscriber "white pages" base to a "yellow pages" set of functions, 50 billion times a day! I think we could say this is definite evidence that SDN has reached prime time with mobile carriers.
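As a way to picture the distributed flow-mapping design mentioned above, the minimal Python sketch below partitions a subscriber-to-service mapping table across many fabric nodes so that lookups scale out with the subscriber base. It is a simplified illustration under assumed names and structures, not ConteXtream's implementation.

```python
# Hypothetical sketch of a distributed flow-mapping table: the
# subscriber-to-service mapping is sharded across fabric nodes so that
# lookups scale out with the subscriber base. Not ConteXtream's implementation.

import hashlib


class DistributedFlowMap:
    def __init__(self, node_count: int) -> None:
        # Each "node" holds one shard of the subscriber -> service-instance map.
        self.shards = [{} for _ in range(node_count)]

    def _shard_for(self, subscriber_id: str) -> dict:
        digest = hashlib.sha256(subscriber_id.encode()).digest()
        return self.shards[int.from_bytes(digest[:4], "big") % len(self.shards)]

    def bind(self, subscriber_id: str, service_instance: str) -> None:
        # Record which function instance currently serves this subscriber.
        self._shard_for(subscriber_id)[subscriber_id] = service_instance

    def lookup(self, subscriber_id: str) -> str:
        # Resolve the subscriber to its function instance on the data path.
        return self._shard_for(subscriber_id)[subscriber_id]


fabric = DistributedFlowMap(node_count=8)
fabric.bind("msisdn-14155550101", "video-optimizer-23")
print(fabric.lookup("msisdn-14155550101"))
```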
