Anik Bose, BGV General Partner shares his perspective on the digital transformation of manufacturing and the challenges of bringing together the worlds of information technology and operational technology.

The explosive growth in sensors, data and analytics is bringing asset-intensive industries into a new era of unprecedented connection and information. This transformation offers these industries the ability to significantly improve their operations and achieve higher levels of productivity. It is estimated that every 1% increase in production efficiency represents a $200,000 saving per day per plant in a large manufacturing operation. This example was illustrated by FANUC, one of the top two industrial robot vendors in the world: if the utilization rate of a large factory rises from 85% to 88%, the factory saves $600K per plant per day. The greater the complexity of the supply chain, the higher the value creation potential.

To unlock this value, manufacturers are increasingly looking to adopt big data and analytics to improve operational efficiency and increase product quality, across multiple verticals such as pharmaceuticals, chemicals, energy and automotive. However, this comes with inherent challenges due to the complexity of mixing the Information Technology (IT) and Operational Technology (OT) worlds. To deliver on this value creation potential, we need to build stronger connections between IT and OT at both the technology and organizational levels. The challenge lies in the fact that each system was purpose-built, and neither was designed to work with the other.

Technology Challenge

In today’s enterprise there is a substantial communication gap between IT and OT technologies. Each uses its own method of connectivity, from the physical connectors and buses that data rides on, to the language each uses to convert bits and bytes into human-readable and actionable information.
Industrial devices have been designed for long life cycles and, as a result, use varied physical communication layers, mostly proprietary to their industry. The first step in connecting such legacy industrial systems to the IIoT is to provide some type of conversion from these application-specific physical buses to open, ubiquitous physical interfaces such as Ethernet and wireless. There is also a need to aggregate smaller, simpler devices, like non-networkable sensors or electric circuits, into a networked gateway device, in order to translate sensor-level signals onto standard network interfaces and then into the primary Internet communications protocol, TCP/IP. The biggest challenges to this proposition come from:
  1. Large number of devices and sensors
  2. Need for low power and low bandwidth connectivity and
  3. Fragmented nature of the vendor market
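To make the conversion concrete, here is a minimal sketch (in Python, with an invented frame layout; no real industrial protocol is implied) of the gateway translation step: a proprietary binary sensor frame is parsed and re-emitted as a JSON payload on an MQTT-style topic, ready to travel over standard TCP/IP networks.

```python
import json
import struct

# Hypothetical frame layout for illustration only (not a real industrial
# protocol): 2-byte sensor ID, 4-byte signed big-endian temperature in
# milli-degrees Celsius, 2-byte status word.
FRAME_FORMAT = ">HiH"

def frame_to_message(raw_frame):
    """Translate one proprietary sensor frame into an (MQTT-style topic,
    JSON payload) pair suitable for transport over standard TCP/IP."""
    sensor_id, milli_celsius, status = struct.unpack(FRAME_FORMAT, raw_frame)
    topic = f"plant/sensors/{sensor_id}/telemetry"
    payload = json.dumps({
        "sensor_id": sensor_id,
        "temperature_c": milli_celsius / 1000.0,
        "fault": bool(status & 0x0001),  # assume bit 0 is a fault flag
    })
    return topic, payload

# Example: sensor 7 reporting 21.5 C with no fault bit set.
topic, payload = frame_to_message(struct.pack(FRAME_FORMAT, 7, 21500, 0))
```

A production gateway would add buffering, authentication and transport handling; the point here is only the translation from a closed wire format to an open one.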
While a custom protocol can be useful in a single given application, it creates a hurdle in accessing the data required to realize the benefits that digital manufacturing offers. In contrast, IT networks use the same open standards and protocols found on the Internet. The Internet was founded on open standards like TCP/IP, with application-specific protocols layered on top: HTTP/S, SMTP, SNMP, MQTT, etc. The Internet uses programming languages like JavaScript, Java and Python and presents information using technologies like HTML5 and CSS, all of which are open.

To achieve the promise of digital manufacturing, OT and IT technologies must converge, allowing connection and communication. Today, the existing systems and protocols have created “islands of connectivity” caused by the lack of interoperability between open and proprietary protocols. This convergence is likely to be enabled through an evolutionary transition beginning with solutions such as protocol gateways, OPC servers and middleware. In the long run, OT/IT convergence will demand a flattened architecture and seamless communication between assets, utilizing open, standards-based protocols and programming.

Another area critical for IT/OT convergence is security. OT systems had inherent built-in security due to the physical separation of the networks – these systems were “air-gapped” from the IT systems. Connecting OT systems creates points of failure that can cause real disruption to the business. Imagine a ransomware attack holding a factory floor hostage. Enabling the convergence of IT and OT systems in a secure way is essential for this transformation.

People Challenge

The above challenges are further compounded by the different skill sets and the resistance to change that exist between IT and OT teams. Traditionally there have been separate departments for IT and OT – with different people, goals, skills and projects.
Continuing to operate separately not only creates a significant barrier to the adoption of technologies that fall outside the operations teams’ comfort zone, but also exposes companies to fault or security risks that could significantly impact production. To rectify this situation, the strategies of the IT and OT departments need to be aligned, and IT and operations managers need to share common goals and targets. Joint projects will harmonize duplicate or overlapping systems and processes, and promote the development of the interdisciplinary skills now missing in most companies. This is a significant cultural shift that requires time, trust and a progressive plan. Simple pilot projects are a great way to deliver tangible value, train resources and progressively develop the IT/OT skills of team members.

Getting started

BGV portfolio company Bayshore Networks enables industrial enterprises to connect to the Internet securely while protecting manufacturing assets from cyber-based threats. The company’s product enables asset-intensive industries seeking operational efficiencies to bridge their IT and OT environments, collect big data, apply the analytics required to unlock the value of digital manufacturing, and mobilize their workforce into the connected world. One key use case is granular, secure remote access to industrial devices. A traditional VPN allows a remote maintenance technician to dial into the OT zone, but once inside, that technician has access to all industrial devices (Siemens, ABB, Yokogawa, etc.), which is a major security problem. This is why traditional VPN is not a viable tool for secure remote access to OT networks. Bayshore’s granular Layer 7 secure remote access solution allows remote workers to dial into specific PLCs without being given access to all industrial devices.
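The difference between flat VPN access and granular Layer 7 control can be illustrated with a small sketch. The policy table, user names and device names below are all hypothetical; this is not Bayshore’s implementation, just the deny-by-default idea:

```python
# Hypothetical policy table: each remote technician may reach only the
# specific PLCs (and operations) they are contracted for, unlike a flat
# VPN tunnel that exposes the whole OT zone once it is up.
ACCESS_POLICY = {
    "vendor_tech_1": {("plc-siemens-04", "read"), ("plc-siemens-04", "write")},
    "vendor_tech_2": {("plc-abb-11", "read")},
}

def is_allowed(user, device, operation):
    """Layer 7-style check: deny by default, allow only the explicit
    (device, operation) pairs granted to this user."""
    return (device, operation) in ACCESS_POLICY.get(user, set())
```

The design choice worth noting is the default: an unknown user or an unlisted device/operation pair is denied, which is the inverse of the VPN model described above.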
Other use cases range from providing CIP compliance for utility customers (e.g., the ability to enforce NERC CIP-005-5), to protecting data and systems from attacks initiated through IoT apertures for data center customers, to safely and securely connecting IT/OT to enable OT data transformation for manufacturing customers.

As transaction volumes grow exponentially in the digital world, they are accompanied by a rapid increase in fraud, money laundering and compliance costs. David Andrews, Director of Marketing at Identity Mind Global, and Eric Buatois, General Partner at BGV, share their perspective.

Since October 2015, online e-commerce fraud has jumped 11% in the US. In monetary terms, that means $4.79 out of every $100 is at risk of fraud. In 2016, fraud is predicted to hit $4 billion in losses, and that number is expected to reach $14 billion by 2020. In 2014, fraud caused 12% of losses in P2P online lending. The Financial Action Task Force (FATF) estimated that implementing AML regulations costs $7 billion annually in the U.S. alone. In addition, regulators are keeping close tabs on digital transactions, and the resulting regulations mean more fines associated with non-compliance than ever before (see “Developments in Bank Secrecy Act and Anti-Money Laundering Enforcement and Litigation,” NERA, June 2016).

The Case for Automation

Manual processes are effective at addressing risk and compliance only in a handful of use cases, such as those with few recognizable patterns and those requiring unique expertise and the inspection of the human eye. Manual processes by their very nature are not able to scale. Why? Because processing more volume manually means more employees, higher costs, and an increased likelihood of inconsistencies and errors. A computer can work around the clock with a level of accuracy that does not vary and can capture large volumes of data effectively; this is not possible with manual processes, resulting in a far greater likelihood of being out of compliance. Furthermore, manual processes are more difficult to change. Far better results and greater efficiencies can be achieved by running automated processes alongside manual ones. Automated processes can replace those processes that don’t scale well when handled manually.
For instance, with high-volume transaction monitoring, automation delivers efficiencies through the consistent application of software-based rules, alerts and case management. However, when there is a real exception and a transaction is flagged, people can be brought into the operational process, culminating in a report and a filing as appropriate. Automation can also supplement manual processes, e.g. pre-populating information for Suspicious Activity Reports before they are reviewed and manually sent.

RegTech is the Disruptor

RegTech is a set of technologies focused on the prevention of fraud, the management of risk and compliance with governmental regulations. RegTech provides the agility for organizations to:
  • Reduce risk management and compliance costs: expanding a manual team significantly increases cost, while increasing the capacity of an automated process is as simple as using the elastic scaling capabilities of the risk management and compliance vendor.
  • Increase compliance speed and accuracy: manual transaction monitoring can be slow, with quality varying with the experience and expertise of the team; automated processes are fast and consistent regardless of volume.
  • Access data more efficiently: data can overwhelm manual systems, where additional inputs and analysis can greatly slow processing. An automated system, on the other hand, can capture data across multiple systems and analyze it regardless of volume, producing easier and faster access to information for reporting from businesses through to their regulators.
  • Quickly address new regulations and process changes: changing a manual process requires training and a period for the team to absorb the new process; changing an automated process can be as simple as changing one or more business rules.
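The division of labor described earlier, in which automated rules screen every transaction and humans handle only flagged exceptions, can be sketched as follows. The thresholds and country codes are illustrative placeholders, not real compliance rules:

```python
# Placeholder jurisdiction codes and thresholds; real rules would come
# from the compliance team, not from code constants.
HIGH_RISK_COUNTRIES = {"XX", "YY"}

def screen_transaction(txn):
    """Apply consistent, software-based rules to one transaction; anything
    that trips a rule is routed to a human reviewer with the alert reasons
    pre-populated (much as a SAR draft would be)."""
    alerts = []
    if txn["amount_usd"] >= 10_000:
        alerts.append("large_amount")
    if txn["country"] in HIGH_RISK_COUNTRIES:
        alerts.append("high_risk_geography")
    if txn["daily_txn_count"] > 20:
        alerts.append("unusual_velocity")
    return {"txn_id": txn["id"], "needs_review": bool(alerts), "alerts": alerts}
```

Because the rules live in one place, “changing the process” really is just changing a rule, which is the agility point made above.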
For banks, fintech companies and merchants looking to become more efficient and effective, RegTech is the new way. It offers a lower cost, agile solution focused on operational efficiency across high volume processing and regulatory compliance. It also provides analytics for decisions that help close the gap with the best members of your team. Areas where RegTech can be applied include:
  • Automated onboarding
  • Automated payment risk management
  • Automated compliance monitoring and execution
  • Automated reports generation
  • Automated notifications
However, not all solutions are created equal. In the digital world, companies deal with a wide variety of issues and customers: customers from different demographics and geographies with different risk profiles, and transactions that span the full life cycle from onboarding to purchases. So, when searching for warning signs in transactions, it is critical to review multiple transaction attributes (e.g. the user’s IP address, the user’s phone number, etc.); such approaches are far more accurate at detecting fraud. Furthermore, if the identity of the user behind a transaction is known, one can detect whether they are suspicious (e.g. attempting to make multiple transactions at the same time, attempting smurfing or structured layering) or whether they are legitimate customers with good behavior based on their transaction history. Consequently, a broader full life cycle solution provides a stronger foundation over time, with greater coverage across the variety of risk and compliance issues that a business is likely to face. BGV portfolio company IdentityMind Global is one of the emerging leaders in RegTech. IdentityMind provides a risk management and compliance platform that securely analyzes the entities involved in each transaction (e.g. consumers, merchants, cardholders, payment wallets, alternative payment methods, etc.) to build payment reputations. IdentityMind enables companies to identify and reduce potential fraud, evaluate merchant account applications, onboard accounts, enable identity verification services, and identify potential money laundering.
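As an illustration of history-based detection, here is a sketch of a structuring (“smurfing”) check: several just-under-threshold transactions inside a time window whose combined value crosses the reporting threshold. The $10,000 figure echoes the classic US currency-reporting limit, but the 70% band and the three-transaction minimum are arbitrary illustrative choices:

```python
from datetime import timedelta

def looks_like_structuring(history, threshold=10_000.0, window_hours=24):
    """Flag several just-under-threshold transactions whose combined value
    crosses the reporting threshold within a sliding time window: a classic
    structuring signature. `history` is a list of (timestamp, amount) pairs."""
    recent = sorted(history, key=lambda t: t[0])
    for i, (start, _) in enumerate(recent):
        # All transactions inside the window that opens at this timestamp.
        window = [amt for ts, amt in recent[i:]
                  if ts - start <= timedelta(hours=window_hours)]
        # "Just under threshold" band: illustrative 70%-100% of the limit.
        near_threshold = [a for a in window if 0.7 * threshold <= a < threshold]
        if len(near_threshold) >= 3 and sum(near_threshold) >= threshold:
            return True
    return False
```

This is exactly the kind of pattern that is invisible when each transaction is reviewed in isolation but obvious once the full identity-linked history is available.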

Eric Benhamou, BGV Founder and General Partner shares his impressions of the 2016 RSA conference and trade show.

This year, I dedicated almost three full days to attending the RSA cyber security trade show and conference. I was in good company: over 40,000 attendees and 400 exhibitors. My head is still spinning and my feet are aching from the experience. I was reminded of Interop in the heyday of the ’90s, except in those days, you didn’t bump into other attendees texting on their smartphones while walking the show floor.

As I strolled through the aisles and listened in on the various pitches of the vendors and expert speakers, I was struck by several impressions. They all sound the same! Every pitch starts with the obligatory statistics recounting the spectacular growth in sensational hacks (Target, Sony, and the US Office of Personnel Management are the favorite poster children, but there are many others), and the mounting costs of these attacks. Once you have been sufficiently scared by the sheer catastrophes brought about by the bad guys, you are then exposed to the vendor’s supposedly unique technology (patent pending), which is usually described by a combination of buzzwords picked randomly from the following list: advanced machine learning, virtually no false positives, virtually no false negatives, deep threat intelligence, real time alert correlation, automated incident response, adaptive policy enforcement, intelligent on-demand sandboxing, next generation advanced persistent threat prevention, cloud-based containerized security. If your eyes are not totally glazed over by then, you may partake in a canned demo showing a tiled dashboard comprising colored rings, bar charts, and exploded pie charts.
This would all look pretty similar to your Fitbit daily health and exercise dashboard, except there usually is at least one tile showing 1980s Unix-style time-stamped alerts on a black background, which is there to suggest there is serious computer science wizardry under the hood. It sure is tempting to read these clues as clear signs of commoditization of the cyber industry. While this would be partly true, it would be equally wrong to lose interest in it. Taken as a whole, the cyber security market is estimated to grow at a CAGR of 9.8% from 2015 to 2020, according to a report from MarketsandMarkets. It is a far cry from the 20% CAGR of the network infrastructure industry I remember from the ’90s, but it is nothing to sneeze at: it remains about four times the growth rate of the world GDP. Furthermore, sectors such as threat intelligence, endpoint security and cloud security are growing several times faster than the cyber security industry as a whole.

If you are, like me, an investor in the world of cyber security startups, how are you supposed to place your bets? A sobering fact to keep in mind is that while the cyber security end markets are growing at this good (but not great) clip, the VC industry is pumping capital into the sector at a CAGR of close to 50%. This impedance mismatch portends a fair amount of capital waste and blood on the floor (i.e. funded startups who never reach take-off velocity). Speaking of impedance mismatch, how can CISOs possibly absorb the ever-expanding plethora of new tools competing for their attention? Ultimately, their limited capacity to evaluate, conduct POCs, triage, integrate and deploy new technologies is the gating factor that will prevent at least 50% of these aspiring young cyber startups from ever reaching critical mass. Whether they address a top 3 or top 5 CISO priority in a compelling enough way, and whether or not they can easily integrate into a cyber environment that precedes them, will determine their fate.
As I was debating these observations with my partners, we came to the following conclusions. Punting is not an option: cyber security budgets are growing faster than most other budgets across all enterprises. The quality of the team is an important mitigating factor against the risks of commoditization due to intense competition and blurred differentiation. By quality, I do not mean IQ: I have rarely met a cyber entrepreneur whose IQ is below 150. In fact, when I visit Israel, a microcosm of the cyber industry, the cyber entrepreneurs I meet there seem to have all come out of the famous Unit 8200, in many cases from the even more selective Talpiot program. By quality, I am referring to their intellectual and psychological ability to re-invent themselves and pivot multiple times before finding the product-market fit that really has traction. There is no substitute for the hard work required to gain a detailed understanding of the sector and to obtain fine-grained customer feedback. Market reports are misleadingly high level. Early customers can provide biased feedback (because they may be friendly with the entrepreneur or because they don’t pay full price). Demos are misleading because, by definition, they do not reproduce a realistic customer environment. In short, more work is needed here than in other sectors, and the bar must be raised even higher. Finally, choose your co-investing partners carefully to cross the valley of death – the period during which the company experiments, acquires customers one at a time, and consumes cash. Embarking on this journey with insufficient capital is tantamount to crossing a desert with just enough water to reach the midpoint, and hoping to find an oasis along the way.

Anik Bose, BGV General Partner shares his perspective on the state of the cyber security sector.

“It was the best of times and it was the worst of times, it was the age of wisdom, it was the age of foolishness.” I believe these lines from Charles Dickens’ A Tale of Two Cities are an accurate description of the state of the cyber security sector today.

Why it’s HOT

Security budgets are increasing across the board. Gartner predicts that enterprise security budgets are shifting towards an increased focus on detection and response, with 60% of security budgets allocated to these two areas by 2020. The PWC Security Survey states that information security budgets increased by 24% in 2015 in response to a 38% YoY increase in security incidents. IDC predicts that security analytics, threat intelligence, mobile security and cloud security will be hot areas of growth. Additionally, we believe that IoT security, a relatively new market, will be a significant growth area in the future. Consistent with the above, we continue to see market pain points attracting innovation and VC funding in areas such as threat intelligence (e.g. Survela), anti-fraud/identity management (e.g. Identity Mind Global), encryption, next generation endpoint, network visibility and isolation (e.g. Spikes Security), and automated incident response (e.g. Packet Sled). This rate of innovation is fueling a leadership shift amongst the vendors in the cyber security industry. Old guard companies like Symantec, HP, Cisco, Dell/EMC, Trend Micro, Blue Coat and Intel/McAfee are scrambling to stay relevant in the rapidly changing market. New guard larger companies like Palo Alto Networks, Cyber Ark, Palantir and FireEye are staking out a lead. Finally, startups like Cylance, Illumio, SkyHigh Networks and Tanium are poised to transform sub-segments of the industry. In summary, strong sector growth and an industry structure ripe for change are attracting innovation and capital at unprecedented levels.
Why it’s NOT

The cyber security sector attracted more than $3.3Bn in funding in 2015 across 130+ deals. The practical reality today is that CISOs cannot absorb and deploy anywhere close to the amount of new cyber technologies getting funded. In other words, there is a cyber tools saturation phenomenon which will force out all but the very best and most critical new cyber technologies – those most critical to CISOs’ security priorities and best able to integrate into their existing environments. We believe that only very large enterprises will be able to invest in the internal capabilities to vet and integrate a variety of best-of-breed startup technologies, while other enterprises will rely on their trusted security vendors and/or MSSPs to vet, source and integrate best-of-breed innovation.

Valuations are at all-time highs – early stage, pre-revenue Series A companies are being valued at pre-money valuations of $20-30M. Late stage companies like Tanium, Illumio, Okta and Zscaler, with revenues in the tens of millions, are being valued in excess of $1Bn, multiples that could be difficult to maintain in public markets. However, recent public market volatility is leading investors to a “back to basics” mentality in venture and late stage funding – looking at growth coupled with profitability and cash flow generation. Companies like FireEye that were enjoying lofty valuations based on growth alone have seen their valuations come down, reflecting this mentality, while companies like Palo Alto Networks and Cyber Ark that are delivering both growth and profitability are being valued at far higher multiples. CISOs at enterprises are becoming more cautious when working with startup cyber vendors making ambitious claims or pricing assumptions inconsistent with the value they deliver – they are increasingly seeking a level of vetting that is creating extended POCs and long sales cycles for startups competing for mindshare.
Furthermore, many CISOs are increasingly looking to their trusted vendors and MSSP partners to vet best-of-breed products and deliver integrated security solutions. Finally, strategic acquirers are also becoming more cautious about paying the frothy valuations seen in recent years – preferring instead to work with startups over a period of time, either through an investment or through their accelerator programs. In summary, the cyber security sector is overfunded, with troubling signs of valuation froth and startups struggling to compete for mindshare with enterprise CISOs, leading to extended POCs, longer sales cycles and ultimately increased capital intensity.

BGV Conclusion

We believe that cyber threats are endemic and the demand for effective countermeasures is strong. This, combined with an industry leadership structure in flux and scarce cyber talent, represents the best of times – opportunities to invest in and create young, innovative companies. However, capital being available at unprecedented levels, coupled with frothy valuations and “noise levels” competing for enterprise CISO mindshare, represents the worst of times. Building strong companies in such an environment requires a thoughtful and disciplined approach to investing, while seeking to create ecosystem alignment with CISO-trusted strategic security vendors and/or MSSPs – one that discerns between investing in technologies that will create successful companies valued on fundamental metrics (customer value, growth and profitability) versus expensive “quick flip” bets whose returns are predicated only on frothy strategic M&A valuations. BGV remains disciplined on valuations (we have walked away from several cyber deals when valuations approached unjustifiable levels).
We also continue to invest time in validating customer value (ROI), the technology and the technical teams (with the expertise to tackle complex cyber problems) by leveraging our privileged relationships with ex-CTOs of cyber portfolio companies, with trusted strategic security vendors (e.g. Palo Alto Networks) and trusted MSSPs (e.g. Cap Gemini).

Websites infected with malware are a major culprit behind cyber attacks, and unsecured Web browsers are a common attack vector for hackers. A growing Silicon Valley startup is trying to solve that problem – by taking the Web browser out of the equation. Spikes Security says its browser-isolation technology protects computers from malware and Internet-borne attacks by creating a virtual machine that isolates the user’s browser from the Internet. Company founder Branden Spikes came up with the idea in 2008 from the need to protect rocket scientists. He has since launched a startup that recently rolled out its first commercial version – and has sparked interest from several sectors, including credit unions.

The technology, called AirGap, works by using secure Linux appliance hardware that renders Web pages outside of the user’s network. The browser session is streamed back to the user through a high-performance remote desktop connection of sorts – think of it like streaming a very high-quality video. “We come at it from a preconceived assumption that all browsers are malware by their very nature,” says Spikes, who serves as company CEO and CTO. A recent Ponemon Institute study sponsored by Spikes Security found that 81 percent of the 645 surveyed IT practitioners consider unsecured Web browsers a primary attack vector. The same number found that Web-borne malware could be completely undetectable despite various security tools. “I feel like most of the other attack vectors have been solved or can be shut off,” Spikes says. “A Web browser cannot be turned off.”

From rocket science to Silicon Valley

Spikes was a consultant who installed firewalls when he met a guy named Elon Musk, who was working on an Internet startup. Musk went on to co-found PayPal, and later founded the aerospace developer and manufacturer SpaceX.
At PayPal, Spikes oversaw cybersecurity, along with Web systems, databases and “all the sort of blinking lights that sit in the data center.” When Musk moved on to focus on SpaceX, he brought Spikes along. Spikes, who spent 10 years as CIO at SpaceX, was exposed to all sorts of network attacks there. But despite state-of-the-art defenses, one type consistently got through. “There’s one thing that was always able to defeat my defense mechanisms, and that was end users’ Web browsers,” he says. When Musk announced in the mid-2000s that he wanted to launch astronauts into space, Spikes says he started to lose sleep. “(I) had to defend the livelihoods of human beings with my network. … If I was unable to stop browser malware, I would really likely fail,” he says. Spikes’ job at SpaceX, essentially, was to not allow the bad guys to hack the network. That high bar, he says, meant he had to solve the challenge of protecting intellectual property that resided entirely on laptops and desktops. Spikes invented AirGap in 2008 while at SpaceX, but maintained the intellectual property rights. Four years later, he spun off his own startup.

Today, Spikes Security employs 30 people in Los Gatos, Calif. The company received an $11 million Series A investment last fall that has allowed it to expand its engineering team and accelerate the development of new features. After a couple of years perfecting the technology and extensive beta testing, the product was rolled out officially earlier this year. About two dozen customer deployments are in place, with another two dozen in various testing stages. “When you’re building an innovative product like this, it requires a lot of ongoing education and collaboration with customers to ensure the product is meeting and exceeding expectations,” says Chief Marketing Officer Franklyn Jones.

Challenging the competition

The idea of browser isolation technology is not new.
Several other vendors are offering isolation technology, including some big players. Typically, they’re trying to solve this through a sandbox or micro virtual machine. The problem, Jones says, is that if malware escapes the sandbox or VM, the network becomes affected. “We keep all the bad stuff outside the network,” he says. The company is having some early success with credit unions. “Small banks and credit unions are prime targets for cyber criminals because, very often, these firms do not have the IT staff or budget to build state-of-the-art security infrastructure,” Jones says. The company plans to launch a mobile version this year, and another major product announcement is due in a couple of months. “One of our long-term goals is to ensure that the Web is safe for everyone, everywhere, all the time,” Jones says. “We are on track to meet that objective before the end of the year.”

Anik Bose and Eric Buatois (General Partners at BGV) share their perspective on securing the Internet of Things.

We believe that the intersection of IoT and security will present a profound opportunity for technology innovation and for venture-backed start-ups to create value. This belief is based on several factors. First, the attack surface presented by the IoT is immense, created by the billions of connected devices and the IoT’s need for cloning of things (for example, two cloned devices can still be associated and work together) – this opens up backdoors for all kinds of illegal activity. Furthermore, the IoT opens up opportunities for malicious substitution of things, eavesdropping attacks, man-in-the-middle attacks, firmware replacement attacks and extraction of security parameters, to name a few specific threats. We also know from experience that endpoints tend to be weak in dealing with security, and that IoT devices have constrained resources, making the implementation of security at the device level challenging. As a consequence, the economic cost of breaches could be staggering.

Securing the IoT presents a unique set of policy challenges, since the data and information generated by various IoT applications will be extremely sensitive. Who owns it? How can it be shared? Does it belong to the supplier or the customers? Can the data cross international boundaries (e.g. for critical energy grids)? Policies, which do not exist today, will have to be defined, put in place and enforced. As an example, will data have to be encrypted to move from one data center to another within the same cloud, or between different clouds? The cost and availability of security solutions will also shape the policies. These policies will either rely upon open standards or even drive the creation of new standards.
Traditional security is an all-IP approach, but an all-IP approach is unlikely to work because both the standards and IoT communications over CoAP are not fully evolved. 802.15.4 defines security procedures but is still evolving, while work within the standards bodies is focused only on end-to-end security and secure group communications. So standardization will be a key factor enabling the development of effective security solutions, versus ineffective solutions “pieced together” with the help of consulting firms. Traditional approaches like sandboxing and signature-based detection will not work in an IoT environment. IoT security solutions will need to manage tradeoffs between performance and security, as well as choose between distributed and centralized architectures. Centralized architectures like the Trust Center (Zigbee), 6LBR (border router for 6LoWPAN) and central key distribution (KDC) are emerging and are likely to win out against decentralized mechanisms, which require strong P2P mechanisms that are very difficult to implement.

While we are still in the early days of IoT adoption, there are a few good examples of startup innovation at the intersection of security and IoT:
  • Device authentication (e.g. Launchkey)
  • RTOS (e.g. Mocana, Icon Labs and Red Balloon Security) and wireless sensor networks (e.g. Sensalyze, acquired by ARM, Dust Networks and Green Pack)
  • End-to-end systems for key management (e.g. Dyadic Security) and intrusion detection systems (e.g. Argus, ScadaFence)

These technology segments can address broad application areas and tend to be industry agnostic. However, success in these areas will likely require partnerships with sensor vendors in this space. The winning companies are likely to be the ones not only creating and/or supporting emerging IoT standards, but also understanding the critical policies needed to manage, share and exchange critical IoT information.
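As a sketch of the centralized trust model described above, the following shows a KDC-style gateway verifying a constrained device with a pre-shared key and an HMAC challenge-response, avoiding heavyweight P2P public-key exchanges. The device ID and key are placeholders, and real deployments would also need provisioning, key rotation and replay protection:

```python
import hashlib
import hmac
import secrets

# Illustrative device registry held by the central gateway (a Trust
# Center / KDC analogue); the ID and key below are placeholders.
DEVICE_KEYS = {"sensor-42": b"pre-provisioned-secret"}

def issue_challenge():
    """Fresh random nonce sent by the gateway to the device."""
    return secrets.token_bytes(16)

def device_response(device_id, challenge, device_key):
    """Computed on the device with its own copy of the pre-shared key."""
    return hmac.new(device_key, device_id.encode() + challenge,
                    hashlib.sha256).digest()

def verify_device(device_id, challenge, response):
    """Gateway-side check: recompute the MAC and compare in constant time."""
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return False
    expected = hmac.new(key, device_id.encode() + challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

The appeal of this centralized design for constrained devices is that each endpoint only needs a symmetric key and one hash primitive, while all trust decisions stay at the gateway.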
BGV is building key partnerships to source innovation and build companies that will secure the IoT.  These include relationships with IoT focused accelerators, Corporate Partners and University incubators.

Anik Bose, BGV General Partner shares his perspective on "separating the wheat from the chaff" in cybersecurity investing.

At BGV we have recently observed 200% growth in cybersecurity deal flow. We believe that this deluge is driven by a combination of facts and hype. A few data points:
  • According to a June 2014 report from the Center for Strategic and International Studies, crime involving computers and networks costs the world economy $445Bn annually
  • Hackers have been in the news headlines with increasingly sophisticated attacks on Fortune 1000 corporations such as eBay, Target, Neiman Marcus and JP Morgan
  • Information security public company valuations are sky high: FireEye, a company that has yet to turn a profit, is valued at $5.7Bn
  • CB Insights reported that VC firms invested a record $1.4Bn in 239 cybersecurity companies in 2013
  • 451 Research's Enterprise Security Practice (January 2014) reports that a significant proportion of cybersecurity products end up as shelfware in enterprises, most commonly Security Information and Event Management (SIEM), Intrusion Detection Systems (IDS), Governance, Risk and Compliance (GRC), and Web Application Firewalls (WAF)

To find the real opportunities to invest in building cybersecurity companies, BGV evaluates each opportunity by attempting to answer two fundamental questions:
  • Is there a market opportunity for building a best-of-breed company in the target market segment?
  • Does the product deliver a compelling, differentiated value proposition in terms of: a) broader coverage of modes of operation and methods of detection; b) creating stickiness with enterprise customers; c) providing quantifiable and measurable improvement metrics?

A few examples to illustrate how BGV applies this approach:
  • We believe there are more opportunities to build best-of-breed cybersecurity companies in segments such as anti-malware (anti-botnet, anti-malware suites, reverse engineering/malware analysis), but far fewer in saturated segments such as identity/access management and mobile security (access control, digital rights management).
  • To deliver a compelling, differentiated value proposition, a product must support multiple methods of use, such as continuous real-time monitoring and advanced threat detection (STAP), while addressing at least one mode of operation such as network security services (NFV, cloud-based SaaS) or vulnerability detection and monitoring (STAP, malware and APT identification and blocking).
  • To ensure customer stickiness, a product must be used frequently (rather than for one-time compliance), integrate with other systems, work in the background with limited user involvement, and be based on key algorithms that make it difficult to replace.
  • Last but not least, a product must deliver clear and measurable improvement metrics, such as reduced time from attack to detection, reduced time from detection to mitigation, fewer false positives and false negatives, and/or automation and productivity cost savings.

As early stage investors and company builders, BGV believes it is critical to be discerning and not be swept away by market hype and herd mentality. We do so by focusing on the fundamentals to evaluate and select the best early stage cybersecurity opportunities.
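The improvement metrics listed above are all computable from labeled incident data. The sketch below shows one way a buyer might quantify them; the incident records, field layout and numbers are invented for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical labeled incident data: (attack_time, detection_time or None,
# whether the incident was a real threat). Values are invented.
incidents = [
    (datetime(2014, 6, 1, 9, 0),  datetime(2014, 6, 1, 9, 30), True),
    (datetime(2014, 6, 2, 14, 0), datetime(2014, 6, 2, 18, 0), True),
    (datetime(2014, 6, 3, 11, 0), None,                        True),  # missed -> false negative
]
benign_alerts = 5   # alerts raised on benign activity (false positives)
total_alerts = 7    # all alerts the product raised in the period

# Time from attack to detection, averaged over detected real threats.
latencies = [d - a for a, d, real in incidents if real and d is not None]
mean_latency = sum(latencies, timedelta()) / len(latencies)

# Share of real threats the product never detected.
false_negative_rate = sum(1 for _, d, real in incidents if real and d is None) / len(incidents)

# Share of raised alerts that were noise.
false_positive_rate = benign_alerts / total_alerts

print(f"mean time from attack to detection: {mean_latency}")
print(f"false negative rate: {false_negative_rate:.2f}")
print(f"false positive rate: {false_positive_rate:.2f}")
```

A vendor that can move these numbers in the right direction between two measurement periods has exactly the "quantifiable and measurable improvement" the evaluation criteria ask for.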

Marc Willebeek-Lemair, CEO and Founder of Click Security, shares his perspective on real-time network security analytics.

A hundred years ago, when someone had a fever, broke an arm, was delivering a baby, or contracted some rare disease, you called the town doctor. The doctor would come to your house, look you up and down, and typically prescribe two aspirin and tell you to call him in the morning! The doctor served the entire community and had to have an answer for every type of ailment. Today, we have medical specialists for just about every conceivable malady; the field is far too specialized to ever believe a single type of doctor could be effective.

The challenge most enterprise IT security teams face today is a lot like that of the town doctor 100 years ago. Often the security team (two or three staff at best) needs to know about every type of security threat against every type of server, client, application, protocol, and cloud service, you name it. Furthermore, the list of targets is hyper-dynamic, no longer able to be dictated by IT, yet preyed upon by a growing, well-armed, well-funded, highly motivated army of adversaries. Most security teams just don't stand a chance.

So now what? Enter the era of real-time network security analytics. This technology enables security teams to get ahead of the bad guys and take back control of their networks. Unlike the medical profession, security organizations are simply not able to increase their headcount by an order of magnitude or two. By capturing human expertise in the form of analytics (virtual expertise), individual security teams gain a force multiplier to address the ever-evolving, complex threat landscape. Ultimately, given the right data and the right insight into what questions to ask or nuances to look for (analytics), a faster and more accurate diagnosis and treatment is possible. This, however, poses several challenges:
  1. Big Data: we need the right data and it needs to be clean and timely
  2. Big Analytics: we need the right analytics and lots of them running continuously
  3. Visualization: we need fast and intuitive interfaces for human analysts
Let's explore each of these challenges.

Big Data – The data can be voluminous, but rather than attempt to capture every possible form of data, it makes more sense to select the data most useful to the analytics. The right combination of log sources, network data, file data and endpoint data, along with external threat intelligence, is key.

Big Analytics – Ultimately, analytics can automate much of what the human analyst performs manually, leveraging broad expertise packaged into software. Analytics can separate the signal from the noise by converting many independent low-fidelity events into a high-fidelity, actor-based alert. Analytics can also automate the contextualization around an actor, further coloring its severity and accelerating the time to understand what is happening and formulate an appropriate response. Running many different analytics simultaneously in real time against a steady flow of data, however, is a challenge, requiring the right type of stream processing engine.

Visualization – 100% automation without human intervention is unfortunately not feasible against most modern threats. Often, final diagnosis of a high-fidelity alert requires a human analyst. For this human-interactive stage, analytics that pre-process context and provide intuitive visualization capabilities can greatly accelerate the security analyst's ability to respond.

Big Data and Security Analytics – particularly real-time network security analytics – are powerful levers that can enable IT security "town doctors" to combat the increasingly challenging cyber threat landscape. Think of them as antibiotics and MRIs. They enable you to see what is important, distilled out of the mass of data; to be more efficient and effective in analysis and response; and to automate your analyses so that you do not have to do the same thing over and over again.
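The "many low-fidelity events into one high-fidelity, actor-based alert" idea can be sketched in a few lines. The event fields, scores and alert threshold below are invented for illustration; a real stream processing engine would apply the same grouping logic continuously over live data rather than over a static list.

```python
from collections import defaultdict

# Invented low-fidelity events: individually noisy, each too weak to alert on.
events = [
    {"actor": "10.0.0.5", "type": "port_scan",      "score": 2},
    {"actor": "10.0.0.5", "type": "failed_login",   "score": 1},
    {"actor": "10.0.0.5", "type": "failed_login",   "score": 1},
    {"actor": "10.0.0.5", "type": "odd_dns_lookup", "score": 3},
    {"actor": "10.0.0.9", "type": "failed_login",   "score": 1},
]

def actor_alerts(events, threshold=5):
    """Group events by actor; raise one high-fidelity alert when the
    combined score and variety of behaviors cross a threshold."""
    by_actor = defaultdict(list)
    for e in events:
        by_actor[e["actor"]].append(e)
    alerts = []
    for actor, evs in by_actor.items():
        score = sum(e["score"] for e in evs)
        kinds = {e["type"] for e in evs}
        if score >= threshold and len(kinds) >= 2:
            alerts.append({"actor": actor, "score": score,
                           "behaviors": sorted(kinds)})
    return alerts

print(actor_alerts(events))
# One alert for 10.0.0.5 (combined score 7, three behavior types);
# 10.0.0.9 stays below the threshold and generates no alert.
```

The point of the actor-based grouping is exactly the signal-from-noise conversion described above: five raw events become one alert an analyst can act on.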

Paul Stich, CEO of Appthority, shares his perspective on mobile app risk management.

Mobile devices (smartphones and tablets) play an ever-increasing and strategic role in today's corporate environments. Increased employee use of mobile devices, along with the growth of the Bring Your Own App (BYOA) economy, introduces new risks to the enterprise. The average employee has between 50 and 150 mobile apps on their device, and many of those apps can access and share critical, sensitive corporate and personal data.

Developers of web-based mobile applications are inclined to choose functionality over security when trade-offs must be made. For example, Ernst & Young (Mobile Device Security) has tested numerous mobile web applications where password complexity requirements or account lockout features had been reduced or removed entirely. Restrictions on JavaScript or persistent session data have also led developers to place sensitive information and session information within the URL of every request to the server. In addition, network bandwidth limitations may encourage developers to create mobile-device-formatted sites that cache additional information from web pages, potentially exposing this information if the device is compromised. Client-based mobile applications need to support the different operating systems and SDKs that developers use to create applications, and each of these platforms has a different security model that affects how developers address security within their own applications.

So, what would be considered a mobile app risk? Here's an example: have you ever noticed an app that's constantly running in the background (that really has no need to do so)? It's possible that it's tracking your location and sharing it with outside parties for advertising purposes.
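An automated check for this kind of behavior amounts to matching each app's declared behaviors against risk rules. The sketch below is a hypothetical illustration of the idea, not Appthority's actual engine; the app data, behavior names and rules are all invented.

```python
# Each rule is a set of behaviors that is risky in combination, plus a reason.
RISKY_COMBOS = [
    ({"location", "runs_in_background"}, "may track location while idle"),
    ({"contacts", "network"},            "may share contacts with outside parties"),
]

# Invented app inventory, as might be collected from employee devices.
apps = [
    {"name": "FlashlightPro", "behaviors": {"location", "runs_in_background", "network"}},
    {"name": "NotesApp",      "behaviors": {"storage"}},
]

def flag_risky(apps):
    """Return (app name, reason) for every rule an app's behaviors satisfy."""
    findings = []
    for app in apps:
        for combo, reason in RISKY_COMBOS:
            if combo <= app["behaviors"]:  # all behaviors in the combo are present
                findings.append((app["name"], reason))
    return findings

for name, reason in flag_risky(apps):
    print(f"{name}: {reason}")
# FlashlightPro: may track location while idle
```

Running such a check across every app on every employee device is what makes the risk assessment "fully automated" rather than a manual, app-by-app review.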
App developers will often ask for these types of permissions upfront, but unfortunately that's not always the case; or the language they use is intentionally vague or deceptive. In the larger context of BYOD (Bring Your Own Device), these mobile app behaviors are a significant risk not only to users but to organizations as well. Without a fully automated way to check for mobile app risk, it is very challenging for organizations to identify which mobile apps put corporate data at risk and which are benign. As organizations embrace the productivity and connectivity gains of the mobile workforce, it is important to address the risks commonly found in third-party apps on employee devices. Some interesting data about mobile app risk:
  • Surprisingly, iOS apps exhibit more risky behaviors than Android apps (91% of the top 200 iOS apps exhibit at least 1 risky behavior as compared to 83% of the top 200 Android apps)*
  • Free apps are riskier than paid apps: 95% of the top 200 free iOS and Android apps exhibit at least one risky behavior vs 80% of the top 200 paid apps.*
* Source: Appthority App Reputation Report. Appthority was founded on the principle of helping organizations automate the management of mobile app risk, and empower a smarter, safer mobile workforce.  For more on Appthority, please visit  

Developing a multi-pronged cybersecurity strategy is a critical job for CSOs today. Neil Daswani shares his perspective on this important topic. He is currently at Twitter, serves on the faculty of Stanford's Advanced Computer Security Program, and is a friend of BGV.

Chief Security Officers (CSOs) have a tough job. They need to protect an organization against many different forms of attack, and must do their best to close as many vulnerabilities as possible, if not all of them. Attackers, on the other hand, need to find just one vulnerability to get a foot in the door. It is therefore important for CSOs to employ a well-thought-out, multi-pronged strategy based on an understanding of the most significant risks and threats to their organization. Just as Anik Bose mentions in his blog post on strategy in start-ups, thinking about the "who, what, and where" is just as important for the CSO as it is for the CEO. In particular, the strategic questions a CSO needs to tackle are:
  • Who are you trying to defend your organization against?
  • What are the attackers after?
  • Where is the attack emanating (or going to emanate) from?
The typical profile of the attacker (the "who") has changed over the decades, as has what attackers are after and where attacks emanate from. The attacker profile has shifted from teenagers who just wanted to experiment or make a name for themselves, to cybercriminals out to make money, to nation-states with corporate espionage and military goals in mind.

From the mid 1980's to the early 2000's, relatively unsophisticated "one-man" attackers (e.g., graduate students, hobbyists, amateur programmers) would write worms such as Morris, Code Red, and SQL Slammer. Worms were simply viruses that copied themselves onto other machines over the network (a process that occurred quickly, sometimes with a payload that could do something worse), but mainly generated a lot of traffic and productivity disruption in the process. For instance, SQL Slammer was the first such worm that the White House was notified of, due to its disruption of ATMs and travel reservation systems. These attacks, however, weren't targeted at any one particular organization.

By contrast, the cybercriminal attacks that grew through the mid- to late-2000s were conducted by teams of attackers with a more focused goal: making money. Such groups of cybercriminal attackers structured themselves in a manner that resembled legitimate, for-profit corporations, and within just a few years an "underground economy" arose. The operations of cybercriminal groups were in some cases more profitable than physical crime; they could also scale faster and presented less risk to the attackers, who could be thousands of miles away from their targets and victims, evading law enforcement.
Examples of cybercriminal schemes included charging ransom to banks to stave off DDoS attacks that would take their sites offline, conducting large-scale botnet-based click fraud to defraud advertisers and search advertising networks, and selling fake anti-virus software en masse to consumers whose machines really were not infected.
Time Period               | Typical Attackers                      | Typical Goals / Motivations                                                      | Examples
Mid 1980's to early 2000's | Mostly "one-man" shows or small teams  | Disruption / defacement                                                          | Worms (Morris, Nimda, Code Red, SQL Slammer), activism / hacktivism
Early to mid 2000's        | Organized groups of cybercriminals     | Steal money / conduct fraud                                                      | Phishing, identity theft, data theft, click fraud, pharming
Mid 2000's to present      | Nation-states                          | Steal intellectual property, identify dissidents, disrupt nuclear arms development | Operation Aurora, Stuxnet, watering holes

Summary of attacker types and motivations from mid-1980s to present

Today, organizations also face the threat of nation-state attacks, in which governments or groups hired by governments are the "who" behind the attacks. Such groups are typically very well funded, patient (they may conduct their attacks over a period of years), and sophisticated (they may develop zero-day exploits as well as new technology to conduct their attacks). They have a variety of motivations, of which corporate espionage is one: stealing the intellectual property of foreign corporations and replicating products without incurring the cost or time of R&D can be a quick path a government pursues to help its constituents compete. Operation Aurora, in which Google and some three dozen other corporations were targeted, and APT1, in which over 150 organizations were victimized over a seven-year period, were examples of "advanced persistent threat" attacks in which corporate espionage was a suspected or likely goal. In these attacks, spear phishing, malware drive-by downloads, social engineering, and watering-hole websites are common mechanisms.

In addition to corporate espionage, nation-states may also conduct attacks to degrade an adversary's capability to manufacture weapons. In the Stuxnet attack discovered in 2010, for instance, malware that targeted centrifuges used to enrich uranium infected 60% of the computers in Iran. By speeding up or slowing down the centrifuges, the malware interfered with the enrichment process that could be used to produce weapons-grade uranium and manufacture nuclear weapons.
Note that as time has progressed, attacks have only grown in volume, diversity, and sophistication, giving truth to the saying that "attacks only get better." It is also interesting to note that malicious software, or malware, has been a common thread, serving as a key tool in progressively more sophisticated attacks over time.

While this article has focused on the "who" and "what," the "where" is equally important. Attacks on the Internet can, of course, emanate from anywhere, and often cannot be prevented from emanating, but the "where" can be extremely important for detection, containment, and recovery. It is important to prevent attacks whenever possible, but preventing every possible form of attack is usually cost prohibitive. Corporations need to determine their highest, most significant risks and invest resources to prevent those, while at the same time investing in countermeasures that allow them to detect, contain, and recover from medium- and low-priority risks. (Corporations also need to invest in detection, containment, and recovery for high-priority risks, in case prevention fails.) The origin from which web traffic, emails, phone calls, and other communications emanate can often provide a signal of how suspicious a communication is. Even when attackers "proxy" their communications or obscure their actual source, any indication that the original source is being obscured can itself serve as a signal.

In this article, I have mainly discussed general security trends in the "who" and "what" that have affected many organizations over the past few decades. That said, each organization is unique and must put in the appropriate effort to determine the "who" and "what" it needs to be most concerned about as a paramount step in its cybersecurity strategy formulation.