The need to move large datasets for Big Data, IoT, and ML/AI applications adversely impacts application performance and storage/networking/server costs, according to a recent survey from NGD Systems and G2M Research.

 

Irvine, Calif. – December 12, 2017:

 

G2M Research, an analyst firm covering the Non-Volatile Memory Express® (NVMe) marketplace, today released the results of its recent survey on the need for “intelligent storage” for applications with large data sets. The survey, sponsored by NGD Systems, was conducted across 112 respondents from organizations involved in Big Data, artificial intelligence/machine learning (AI/ML), and Internet of Things (IoT) applications.

 

The purpose of the study was to gauge whether the movement of large data sets across existing processing and storage architectures negatively impacts the cost and usability of the data by applications. The results of the survey show that existing compute and storage architectures adversely impact the performance and cost of these applications, and that new architectures are needed if these applications are to continue to scale in size and capabilities.

 

“Datasets for applications such as Big Data, AI/ML, and IoT continue to grow at an exponential rate,” said Mike Heumann, Managing Partner, G2M Research. “Our research study shows that the majority of users in these application spaces are very concerned about how this growth will impact their ability to use these applications over the next 12 months. The majority of these end-users also believe that new approaches like processing data within storage devices will be necessary to overcome existing data movement bottlenecks.”

 

The movement of very large data stores is increasingly critical for real-time analytics in a variety of applications. However, this data movement is not without cost or impact. The key findings of the survey include the following:

 

1.    52% of respondents believe that the movement of large data stores between storage systems, storage devices, and servers is a significant problem for their organization today, or will be within the next 12 months.

 

2.    92% of respondents expect that data movement will adversely impact their organization, with 62% responding that it will impact server, networking, or storage costs, 48% saying it will impact application performance, and 29% saying it will limit the way data can be used.

 

3.    Over 79% of respondents believe that current processing/storage architectures will not be able to handle the amount of data in their industry in the next 5 years.

 

4.    64% of respondents believe that processing or preprocessing data inside storage systems/devices could help solve the data movement problem.

 

G2M Research has produced a report and an infographic summarizing the data from the survey, both of which are available at http://g2minc.com/research.

 

“As the capacity of SSD drives and the number of SSDs within servers continue to increase, moving the data out of these drives into the CPUs will become exponentially harder and more cumbersome,” said Nader Salessi, President and CEO of NGD Systems. “The G2M Research survey clearly illustrates the issues that large data sets present to application architects for Big Data, IoT, and AI/ML, among others. In-situ processing like that of the NGD Systems Catalina 2 SSD provides a compelling alternative to moving large amounts of data between storage systems, storage devices, and servers/CPU complexes.”

 

One of the most promising concepts for addressing the storage-CPU bottleneck is in-situ processing within storage devices. In-situ processing revolutionizes the deployment of a variety of applications that today require huge clusters of expensive multi-socket servers with large amounts of RAM. By significantly reducing the amount of data that has to be moved between storage systems/devices and servers/CPUs/GPUs, in-situ processing within NVMe flash solid-state drives (SSDs) can significantly reduce network size/complexity, CPU/GPU workload, and power consumption for applications that generate a high number of IOs.
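To make the concept concrete, here is a minimal sketch of the difference between host-side filtering and in-situ filtering. The ComputationalDrive class and its numbers are purely hypothetical illustrations of the pattern; the press release does not describe NGD's actual programming interface.

# Hypothetical sketch of in-situ (computational storage) filtering vs. host-side filtering.
# The ComputationalDrive class is illustrative only; it is not an NGD Systems API.

class ComputationalDrive:
    """Toy model of an SSD that can run a filter next to the data."""

    def __init__(self, records):
        self.records = list(records)    # data resident on the drive

    def read_all(self):
        # Conventional path: every record crosses the storage/host interface.
        return list(self.records)

    def query(self, predicate):
        # In-situ path: the predicate runs on the drive's controller, so only
        # matching records are transferred to the host.
        return [r for r in self.records if predicate(r)]

drive = ComputationalDrive({"id": i, "temp": i % 100} for i in range(100_000))

# Host-side filtering: all 100,000 records move to the CPU, then most are discarded.
hot_host_side = [r for r in drive.read_all() if r["temp"] > 95]

# In-situ filtering: only the ~4% of records that match ever leave the drive.
hot_in_situ = drive.query(lambda r: r["temp"] > 95)

assert hot_host_side == hot_in_situ

In both cases the result is identical; the difference is how many bytes cross the storage interface, which is precisely the bottleneck the survey respondents are worried about.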

 

NGD’s Catalina II NVMe SSD enables in-situ processing and is the first product to help reset the CPU/GPU-storage gap and improve data center TCO. NGD’s NVMe SSDs also offer the industry’s highest capacities and lowest power per TB (W/TB).
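The release does not quote specific figures, but a quick back-of-envelope calculation, using purely hypothetical capacities and drive counts, shows why watts per terabyte matters once deployments reach rack scale:

# Back-of-envelope illustration of why W/TB matters at scale.
# All numbers below are hypothetical, not NGD Systems specifications.

capacity_per_drive_tb = 32          # hypothetical high-capacity NVMe SSD
drives_per_rack = 500               # hypothetical dense deployment
total_capacity_tb = capacity_per_drive_tb * drives_per_rack   # 16,000 TB

for watts_per_tb in (0.5, 1.0, 2.0):    # candidate efficiency levels
    rack_power_kw = total_capacity_tb * watts_per_tb / 1000
    print(f"{watts_per_tb:.1f} W/TB -> {rack_power_kw:.1f} kW of storage power per rack")

# Prints 8.0 kW, 16.0 kW, and 32.0 kW respectively: halving W/TB halves the
# storage power budget of every rack.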

_____________

Source: http://www.ngdsystems.com/pr_20171212.html


Eric Buatois, BGV general partner, shares his perspective on the hyper scale data center disruption.

The hyper scale data center market comprises companies such as Facebook, Google, Amazon, Alibaba, and Tencent, as well as large CDN service providers, and represents 50% of the global server market. These firms aspire to influence a larger portion of the value chain, not only in servers but also in storage and networking. Large cloud service providers (Amazon, Microsoft Azure, Google) are developing their own hardware and software solutions and will represent purchase volumes of greater than 50% of worldwide storage capacity over the next few years. Each of these firms buys more than 1 million servers per year and is expected to continue this rapid rate of consumption.

The requirements for servers, storage, and networking for such customers are very different from those of the mainstream enterprise market. These customers require low cost, low power consumption, maximum storage capacity in minimum floor space, support for the specific file systems associated with Big Data, and highly efficient networking switches. Such customers are moving rapidly to SSDs for their offline processing – basically Big Data reporting and analytic workloads. For example, eBay has recently decided to purchase only SSD storage, migrating away from hard-disk-based storage.

The computing industry is facing a silent revolution at both the technology and the business model level.

Business model: The hyper scale customers are sourcing their core semiconductors directly from large vendors such as Intel, flash vendors, and network processing vendors. These servers are then assembled by large ODMs in Asia to high-density, low-power specifications. Given the magnitude of the CAPEX investment needed to put enough compute power in place, direct sourcing of semiconductors provides very large savings to these firms. Large systems vendors like HP, now Lenovo (following its acquisition of IBM's x86 server business), and Dell on the server side, NetApp, HP, IBM, and EMC on the storage system side, and Cisco on the networking side are faced with a big dilemma: either ignore 50% of the market or accept significantly lower margins for this segment. The traditional enterprise business model of large system vendors, in place since the mid-1990s, is being fundamentally challenged. That model consists of sourcing components such as Intel processors, network processors (Cavium or Broadcom), and hard disk and flash storage subsystems (Seagate or Western Digital), integrating them with proprietary software, and selling the integrated product at high gross margins. This business model is now being unbundled. Large semiconductor companies are rushing to supply Amazon or Google directly, and large Asian ODMs have developed healthy white-box businesses. The question arises: will this be a repeat of the PC business model, where Microsoft and Intel control all the industry profitability, or another iteration of the smartphone business model, where Apple, Samsung, and Qualcomm control the industry profitability?

A new computing paradigm: A new computing paradigm optimized for cloud and very high workloads is emerging. Dedicated dense servers such as the HP Moonshot server are the new norm. Storage capacity is tied more and more directly to the servers, for either caching or analytical workloads.
The purchasing power of these hyper scale data centers makes the price of flash storage low enough to replace hard-disk-based storage. The dramatic I/O bandwidth of flash storage arrays is a must for efficient big data processing (a rough back-of-envelope illustration follows at the end of this post). The very large number of ports to be connected demands a new low-cost switching architecture. The storage software stack and associated file system requirements are also fundamentally different from those used by the enterprise: cost and scalability, more than reliability, will be the design drivers, given the massive scale-out requirements. Big Data frameworks such as Hadoop, which grew out of technologies Google originally built for its own internal needs, are now the norm across the whole industry. And this is only the beginning: new compute inventions developed by hyper scale data centers will form the foundation of the new cloud-computing paradigm.

The established system vendors HP, Lenovo-IBM, Dell, EMC, and Cisco are too slow to embrace this new computing paradigm because they are held hostage by the hefty gross margins of the business model designed for the enterprise market. But the hyper scale data center market is too large to ignore, especially as white-box vendors, storage subsystem vendors, and semiconductor vendors pursue aggressive penetration strategies for this new market. The level of software innovation is massive at all levels of the stack – spanning storage, networking, and database. Companies such as Google, Facebook, and Amazon are dedicating large software engineering resources to address these needs from the ground up. But will such an approach be sustainable?

Hyper scale data center customers are eager to find companies that understand their needs and design the right products for hyper scalability and low cost. We believe that the disruption in the hyper scale data center creates a very good opportunity for building new technology companies – ones that are purpose-built, from both a technology and a business model perspective, to meet the needs of this market. The deep intellectual property created by innovative startups can be shared with the top 20 hyper scale companies and can provide the foundations of future cloud enterprise software optimized for such applications. This is an ideal opportunity for venture capitalists to leverage Silicon Valley talent to design new architectures that cross the computing, storage, and networking silos. We believe that startups will have a natural advantage given their ability to effectively mix the expertise of different domains to create the new computing paradigm for the hyper scale data centers.
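As promised above, here is a rough back-of-envelope comparison behind the random-read I/O argument. The per-device figures are generic order-of-magnitude assumptions, not measurements from any vendor named in this post.

# Rough back-of-envelope: devices needed to sustain a random-read analytics workload.
# Per-device figures are generic order-of-magnitude assumptions, not vendor specs.

target_iops = 2_000_000                  # hypothetical aggregate random-read target

hdd_iops_per_device = 150                # ballpark for a 7,200 RPM hard disk
nvme_iops_per_device = 500_000           # ballpark for an NVMe flash SSD

hdds_needed = -(-target_iops // hdd_iops_per_device)    # ceiling division
nvme_needed = -(-target_iops // nvme_iops_per_device)

print(f"HDDs needed: {hdds_needed:,}")        # 13,334 spindles
print(f"NVMe SSDs needed: {nvme_needed:,}")   # 4 drives

Even with generous allowances for the assumptions, the gap is orders of magnitude, which is the point the post makes about flash being a must for efficient big data processing.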

Marc Willebeek-Lemair, CEO and Founder of Click Security, shares his perspective on Real-time Network Security Analytics.

A hundred years ago, when someone had a fever, broke an arm, was delivering a baby, or contracted some rare disease, you called the town doctor. The doctor would come to your house, look you up and down, and typically prescribe two aspirin and tell you to call him in the morning! The doctor served the entire community and had to have an answer for every type of ailment. Today, we have medical specialists for just about every conceivable malady; the industry is far too specialized to ever believe a single type of doctor will be effective.

Well, the challenge most enterprise IT security teams face today is a lot like the one that town doctor faced 100 years ago. Often the security team (2 or 3 staff at best) needs to know about every type of security threat against every type of server, client, application, protocol, cloud service, you name it. Furthermore, the list of targets is hyper-dynamic and no longer dictated by IT, yet it is being preyed upon by a growing, well-armed, well-funded, highly motivated army of adversaries. Most security teams just don’t stand a chance.

So now what? Enter the era of Real-time Network Security Analytics. This technology enables security teams to get ahead of the bad guys and take back control of their networks. Unlike the medical profession, security organizations are simply not able to increase their headcount by an order of magnitude or two. By capturing human expertise in the form of analytics (virtual expertise), individual security teams gain a force multiplier to address the ever-evolving, complex threat landscape. Ultimately, given the right data and the right insight into what questions to ask or nuances to look for (analytics), a faster and more accurate diagnosis and treatment is possible. This, however, poses several challenges:
  1. Big Data: we need the right data and it needs to be clean and timely
  2. Big Analytics: we need the right analytics and lots of them running continuously
  3. Visualization: we need fast and intuitive interfaces for human analysts
Let’s explore each of these challenges.

Big Data – The data can be voluminous, but rather than attempt to capture all possible forms of data, it makes more sense to select the data most useful to the analytics. The right combination of log sources, network data, file data, and endpoint data, along with external threat intelligence, is key.

Big Analytics – Ultimately, analytics can automate much of what the human analyst performs manually, leveraging broad expertise packaged into software. Analytics can be used to separate the signal from the noise – by converting many independent low-fidelity events into a high-fidelity, actor-based alert (a small illustrative sketch follows at the end of this post). Analytics can also automate the contextualization around an actor, further coloring its severity and accelerating the time to understand what is happening and formulate an appropriate response. Running many different analytics simultaneously in real time against a steady flow of data, however, is a challenge, requiring the right type of stream processing engine.

Visualization – 100% automation without human intervention is unfortunately not feasible against most modern threats. Often, final diagnosis of a high-fidelity alert requires a human analyst. For this human interactive stage, analytics that pre-process context and provide intuitive visualization capabilities can greatly accelerate the security analyst’s ability to respond.

Big Data and Security Analytics – particularly Real-time Network Security Analytics – are powerful levers that can enable IT security “Town Doctors” to combat the increasingly challenging cyber threat landscape. Think of them as antibiotics and MRIs. They enable you to see what is important, distilled out of the mass of data; to be more efficient and effective in analysis and response; and to automate your analyses so that you do not have to do the same thing over and over again.
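To illustrate the actor-based correlation idea in the Big Analytics paragraph above, here is a minimal sketch. It is a generic toy example, not Click Security’s engine: the event types, scores, and alert threshold are all hypothetical.

# Generic sketch of actor-based correlation: many low-fidelity events are grouped
# by actor and promoted to a single high-fidelity alert once a score threshold is
# crossed. Illustrative only; not Click Security's implementation.

from collections import defaultdict

# Hypothetical per-event-type scores; real analytics would be far richer.
EVENT_SCORES = {"port_scan": 1, "failed_login": 1, "beaconing": 3, "exfil_attempt": 5}
ALERT_THRESHOLD = 6

def correlate(events):
    """events: iterable of (actor, event_type) pairs from logs and network sensors."""
    scores = defaultdict(int)
    evidence = defaultdict(list)
    for actor, event_type in events:
        scores[actor] += EVENT_SCORES.get(event_type, 0)
        evidence[actor].append(event_type)
    # Emit one contextualized, actor-based alert per actor crossing the threshold.
    return [
        {"actor": actor, "score": score, "evidence": evidence[actor]}
        for actor, score in scores.items()
        if score >= ALERT_THRESHOLD
    ]

stream = [
    ("10.1.2.3", "port_scan"), ("10.1.2.3", "failed_login"),
    ("10.1.2.3", "beaconing"), ("10.1.2.3", "exfil_attempt"),
    ("10.9.9.9", "failed_login"),
]
print(correlate(stream))   # only 10.1.2.3 (combined score 10) becomes an alert

Four individually unremarkable events from one actor become a single high-fidelity alert, while the lone failed login elsewhere stays in the noise; a production system would run many such analytics continuously on a stream processing engine.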