Modern applications like big data analytics, facial recognition, IoT and video streaming, as well as next generation applications like artificial intelligence and machine learning, place unique demands on both the compute and storage infrastructures. Most of these applications operate against a vast store of data: the application issues a query against that data set, which is processed in some way to yield the answer. They all require vast amounts of data to be stored, and then count on compute to identify the subset of that data the application needs to answer the query.

The Problems with Moving Storage Closer to Compute

The most common strategy for addressing the challenges presented by modern and next generation applications is to move storage closer to compute. The strategy is to install storage inside each compute node and let the data reside there. Each query still requires a large section of the data, in some cases all of it, to be sent to the compute tier to identify the needed subset. Moving storage to the compute does reduce network latency. However, while the CPU-to-media interconnect has improved with advancements like NVMe, there is still latency in the connection. There is also the complication of making sure the right process has local access to the right data.

Moving Compute Closer to Storage

The first step for most of these modern and next generation applications is to reduce the working set of data. Essentially, if the data set is the haystack, the application lives to find the needles in that haystack. If this is the case, it may make more sense to move the compute to the storage, so the media can perform the data reduction or qualification before data is sent to the main compute tier. For example, a facial recognition program searching for Elon Musk dressed in black might send each drive a request for images of Elon Musk. Those images are sent to the main compute tier, which performs the more fine-grained search for Elon Musk wearing black. The first value of such an architecture is that compute for the environment scales, and does so at a very granular level: per drive. The second value is that the bandwidth required to transfer data to the main compute tier is greatly reduced, since the drives send a much smaller subset of the data instead of all of it. The third value is that the compute tier does not have to scale as rapidly, because the drives are doing more of the work.
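The bandwidth savings described above can be sketched in a few lines of Python. This is an illustrative model only, not NGD Systems' actual interface: the record layout, payload sizes and the `make_drives`/`naive_transfer`/`in_situ_transfer` names are all hypothetical, chosen to contrast shipping everything to the compute tier with letting each drive run the coarse filter locally.

```python
# Hypothetical model of in-situ filtering vs. moving all data to compute.
from dataclasses import dataclass

@dataclass
class Record:
    subject: str      # coarse label, e.g. who appears in the image
    payload_kb: int   # size of the raw data

def make_drives():
    # Three made-up drives, each holding a mix of records.
    return [
        [Record("musk", 500), Record("other", 500), Record("other", 500)],
        [Record("other", 500), Record("musk", 500)],
        [Record("other", 500), Record("other", 500), Record("musk", 500)],
    ]

def naive_transfer(drives):
    # Every record crosses the wire; the compute tier does all filtering.
    sent = [r for drive in drives for r in drive]
    return sum(r.payload_kb for r in sent)

def in_situ_transfer(drives, subject):
    # Each drive runs the coarse filter itself and ships only the matches;
    # the compute tier then does the fine-grained search on that subset.
    sent = [r for drive in drives for r in drive if r.subject == subject]
    return sum(r.payload_kb for r in sent)

if __name__ == "__main__":
    drives = make_drives()
    print(naive_transfer(drives))            # 4000 KB cross the wire
    print(in_situ_transfer(drives, "musk"))  # 1500 KB cross the wire
```

In this toy data set, in-drive filtering cuts the data crossing the wire from 4000 KB to 1500 KB; the real reduction depends entirely on how selective the coarse filter is.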

Introducing NGD Systems

NGD Systems is announcing the availability of the industry's first SSD with embedded processing. This is not a processor for running flash controller functions (the drive has that too); it is a processor specifically for off-loading functions from the primary applications. Developers of these modern and next generation applications should find adapting their applications to take advantage of the new drives relatively straightforward. The NVMe Catalina 2 is now available in PCIe AIC and U.2 form factors.

In-Situ Processing

While not a controller company, NGD Systems does incorporate "in storage" hardware acceleration that puts computational capability on the drive itself. Doing so eliminates the need to move data to main memory for processing, reducing both bandwidth and server RAM requirements. It also reduces the pace at which the compute tier needs to scale, which should lead to reduced power consumption.

Elastic FTL

Beyond onboard compute, the drives themselves also have top notch controller technology. The controllers (separate from the compute) on the NGD Systems SSD use proprietary Elastic FTL and Advanced LDPC Engines to provide industry-leading density, scalability and storage intelligence. This enables support for the ever-changing availability of drive types, including 3D TLC NAND and QLC NAND, as well as future NAND specifications. The company also claims the lowest watts-per-TB in the industry.

StorageSwiss Take

Moving compute to the storage is the ultimate in "divide and conquer," which may be the best strategy for applications that need to operate on large data sets. If every drive in the environment can reduce the amount of data that needs to be transferred into main memory for processing, the environment becomes far more scalable. Unlike many flash memory announcements, the NGD Systems solution should have immediate appeal to hyperscale data centers looking to improve efficiency while improving response times. NGD Systems will demonstrate the technology at Flash Memory Summit 2017, August 8-10 in Santa Clara, CA. Vladimir Alves, CTO and co-founder of NGD Systems, will also present on August 10th at Flash Memory Summit Session 301-C, entitled "Get Compute Closer To Data."
Source: https://storageswiss.com/2017/08/08/move-compute-closer-to-storage-ngd-systems/

IdentityMind Global, today announced the latest version of its Enterprise Fraud Prevention platform, designed to increase operational efficiency and reduce manual review time for medium to large Etailers. The new version extends the user interface with customizable dashboards, queue management, reporting, and machine learning analysis intelligence, in addition to IdentityMind’s eDNA™ trusted digital identity core technology.

According to the Merchant Risk Council (MRC) Global Payments Survey, the typical manual review rate for online orders was 8% in 2016, with an average per-transaction review time of 5.6 minutes (2015). In the same survey, 46% of merchants cite "lack of sufficient internal resources" as a major fraud challenge.

Version 1.29 of the IdentityMind platform addresses this head-on by enabling fraud analysts and managers to configure operational dashboards with widgets tailored to expedite transaction review. Through the dashboards, analysts can quickly see overall transaction processing statistics as well as exceptions that require manual review, and they can resolve transactions in bulk, assign them to queues, and review individual or escalated transactions. The average manual review time is below 4 minutes for IdentityMind's enterprise etailer beta clients, versus the average of 5.6 minutes per transaction reported by MRC. This average reduction of nearly 30% translates into better processes and better coverage by fraud analyst teams. In addition, through its use of graph intelligence to analyze digital identities, IdentityMind can reduce transaction fraud by 60% and review of card-not-present (CNP) transactions by 50%.
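The "nearly 30%" figure follows directly from the two review times cited above, treating the "below 4 minutes" claim as a 4.0-minute upper bound (an assumption; the actual beta-client average could be lower):

```python
# Sanity-check the review-time reduction cited in the release.
mrc_minutes = 5.6    # MRC-reported average review time per transaction
beta_minutes = 4.0   # "below 4 minutes" for beta clients (upper bound)

reduction = (mrc_minutes - beta_minutes) / mrc_minutes
print(f"{reduction:.0%}")  # → 29%
```

At exactly 4.0 minutes the reduction is about 29%, consistent with "nearly 30%"; any beta average below 4 minutes pushes the figure higher.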

“Etailers and other enterprises require solutions that can efficiently handle large volumes of transactions seen online,” said Garrett Gafke, CEO of IdentityMind Global. “IdentityMind not only provides a highly scalable solution that leverages digital identities to help Etailers make the best automated risk decisions, but we provide a solution that increases the efficiency of your manual operations to reduce manual review time and make every one of your fraud analysts, your most efficient fraud analyst.”

Enterprises require highly scalable solutions that aid account opening and transactional fraud decisions in real time across all channels where they interact with their customers. IdentityMind addresses this requirement starting with its core strength in digital identities. IdentityMind’s patented eDNA™ engine continuously builds and validates identities. These identities grow with each customer interaction across the secure IdentityMind Identity Network and are validated through a variety of third party data services available through the IdentityMind API. Using machine learning and graph intelligence, IdentityMind builds reputations for each identity allowing enterprises to understand the true risk of doing business with any particular entity. The Rize report allows enterprises to understand where they can maximize revenue and minimize risk, and which rules they need to modify to get there.

IdentityMind’s Enterprise Fraud Prevention platform can be leveraged as a full platform, with dashboards, graph intelligence, reports and digital identities included, or à la carte via the IdentityLink API, which allows companies to integrate IdentityMind’s advanced analytics into their existing risk management platform. IdentityMind’s newest platform is available worldwide.

About IdentityMind Global

IdentityMind, creator of Trusted Digital Identities (TDIs), offers a SaaS platform for online risk management and compliance automation. We help companies reduce client onboarding fraud and transaction fraud, and improve AML compliance, sanctions screening compliance and KYC compliance. IdentityMind continuously builds, validates and risk-scores digital identities through our eDNA™ engine to ensure global business safety and compliance from customer onboarding and throughout the customer lifecycle. We securely track the entities involved in each transaction (e.g. consumers, merchants, cardholders, payment wallets, alternative payment methods, etc.) to build payment reputations, allowing companies to identify and reduce fraud, evaluate merchant account applications, onboard accounts, enable identity verification services, and identify money laundering. For more information, visit: http://www.identitymindglobal.com


Source: http://www.prweb.com/releases/2017/08/prweb14579646.htm


Companies are increasingly using video to engage their audiences, a trend that Sherpa Digital Media Inc. perceives as a major revenue opportunity. The Redwood City, California-based startup has secured $5.5 million as part of a funding round announced today to capitalize on the growing role of multimedia content in the enterprise.

Sherpa offers a platform that enables companies to centrally coordinate video delivery across all the different outreach channels they use. A programming interface makes it possible to plug into each channel, be it internal or external, with relative ease. Marketers looking to make a better impression on leads could use Sherpa Stream to embed client testimonials into their companies’ websites. Team leaders can employ the platform’s live streaming features to share updates with employees and conduct training sessions. To allow for replays, it also provides the ability to turn a stream into an on-demand video once the presentation is over.

According to Sherpa, the fact that it’s all handled through a single interface makes management easier than when content is spread out across multiple systems. One benefit is that users can quickly replace old items if company messaging changes, ensuring everything stays up to date. They can also centrally monitor how viewers engage with videos: Sherpa Stream logs key engagement metrics to provide insight into how content can be improved. A company could, for example, check how much time workers typically spend watching training content and adjust the length of streams accordingly. On the front end, Sherpa said, the feed is automatically optimized for each user’s device.

Sherpa’s platform has been adopted by major companies such as Intel Corp., Levi Strauss & Co. and the Walt Disney Co. K.C. Watson, the startup’s chief executive, told VentureBeat that the new funding will be invested in developing machine learning features to widen the appeal of the software. The round was led by early-stage fund Benhamou Global Ventures with participation from Rally Ventures and several returning backers.
Source: https://siliconangle.com/blog/2017/08/07/sherpa-raises-5-5m-help-enterprises-broadcast-video/