BGV and SVB Bring Industry Leaders Together for Roundtable Discussion on Ethical AI Governance

On September 9, BGV and SVB hosted a roundtable where industry thought leaders, AI specialists, and technology entrepreneurs gathered to discuss Ethical AI governance. Executives from Samsung, ARM, IBM, National Grid, TCS, Zelros, ClerioVision, TruEra, Drishti, Sama, LabelBox, Everest Labs, and others discussed trends, issues, and potential challenges presented by AI innovation and implementation. The event, hosted at the Ram’s Gate Winery in Sonoma, California, ended with all attendees agreeing on a call to action: to establish a community to democratize the growth and development of ethical AI governance. By bringing together a diversity of perspectives, this community of practitioners seeks to spur responsible adoption of Enterprise AI while guarding against regulatory overreach, big tech self-interest, and media hacktivism.

The roundtable discussion covered key topics of interest including regulation, bias and liability, privacy and surveillance, AI audits, and AI misuse fines. Below is a breakdown of each topic, what was discussed, and key takeaways.

Regulation

Should the responsibility for Ethical AI governance lie with the public sector or the private sector? Speakers argued that a fragmented approach to oversight, in which the federal government abdicates responsibility to the states and creates a patchwork of rules and regulations, could spell disaster for the industry. Heavy regulation curtails the speed of innovation, which may be appropriate for higher-risk use cases; lighter regulation, or “product labeling” type solutions, may well be better suited for lower-risk use cases. The optimal approach, they suggested, is a smart global regulatory framework that permits a flexible posture across industries, depending on the risks tied to their associated use cases.

Bias & Liability

Human bias in innovation has been well researched and understood for years, but AI model bias opens a new chapter in this discussion. Algorithmic bias can, for a variety of reasons, find its way into AI systems and data sets, introducing systematic and repeatable errors that create unfair outcomes for one arbitrary group of users over another. So how should practitioners grapple with this challenge? Even the most advanced machine learning algorithms can fail, so where does liability for AI technology lie?

The roundtable participants observed that simulation tools exist to detect bias in AI models and trace its root cause; what is lacking are benchmarks (e.g., what level of bias is acceptable?), which will vary by use case. The group emphasized the importance of clean data, identifying poor-quality data as the chief culprit in introducing bias into AI solutions. Humans must also be retrained to shift from making decisions to validating them via human-in-the-loop processes, since the machines are now equipped to do the decision-making.
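To make the benchmark question concrete, here is a minimal sketch in Python of one widely used fairness measure, the demographic parity difference (the gap in positive-outcome rates between two groups). The data are purely illustrative assumptions, not outputs of any tool discussed at the roundtable; how large a gap counts as “acceptable” remains the use-case-dependent judgment the group highlighted.

    # Demographic parity difference: the gap in positive-outcome rates
    # between two groups. The data below is purely illustrative.
    def demographic_parity_difference(predictions, groups, group_a, group_b):
        """Difference in positive prediction rates between two groups."""
        def rate(g):
            members = [p for p, grp in zip(predictions, groups) if grp == g]
            return sum(members) / len(members)
        return rate(group_a) - rate(group_b)

    # Hypothetical binary predictions (1 = favorable outcome) per applicant.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

    gap = demographic_parity_difference(preds, groups, "A", "B")
    print(f"Demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50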

Privacy

From closed-circuit television (CCTV) in the UK to camera networks in China and the US, governments around the world are deploying massive surveillance infrastructure and will pair that data with AI. Private sector surveillance, participants argued, should focus on detecting and remedying technology-in-use problems. Drishti, whose co-founder was a participant in the roundtable, provides video analytics and traceability for manual assembly lines to enable improvements in worker performance. Observing a manufacturing assembly line, he argued, should focus on surveying actions (not faces) so that workers can improve productivity rather than be displaced by automation. This requires a different approach: asking workers to provide inputs and become part of the innovation process.

AI Audits

The group spent some time debating AI audits and whether they should be required by law. Opposing perspectives emerged during the discussion, along with recognition of the practical constraints of using humans to audit AI models, given the complexity of advanced AI. One potential solution proposed was leveraging AI techniques, such as submodular optimization, to conduct AI audits. Questions naturally arise, however: Who are the auditors? Are they skilled enough to perform audits? How do you train them? The answers to those questions will shape how courts, attorneys, and litigators approach these issues in the legal realm. What’s more, would judges hold AI auditors to the same standards as humans?
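As a hedged illustration of how submodular optimization might assist an audit, the Python sketch below uses the classic greedy algorithm to pick a small, representative subset of model inputs for human review. The facility-location objective, the toy similarity function, and the data are all assumptions made for illustration; the roundtable did not discuss a specific implementation.

    # Greedy maximization of a facility-location objective, a standard
    # submodular function measuring how well a chosen subset "covers" the
    # full dataset. Greedy selection of a submodular objective carries the
    # classic (1 - 1/e) approximation guarantee.
    def similarity(x, y):
        # Toy similarity between two numeric feature vectors.
        return 1.0 / (1.0 + sum((a - b) ** 2 for a, b in zip(x, y)))

    def coverage(selected, points):
        # Submodular objective: each point is credited with its best match
        # among the selected representatives.
        return sum(max(similarity(p, s) for s in selected) for p in points)

    def greedy_audit_subset(points, k):
        """Greedily choose k inputs that best represent the full dataset."""
        selected = []
        for _ in range(k):
            best = max(
                (p for p in points if p not in selected),
                key=lambda p: coverage(selected + [p], points),
            )
            selected.append(best)
        return selected

    # Hypothetical model inputs (feature vectors) and an audit budget of 2.
    inputs = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 4.9), (2.5, 2.5)]
    print(greedy_audit_subset(inputs, k=2))  # picks one point per cluster

The design intuition is that auditors cannot review every prediction, so a coverage-maximizing subset gives human reviewers the most informative slice of model behavior for a fixed budget.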

AI Misuse Fines

Finally, what happens in the case of violations? Should large technology companies that use AI to manipulate human judgment, or that suffer data breaches, be subject to heavy fines? A strong, affirmative perspective was expressed at the roundtable, on the assumption that fining a few companies could play a constructive role in driving the right behavior across the industry. For example, Facebook’s $5B Cambridge Analytica settlement, the largest of its kind for a violation of consumer privacy, led to tangible change. It’s an example of how tort law has continuously led CPG companies to take corrective measures when their actions have been detrimental to society.

Key Takeaways

So, what are the key takeaways from this roundtable? Overall, the group agreed that smart global regulatory frameworks are needed for Ethical AI governance. Regulation should vary by use case, and collaboration is needed between the public and private sectors. Bias is both a human and an AI problem, and acceptable bias will vary by use case. Automated AI audits (AI on AI) can be a creative way to build trust in AI models. Lastly, and most importantly, there is a broader need to democratize the growth and development of Ethical AI governance to achieve the full promise of AI. This cannot be left to big tech and regulators alone; rather, a diverse grassroots community will be necessary for the development of Ethical AI governance.