The Role of Ethics When Investing in AI

July 8, 2022 | Anik Bose, General Partner, BGV

What role should ethics play in investing in AI?

That’s a question I have begun to hear more frequently as a general partner at BGV and as an investor who cares deeply about responsible tech and AI governance.

Embedding ethics in company culture and product design can become a competitive advantage for AI-first startups. By placing ethics at the core of product design, startups can accelerate market adoption by mitigating risks around bias, explainability, and data privacy, and can continue to grow in an environment where values and ESG matter increasingly to customers and employees.

A values-driven industry

At BGV, we invest in early-stage immigrant founders who are building AI-centric products for the Enterprise 4.0 market. Whether the technology is robotics, NLP, or computer vision, these organizations have deep tech at their core. 

We believe that far more value can be created through AI use cases that augment humans than through those that focus solely on replacing humans with automation. The former drives exponential growth in productivity, while the latter commoditizes skilled labor, leading to inequalities in income and wealth distribution. AI used purely for labor substitution may make sense for work that is dangerous (e.g., mining) or where labor is in short supply (e.g., recycling). That's why we screen our deal flow to understand a startup's value creation impact rather than investing purely in automation use cases.

We also screen founders for a track record of transparency and ethical behavior. As early-stage investors, it's vital that we can trust founders to deliver on their vision and promises. At the end of the day, venture capital is a people business: we bet on the integrity and honesty of our founders as much as on the innovations they are bringing to the world.

Red flags and green lights

During our due diligence process, we look for red flags and green lights.

What's a red flag? We look at how a founder has performed across his or her entrepreneurial career: Have they engaged in ethical behavior with customers? We also look at a founder's past experience at established companies. A track record of not fulfilling promises is a big red flag for us.

The flip side of the red flag is the green light. If we find that an entrepreneur has a consistent record of ethical practices, and that past customers and colleagues praise their leadership and integrity, that endorsement speaks volumes.

A human approach

We also believe that our founders have to trust us. It's a two-way street. Our practice is to introduce founders seeking an investment to other founders in our network. They need to do their own diligence on BGV and hear that we are a values-based firm whose actions match its words, that we are truly committed to integrity, and that we support our founders through good times and bad.

That is vital, because as VCs we are building an 8-10 year relationship with a startup. There will inevitably be ups and downs. Mistakes will be made. That's why a relationship built on trust is at the core of our investing strategy.

Ethical AI governance strategy 

There's a difference between saying that you care about AI governance and actively engaging with startups to help them build responsible AI companies. This is one of the reasons we founded EAIGG, a community platform of AI practitioners and investors dedicated to sharing AI governance best practices.

We have been pleasantly surprised that, indeed, many young entrepreneurs care about making the world a better place. Of course every young company wants to be a unicorn. But if a company’s values are lost along the way, and they are purely mercenary in pursuing their financial goals, then something important has been lost. 

During our initial conversation with startup founders, we ask them point blank: Do you care about AI governance and data privacy? Do you believe that AI can make humans expendable? We don't expect that they'll have everything figured out. But we do expect that the issues are important to them.

It's important that startups have a roadmap for AI governance, because 10 years down the road, when a small startup has become a corporate brand, it's nearly impossible to retrofit technology and product architectures for AI governance and data privacy. This cannot be an afterthought.

A holistic view

When dealing with AI, it's important to take a holistic view. I call this approach enlightened self-interest. As a founder, it's in the entrepreneur's interest to build a great product. But it's also in their interest to ensure market adoption, which means eliminating model and data bias and addressing explainability and data privacy concerns so that AI technology remains human-centric.

We're excited about the promise of AI, but we also believe it's critical to put humans back in the equation. AI is projected to create $3 trillion of value over the next 10-15 years. Part of that equation is helping to set the guardrails so that AI development and deployment is democratized and creates value for both employees and owners of capital.