By Debu Chatterjee, CEO of Konfer.AI
AI-based decisions are increasingly coming under scrutiny, from job-board bias in New York City to claims denials in healthcare. While AI is helping enterprises improve decision making, it is also becoming imperative to prove that the process of AI-based decision making is understood: what assets were used, how were they used, and what was the logic?
After the financial crash of 2008, when banks and financial institutions were reined in for their wildly unpredictable lending practices, the US enacted the Dodd–Frank Wall Street Reform and Consumer Protection Act in 2010 to “promote the financial stability of the United States.” Countries around the world also adopted the Basel III capital and liquidity standards.
Today, AI is in a similar situation. Countries and ethics groups are stepping up to announce their AI policies, ranging from high-level guidance to detailed requirements. Global corporations now face complexities that could hinder confident AI innovation and adoption:
- How do we know whether any AI is at work in our organization? (Most software today has some AI in it, and business leaders may be unaware of it.)
- Which of our businesses are impacted by which specific AI regulatory mandate?
- What are our risks, and do we know how to manage them?
- Our global footprint could cause our organization to be exposed to multiple regulations and mandates. How do we keep pace with them?
Konfer has launched an AI GRC product to accelerate AI adoption in the enterprise, and it does so, you guessed it, by leveraging Generative AI!
Across the world, industry consortia and statutory organizations are laying down complex compliance requirements that are contextual to their respective needs: unique, different, and full of minute details that are complicated to understand. The regulations will proliferate, and so will the assets. Chief Risk Officers will have to handle two wild horses: the regulations with their complex interpretations, and the enterprise's own proliferation of AI through internally developed models and the use of LLM services and apps.
A standardized approach to managing the regulations and resources will reduce cost without compromising competitiveness or innovation.
Konfer ingests a regulatory document and, by leveraging AI, generates a series of control questions that form the start of the governance journey. An enterprise's internal policy documents can also be converted into governance controls in the same way. These controls are published to stakeholders across the enterprise: app, model, and data developers, business leaders, and others. Stakeholders' responses to these questions are generated by AI from the documentation for models, data, and apps. The results measure risk exposure and generate a risk score for each criterion, whether it comes from a regulation or from an internal process.
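As a rough illustration of this regulation-to-controls flow, here is a minimal sketch in Python. It is not Konfer's implementation: the control-generation step is stubbed with a placeholder (a real system would call an LLM there), and the class names, weights, and scoring formula are assumptions made purely for the example.

```python
# Hypothetical regulation-to-controls sketch; names, weights, and scoring are
# illustrative assumptions, not Konfer's actual pipeline or API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Control:
    question: str                 # control question derived from a regulatory clause
    weight: float = 1.0           # relative importance of this control
    answer: Optional[str] = None  # "yes", "partial", or "no" once assessed


def generate_controls(regulation_text: str) -> list:
    """Turn regulatory text into control questions.

    A real system would call an LLM here; this stub splits the text into
    clauses so the sketch stays self-contained and runnable.
    """
    controls = []
    for clause in regulation_text.split("."):
        clause = clause.strip()
        if clause:
            controls.append(Control(question=f"Do we have evidence that: {clause}?"))
    return controls


ANSWER_SCORES = {"yes": 1.0, "partial": 0.5, "no": 0.0}


def compliance_score(controls) -> float:
    """Weighted score in [0, 1]; higher means lower measured risk exposure."""
    total = sum(c.weight for c in controls)
    earned = sum(c.weight * ANSWER_SCORES.get(c.answer or "no", 0.0) for c in controls)
    return earned / total if total else 0.0


if __name__ == "__main__":
    regulation = ("Automated hiring tools must be audited for bias annually. "
                  "Candidates must be notified when such tools are used.")
    controls = generate_controls(regulation)
    controls[0].answer = "yes"      # evidence of an annual bias audit exists
    controls[1].answer = "partial"  # notification exists in some regions only
    print(f"Compliance score: {compliance_score(controls):.2f}")  # prints 0.75
```

The point of the sketch is the shape of the workflow: regulatory text in, a reusable set of control questions out, and a score computed from the evidence gathered against each control.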
The product will help an organization's compliance and legal officers, and other stakeholders involved in AI initiatives, create internal checkpoints designed for their unique businesses and keep pace with the ever-changing AI compliance landscape.
The Konfer Confidence Score, like the FICO® score, is designed to help enterprise leaders make AI governance decisions that will impact their brand, customer engagement, and market optics.
Konfer's unique playbook approach (build once, run multiple times) helps chief risk officers create specific playbooks based on industries, business units, and geographies, giving their companies granular risk visibility and mitigation controls.
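To make the "build once, run multiple times" pattern concrete, the following sketch shows what a reusable playbook might look like as a simple data structure, defined once and then run against several business units and geographies. The class names, fields, and example controls are hypothetical and are not drawn from Konfer's product.

```python
# Hypothetical playbook sketch; the schema and example controls are
# assumptions for illustration only, not Konfer's data model.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass(frozen=True)
class Playbook:
    name: str
    industry: str
    controls: Tuple[str, ...]  # reusable control questions, built once


@dataclass
class PlaybookRun:
    playbook: Playbook
    business_unit: str
    geography: str


# Build the playbook once for a given regulatory context.
hiring_playbook = Playbook(
    name="Automated hiring tools",
    industry="HR technology",
    controls=(
        "Has a bias audit been completed in the last 12 months?",
        "Are candidates notified when automated screening is used?",
    ),
)

# Run it for every business unit and geography that is exposed.
runs: List[PlaybookRun] = [
    PlaybookRun(hiring_playbook, business_unit="Talent Acquisition", geography="US-NYC"),
    PlaybookRun(hiring_playbook, business_unit="Campus Recruiting", geography="EU"),
]

for run in runs:
    print(f"{run.playbook.name} -> {run.business_unit} ({run.geography}): "
          f"{len(run.playbook.controls)} controls to assess")
```

Treating the playbook as an immutable template and each run as a separate record is one way to get the granular, per-geography risk visibility described above without duplicating the controls themselves.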
At any point in time, the company's AI posture is available to all stakeholders, internal or external. This transparency is the foundation for Trust in AI, and it accelerates the adoption of innovative solutions that drive productivity. Standardizing GRC into frictionless control workflows speeds up innovation while continuously maintaining compliance for business-as-usual efficiency.
Learn more: https://konfer.ai/ai-grc/