The key to regulating AI is explainability. The key to explainability may be causal AI.

Since OpenAI unveiled ChatGPT, a generative AI chatbot built on a large language model (LLM), late last year, there has been increasing support for regulating AI. The call to regulate has risen to a fever pitch with the subsequent release of an occasionally deranged ChatGPT-based Bing chatbot, the much more powerful GPT-4, and the imminent debut of rival platforms from Google and others.


In an April 16 opinion piece in the New York Times surveying regulatory proposals for AI, Ezra Klein argues that the White House Blueprint for an AI Bill of Rights, released last October, is the most promising of several new government plans, despite being overly broad. By contrast, Klein writes, the EU’s 2021 Artificial Intelligence Act proposal is out of date and too narrowly focused on specific use cases, while China’s proposed regulations are too restrictive.

One of the most important — and challenging — recommendations of the White House Blueprint, says Klein, is the requirement that an AI can explain its reasoning. The proposal states: “Automated systems should provide explanations that are technically valid, meaningful and useful to you and to any operators or others who need to understand the system, and calibrated to the level of risk based on the context.”

The challenge, continues Klein, is that AI developers have made little progress on interpretability: “Force [an AI] to provide an explanation, and the one they give is itself a prediction of what we want to hear — it’s turtles all the way down.”


Indeed, generative AI chatbots are almost entirely lacking in transparency and are quite capable of massaging the truth to please the prompter. Most AI systems, by contrast, do not use generative AI and have no inclination to tell users what they want to hear. Yet most are similarly opaque.


The “black box” nature of neural networks (NNs) is a significant problem for commercial AI deployments. The lack of explainability could also hinder several other recommendations in the White House Blueprint, including ensuring safety and effectiveness, protecting privacy, and correcting the well-documented discriminatory tendencies of many AIs with respect to race, sex, and other factors. The final recommendation cited in the Blueprint, and one that Klein especially favors, is the right of a user to opt out of an AI system in favor of a human alternative.


While Klein is skeptical of the prospect of an explainable AI, we take a more hopeful view. An emerging type of AI called causal AI has the potential to do much better at explaining its conclusions.

For example, the Leela Core engine that drives the Leela Platform for visual intelligence in manufacturing adds a symbolic causal agent that can reason about the world in a way more familiar to the human mind than neural networks are. In this hybrid causal/neural architecture, the causal layer can cross-check the output of Leela Core’s traditional NN components. Leela Core is already better at explaining its decisions than NN-only platforms, which makes it easier to troubleshoot and customize, and much greater transparency is expected in future versions.
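To make the cross-check idea concrete, here is a minimal sketch in Python of how a symbolic causal layer might vet a neural network’s prediction and return a human-readable explanation. The class, rule, and fact names below are hypothetical illustrations chosen for this example, not the actual Leela Core interfaces.

```python
# A minimal sketch of the causal cross-check idea, using hypothetical names --
# this is NOT the Leela Core API, just an illustration of how a symbolic
# causal layer could vet a neural prediction and explain its verdict.

from dataclasses import dataclass


@dataclass
class NeuralObservation:
    """What a neural vision component might report for one video frame."""
    label: str          # e.g., "operator_assembling_part"
    confidence: float   # the network's softmax score


# Toy causal model: each activity requires certain observable preconditions.
CAUSAL_PRECONDITIONS = {
    "operator_assembling_part": {"part_present", "operator_at_station"},
    "machine_cycling": {"machine_powered"},
}


def cross_check(obs: NeuralObservation, observed_facts: set) -> str:
    """Accept or flag a neural prediction, with a human-readable reason."""
    required = CAUSAL_PRECONDITIONS.get(obs.label, set())
    missing = required - observed_facts
    if missing:
        return (f"Flagged: '{obs.label}' (confidence {obs.confidence:.2f}) is "
                f"causally implausible; missing preconditions: {sorted(missing)}")
    return f"Accepted: '{obs.label}' is consistent with the causal model."


# Example: the network claims assembly is happening, but no part is in view.
print(cross_check(NeuralObservation("operator_assembling_part", 0.91),
                  observed_facts={"operator_at_station"}))
```

The point of the cross-check is the explanation itself: when the symbolic layer vetoes the network, it can say exactly which causal precondition was violated, rather than simply reporting a lower score.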


What about this idea of building better “guardrails”? Could a causal AI agent be merged with an LLM chatbot to help it explain its inner thinking? Probably. Could a causal AI supervisor help rein in generative AI’s tendency toward hallucination and unethical recommendations? Possibly.


Meanwhile, as the Times article points out, there are many other interesting ideas for AI regulation beyond the government plans, including proposals from The Future of Life Institute and the A.I. Objectives Institute. Boston Dynamics’ Ethical Principles is also worth a look. With generative AI already creating new legal, economic, and ethical challenges, and with the growing prospect that an untethered chatbot could escape into the wild, time is of the essence. 

The good news, writes Klein, is that most AI developers are among those urgently calling for regulation. Count the Leela AI team among them. Last November, Steve Kommrusch, Leela AI’s Director of Research, presented at the ISO/IEC Artificial Intelligence (AI) Workshop, which gathered insights for future standards on AI risk management. A related ISO/IEC 23894 guidance document on risk management was published in February. Leela AI plans to participate in further AI standards efforts.


