Is AI Governance a Vitamin Pill or a Painkiller?
By Mardi Witzel, Governance Coach, and Niraj Bhargava, CEO, NuEnergy.ai
It isn’t always possible to see how decisions are made about AI. It’s time for organizations to get serious about taking concrete steps toward effective AI governance.
Let’s say you work for a bank that uses automated systems to make decisions about loan applications, hiring, or internal promotion. These systems include machine-learning tools designed according to a set of criteria, trained on historical data sets, then freed to do their mysterious work. Maybe you personally were passed over for a promotion.
Now, imagine that sometime later, you learn that the artificial intelligence (AI) making this decision was flawed. Perhaps the data used to train it was biased, or the model was poorly designed. Maybe the system “drifted,” as machine-learning models are known to do (drift happens when a model’s predictive power decays over time due to changes in the real world). It’s one thing to get turned down by a human you can challenge. But with AI, there’s a lot of grey area. It isn’t always possible to see how decisions are made.
This truth underlies the widespread call for trustworthy AI — that is to say, for transparency, fairness and accountability in the development and use of AI solutions. Despite the great promise of these tools, the risk of negative outcomes is not far-fetched. AI bias is documented and real. This is why it’s time for organizations to get serious about taking concrete steps toward effective AI governance.
Read the full article published September 15th, 2022 on the Centre for International Governance Innovation’s website here: https://www.cigionline.org/articles/is-ai-governance-a-vitamin-pill-or-a-painkiller/