The call for trustworthy AI is evolving from an era characterized mainly by principles-based frameworks to one where regulations and standards promise to inject urgency and pragmatism into the task.
The introduction of proposed AI legislation in the European Union, the U.S. and now Canada means that private sector organizations that are using or considering the use of AI will soon have a concrete reason to get serious about AI assurance.
The rise of regulatory requirements may well be a driver of firm-level capacity to govern AI, but there are good reasons not to wait for hard laws to be in place. AI is already a broadly applied technology, with its attendant risks and opportunities.
In this article, initially published with the Centre for International Governance Innovation (CIGI) and also appearing in The Toronto Star (modified for length), Mardi Witzel explores some of the questions arising from the introduction of Bill C-27 in Ottawa last month.
As Mardi states in the article, “The proposed Artificial Intelligence and Data Act is one of three regulatory components in the Digital Charter Implementation Act, 2022. If passed, it will require firms designing, developing and using high-impact AI systems to meet certain requirements aimed at identifying, assessing and mitigating bias and harm”.
She closes by saying “In the next five years we will see the growth of a major new market in technology assurance, and much of it will be built around AI. Disclosure will be the linchpin in this emerging field.”
Read the full article, published July 6, 2022: https://www.cigionline.org/articles/can-we-trust-ai-from-ottawa-a-qualified-yes/
Re-published in The Toronto Star on July 10, 2022: https://www.thestar.com/opinion/contributors/2022/07/10/will-artificial-intelligence-save-humanity-or-supersede-it-a-vanguard-is-carving-a-path-on-ai-governance.html