Last week, I had the opportunity to participate in the World Economic Forum’s workshop, “Reimagining Regulation in the Age of AI.” Hosted by the Forum’s Centre for the Fourth Industrial Revolution (C4IR), the workshop focused on a pilot underway with New Zealand to test new concepts for how governments can regulate this disruptive technology.
Bringing together experts from industry, government, civil society, and academia, the workshop challenged us to raise bold ideas, potential issues, and questions: What could the regulation of AI look like? What would its scope be? And who could be responsible for overseeing this work?
Discussions and sessions took place along three themes: how to demonstrate that AI can be used to improve the well-being of the public, options for AI governance, and approaches to assessing AI ethics.
Some of the key questions raised were how practitioners can know where to start, what processes to trust and follow, and who should be at the table when discussing all of these issues. A white paper capturing the full discussion will be released, but for me, some of the most interesting takeaways included:
- Collaborate with people from diverse perspectives – There is a continued need to consult with diverse and independent voices, including both users and experts. As AI rapidly evolves and is added into many of our services, independent review becomes essential. Whether for the creation of a policy or the analysis of data, this must be more than lip service or box checking: it is important to actually listen to the people you bring together to discuss these issues.
- Do agile review – While new frameworks for assessing AI will be required, the review of AI can also be incorporated into existing governance practices. Most organizations already have processes for assessing risk, for responding when an issue occurs, and for verifying that a technical stack is safe and secure. Leverage those, now!
- Create one set of rules – Whatever new policies are developed for government’s use of AI should be aligned with how government will regulate industry’s development and deployment of these tools.
Given my experience leading the development of the Directive on Automated Decision-Making and the AI List of Qualified Vendors for the Government of Canada, these discussions were not only familiar (and fairly cathartic) but also reinforced my sense that the answer doesn’t lie solely within government.
These are exactly the questions that we are asking ourselves at AI Global, which is why we are developing tools to help address these needs, including the Design Assistant. It will help everyone designing or delivering an AI system, from engineers to program managers to lawyers, know how to do so more responsibly. One of the key lessons learned from my experience in Canada is that assessments available at the design or concept phase raise key questions at the start, before it is too difficult to go back and make changes. They also serve as a good mechanism for bringing all the necessary perspectives to the table to answer the assessment.
While this workshop reminded me that this topic is more complex than I ever imagined, it’s important that we work together now to take action.