I’m not usually one for new year’s resolutions, but one goal I’ve committed to this year is getting better at sharing. Our team has lots of interesting work underway, and I’m fully aware that we cannot be successful doing this on our own. By sharing more about what we are up to at AI Global, we hope to build a large and diverse community, making our efforts even more meaningful! So, since we are still (yes, barely) in the first month of the year, I’m trying to get off to a good start with a quick review of some of last year’s highlights and a bit more about what’s to come.
First and foremost, I had the pleasure of becoming Executive Director of AI Global in the fall of 2019. With a talented and experienced board of technology experts, academics, and business leaders who had an important and ambitious mandate in mind, I knew this would be an incredible opportunity.
Focused on building tangible tools to promote the responsible use of artificial intelligence, I was confident that AI Global was the right home for me to continue to build on the work that I led at the Government of Canada.
To focus our efforts where they will have the biggest impact, and create the most value for our members, we have been examining the most significant barriers to implementing AI responsibly. While there is much work to be done, we have decided that, based on our experience and initiatives, we are going to focus on helping our members and the community better navigate regulatory issues, as well as working with governments to help inform regulation through both public and private governance initiatives.
As such, I have been working with the Board and our growing community to develop a robust set of initiatives which include:
Responsible AI Portal
- The Responsible AI Portal is an authoritative repository combining reports, standards, models, government policies, open datasets, and open-source software to help members better navigate the AI landscape and directly connect with the experts who build these tools.
Responsible AI Design Assistant
- The Responsible AI Design Assistant is a virtual assessment to help members anticipate problems and future-proof their AI systems. This tool brings together research and industry best practices to help designers, developers, and product owners keep in mind key AI challenges, including data rights and use, privacy, security, explainability, fairness, bias, and robustness.
Responsible AI Check (Certification Mark)
- The Responsible AI Check is an independent certification mark to help consumers easily identify trusted AI services and tools. Based on an AI Trust Index, the certification mark is continuously aligned with principles, best practices, and standards for the responsible design, implementation, and use of AI.
Key accomplishments of this last year include:
- Launching the Responsible AI Community Portal at the Open Government Partnership Summit.
- Building a strong and robust community of members and partners.
- Developing a Responsible AI Document Mapping.
- Drafting a Responsible AI Certification Whitepaper.
- Creating an AI Standards Mapping.
- Completing a first draft of a Responsible AI Trust Index.
- Presenting at several key AI conferences, including:
- Symposium on AI for Science, Industry, and Society, hosted by OECD and CERN
- Connect 2019, hosted by the Government of British Columbia
- 2019 Women in Finance Summit, hosted by Women in Finance
- Ethics in AI workshop, hosted by Deloitte
- Becoming a member of key standards organizations, including ISO JTC1/SC 42 and OCEANIS.
- Being featured in several media articles.
Since I’m committed to keeping my promise to share more, this post will be followed by several others providing additional details on the activities listed above.
If you have any questions about AI Global, please feel free to reach out! I hope you will join us by providing your insights and feedback as we continue to grow and evolve.
Here’s to an innovative and responsible 2020!