Creating a Responsible AI Trust Index: A unified assessment to assure the Responsible Design, Development, and Deployment of AI 

April 28, 2020

Artificial Intelligence (AI) has demonstrated the ability to enhance our daily lives by augmenting human capacity and capabilities. However, AI systems have also shown that, when not designed and managed responsibly, they can be biased and insecure, and in some cases, often inadvertently, they can violate human rights.

Simply put, AI has the potential to do great things, but, as we have seen with other transformative technologies such as social media, rules are required to provide the necessary guardrails for those building these systems, in order to protect individual and public interests.

In recent years, a growing chorus of voices from research institutes, policy makers, journalists, human rights advocates, and companies has been sharing unique perspectives and insights on how AI systems should be built and managed. In a wide array of research reports, whitepapers, policies, guidelines, and articles, many efforts have been made to identify the key challenges and to address ways to mitigate the harms posed by the increased adoption of AI.

Additionally, given the scope, scale, and complexity of these tools, establishing comprehensive and measurable rules that work in all contexts (domain, type of technology, and region) is extremely challenging. As a result, we are often left with high-level, non-binding principles or frameworks that leave a lot of room for interpretation. Having built one of these policies myself, I know first-hand that this is not enough.

It is imperative that we take the next step and start working together to establish those guardrails. Most importantly, we need to find a way to take these concepts out of theory and put them into action. In their paper, “A Unified Framework of Five Principles for AI and Society,” Luciano Floridi and Josh Cowls conduct a comparative analysis of “six high-profile initiatives established in the interest of socially beneficial AI.”

Before having the opportunity to meet with Floridi and become more familiar with his work on creating such an important ontology, we had started a similar exercise, even referencing four of the same initiatives. Our purpose was to give practitioners an easy way to make sense of this increasingly complex landscape and to turn these best practices and frameworks into an easy-to-use assessment: the Responsible AI Trust Index.

The dimensions, our version of a unified framework, include:

  1. Accountability
  2. Explainability and Interpretability 
  3. Data Quality
  4. Bias and Fairness 
  5. Robustness  

By mapping the principles and criteria from these various frameworks into an Index, we have developed a way to evaluate AI systems and models against best practices. 

To validate this work and to ensure that it is as comprehensive as possible, we have been working in the open with a collective of subject matter experts, including engineers, ethicists, public policy makers, security experts, lawyers, and auditors, to build the first version of a Responsible AI assessment (complete list below).

While we aspire for the Responsible AI Trust Index to become an accredited and auditable assessment framework, we believe this work is too important to wait, so we are releasing an open-source version, the Design Assistant, which will help those designing and developing these systems build them responsibly from the start.

Recognizing that many of the challenges with AI are rooted in existing technologies and in problems that bioethicists have tackled before, we have built the Trust Index to reference existing, effective rules wherever possible and to apply an AI lens to them.

The results of the Design Assistant will help inform the Trust Index, making it stronger and grounding it in real-life uses of AI.

Complemented by a user guide, the Design Assistant aims to make it as simple as possible for those designing, developing, and deploying AI systems to know what they should be thinking about, what types of conversations they need to have, and with whom, before deploying an AI system.

The principles that we followed, and will continue to follow, to build this work include:

  1. Develop in the open – you get a better product that helps more people.
  2. Be collaborative – none of us is a subject matter expert in every aspect of AI, so we need to work with people who have diverse expertise and perspectives.
  3. Repurpose instead of reinvent – there is a lot that can be drawn from existing and historical work; we believe it’s important to take what’s useful and adapt it to the current context.
  4. Incorporate and iterate – while we strive to be as complete and comprehensive as possible, we recognize that we need to start somewhere and iterate. AI is a technology that is evolving quickly, and our tools will evolve quickly as well.

The beta version of the Design Assistant is now live for testing! As you will see, it has feedback boxes. Please let us know what you think so that we can continue to improve it and make it a valuable tool for the responsible AI and tech community. We want to hear everything, from where there are typos to what other content should be included.

As you share with us, we will continue to share with you: not only will we keep improving the Trust Index and Design Assistant, but we will also share how others are using them and how they have benefited. You will notice that the questions are one-size-fits-all right now; we will continue to build out different facets of the Design Assistant that are specific to industries and regions.

If you are interested in learning more about the Design Assistant or AI Global, please contact us at admin@ai-global.org.

Visit the Design Assistant now!

Collaborators

A huge thank you to everyone who contributed to the creation, review, or research of the Trust Index and Design Assistant. Collaborators are listed in alphabetical order by first name. 

Special thanks to Oproma for their development contributions. Without them, the Design Assistant would not have been possible.

Abhishek Gupta, Montreal AI Ethics Institute

Afshan Ahmad, Deloitte

Allison Cohen, Deloitte

Amaya Mali, AI Global

Andrew Young, GovLab

Ana Brandusescu, McGill University

Ben Zevenbergen, Oxford Internet Institute

Carl Maria Morch, MILA

Charles Michael Ovink, United Nations Office for Disarmament Affairs  

Dan McNaughtan, Oproma

Devina Parihar, AI Global 

Dobah Carre, Prudence AI

Esther Keymolen, Assistant Professor in Ethics, Law, and Policy of new data technologies, Tilburg Institute for Law, Technology and Society

Edward Teather, Office for AI, UK Government

Emmanuelle Demers, Algora at MILA, Hudon Avocats

Greg Lane, AI Global

Gopal Krishnan, Cognitive Scale

Julian Torres Santeli, Deloitte

Joydeep Ghosh, University of Texas

Kasia Chmielinski, Data Nutrition Project

Keith Jansa, CIO Strategy Council

Marc Pageau, Oproma

Marc-Antoine Dilhac, MILA

Marta Janczarski, Standards Council of Canada

Martha Czernuszenko, AI Global

Michael Stewart, Lucid AI

Michael Karlin 

Mihail Dungarov

Mirka Snyder Caron, Montreal AI Ethics Institute 

Monika Viktorova, Deloitte

Nihar Dalmia, Deloitte

Nik Pavesic, Nekki  

Preeti Shivpuri, Deloitte 

Renee Sieber, McGill University

Richard Bélisle, Oproma

Rim Khazall, Government of Canada, Treasury Board Secretariat

Sandy Barsky, United States Government, General Services Administration

Stefanie Haefele, Deloitte

Slavka Bielikova, Consumers International 

Tina Dacin, Stephen J.R. Smith Chaired Professor, Smith School of Business, Queen’s University

Will Griffin, Hypergiant

Xanthe Couture, Consumers International 

About AI Global

AI Global is a non-profit committed to a world with trustworthy, safe, and fair technology. Our mission is to transform the way disruptive technologies, like Artificial Intelligence (AI), are designed, built, and regulated. As a “Do Tank,” we’ve created easy-to-use tools that support practitioners as they navigate the complex landscape of Responsible AI. By working closely with those building AI systems, we use real use cases to inform AI policymakers. This strong feedback loop enables a technology-enabled world that improves the social and economic well-being of society.