Introduction to AI Ethics

March 4, 2017

In February 2014, Google announced the purchase of DeepMind, a small artificial intelligence company based in London that focused on the deep learning branch of artificial intelligence. One thing that caught the eye of the press was that, as part of the deal with Google, DeepMind created an ethics advisory board. Many asked for the names of its members, but to date DeepMind has not released them.

However, this action, coupled with a May 2014 opinion piece in the UK newspaper The Independent by Drs Stephen Hawking, Stuart Russell, Max Tegmark and Frank Wilczek warning that AI could be humanity's best or last invention, ignited the now vigorous debate around the safety of AI. Many media pieces paired the story with 'Terminatoresque' imagery, and the negative suggestion that AI might destroy humanity became far more widely discussed than the prospect of AI as humanity's best invention. Such coverage also failed to concentrate on the authors' assertion that "All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks", and on their call for more research into the effects of AI on our society, now and in the future.

The debate received further fuel from Nick Bostrom's book Superintelligence, published in the summer of 2014. In addition to long-term concerns about how a superintelligence might interact with inferior beings (us humans), there was a growing recognition that problems already present in society, including threats to privacy and the effects of automation, would be exacerbated by AI.

Many have stepped up to address the challenges presented. The European Parliament has passed very important legislation such as the General Data Protection Regulation. Organizations to study AI safety have been founded by industry, academia and concerned individuals. These include the Future of Life Institute (which received a $10m grant from Elon Musk for research); the OpenAI project, also funded by Musk; the Partnership on AI, funded by Google, DeepMind, Facebook, Amazon and IBM; AI-Austin, which specializes in practical ethical uses of AI to benefit the community; the Centre for the Future of Intelligence, funded by a £10m Leverhulme grant; MIRI; the CMU ethics and AI project, funded by a $10m K&L Gates grant; and the recently announced $27m grant to Harvard and MIT. However, one of the most important groups is The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems (AI/AS), for which our Executive Director serves as Vice-Chair. The IEEE produces a report each year comprising the work of over 150 experts from around the world. So far, twelve of the recommendations made by those committees have started their way through the IEEE process by which they will become twelve standards in the area of ethical design of AI/AS.