The Case for Practical and Responsible AI

March 3, 2017

Most industry forecasts point to rapid increases in the use of AI in virtually every domain: healthcare, public safety, transportation, commerce, education, security, and entertainment, to name a few. Studies suggest that the proper use of AI can drive significant societal and business benefits, ranging from a sharp decrease in road casualties as self-driving cars become more common to meaningful reductions in the runaway costs of chronic disease management in healthcare. With two billion people expected to be over 60 years old by 2050, AI has the power to help them live healthy and productive lives while enabling younger generations to start the businesses of the future, in IoT, smart cities, and technologies yet to be discovered, with agreements executed as smart contracts using AI and blockchain. However, all of these areas of potential must be developed ethically, because the ability to infringe an individual’s human rights becomes ever easier.

Leading business, academic and industry experts have voiced serious concerns about the perils of AI:

  • Are computers soon going to be our new digital overlords?

  • Will AI replace our jobs?

  • How will AI change the skills composition and educational requirements of next-generation students and white-collar workers?

These discussions are picking up momentum as AI systems make certain human jobs redundant. While jobs will be lost, AI is, more importantly, expected to change the character of future jobs. Just as the steam engine augmented and amplified physical strength, AI will pair humans and machines to augment and amplify our mental capacities, enabling us to achieve more. The key takeaways of a recent Infosys poll of 1,600 business leaders were:

  • “A clear link between an organization’s revenue growth and its AI maturity: Organizations who report faster growth in revenue over the past three years were also more likely to be further ahead when it comes to AI maturity. AI is perceived as a long-term strategic priority for innovation, with 76 percent of the respondents citing AI as fundamental to the success of their organization’s strategy, and 64% believing that their organization’s future growth is dependent on large-scale AI adoption.

  • While there are ethical and job related concerns – 62% believe that stringent ethical standards are needed to ensure the success of AI – most respondents seem optimistic about redeploying displaced employees with higher value work.

  • The majority, 84%, plans to train employees about the benefits and use of AI, and 80 percent plan to retrain or redeploy impacted employees.”

https://www.infosys.com/aimaturity/

In order to turn deep learning into valuable, ethical applications, all AI-powered systems must be designed and introduced to cover at least the following core practical characteristics of Responsibility (as defined by one of AI Austin’s founding advocates, CognitiveScale, an Austin-based startup founded by senior executives from IBM Watson):

  1. Responsible AI systems deliver positive business and societal outcomes and impact

  2. Responsible AI systems augment and scale human intelligence and experience

  3. Responsible AI systems are transparent, accountable, and trustworthy

Delivering positive outcomes and impact:

AI Austin believes that all AI systems should be designed with clear, measurable business outcomes in mind while taking societal impact into consideration. Societal impact should include economic, demographic, or regulatory considerations such as ethics, law, privacy, trust, and accountability.

Augmenting and scaling human intelligence and experience:

Responsible AI approaches should be geared not towards replacing people with machines but towards empowering every person and organization to achieve more. Data of all kinds is exploding and outstripping the human capacity to understand the meaning hidden within it and to use that meaning to improve business and society. AI should be used as a powerful technology that helps humans understand and act on the hidden meaning within this data in a way that is quick, precise, and evidence-based.

Building systems that are transparent, accountable, and trustworthy:

Researchers and developers must now shift their focus towards improving transparency and building trust rather than raw performance. Most AI implementations today do not take these considerations into account. Specifically, AI Austin believes that, in terms of transparency, Responsible AI systems should be able to clearly explain how they do what they do. Algorithm development has so far been driven by the goal of improving performance, leading to opaque black boxes. We are already seeing black-box systems rejected by users, regulators, and companies because they fail the regulatory, compliance, and risk requirements of organizations handling sensitive personal health, financial, and other information.
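As a toy illustration of what “able to explain how they do what they do” can mean in practice, the following sketch (assuming Python and scikit-learn, neither of which is mentioned in the original post) trains an intentionally simple, interpretable classifier and prints the features that most strongly drive its predictions; a black-box model of similar accuracy would offer no such direct readout.

```python
# A minimal sketch of a "transparent" model: a linear classifier whose learned
# coefficients can be read directly as an explanation of its behaviour.
# Assumes scikit-learn; the bundled breast-cancer dataset is used only as a
# stand-in for the kind of sensitive health data an organization might handle.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# The coefficients are the "explanation": each feature's weight shows how
# strongly, and in which direction, it pushes the prediction.
coefs = model.named_steps["logisticregression"].coef_[0]
ranked = sorted(zip(data.feature_names, coefs), key=lambda p: abs(p[1]), reverse=True)
for name, weight in ranked[:5]:
    print(f"{name:25s} {weight:+.3f}")
```

This is only a sketch of the design trade-off the paragraph describes: choosing, where possible, models whose reasoning can be inspected rather than ones that must be taken on faith.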

Developments in machine learning are rapidly enabling AI systems to decide and act autonomously, without direct human control, whether identifying pictures of cats on the internet or powering self-driving cars. To mitigate risk, AI systems should also be accountable: they should be able to explain and justify their decisions. This will become especially important as the use of AI grows, creating new processes and longer chains of responsibility (and risk) powered by machine intelligence.
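One common engineering pattern for this kind of accountability is an audit trail: every automated decision is recorded together with the inputs, model version, and justification needed to answer for it later. The sketch below is purely illustrative; the record fields, file format, and the insurance-triage scenario are assumptions, not anything prescribed by AI Austin or CognitiveScale.

```python
# An illustrative audit-trail pattern: each automated decision is written to an
# append-only log so it can later be explained, justified, and attributed.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    model_version: str               # which model made the call
    inputs: dict                     # the evidence it saw
    decision: str                    # what it decided
    justification: str               # a human-readable reason
    reviewer: Optional[str] = None   # person accountable for any override
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: DecisionRecord, path: str = "decisions.log") -> None:
    """Append one decision to the audit log (one JSON object per line)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical usage: a claims-triage system records why it auto-approved a claim.
log_decision(DecisionRecord(
    model_version="claims-triage-0.3.1",
    inputs={"claim_amount": 1200, "prior_claims": 0},
    decision="approve",
    justification="Amount and history fall below the manual-review threshold.",
))
```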

Finally, in addition to delivering positive business outcomes by augmenting human ability and experience, AI systems should be trustworthy. They should be inclusive and respectful, both understanding and complying with existing human morality and ethical norms and practices.