By Sudipto Ghosh at AIThority
Tell us about your journey in technology and how you started in this space.
I have an unconventional background compared to most in the tech industry. While it’s not what I originally set out to do, my curiosity continues to lead me to the intersection of innovative technology and its impact on society.
I started my career in politics, having gone to school for Political Science and Economics. Having grown up in an era of evidence-based decision making for campaigns, I was surprised, when I began my first role in a large organization, to see that basic data management practices weren't in place: data wasn't being used to provide more direct and improved services, to assist with decision making, or to evaluate programs.
Starting in municipal government, I had the chance to work for a progressive CIO who was moving us toward a digital government well before it became trendy. Unbeknownst to me at the time, this turned out to be excellent training for what was to come when I moved into more technical roles in the federal government. Charged with leading the development of Canada's Open Government Portal, and supplementing that work with additional courses in enterprise architecture, I gained the experience and knowledge to better understand Big Data challenges.
Determined to deal with the foundational issues underlying the promises of open and digital government, I was quickly tasked with developing policies for the enterprise's use of data, open source, and AI. I've been fortunate to have many mentors, colleagues, and experts available to answer my never-ending questions, allowing me to learn how to appropriately apply technology to real-life issues.
Our audience would like to know about your daily interaction with new-age enterprise-level technologies like AI, Machine Learning and Robotics. Please tell us about it.
Since my job revolves around AI, I have the pleasure of interacting regularly with experts from all across the field. I've been able to collaborate with those who are implementing RPA systems to expedite basic business functions, developing cognitive automation tools to better interpret financial decisions, or theorizing about the implications of AGI in the healthcare system.
However, my focus is the ethical and responsible implications of AI. In addition to the technicians and practitioners building and implementing these systems, I also have the opportunity to work with academics, governments, researchers, journalists, and standards organizations that are ensuring these tools are implemented in a responsible and meaningful way. This has included fascinating conversations with anthropologists, economists, lawyers, and human rights experts. Understanding how all of these perspectives need to be brought together and applied to these technologies is at the heart of what we are doing at AI Global.
Tell us how businesses can bridge the “Innovation Achievement Gap” in the AI development industry.
I believe theories such as training models with more contextual information about the conditions in which they are making decisions make a lot of sense, though I would have no idea how to start making that happen. That said, I do think that before we advance too far in an effort to close this gap, we need to better understand how these systems work and how they are learning.
As a society, we also need to find a way to have a conversation to determine whether or not we actually want to develop certain systems. I feel like a lot of our discourse around AI has been focused on the advancement of technology for technology’s sake versus actually questioning what outcomes we are targeting with this technology. If we reflect more on this, it might also bring out other subject matter experts that will help to provide the appropriate contextual information needed to close that gap.
While I personally define AI as a broad grouping of technologies with both shallow and deep capabilities, I find it imperative that the right people are at the table when designing and implementing these systems, which is not typically how I've seen technology developed in the past.
What is Responsible AI, and how do Ethical AI standards fit into it?
Most people use the terms responsible AI and ethical AI interchangeably. I prefer responsible AI because I think it is all-encompassing and removes any sense of judgment about what a person, organization, or country deems ethical. While there are standards associated with ethics, these standards are often not common practice across domains and regions.
Since AI tools transcend both domain and region, applying ethical standards becomes incredibly tricky. What is important is that we take these ethical standards, including human rights standards, and think about how to apply them to our machine interactions in a similar way to how we think about applying them to our human policies and processes.
Drawing on academic and industry best practices, AI Global looks at responsibility from the dimensions of fairness and bias, explainability and interpretability, data quality and rights, compliance and accountability, and system robustness. We are working with subject matter experts across various industries to navigate existing standards and work to set standards within domains where they don’t currently exist. It will take a long time to think through all of the ethical implications and determine thresholds for what’s good or bad.
But right now, responsible AI means raising awareness for those who are designing, developing and deploying AI systems to think about these dimensions and direct them to principles and rules that should be followed.
How do you see the global AI-as-a-service scenario evolving?
I think we’ve already seen several service companies recognizing how AI has become an important component of their service bundles. Whether it is improving business processing with RPA or supporting security measures with cognitive automation, AI tools are often augmenting traditional lines of service.
What's most interesting to me is seeing rapid growth in services provided by the open-source community. Recognizing that a lot of these products can be reused across different organizations, we are starting to see services like chatbots available through open-source repositories, in addition to many of the building blocks for AI, like PyTorch and TensorFlow.
In my last position as Director for Data and Digital at the Government of Canada, we built an open-source tool to support the responsible implementation of AI. We developed both the policy and companion tool, the Algorithmic Impact Assessment, in the open. Our dynamic online assessment tool was also open-sourced, and can now be used by other governments or organizations.
Which technologies have been the biggest disruptors in this industry?
In my opinion, the most disruptive technologies are edge computing and IoT tools. While they might not be the most technologically advanced, they are being implemented in such a discreet and seamless way, yet they completely change the way that we function.
IoT adoption happens incrementally, with a wealth of data being distributed, compiled, and analyzed. Beyond security concerns, I worry that we are not having an informed dialogue about the societal pros and cons of these advancements.
Would you agree that AI can successfully fill in for the lack of quality talent in the tech industry? For which human skills are leaders hiring and training?
From what I can see, AI is not replacing so much as augmenting. This will definitely lead to displacement, but not to the extent that has incited a lot of fear. That said, there are certain industries that are getting hit harder, and it will be imperative to have the right training programs or alternative job opportunities in place.
What are your observations about tech leadership trends in North America versus that in Asia, particularly focusing on the US, Canada versus China and India?
I can see that everyone is working furiously to understand how AI technologies can best be used, and to support the growth of this work as it rapidly expands. I also see that these countries have recognized the importance of responsible and ethical use of AI systems. I think it is essential that we bring together all possible perspectives to tackle the biggest challenges humanity is facing.
What are your predictions on the future of AI Development in 2019-2024? How can business owners safeguard against these challenges?
I predict that there will be greater mass adoption and use of AI systems without understanding the ethical implications. Business owners should connect with AI Global or other organizations to learn more about how they can ensure they are developing their systems in a responsible way.
What is your opinion on the “Weaponization of AI and Machine Learning”? How do you promote your ideas in the modern Digital economy?
Weaponization is real. It’s happening in significant and seemingly insignificant ways around us all of the time. For example, automated selection processes choose someone for a job based on gender or skin color, and sensors and monitors increasingly tabulate our every movement. These are outcomes that might not adhere to the initial intent. This is why a responsible process and practice is required when developing and deploying these systems.
We encourage businesses and organizations to do so by being vocal on social media, fostering discussions at conferences and sharing our message through in-person or online consultations.
What digital technology start-ups and labs are you keenly following?
I'm very interested in how we take best practices related to what responsibility and ethics mean and actually put them into action. Like most in this space, I follow the big organizations and industry players, but I'd like to highlight some that might be less familiar.
UK: Digital Catapult is looking to create a Digital Testbed to help developers understand how to approach their work in a responsible and ethical way. The Alan Turing Institute is doing great research, and doteveryone is working to understand root causes and solutions for responsibility in the broader tech space.
Canada: The Montreal Declaration has one of the most robust tools to help a designer or developer think through the different implications of how they are implementing AI. Canada's research hubs MILA, Vector, Amii, and Simon Fraser University are also doing some incredible work.
US: I'm most excited about the Data Nutrition Project, as they are trying to make it easier for people to understand the efficacy of their data, which I think is one of the most important foundational components in this space. I'm impressed with how CognitiveScale has put the AI Trust Index, developed in collaboration with us at AI Global, into action in their Cortex Certifai product. The FICO-like composite risk score analyzes any black-box decision-making model across five dimensions: fairness, explainability, robustness, data rights, and compliance.
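The idea of a composite score across such dimensions can be sketched roughly as follows. This is a hypothetical illustration, not CognitiveScale's actual methodology: the dimension names come from the interview, while the weights, 0-100 scale, and aggregation rule are invented for the example.

```python
# Hypothetical sketch: aggregating per-dimension assessments of an AI
# system into one composite risk score, loosely inspired by the
# FICO-like index described above. Dimension names come from the
# interview; weights, scale, and aggregation are assumptions.

DIMENSIONS = ["fairness", "explainability", "robustness",
              "data_rights", "compliance"]

def composite_score(scores, weights=None):
    """Combine per-dimension scores (each 0-100) into one 0-100 index.

    A weighted average keeps the composite on the same scale as the
    inputs, so a single threshold can flag models that need review.
    """
    if weights is None:
        weights = {d: 1.0 for d in DIMENSIONS}  # default: equal weighting
    total = sum(weights[d] for d in DIMENSIONS)
    return sum(scores[d] * weights[d] for d in DIMENSIONS) / total

model_scores = {"fairness": 80, "explainability": 60, "robustness": 70,
                "data_rights": 90, "compliance": 85}
print(composite_score(model_scores))  # equal-weight average: 77.0
```

A real assessment would of course need defensible per-dimension measurements; the point here is only that a single interpretable number can summarize several distinct responsibility dimensions.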
What technologies within your industry are you interested in?
I'm interested in the implications of all technologies, but when building out policy or regulatory work, it's the outcomes that interest me most. While I personally find technologies like Natural Language Processing and text-to-speech incredibly interesting for enhanced communication, I think it's equally important to ensure that people are treated fairly, independent of what language they speak.
As a tech leader, which industries do you think will be fastest to adopt AI with smooth efficiency? What are the new emerging markets for these technologies?
There has been astonishing change in legal and health services over the past few years. Any job where we can automate research and initial analysis while allowing highly capable humans to make the final decision is where I think we will see big uptake. These tools are increasingly packaged in easy-to-use cloud services, so you don't need a significant amount of knowledge or infrastructure to adopt them. There is a lot of research going into education and the military, but I don't see those transformations being as smooth.
Which superhero character/movie/sci-fi book are you most inspired by?
I watched a lot of Battlestar Galactica when I was younger, but that was also coupled with political drama, and books on former presidents by Doris Kearns Goodwin – so it’s hard to tell where the influence really comes from. More recently, I’ve really enjoyed the Jessica Jones series.
I love the balance of superpowers and imperfection. She often questions whether, when you have extreme capabilities, you should use them. I think that we need to have more frank conversations about this in our current context. The more I do this work, the more I look back to history and philosophy to understand how we came up with a lot of our current constructs. It helps us understand how we can avoid some of the mistakes we've made up to this point.
What’s your smartest work-related shortcut or productivity hack?
I'm fortunate to have the ability to focus and work from almost anywhere. I was recently traveling through Europe and was able to balance enjoying real life and history with getting work done. Focus is super important to me, and to get there, I often need good headphones and a playlist that fits my mood.