Responsible use of AI should start with a detailed assessment of the key risks posed by AI [1], followed by a sound understanding of the principles to be upheld [2], and then by the governance of AI from a top-down and end-to-end perspective [3]. We have discussed each of these in our previous articles [1, 2, 3]. In this article, we focus on the first line of defense and dive into the nine-step data science process [4], spanning value scoping, value discovery, value delivery, and value stewardship, and highlight the dimensions of governance.
Given the focus on governance, we look to answer…
“‘It will change everything’: DeepMind’s AI makes gigantic leap in solving protein structures,” declared the well-known journal Nature on 30th November 2020. Shortly after, Business Insider reported that “DeepMind faces criticism from scientists skeptical of ‘breakthrough’” and that one professor at the University of California branded DeepMind’s announcement ‘laughable’. Unfortunately, this is not the first time we have had proponents and opponents of AI debate the latest achievement by an AI algorithm.
Let’s first consider the viewpoint of AI proponents. At the core of all these AI achievements is a narrow, well-specified problem, with the criteria for…
Over the past couple of years, AI risks and the ethical considerations of AI have come to the forefront. With the increased use of AI for contact tracing, workforce safety and planning, demand forecasting, and managing supply chain disruption during the pandemic, a number of risks around the privacy, bias, safety, robustness, and explainability of AI models have emerged.
AI risk identification, assessment, and mitigation vary by the level of AI maturity, company size, industry sector, and country of domicile. PwC’s Global Responsible AI survey of more than 1,000 C-level executives, conducted in November 2020, reveals a number of insights as it relates to…
Modeling for uncertain times: Approaches, behaviors, and outcomes
The spread of COVID-19 — first in China and South Korea, and then in Europe and the United States — was swift and caught most governments, companies, and citizens off guard. This global health crisis developed into an economic crisis and a supply chain crisis within weeks. Fewer than 100,000 global confirmed cases in early March 2020 had ballooned to more than 101 million by January 28, 2021, with more than 2.1 million deaths.
Every aspect of life for almost every individual on this planet has been impacted by COVID-19. From…
There is little doubt that COVID-19 has been the single most influential driver of our lives and livelihoods in 2020. The impact of the pandemic, however, has been very mixed for certain groups of society, certain industry sectors, certain companies, and certain technologies. Three key themes emerge from PwC’s Global Responsible AI survey of more than 1,000 C-level executives, conducted in November 2020. The survey respondents spanned seven industry sectors (i.e., …
Responsible AI is a broad topic covering multiple dimensions of the socio-technical system called Artificial Intelligence. We refer to AI as a socio-technical system here because it captures not just the technology but also how humans interact with it. In the first part of this series, we looked at AI risks from five dimensions. In the second part, we looked at the ten principles of Responsible AI for corporates.
In this article, we dive into AI Governance — what do we really mean by governance? What does AI governance entail? …
In the first part of this series, we looked at AI risks from five dimensions. We talked about the dark side of AI, without really going into how we would manage and mitigate these risks. In this and subsequent articles, we will look at how to exploit the benefits of AI, while at the same time guarding against the risks.
A quick plot of search trends shows that the terms “AI Ethics”, “Ethical AI”, “Beneficial AI”, “Trustworthy AI”, and “Responsible AI” have become increasingly popular over the past five years. In my (first author’s) early exploits of AI in the…
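As a hedged illustration (not the plot from the article), a search-interest chart like the one described above could be reproduced with the unofficial pytrends client for Google Trends; the package choice, timeframe, and plotting details below are assumptions.

```python
# A rough sketch, assuming the unofficial "pytrends" package and matplotlib
# are installed; illustrative only, not the exact plot from the article.
import matplotlib.pyplot as plt
from pytrends.request import TrendReq

terms = ["AI Ethics", "Ethical AI", "Beneficial AI", "Trustworthy AI", "Responsible AI"]

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(kw_list=terms, timeframe="today 5-y")  # last five years

interest = pytrends.interest_over_time()  # one column per term, plus "isPartial"
interest.drop(columns="isPartial").plot(figsize=(10, 5), title="Relative Google search interest")
plt.ylabel("Search interest (0-100)")
plt.tight_layout()
plt.show()
```

Note that Google Trends reports relative interest on a 0-100 scale rather than absolute search volume, so the curves show popularity relative to the peak in the chart.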
Get started on your journey towards Responsible AI
Thirty years from now, will we look back at 2020 as the year when AI discriminated against minority groups; when disinformation, propagated by special interest groups and aided by AI-based personalization, caused political instability; and when deep fakes and other AI-supported security infringements rendered AI untrustworthy and propelled us into yet another AI winter? Or will we look upon 2020 as the year that provided the impetus for world bodies, corporates, and individuals to come together to ban autonomous weapons systems and to assess, monitor, and govern sensitive AI technologies like deep fakes, facial recognition…
In Part 1 of this series, we examined the key differences between software and models; in Part 2, we explored the twelve traps of conflating models with software; in Part 3, we looked at the evolution of models; and in Part 4, we went through the model lifecycle. Now, in our final part of the series, we address how the model lifecycle and the agile software development methodology should come together.
Based on our previous discussions, we are primarily concerned with how the model lifecycle process — with its iterative value discovery, value delivery, and value stewardship — can be…
At a recent conference on Responsible AI for Social Empowerment (RAISE), held in India, the topic of discussion was explainable AI. Explainable AI is a critical element of the broader discipline of responsible AI. Responsible AI encompasses ethics, regulations, and governance across a range of risks and issues related to AI, including bias, transparency, explicability, interpretability, robustness, safety, security, and privacy.
Interpretability and explainability are closely related topics. Interpretability operates at the model level, with the objective of understanding the decisions or predictions of the overall model. Explainability operates at the level of an individual instance of the model, with the objective…
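To make the model-level versus instance-level distinction concrete, here is a minimal sketch, assuming a simple linear classifier on a standard scikit-learn dataset (both assumptions for illustration, not drawn from the article): the global coefficients give a model-level view, while per-feature contributions to a single prediction give an instance-level view.

```python
# A minimal sketch contrasting model-level interpretability with
# instance-level explanation, assuming a linear classifier where both
# views can be read directly off the coefficients (illustrative only).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
y = data.target

model = LogisticRegression(max_iter=5000).fit(X, y)

# Interpretability (model level): global coefficients describe how the
# model behaves across all inputs.
weights = model.coef_[0]
top_global = np.argsort(np.abs(weights))[::-1][:5]
print("Top global drivers:",
      [(data.feature_names[i], round(weights[i], 3)) for i in top_global])

# Explainability (instance level): per-feature contributions to one
# prediction; for a linear model this is simply coefficient * feature value.
contrib = weights * X[0]
top_local = np.argsort(np.abs(contrib))[::-1][:5]
print("Top drivers for instance 0:",
      [(data.feature_names[i], round(contrib[i], 3)) for i in top_local])
```

For non-linear models, the same contrast typically requires dedicated techniques, such as global feature-importance measures for interpretability and local attribution methods for instance-level explanations.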
Global AI lead for PwC; researching, building, and advising clients on AI. Focused on the intersection of AI innovation, policy, economics, and application.