Balancing Innovation, Economics, and Regulation to Increase Responsible AI Adoption

Panel discussion at the Responsible AI for Social Empowerment (RAISE) 2020 conference, India

AnandSRao
The Startup

--

Source: Photos by Tingey Injury Law Firm, Diego PH, and Sharon McCutcheon on Unsplash

In a recent conference on Responsible AI for Social Empowerment (RAISE), held in India, the topic of discussion was balancing regulation and innovation. On the one hand, some industry leaders and professional groups are urging policy makers not to regulate AI systems too soon; on the other, some civil liberty groups, joined by a number of companies, are urging policy makers to ban certain types of AI systems outright and regulate the rest. This debate has intensified over the past few months with the growing use of facial recognition systems and contact-tracing applications globally.

Addressing this issue properly requires us to answer three questions: Why do we need to regulate? What do we regulate? And how do we regulate?

Why Regulate? — The Economics of AI

The primary argument for regulating AI is that AI can provide enormous value to human society but can also create significant harm when misused. This harm could be near-term or long-term. In the long term, the potential for AI systems to learn faster and better than humans, coupled with potential misalignment between the objectives of human society and those of AI, could pose an existential threat to humanity. In the near term, the AI systems of today, if not built with the right safeguards, tests, and methodologies, could turn out to be biased, opaque, unsafe, or brittle.

The argument against regulation is that AI is still in the early stages of development, and heavy-handed regulation across the broad range of technologies considered AI could stifle innovation and slow or prevent the realization of the huge value that has been projected for AI.

In addition to these arguments, we should also consider the economics of AI. The development of AI is facilitated or inhibited by three feedback loops:

  • Data network effects: Machine learning systems require large volumes of data for training, so the more data we have, the better or smarter the AI systems become. With smarter systems, companies can better personalize their services and attract more customers to their AI system. With more customers using the system, you accumulate more data, which in turn makes the AI system smarter, and this virtuous cycle continues (a simple simulation of this loop appears after this list).
Figure 1: Data Network Effects (Source: Created by author)
  • Cognitive network effects: Building good machine learning or AI systems also requires domain expertise to label the data and technical expertise to train, deploy, and monitor the systems. Having the right talent, or cognitive capital, results in smarter AI systems. With smarter systems, one can attract more customers, and with more customers comes more revenue. Given the low variable cost of adding new customers, more customers can also mean greater margins. With increased margins and profits, the company can invest more and attract better talent, once again resulting in a virtuous cycle.
Figure 2: Cognitive Network Effects (Source: Created by author)
  • Trust network effects: Adoption of AI systems requires customers to trust both the system and its provider. With a trusted brand and product, one can attract more customers; as more customers use your AI platform, your brand improves further, again resulting in a virtuous cycle.

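To make these loops concrete, the sketch below simulates the data network effect as a simple discrete feedback loop: users generate data, data improves model quality, and quality attracts more users. The functional forms and constants (one data unit per user per period, a square-root quality curve, a capped growth rate) are illustrative assumptions of mine, not figures from the panel.

```python
# Minimal, illustrative simulation of the data network effect.
# All functional forms and constants are assumptions for illustration.

def simulate_data_network_effect(steps: int = 8, initial_users: int = 1_000) -> None:
    """More users -> more data -> smarter model -> more users."""
    users = initial_users
    data = 0.0
    for t in range(steps):
        data += users                        # each user contributes one unit of data per period
        quality = data ** 0.5                # diminishing returns of data on model quality
        growth = min(0.001 * quality, 0.5)   # quality drives user growth, capped per period
        users = int(users * (1 + growth))
        print(f"t={t}: users={users:>9,}  data={data:>12,.0f}  quality={quality:8.1f}")

if __name__ == "__main__":
    simulate_data_network_effect()
```

Even with diminishing returns built in, user growth compounds period over period, which is why an early data lead is so hard for competitors to overcome.
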
These three network effects interact with one another, and as a result we have seen some AI companies grow very large very quickly, attracting talent and building their brands. In the absence of any regulation of AI, these three network effects could result in global or regional monopolies that stifle competition and reduce overall innovation in the field.

In summary, the economics of AI, especially the three network effects, should also be factored into any debate on innovation vs. regulation. Ironically, in this case innovation might actually require regulation, and the debate is no longer one of “innovation vs. regulation” but one of “regulation for innovation”.

What to Regulate? — A Call for Human-Centered AI

In a recent article, I reviewed four ways in which AI is being applied today. These four ways are aligned along two dimensions: (a) whether a human is in the loop and (b) whether the AI interacts with its environment in a hardwired/specific or an adaptive way. The four combinations are:

  • No human-in-the-loop, hardwired/specific interaction: automated intelligence
  • Human-in-the-loop, hardwired/specific interaction: assisted intelligence
  • Human-in-the-loop, adaptive interaction: augmented intelligence
  • No human-in-the-loop, adaptive interaction: autonomous intelligence
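
To make the taxonomy easy to apply, the snippet below encodes the two dimensions and the four resulting categories. This is a minimal sketch; the enum and function names are my own, not from the original article.

```python
# A minimal sketch of the four-quadrant taxonomy described above. The
# category names follow the article; the mapping code is illustrative.

from enum import Enum

class AIType(Enum):
    AUTOMATED = "Automated intelligence"    # no human in the loop, hardwired/specific
    ASSISTED = "Assisted intelligence"      # human in the loop, hardwired/specific
    AUGMENTED = "Augmented intelligence"    # human in the loop, adaptive
    AUTONOMOUS = "Autonomous intelligence"  # no human in the loop, adaptive

def classify(human_in_the_loop: bool, adaptive: bool) -> AIType:
    """Map the two dimensions onto the four types of AI."""
    if human_in_the_loop:
        return AIType.AUGMENTED if adaptive else AIType.ASSISTED
    return AIType.AUTONOMOUS if adaptive else AIType.AUTOMATED

# Example: a system that adapts to its environment with no human
# in the loop falls under autonomous intelligence.
print(classify(human_in_the_loop=False, adaptive=True).value)  # Autonomous intelligence
```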

In countries like India, with a large and skilled labor pool, policy makers should focus on AI that explicitly includes a human in the loop (i.e., assisted and augmented intelligence). This would allow these countries to use their skilled labor to make the most of the cognitive network effect we saw earlier. In addition, given the large addressable market, the data network effects will also come into play. By contrast, focusing on automated or autonomous intelligence, other than in the limited cases related to human safety, might be detrimental to society. Automating large parts of the economy could result in large-scale unemployment and a greater societal burden than the efficiency savings gained from automation.

In summary, when we look at what to regulate, we should be cognizant of these four ways of applying AI and favor assisted/augmented intelligence over automated/autonomous intelligence.

How to Regulate? — Creation of Data Trusts, Coops, and Exchanges

There have been a number of proposals in recent times to enable the collection, organization, curation, sharing, and exchange of data. Government institutions, private companies, and public-private partnerships have all been setting up such data-sharing entities. Their business models include the following (a toy code sketch contrasting the three models follows this list):

  • Coops: These entities are set up solely by the producers and consumers of the data, who generally happen to be the same parties. For example, individuals may share their genomic data with a cooperative for the benefit of scientific research and can also use the data, under specific conditions, for their individual benefit.
  • Trusts: In this model, an independent person, group, or entity acts as the steward of the data and takes on fiduciary responsibility. For example, all publicly available company data could be pooled and governed as a corporate data trust.
  • Exchanges: In this model, customers, whether individuals or institutions, can sell their data and also buy data from others in the marketplace. The exchange could operate in fiat currencies or cryptocurrencies, and these exchanges are typically for-profit entities, unlike the other two. For example, individuals can sell their browser histories or exercise statistics to companies willing to buy them and be compensated in cash or cryptocurrency.

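To contrast the three governance models, here is a toy sketch in Python. The classes and access rules are illustrative assumptions about each model's governance, not real institutions, standards, or APIs.

```python
# Toy sketch of the three data-sharing models; all rules are illustrative.

from dataclasses import dataclass, field

@dataclass
class DataCoop:
    """Members both contribute and use the data; no outside buyers."""
    members: set = field(default_factory=set)
    records: list = field(default_factory=list)

    def contribute(self, member: str, record: str) -> None:
        self.members.add(member)
        self.records.append(record)

    def query(self, member: str) -> list:
        if member not in self.members:
            raise PermissionError("Only contributing members may use coop data")
        return self.records

@dataclass
class DataTrust:
    """An independent steward grants access under fiduciary rules."""
    steward: str
    records: list = field(default_factory=list)

    def grant_access(self, requester: str, purpose: str) -> bool:
        # Illustrative fiduciary check: the steward approves only stated research use.
        return purpose == "research"

@dataclass
class DataExchange:
    """A for-profit marketplace where data is bought and sold."""
    listings: dict = field(default_factory=dict)  # seller -> asking price

    def list_data(self, seller: str, price: float) -> None:
        self.listings[seller] = price

    def buy(self, seller: str, offer: float) -> bool:
        return offer >= self.listings.get(seller, float("inf"))
```

The key difference the sketch highlights is who controls access: the members themselves in a coop, an independent fiduciary in a trust, and the price mechanism in an exchange.
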
While these business models are primarily for sharing data, similar models are applicable for sharing insights. Providing more regulatory clarity and facilitating the creation of such data institutions could go a long way toward building trust in AI. While this will not eliminate all the risks associated with AI, it does address access to one of the critical raw materials for building AI systems: data.

In conclusion, innovation and regulation should not be viewed as a zero-sum game. Some regulations may be necessary to bring greater clarity to new business models and may also be required to foster more competition and innovation. In addition to the technology of AI, we also need to understand the economics of AI when we consider regulation.
