Preempting a generative AI monopoly

Feb 05, 2023 - Last updated at Feb 05, 2023

CAMBRIDGE — ChatGPT, the new artificial-intelligence chatbot developed by the San Francisco-based research laboratory OpenAI, has taken the world by storm. Already hailed as a milestone in the evolution of so-called large language models (LLMs), the world’s most famous generative AI raises important questions about who controls this nascent market and whether these powerful technologies serve the public interest.

ChatGPT quickly became a global sensation after OpenAI released it last November, attracting millions of users and allegedly killing the student essay. The chatbot can answer questions in conversational English (along with several other languages) and perform other tasks, such as writing computer code.

The answers that ChatGPT provides are fluent and compelling. Despite its facility for language, however, it can sometimes make mistakes or generate factual falsehoods, a phenomenon known among AI researchers as “hallucination”. The fear of fabricated references has recently led several scientific journals to ban or restrict the use of ChatGPT and similar tools in academic papers. But while the chatbot might struggle with fact-checking, it is seemingly less prone to error when it comes to programming and can easily write efficient and elegant code.

For all its flaws, ChatGPT obviously represents a major technological breakthrough, which is why Microsoft recently announced a “multiyear, multibillion-dollar investment” in OpenAI, reportedly amounting to $10 billion, on top of the $1 billion it had already committed to the company. Originally a nonprofit, OpenAI is now a for-profit corporation valued at $29 billion. And while it has pledged to cap its profits, the cap is loose-fitting: investors can still earn returns of up to 10,000 per cent.

ChatGPT is powered by GPT-3, a powerful LLM trained on vast amounts of text to generate natural-sounding, human-like answers. While it is currently the world’s most celebrated generative AI, other Big Tech companies such as Google and Meta have been developing their own versions. While it is still unclear how these chatbots will be monetised, a paid version of ChatGPT is reportedly forthcoming, with OpenAI projecting $1 billion in revenues by 2024.

To be sure, bad actors could abuse these tools for various illicit schemes, such as sophisticated online scams or writing malware. But the technology’s prospective applications, from coding to protein discovery, offer cause for optimism. McKinsey, for example, estimates that 50-60 per cent of companies have already incorporated AI-powered tools like chatbots into their operations. By expanding the use of LLMs, companies could improve efficiency and productivity.

But the massive, immensely costly, and rapidly increasing computing power needed to train and maintain generative AI tools represents a substantial barrier to entry that could lead to market concentration. The potential for monopolisation, together with the risk of abuse, underscores the urgent need for policymakers to consider the implications of this technological breakthrough.

Fortunately, competition authorities in the United States and elsewhere seem to be aware of these risks. The United Kingdom’s communications regulator, Ofcom, launched an investigation of the cloud computing market, on which all large AI models rely, late last year, while the US Federal Trade Commission is currently investigating Amazon Web Services (AWS), which, along with Google and Microsoft Azure, dominates the market. These investigations could have far-reaching implications for AI-powered services, which rely on enormous economies of scale.

But it is not clear what, if anything, policymakers should do. On one hand, if regulators do nothing, the generative-AI market could end up dominated by one or two companies, like every digital market before it. On the other hand, the emergence of open-source LLMs, such as the text-to-image tool Stable Diffusion, could ensure that the market remains competitive without further intervention.

Even if for-profit models become dominant, however, open-source competitors could chip away at their market share, just as Mozilla’s Firefox did to Google’s Chrome browser and Android did to Apple’s mobile operating system, iOS. Then again, cloud computing giants like AWS and Microsoft Azure could also leverage generative AI products to increase their market power.

As was debated at the recent World Economic Forum meeting in Davos, generative AI is too powerful and potentially transformative to leave its fate in the hands of a few dominant companies. But while there is a clear demand for regulatory intervention, the accelerated pace of technological advance leaves governments at a huge disadvantage.

To ensure that the public interest is represented at the technological frontier, the world needs a public alternative to for-profit LLMs. Democratic governments could form a multilateral body that would develop means to prevent fakery, trolling, and other online harms, like a CERN for generative AI. Alternatively, they could establish a publicly funded competitor with a different business model and incentives, fostering competition between the two models.

Whichever path global policymakers choose, standing still is not an option. It is abundantly clear that leaving it to the market to decide how these powerful technologies are used, and by whom, is a very risky proposition.


Diane Coyle, professor of Public Policy at the University of Cambridge, is the author, most recently, of “Cogs and Monsters: What Economics Is, and What It Should Be” (Princeton University Press, 2021). Copyright: Project Syndicate, 2023. www.project-syndicate.org
