AI and the productivity imperative
Aug 10, 2023 - Last updated at Aug 10, 2023
MILAN — Around the world, supply is struggling to keep up with demand. Inflation remains stubbornly high, despite aggressive interest-rate hikes. The global workforce is aging rapidly. Labour shortages are ubiquitous and persistent.
These are just some of the forces behind the productivity challenge facing the global economy. It has become increasingly clear that we must harness artificial intelligence to meet it.
Over the last four decades, rapid emerging-economy growth brought a surge in productive capacity, which acted as a powerful supply-side disinflationary force. China, in particular, served as a robust engine of growth. But that emerging-economy growth engine has weakened substantially in recent years. China’s post-pandemic growth is well below potential and declining.
Moreover, geopolitical tensions, pandemic-era shocks and climate change are disrupting global supply chains. A combination of market incentives and new policy priorities, such as “de-risking” and boosting resilience, is impelling governments to pursue the (very expensive) process of supply-chain diversification. Meanwhile, sovereign-debt levels are high and rising, reducing countries’ fiscal capacity to undertake growth-oriented public investment and destabilising some economies.
These are secular trends, meaning that they are likely to remain features of the global economy in the coming decade. Supply constraints and rising costs will subdue growth. Inflation will remain a persistent threat, requiring higher interest rates that raise the cost of capital. Urgently needed large-scale investments in the energy transition will be extremely difficult, economically, politically and socially, to pursue; without them, however, climate-related disruptions will worsen.
But there is promising news. As Gordon Brown, Mohamed El-Erian, and I argue in our forthcoming book, “Permacrisis: A Plan to Fix a Fractured World”, a broad-based surge in productivity could substantially change this picture. And, with AI technology advancing rapidly, this is hardly pie in the sky. The key is to ensure that productivity growth is a central focus of AI innovation and applications in the coming years.
Even as AI advanced from handwriting recognition to speech recognition to image and object recognition, the conventional wisdom held that the technology worked best in well-defined domains. It lacked a human-like capacity to detect which domain it was working in and to switch domains as needed.
That changed with the rise of large language models (LLMs) and generative AI more broadly. LLMs are capable of comprehending language and appear able to detect and switch domains independently, perhaps bringing them one step closer to artificial general intelligence. The potential for broad-based productivity enhancement is considerable.
LLMs function as general-purpose platforms for building applications for specific uses throughout the knowledge economy. Because they understand and produce ordinary language, anyone can use them. ChatGPT reportedly attracted 100 million users in the two months after its public release.
Moreover, LLMs are trained on a vast amount of digital material, so the range of topics that they can address is enormous. This combination of accessibility and coverage means that LLMs have a much broader array of potential applications than any past digital technology, even previous AI-based tools.
The race to develop such applications, linked to a wide range of sectors and job categories, has already begun. OpenAI, the firm behind ChatGPT, has created an application programming interface (API) that allows others to build their own AI solutions on the LLM base, adding data and specialised training for the specific use they are targeting.
A recent case study by MIT economist Erik Brynjolfsson and his co-authors provides an early indication of the productivity potential. Access to a generative-AI-based tool trained on audio recordings of customer-service interactions and performance metrics increased productivity by 14 per cent, on average, as measured by issues resolved per hour.
Less-experienced customer-service agents benefited the most from the tool, indicating that AI, which encapsulates and filters the accumulated experience of an entire system over time, can help workers “move down the experience curve” faster. This “levelling-up” effect will probably be a common feature of AI applications, particularly those that fit this “digital-assistant” model.
There are many versions of that model, which may take advantage of the ability of AI and ambient-intelligence systems to track and record outcomes. For doctors seeing patients, or making rounds in a hospital, AI tools can produce a first draft of required reports, which the doctor then need only edit. Estimates of the time savings vary, but all are very large.
To be sure, AI may well also enable the automation of many tasks and the replacement of human workers. But AI tools are fundamentally prediction machines; they make mistakes, fabricate information, and perpetuate the biases in the data on which they have been trained. Given this, prudent applications are unlikely to exclude humans any time soon.
To realise AI’s productivity-enhancing potential, policymakers will have to act in several areas. For starters, innovation, experimentation, and development of applications depend on widespread access to LLMs. Perhaps there will be enough competition to ensure access at a reasonable cost. But given how few companies have the computing capacity to train LLMs, regulators must remain vigilant on this front.
Moreover, government will need to collaborate with industry and researchers to establish widely accepted principles for the responsible management and use of data, and implement regulations to uphold these principles. Striking the right balance between security and openness is essential; the rules cannot be so restrictive that they impede experimentation and innovation.
Finally, AI researchers need access to considerable computing power to test and train new AI models. Government investments in a cloud-computing system would yield long-term progress in AI and robotics, with far-reaching economic benefits. In fact, effective and forward-looking management of AI’s development, together with a renewed commitment to global cooperation, could well be the key to a more prosperous, inclusive, and sustainable future.
Michael Spence, a Nobel laureate in economics, is emeritus professor of Economics and a former dean of the Graduate School of Business at Stanford University. Copyright: Project Syndicate, 2023. www.project-syndicate.org