
Creeping towards dystopia

May 29, 2023 - Last updated at May 29, 2023

LONDON — With investors pouring billions of dollars into artificial intelligence (AI)-related startups, the generative AI frenzy is beginning to look like a speculative bubble akin to the Dutch tulip mania of the 1630s and the South Sea Bubble of the early eighteenth century. And, much like those episodes, the AI boom appears headed for an inevitable bust. Instead of creating new assets, it threatens to leave behind only mountains of debt.

Today’s AI hype is fueled by the belief that large language models like OpenAI’s newly released GPT-4 will be able to produce content that is virtually indistinguishable from output produced by humans. Investors are betting that advanced generative AI systems will effortlessly create text, music, images, and videos in any conceivable style in response to simple user prompts.

Amid the growing enthusiasm for generative AI, however, there are mounting concerns about its potential impact on the labor market. A recent report by Goldman Sachs on the “potentially large” economic effects of AI estimates that as many as 300 million jobs are at risk of being automated, including many skilled and white-collar jobs.

To be sure, many of the promises and perils linked to AI’s rise are still on the horizon. We have not yet managed to develop machines that possess the self-awareness and capacity for informed decision-making that align with most people’s understanding of intelligence. This is why many technologists advocate incorporating “moral rules” into AI systems before they surpass human capabilities.

But the real danger is not that generative AI will become autonomous, as many tech leaders would have us believe, but rather that it will be used to undermine human autonomy. Both “narrow” and “general purpose” AI systems that can perform tasks more efficiently than humans represent a remarkable opportunity for governments and corporations seeking to exert greater control over human behaviour.

As Shoshana Zuboff notes in her 2019 book The Age of Surveillance Capitalism, the evolution of digital technologies could lead to the emergence of “a new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction and sales”. The increasingly symbiotic relationship between government and private-sector surveillance, she observes, is partly the result of a national-security apparatus “galvanised by the attacks of 9/11” and intent on nurturing and appropriating emerging technologies to gain “total knowledge” of people’s behaviour and personal lives.

Palantir, the data-analytics company co-founded by billionaire investor Peter Thiel, is a case in point. Thiel, a prominent Republican donor, reportedly persuaded former US president Donald Trump’s administration to grant Palantir lucrative contracts to develop AI systems tailored for military use. In exchange, Palantir provides intelligence services to the US government and to spy agencies around the world.

In “A Voyage to Laputa”, the third part of Jonathan Swift’s Gulliver’s Travels, Captain Gulliver comes across a floating island inhabited by scientists and philosophers who have devised ingenious methods for detecting conspiracies. One of these methods involves scrutinising the “diet of all suspected persons”, as well as closely examining “their excrements”, including “the colour, the odour, the taste, the consistence, the crudeness or maturity of digestion”. While the modern state-surveillance apparatus focuses on probing e-mails rather than bodily functions, it has a similar objective: to uncover plots and conspiracies against “public order” or “national security” by penetrating the depths of people’s minds.

But the extent to which governments can spy on their citizens depends not only on the available technologies but also on the checks and balances provided by the political system. That is why China, whose regulatory system is entirely focused on preserving political stability and upholding “socialist values”, was able to establish the world’s most pervasive system of electronic state surveillance. It also helps explain why China is eager to position itself as a world leader in regulating generative AI.

In contrast, the European Union’s approach to regulation is centred on fundamental human rights, such as the rights to personal dignity, privacy, freedom from discrimination, and freedom of expression. Its regulatory frameworks emphasise privacy, consumer protection, product safety, and content moderation. While the United States relies on competition to safeguard consumer interests, the EU’s AI Act, which is expected to be finalised later this year, explicitly prohibits the use of user-generated data for “social scoring”.

The West’s “human-centered” approach to regulating AI, which emphasises protecting individuals from harm, contrasts sharply with China’s authoritarian model. But there is a clear and present danger that the two will ultimately converge. This looming threat is driven by the inherent conflict between the West’s commitment to individual rights and its national-security imperatives, which tend to take precedence over civil liberties in times of heightened geopolitical tensions. The current version of the AI Act, for example, grants the European Commission the power to prohibit practices such as predictive policing, but with various exemptions for national-security, defence and military uses.

Amid the fierce competition for technological supremacy, governments’ ability to develop and deploy intrusive technologies poses a threat not just to companies and political regimes but to entire countries. This malign dynamic stands in stark contrast to optimistic predictions that AI will bring about a “wide array of economic and societal benefits across the entire spectrum of industries and social activities”.

Unfortunately, the gradual erosion of countervailing powers and constitutional limits on government action within Western liberal democracies plays into the hands of authoritarian regimes. As George Orwell presciently observed, a state of perpetual war, or even the illusion of it, creates an ideal setting for the emergence of a technological dystopia.

 

Robert Skidelsky, a member of the British House of Lords, is professor emeritus of Political Economy at Warwick University. Copyright: Project Syndicate, 2023. www.project-syndicate.org



