Global politics of AI
Jan 22, 2025 - Last updated at Jan 22, 2025
I have known Jaan Tallinn, co-founder of Skype, for a decade. We first met for dinner in Tallinn, Estonia’s capital, and talked all evening about the future of humanity. We sometimes exchange emails about trends that could have an impact on the world. Tallinn has shared an interesting video on his website. It shows an immobile crowd at Grand Central Station in New York in an era when AI is superior to humans. In the eyes of AI, humans move very slowly, just as we watch a tortoise plodding along inch by inch. To AI, we will be the tortoises.
AI is a transformative technology. It can predict protein structures for medical innovation. It can forecast earthquakes. It can be used to track endangered animals in dense forests. It is used to increase productivity in agriculture, industry, and banking. It can write in minutes complex software code that used to take a human software engineer months. It is useful in predicting climate change and pandemics. It may even invent new medicines.
AI has another, more dangerous side that worries many scientists. I met Geoffrey Hinton, known as the godfather of AI, during a recent visit to Toronto. He was awarded the Nobel Prize in Physics in December 2024. He is afraid that AI will spin out of human control sometime in the future. It is difficult to predict how this would affect humans. It may even lead to the extinction of our species.
Most policymakers ignore the warnings of scientists. Their attention is consumed by the desire to dominate the world, if they have the resources to do so. Countries that do not have AI want access to it. In the last week of his presidency, Joe Biden restricted the export of advanced computer chips to many countries. China, in turn, is investing heavily in manufacturing chips and supercomputers. Russia is concentrating its limited resources on developing AI for military applications. Policies that isolate Russia may lead to the emergence of AI-powered missiles that could be dangerous beyond our imagination.
While the big powers escalate the AI race, scientists are working to find solutions. Some of them propose limiting the computing power and speed of these machines, which would help keep the technology under human control. But an agreement on such limits will require global cooperation.
AI has become the frontier of a new geopolitics, dividing the world. China and the United States want to win the race for the most powerful algorithms. This race could produce extremely powerful machines that may slip out of human control without anyone noticing. In September 2024, the United States, the UK, and the EU signed a legally binding convention to protect freedom, democracy, and ethical AI use, but it said nothing about regulating AI’s power and speed. India leads efforts on AI governance within the framework of the Global Partnership on AI, setting inclusion, access, and the protection of national data sets as its priorities. Countries of the Global South want to secure technology and infrastructure; they depend heavily on ‘cloud’ services, chips, and original research provided by big American companies. Rich and poor countries alike agree on the need for ethical governance, dialogue, and knowledge sharing. The problem is that the concept of ethics differs from one country to another.
Every country wants a voice in shaping the AI world order. When financial institutions were created after the Second World War, the United States and Europe wrote the rules. When the nuclear Non-Proliferation Treaty (NPT) was negotiated, five countries dominated its conclusions. Today no nation wants to be sidelined in the AI revolution.
In the divided universe of AI, China and the United States may dictate the rules. Many nations are signing bilateral or regional agreements. The resolutions of the United Nations are limited to promoting dialogue and knowledge sharing; they do not include mandatory commitments. Europe has lost its unifying appeal because of its conflict with Russia. A coalition of countries such as India, Brazil, South Africa, Turkey, Saudi Arabia, the UAE, Jordan, and Indonesia could play a pivotal role in bridging the divides between the global powers. Such a coalition would require broad and visionary leadership. It would need to set an agenda that addresses concerns about technological inclusion and data sovereignty, along with rules that prevent existential threats to humanity.
What should we do? AI’s potential to uplift humanity is unparalleled, but so are its risks. Will AI unite the world or divide it? Will the community of nations show the wisdom to ensure that AI serves as an instrument of human progress rather than a force of its destruction?
Sundeep Waslekar is the President of Strategic Foresight Group, an international think tank, and author of A World Without War.