ARTIFICIAL INTELLIGENCE
AI'S ETHICAL BOUNDARIES
Sanjeewaka Kulathunga describes how AI is driving geopolitics in this age

In the 2020s, AI has grown from a specialised technology into a powerful force that is shaping global politics, economics and security. Governments now view artificial intelligence not only as an innovation tool but also as a strategic asset in international power dynamics.
As such, its role in national security, economic competitiveness and technological progress places AI at the centre of geopolitical rivalry.

Consequently, ethical concerns surrounding AI extend beyond technical issues to questions of sovereignty, human rights and global governance. And ensuring that its development respects values such as fairness, transparency, accountability and human dignity remains a major challenge amid intense global competition.
AI has become a key element of global power competition. Major players such as the US, China, the EU and several emerging economies are investing heavily in research, infrastructure and talent development.
These investments are motivated by the belief that leadership in artificial intelligence can provide advantages in economic productivity, military capabilities and tech independence.
CONCERNS
While these developments accelerate innovation, they also raise ethical concerns – particularly when competition encourages rapid technological deployment without sufficient safeguards.
The geopolitical dimension of artificial intelligence means that ethical issues are now closely tied to political decision making. Governments are increasingly integrating AI into areas such as military planning, intelligence analysis, cybersecurity and surveillance systems.
Although these technologies can improve efficiency and strengthen national security, they also raise important ethical questions about responsibility, autonomy and the potential impact on civilian populations. Determining who controls these systems and how their use should be regulated has become a critical issue.
Another important concern surrounding AI is the presence of bias within systems. Algorithms trained on historical data may unintentionally reproduce or amplify existing social inequalities.
When AI is used in areas such as law enforcement, recruitment, financial services or social media moderation, these prejudices can disproportionately affect certain communities.
PRINCIPLES
Although researchers and policymakers stress the importance of transparency, fairness and accountability in AI governance, ethical principles alone are insufficient.
A gap often exists between theory and practice, partly because countries differ in political systems, legal traditions and cultural values, which makes universal ethical standards difficult to establish.
When AI systems make decisions that affect individuals or societies, determining responsibility for mistakes or harmful outcomes can be complicated. This issue is particularly sensitive in areas such as military operations or intelligence analysis where AI systems process large volumes of data to support strategic decisions.
Because many AI models function as complex black boxes, understanding how they reach specific conclusions can be difficult. Experts warn that the use of opaque artificial intelligence systems in government activities may weaken democratic oversight and increase the risk of hidden biases or unjust outcomes.
QUESTIONS
The use of artificial intelligence in military technology raises some of the most serious ethical questions. Autonomous weapons systems and AI assisted battlefield technologies have the potential to transform warfare by enabling faster and more precise decision making.
However, they also raise profound moral concerns about whether machines should be allowed to make life-and-death decisions.
Delegating lethal authority to automated systems challenges traditional ideas about human responsibility and moral judgement. As military automation continues to develop, many scholars and policymakers argue that clear ethical boundaries must be established.
Geopolitical rivalry further complicates attempts to regulate military AI. Countries with different political priorities and ethical perspectives may adopt contrasting approaches to autonomous weapons. Some nations may prioritise tech superiority and military efficiency while others may focus on strict ethical limitations.
These differences make it difficult to create binding international agreements governing the use of AI in warfare.
Without coordinated international standards, the world could face a technological race to the bottom where nations prioritise strategic advantage over ethical responsibility.
Building a global framework for AI ethics is extremely complex because cultural, political and economic differences influence how ethical principles are interpreted and implemented.