Speakers:
- Dr. Robin Geiss, Director of UNIDIR
- Ambassador Shorna-Kay M. Richards, Ambassador of Jamaica to Japan, Chair of the UN Secretary-General's Advisory Board on Disarmament Matters, Chair of the Board of Trustees of UNIDIR
- Ambassador Kitano Mitsuru, Former Ambassador of Japan to the International Organizations in Vienna
Moderator:
- Dr. Akiyama Nobumasa, Director of the Center for Disarmament, Science and Technology, Japan Institute of International Affairs
1. Opening Remarks (Dr. Akiyama)
Emerging technologies present profound challenges, including the fair distribution of their benefits, the prevention of catastrophic consequences due to malicious use, and the difficulty of achieving international consensus on governance frameworks and modalities. As global security continues to deteriorate, the international community stands at a crossroads: technology is advancing faster than regulation, threatening to spiral out of control. Renewed momentum is urgently needed to establish effective governance mechanisms for these technologies.
2. Introduction (Ambassador Richards)
Founded in 1980, the United Nations Institute for Disarmament Research (UNIDIR) is an autonomous think-tank supported by donors from all regions of the world. Its unique structure allows it to serve as a bridge-builder and to operate at a more informal level than traditional multilateral structures. This role is especially valuable in the field of emerging technologies, where rapid and unpredictable developments call for new approaches to regulation, governance and dialogue. As the world becomes increasingly multipolar, regulation itself is becoming increasingly decentralized.
3. Presentation by Dr. Geiss: Major technological trends and challenges for governing emerging technology
UNIDIR has identified three major trends in the development of emerging technologies, each posing distinct challenges to global governance.
The first trend is the rise of machine warfare. In just a decade, drones have evolved from rudimentary systems to high-end technology and have rapidly proliferated, becoming default weaponry in contemporary conflicts. While most are still remote-controlled, there is growing momentum toward increasing autonomy to meet operational demands in complex battlefields. This trend becomes particularly concerning if it leads to autonomous target selection and engagement. In response, the UN Secretary-General has proposed a treaty by 2026 to prohibit certain practices, especially life-and-death decisions made by algorithms. This timeline is ambitious but justified by the fast pace of technological development.
The second trend is the growing integration of artificial intelligence (AI) into weapon systems. AI is now increasingly used to inform and augment military decision-making and could eventually replace it altogether. A plausible scenario could involve drones and satellites collecting data on the battlefield and transmitting it to data centers, where AI systems analyze it and make engagement decisions. No human being would be involved in such a decision-making chain, which raises the issue of human agency and accountability in integrated warfare.
The third trend relates to the tension arising from the dual nature of emerging technologies. On the one hand, there is growing demand for international cooperation to harness their benefits. On the other hand, some states implement export controls to restrict their competitors' access to these technologies, a practice that other states believe undermines the prospects for global governance and fair benefit-sharing. This dual nature also exposes gaps in existing international law: civilian infrastructure at risk from these technologies, such as electricity grids and public transportation systems, remains inadequately protected under current international conventions.
4. Discussion
1) Key strategic and moral concerns
The discussion highlighted five key strategic concerns: the expansion of nuclear arsenals and their growing prominence in the strategies of nuclear-armed states; the heightened risk of further nuclear proliferation; the increasing likelihood of nuclear use; the erosion of conventional disarmament agreements, exemplified by the withdrawal of some states from the Ottawa Convention; and the intersection of AI with military strategy--particularly in the nuclear domain--raising concerns about its integration into command and control systems.
A major risk identified was the accelerating pace of large-scale military developments, which could spiral beyond human control in the absence of incentives for restraint. In such scenarios, the application of international law remains imperative, particularly in safeguarding civilians and critical infrastructure such as hospitals.
From a moral standpoint, the growing roles of AI and drones present complex challenges. While these technologies could potentially reduce civilian casualties, fully autonomous decision-making raises profound legal and ethical questions: who bears responsibility for an attack initiated by a machine? Actions deemed moral within a domestic political context may be considered unethical from an international perspective, making reconciliation of these viewpoints exceedingly difficult. Additionally, concerns were raised about the public's growing lack of awareness of AI's influence on decision-making processes, especially in the military sphere.
2) Current state of regulation
While the adoption of a United Nations treaty regulating lethal autonomous weapon systems (LAWS) by 2026 is highly ambitious, the primary obstacles are political rather than technical, largely due to a persistent trust deficit regarding rule compliance. The current approach seeks to prohibit AI-enabled weapons that cannot meet the requirements of international humanitarian law, while regulating others under a more flexible framework. This is viewed as a compromise between a complete ban and unregulated development.
Additionally, the role of soft law should not be underestimated. Though non-binding at the outset, soft law can become influential--and even effectively binding--if it garners strong political support.
However, lessons from the history of arms control suggest that progress has depended on shared urgency and common interests, as was the case on some occasions between the United States and the Soviet Union during the Cold War. Today, no such convergence exists. Furthermore, AI's dual-use nature--as an enabler rather than a weapon in itself--makes regulation even more complex and the establishment of common ground more difficult.
3) Moving forward
Raising public awareness was strongly recommended, as today's public remains significantly less informed about the risks of AI than Cold War-era citizens were about nuclear threats. The UN Secretary-General's Advisory Board has released an interim report outlining key recommendations. These include reaffirming international law, developing specific guardrails for emerging technologies, bridging the global digital divide, addressing governance gaps, and involving a broad range of stakeholders--including academia, industry, and civil society, in addition to states. Anticipation was emphasized as essential, given that regulation often follows, rather than precedes, major crises. Anticipatory governance would allow the international community to maximize the benefits of these technologies while mitigating their most urgent risks.