1. Global Context: REAIM Summit and Declining Consensus
India abstained from signing the ‘Pathways to Action’ declaration at the third summit on Responsible Artificial Intelligence in the Military Domain (REAIM). This reflects the growing complexity of regulating AI in military applications.
At the recent summit, only 35 out of 85 participating countries signed the declaration. In contrast, the previous summit saw 60 countries endorse a blueprint for action. Major powers such as the United States, China, and India did not sign, signalling widening geopolitical divergence.
The declining number of signatories highlights the difficulty of building consensus on military AI governance. As AI becomes increasingly embedded in defence systems, fragmented regulation may intensify strategic mistrust.
When consensus weakens in multilateral platforms, competitive security calculations tend to dominate. If military AI remains weakly governed, risks of escalation and instability may grow.
“The unleashed power of the atom has changed everything save our modes of thinking.” — Albert Einstein
2. Dual-Use Nature of AI: Core Regulatory Challenge
AI is inherently a dual-use technology, developed simultaneously for civilian and military purposes. Civilian advances in robotics, logistics, automation, and analytics can be repurposed for defence objectives.
This dual-use nature complicates compliance verification. Unlike conventional weapons, AI development occurs across private industry, academia, and defence establishments, making it difficult to determine the intended end-use.
Technologies that offer widespread economic and strategic benefits have historically been difficult to regulate. AI increasingly falls into this category, strengthening states’ reluctance to accept binding constraints.
Governance Challenges:
- Difficulty distinguishing civilian and military R&D.
- Absence of effective monitoring mechanisms.
- Strategic incentives to maintain technological advantage.
Conventional arms control tools are poorly suited for software-driven technologies. Without adaptive governance, military AI expansion may proceed with limited oversight.
3. Lethal Autonomous Weapons Systems (LAWS): Ethical and Legal Concerns
AI is already deployed in benign military functions such as maintenance, surveillance, and logistics. However, Lethal Autonomous Weapons Systems (LAWS), capable of selecting and engaging targets autonomously, remain the most controversial application.
The UN Convention on Certain Conventional Weapons (CCW) convened expert discussions twice last year but failed to produce concrete recommendations. Technical ambiguity and political disagreement have stalled progress.
Concerns centre on accountability, proportionality, and compliance with humanitarian norms. Delegating lethal decision-making to machines raises fundamental legal and ethical questions.
If responsibility for lethal outcomes becomes diffused between programmers, commanders, and machines, existing accountability frameworks may become inadequate.
“Technology is a useful servant but a dangerous master.” — Christian Lous Lange
4. Definitional Deadlock: Absence of Consensus on LAWS
A major obstacle is the lack of an agreed international definition of LAWS. Countries differ on what degree of autonomy qualifies as a lethal autonomous system.
- States with limited AI capacity often favour binding restrictions.
- Technologically advanced states tend to advocate higher thresholds for defining LAWS.
- Some states oppose binding frameworks entirely.
- India has described a legally binding instrument as “premature.”
Without definitional clarity, drafting enforceable agreements becomes difficult.
Core Issues:
- Divergent interpretations of autonomy.
- Strategic asymmetries.
- Lack of verification mechanisms.
Definitional ambiguity preserves flexibility but delays regulatory certainty. Over time, such ambiguity may enable unchecked deployment.
5. India’s Calculated Approach: Strategic Autonomy and Innovation
India’s position reflects a balance between advancing AI research and managing security concerns in its neighbourhood. While supporting responsible AI principles, India has refrained from signing recent declarations.
By terming a binding framework “premature,” India signals its preference for retaining policy flexibility while technological capabilities evolve.
Given that little advanced military AI is publicly known to be deployed on Indian battlefields, India appears to favour incremental norm-building over immediate legal commitments.
Maintaining strategic autonomy preserves space for technological development. However, prolonged ambiguity may limit India’s role in shaping emerging global norms.
6. Scope for Non-Binding Frameworks and Confidence-Building
Despite strategic hesitation, there is widespread discomfort with autonomous systems making lethal decisions. This creates room for non-binding mechanisms focused on transparency and accountability.
Potential policy measures include:
- Excluding AI-augmented autonomous systems from nuclear command and control.
- Establishing voluntary confidence-building and data-sharing mechanisms.
- Developing a risk hierarchy for military AI use cases.
- Encouraging national frameworks aligned with accountability principles.
Such measures could build trust while avoiding premature constraints.
Non-binding norms can shape responsible behaviour without freezing technological progress. Ignoring early guardrails risks reactive regulation later.
7. Strategic Implications of Fragmented Governance
The decline from 60 signatories at the previous summit to 35 of 85 at the latest illustrates increasing fragmentation in global technology governance.
If major powers remain outside common frameworks:
- Competitive AI militarisation may intensify.
- Trust deficits between states may widen.
- Multilateral arms control efforts may weaken.
Military AI governance is therefore emerging as a defining issue in the evolving global security architecture.
Fragmented governance can entrench rival blocs. Inclusive norm-building is essential to prevent destabilising technological competition.
Conclusion
Military AI governance represents a critical frontier in international security. India’s cautious stance reflects an effort to balance technological ambition with strategic autonomy.
In the near term, non-binding transparency measures and risk-based frameworks provide a pragmatic path forward. Over time, as norms mature and operational experience accumulates, more formal agreements may emerge.
Ensuring that human judgment remains central in decisions involving the use of force will be essential to maintaining accountability and strategic stability in an AI-driven era.
