Navigating the Complexities of Military AI Governance

India must advocate for a non-binding framework that ensures accountability while advancing its AI interests in the military domain.
Surya

1. Global Context: REAIM Summit and Declining Consensus

India abstained from signing the ‘Pathways to Action’ declaration at the third summit on Responsible Artificial Intelligence in the Military Domain (REAIM). This reflects the growing complexity of regulating AI in military applications.

At the recent summit, only 35 out of 85 participating countries signed the declaration. In contrast, the previous summit saw 60 countries endorse a blueprint for action. Major powers such as the United States, China, and India did not sign, signalling widening geopolitical divergence.

The declining number of signatories highlights the difficulty of building consensus on military AI governance. As AI becomes increasingly embedded in defence systems, fragmented regulation may intensify strategic mistrust.

When consensus weakens in multilateral platforms, competitive security calculations tend to dominate. If military AI remains weakly governed, risks of escalation and instability may grow.

“The unleashed power of the atom has changed everything save our modes of thinking.” — Albert Einstein


2. Dual-Use Nature of AI: Core Regulatory Challenge

AI is inherently a dual-use technology, developed simultaneously for civilian and military purposes. Civilian advances in robotics, logistics, automation, and analytics can be repurposed for defence objectives.

This dual-use nature complicates compliance verification. Unlike conventional weapons, AI development occurs across private industry, academia, and defence establishments, making it difficult to determine the intended end-use.

Technologies that offer widespread economic and strategic benefits have historically been difficult to regulate. AI increasingly falls into this category, strengthening states’ reluctance to accept binding constraints.

Governance Challenges:

  • Difficulty distinguishing civilian and military R&D.
  • Absence of effective monitoring mechanisms.
  • Strategic incentives to maintain technological advantage.

Conventional arms control tools are poorly suited for software-driven technologies. Without adaptive governance, military AI expansion may proceed with limited oversight.


3. Lethal Autonomous Weapons Systems (LAWS): Ethical and Legal Concerns

AI is already deployed in benign military functions such as maintenance, surveillance, and logistics. However, Lethal Autonomous Weapons Systems (LAWS), capable of selecting and engaging targets autonomously, remain the most controversial application.

The UN Convention on Certain Conventional Weapons (CCW) convened expert discussions twice last year but failed to produce concrete recommendations. Technical ambiguity and political disagreement have stalled progress.

Concerns centre on accountability, proportionality, and compliance with humanitarian norms. Delegating lethal decision-making to machines raises fundamental legal and ethical questions.

If responsibility for lethal outcomes becomes diffused between programmers, commanders, and machines, existing accountability frameworks may become inadequate.

“Technology is a useful servant but a dangerous master.” — Christian Lous Lange


4. Definitional Deadlock: Absence of Consensus on LAWS

A major obstacle is the lack of an agreed international definition of LAWS. Countries differ on what degree of autonomy qualifies as a lethal autonomous system.

  • States with limited AI capacity often favour binding restrictions.
  • Technologically advanced states tend to advocate higher thresholds for defining LAWS.
  • Some states oppose binding frameworks entirely.
  • India has described a legally binding instrument as “premature.”

Without definitional clarity, drafting enforceable agreements becomes difficult.

Core Issues:

  • Divergent interpretations of autonomy.
  • Strategic asymmetries.
  • Lack of verification mechanisms.

Definitional ambiguity preserves flexibility but delays regulatory certainty. Over time, such ambiguity may enable unchecked deployment.


5. India’s Calculated Approach: Strategic Autonomy and Innovation

India’s position reflects a balance between advancing AI research and managing security concerns in its neighbourhood. While supporting responsible AI principles, India has refrained from signing recent declarations.

By terming a binding framework “premature,” India signals its preference for retaining policy flexibility while technological capabilities evolve.

Given limited publicly known battlefield deployment of advanced military AI, India appears to favour incremental norm-building rather than immediate legal commitments.

Maintaining strategic autonomy allows technological development. However, prolonged ambiguity may limit India’s role in shaping emerging global norms.


6. Scope for Non-Binding Frameworks and Confidence-Building

Despite strategic hesitation, there is widespread discomfort with autonomous systems making lethal decisions. This creates room for non-binding mechanisms focused on transparency and accountability.

Potential measures include:

Policy Proposals:

  • Excluding AI-augmented autonomous systems from nuclear command and control.
  • Establishing voluntary confidence-building and data-sharing mechanisms.
  • Developing a risk hierarchy for military AI use cases.
  • Encouraging national frameworks aligned with accountability principles.

Such measures could build trust while avoiding premature constraints.

Non-binding norms can shape responsible behaviour without freezing technological progress. Ignoring early guardrails risks reactive regulation later.


7. Strategic Implications of Fragmented Governance

The decline from 60 signatories at the previous summit to 35 of 85 at the latest one illustrates increasing fragmentation in global technology governance.

If major powers remain outside common frameworks:

  • Competitive AI militarisation may intensify.
  • Trust deficits between states may widen.
  • Multilateral arms control efforts may weaken.

Military AI governance is therefore emerging as a defining issue in the evolving global security architecture.

Fragmented governance can entrench rival blocs. Inclusive norm-building is essential to prevent destabilising technological competition.


Conclusion

Military AI governance represents a critical frontier in international security. India’s cautious stance reflects an effort to balance technological ambition with strategic autonomy.

In the near term, non-binding transparency measures and risk-based frameworks provide a pragmatic path forward. Over time, as norms mature and operational experience accumulates, more formal agreements may emerge.

Ensuring that human judgment remains central in decisions involving the use of force will be essential to maintaining accountability and strategic stability in an AI-driven era.

Quick Q&A


Q: What are Lethal Autonomous Weapons Systems (LAWS), and why is defining them so difficult?

Lethal Autonomous Weapons Systems (LAWS) refer to weapons that can select and engage targets without meaningful human intervention. Unlike remotely operated drones, LAWS rely on AI-enabled decision-making algorithms to independently identify threats and execute force. However, there is no universally accepted definition of what level of autonomy qualifies as ‘lethal autonomous’, leading to deep disagreements among states.

The definitional deadlock arises because autonomy exists on a spectrum. Some systems are human-in-the-loop (requiring human approval), others are human-on-the-loop (human supervision), and fully autonomous systems may operate without immediate oversight. Technologically advanced countries tend to favour higher definitional thresholds to preserve strategic flexibility, while less advanced states advocate broader definitions to ensure stricter controls.

This lack of consensus complicates efforts at the UN Convention on Certain Conventional Weapons (CCW), where negotiations have stalled. Without a shared definition, it becomes nearly impossible to draft binding legal norms, making LAWS governance one of the most contentious areas in global AI regulation.

Q: Why are states reluctant to accept binding regulations on military AI?

The primary reason for reluctance is the dual-use nature of AI. AI technologies developed for civilian applications—such as logistics, predictive maintenance, or data analytics—can easily be repurposed for military functions. This makes verification of compliance extremely difficult and raises concerns about strategic disadvantage if restrictions are unevenly applied.

Additionally, AI is widely perceived as a game-changing technology that could confer decisive military advantage. States that have invested heavily in AI research are hesitant to accept binding frameworks that might limit innovation or operational flexibility. For India, regional security concerns and the need to modernise its armed forces add another layer of strategic calculation.

Historically, technologies with transformative military potential—such as cyber capabilities—have been harder to regulate. AI follows a similar trajectory, where strategic competition outweighs normative consensus. Hence, major powers prefer flexible, non-binding principles rather than strict legal commitments.

Q: Is India justified in calling a binding framework “premature”?

India’s position reflects a careful balance between technological ambition and security imperatives. As a country investing significantly in AI for both civilian and defence applications, India is wary of prematurely constraining its innovation ecosystem. Given its strategic environment—marked by tensions along its borders—maintaining operational flexibility is seen as essential.

However, critics argue that delaying binding commitments risks normalising the deployment of increasingly autonomous systems without adequate accountability frameworks. The absence of strong norms could lead to an arms race in AI-enabled weapons, reducing strategic stability. Moreover, ethical concerns about delegating life-and-death decisions to algorithms remain unresolved.

A nuanced view suggests that while binding regulations may indeed be premature given definitional ambiguities and limited deployment data, India should actively shape non-binding norms and confidence-building measures. This would allow it to protect its interests while demonstrating leadership in responsible AI governance.

Q: How can non-binding frameworks help govern military AI?

Non-binding frameworks can serve as confidence-building mechanisms that promote transparency and reduce mistrust among states. For example, voluntary information-sharing on military AI doctrines, testing protocols, and risk mitigation strategies can help prevent misperceptions and accidental escalation.

Such frameworks could also establish clear normative guardrails. One important proposal is prohibiting AI-augmented autonomous systems in connection with nuclear forces, thereby safeguarding strategic stability. Additionally, creating a risk hierarchy of AI use cases—distinguishing between benign logistics applications and high-risk lethal systems—could guide national regulations.

Historical precedents, such as the early norms around cyber operations or space debris mitigation, show that non-binding principles can gradually evolve into customary norms and eventually binding agreements. For military AI, such incrementalism may be the most pragmatic path forward.

Q: What role should India play in shaping global norms for military AI?

India could position itself as a bridge between technologically advanced states and developing countries by advocating a principles-based, non-binding framework. First, it should emphasise ‘meaningful human control’ in all lethal decision-making processes. This aligns with ethical concerns about accountability and maintains human responsibility in warfare.

Second, India could propose voluntary reporting and peer-review mechanisms under the UN or a plurilateral forum. These could include sharing best practices on safety testing, fail-safe mechanisms, and algorithmic auditing. Third, India should champion the exclusion of AI-enabled systems from nuclear command-and-control structures, reinforcing global strategic stability.

By combining its domestic AI ambitions with responsible international engagement, India can shape emerging norms rather than merely react to them. This would enhance its credibility as both a technological power and a responsible global actor.
