1. Context: Tragedy, Public Outrage and Policy Impulse
Recent events in Ghaziabad, where three adolescent sisters ended their lives, have intensified public anger and sharpened political demands for stringent action against social media use among minors. The tragedy has amplified the societal instinct to identify a singular cause and impose bans as visible solutions. This reaction, however understandable, risks reducing a multidimensional problem to simplistic regulatory action.
Across countries, governments are experimenting with aggressive restrictions on adolescent social media use. Australia’s under-16 ban, enforced through mandatory age verification, and Spain’s proposed criminal liability for platform executives reflect a global anxiety. These models are increasingly cited in India as templates, despite differences in digital ecosystems, socio-economic diversity and state capacity.
Blanket restrictions risk obscuring structural issues such as algorithmic design, platform accountability and vulnerable children’s need for online spaces. They also risk normalising surveillance-heavy solutions with disproportionate consequences for young users.
Ignoring this context risks designing reactive policies that fail to address structural determinants of online harm and undermine digital rights.
2. Evidence: Social Media Use and Adolescent Well-being
A growing body of international meta-analyses shows small but consistent associations between heavy social media use and anxiety, depressive symptoms, self-harm tendencies and body image dissatisfaction, especially among adolescent girls. These findings, though largely from outside India, suggest caution in environments where digital access is rising rapidly.
The correlation is not uniform but reflects broader psychosocial vulnerabilities. Adolescents with limited social support offline often turn to online platforms for expression and solidarity. Therefore, harms emerge within a broader developmental and socio-cultural context rather than from technology alone.
India lacks substantial local, longitudinal evidence on digital impacts across socio-economic categories. Without such data, regulatory responses risk being poorly targeted and exclusionary.
Without a nuanced understanding of harm pathways, policy may either over-criminalise technology or under-regulate platform behaviour, undermining both safety and digital inclusion.
3. Why Blanket Bans Will Not Work in India
Countries with strong institutional capacity struggle to implement age-based bans effectively. Adolescents often circumvent restrictions through VPNs or migrate to unregulated online spaces where grooming, bullying and extremist content proliferate. In India, such migration risks greater exposure to opaque digital ecosystems.
Age-gated bans are also susceptible to misuse when linked to identity verification. Mandatory linkage of social media accounts to government IDs may create a mass surveillance architecture with widespread privacy implications, particularly in a country with a large young population.
India’s demographic diversity means that adolescents rely on digital spaces for education, peer networks and community-building. A blanket ban disregards the social and emotional complexity of adolescence and reduces young people’s agency in shaping their own online experiences.
If bans become the dominant tool, they risk deepening exclusion, weakening trust in governance and shifting young users to more harmful online environments.
4. Social Inequalities and Gendered Impacts of a Ban
In India’s patriarchal settings, girls’ access to digital devices is already limited. According to the National Sample Survey, only 33.3% of women have ever used the Internet compared to 57.1% of men. A state-driven age-policing regime may legitimise household-level restrictions, leading to complete device confiscation for girls.
Digital spaces provide critical visibility for marginalised young people — including rural adolescents, urban poor, queer youth and differently-abled teens — who may lack supportive offline networks. Bans risk disproportionately impacting these groups by removing access to information, opportunities and support ecosystems.
Thus, an ostensibly neutral policy could unintentionally reinforce socio-economic and gender inequality by curtailing mobility, aspirations and participation in the digital economy.
Ignoring these inequities risks designing regulations that entrench structural discrimination and restrict vulnerable adolescents’ pathways to empowerment.
5. Policy Misalignment: Over-Reliance on Censorship and Takedown Regimes
Existing Indian policy responses prioritise censorship-based mechanisms—takedown notices, blocking orders and age-gating—under the IT Act, 2000. These tools address content symptoms rather than platform design causes.
A more effective approach would target the economic incentives and architectures of Big Tech. This includes regulating algorithmic amplification, data extraction practices and platform profit models that thrive on prolonged engagement, including among minors.
Institutional design is central: an independent, expert regulator is essential for enforcing "duty of care" obligations, imposing monetary penalties and ensuring accountability. Relying on a generalist ministry without specialised expertise risks regulatory capture and inconsistent enforcement.
If the regulatory framework remains censorship-focused, structural reform will stagnate, and the burden of safety will fall solely on users rather than platforms.
6. The Research Deficit and Exclusion from Policy Design
India lacks systematic, publicly funded research on children’s digital lives across geography, caste, class and gender. Without local evidence, policymakers rely on foreign studies that may not mirror Indian realities.
Young people themselves are rarely included in policy consultations. The absence of youth voices leads to frameworks such as the Digital Personal Data Protection Act, 2023, where “consent gating” may produce false declarations or digital exclusion rather than meaningful protection.
Meaningful participation of adolescents—from survey design to interpretation—can ensure that regulatory tools address lived realities rather than adult anxieties.
Neglecting research and youth participation risks building policies that are misaligned with ground realities and create new barriers to digital inclusion.
7. Oversight Gap: AI Chatbots and Emerging Digital Risks
While social media attracts intense scrutiny, emerging risks from AI systems remain under-regulated. Generative AI tools are increasingly being used by adolescents for emotional support and mental health guidance, despite early research linking heavy AI dependency with “cognitive debt” and weaker critical thinking capacity.
Investigations into conversational AI systems have flagged child safety failures, including inappropriate or sexualised interactions and alleged links to self-harm. As AI becomes embedded within social media platforms, risk boundaries blur further.
A consistent regulatory approach is essential. Selectively targeting social media while overlooking AI weakens child safety protections and allows harmful technologies to proliferate unmonitored.
If AI regulation lags behind, India risks addressing yesterday’s problems while leaving children vulnerable to emerging and more complex forms of digital harm.
8. Way Forward: Towards a Healthy Digital Ecology
Legislative reforms:
- Introduce a robust digital competition law.
- Establish enforceable “duty of care” obligations for platforms towards minors.
- Create an independent expert digital regulator.
Platform accountability:
- Regulate algorithmic design and engagement incentives.
- Impose monetary penalties for non-compliance.
- Strengthen transparency and auditing requirements.
Evidence and participation:
- Fund longitudinal studies on Indian adolescents’ digital lives.
- Integrate youth perspectives into policy processes.
- Avoid policies that exacerbate gender and socio-economic divides.
Balanced governance:
- Replace bans with risk-based regulation.
- Promote digital literacy and parental guidance frameworks.
- Ensure consistency between social media and AI regulation.
"I am not pro, or anti, technology. That would be stupid. For that would be like being pro, or anti, food." — Neil Postman
9. Conclusion
Effective digital regulation must balance child safety with rights, inclusion and innovation. Rather than relying on blunt bans, India’s governance challenge is to build a resilient digital ecosystem that addresses structural platform incentives, ensures accountability and preserves young people’s agency. A future-ready approach demands research, independent regulation and equitable access—not reactive moral panic.
