
The Social Media Dilemma: Who Bears Responsibility?


Mohit Desai

When a Los Angeles jury recently held Meta Platforms and Google's YouTube liable for a young woman's personal struggles, it ignited a firestorm of debate that extends far beyond a single courtroom. The $6 million verdict, the first among more than 3,000 pending lawsuits in California courts, raises questions across the world, including in India, that stakeholders have been reluctant to answer honestly: Who is truly responsible when technology harms young people, and what does meaningful accountability actually look like?


These are not simple questions, and they deserve more than simple answers. Andhra Pradesh and Karnataka are mulling a ban on social media for children under 13 years of age. Various other state governments are considering 'digital detox' policies. Bollywood actress Soha Ali Khan recently spotlighted a critical debate: Should India ban social media for children, or are we posing the wrong question altogether? She posed incisive queries: Is the solution rooted in outright prohibition, smarter regulation, better parenting, or greater platform accountability? What is your stance? And are you content to leave such decisions in others' hands?


In the Los Angeles case, the plaintiff, identified only as K.G.M., is a twenty-year-old woman who began using YouTube at age six and Instagram at age nine, well below each platform's minimum age of thirteen. By her own account, compulsive use of these platforms left her feeling deeply depressed and profoundly insecure about her appearance, a result she attributes to a relentless feed of curated, unrealistic images provided by the platforms.



Her story is not an outlier. Across the United States, more than 40 state Attorneys General have filed suit against social media companies, and school districts in multiple states have sought damages for what they describe as a youth mental health crisis fuelled by addictive platform design. Lead trial attorney Mark Lanier framed the case in stark terms for the jury: these companies, he argued, did not merely build applications, they built psychological traps, deliberately engineered to exploit the developing brains of children.


The comparison to Big Tobacco is one plaintiffs' attorneys have invoked deliberately. Just as cigarette manufacturers once concealed internal research about nicotine addiction, critics argue that Meta and other platforms possessed knowledge about the harmful effects of their products on young users while continuing to optimize for engagement above all else. Whether or not that analogy holds legally, the emotional force it carries in a courtroom and in the court of public opinion is undeniable.


For families who have watched children spiral into anxiety, eating disorders, or social isolation while glued to a screen, the verdict offers something that has been difficult to find: a sense that someone, somewhere, has been held accountable.


The Limits of a Single Verdict

And yet, accountability is only meaningful if it is accurately targeted, and here is where the picture grows considerably more complicated.


K.G.M. began using YouTube at age six and Instagram at age nine. Both platforms require users to be at least thirteen, in compliance with the Children's Online Privacy Protection Act. She bypassed those controls. This is not a trivial detail. It raises an uncomfortable question: if a child accesses a platform in direct violation of its stated policies, using an account she was not supposed to have, does full liability for subsequent harm rest with the platform?


The honest answer is: not entirely.


This does not mean platforms bear no responsibility. Meta's ecosystem, spanning Facebook, WhatsApp, and Instagram, reportedly reaches more than a billion users in India, the company's largest market worldwide, with over 550 million ad-reachable users on Facebook and Instagram alone. Scale of that magnitude demands accountability, but accountability that is shared: a robust legal framework must also delineate the roles of parents, schools, and societal norms in guiding children's engagement with technology. Holding platforms liable for harms that occurred partly because families did not enforce, or were unaware of, age restrictions risks creating a system where courts substitute for the parenting decisions that no algorithm can replace.


Social media platforms are communication tools whose effects are shaped by an enormous range of individual, familial, and societal factors. Depression and body image issues among teenagers predate Instagram. Loneliness among young people is a complex phenomenon with roots in school culture, family dynamics, economic stress, and yes, screen time. Courts that assign monetary figures to these diffuse harms run the risk of creating legal precedent that is more about financial redistribution than genuine child protection.


There are also downstream consequences worth considering. Meta and Google are currently investing hundreds of billions of dollars in artificial intelligence research with potential applications in medical diagnostics, drug discovery, and mental health treatment. Massive litigation costs and uncertainty do not exist in a vacuum; they shape where resources flow and what risks companies are willing to take.

 

What Genuine Reform Looks Like

None of this means the status quo is acceptable. It is not.


Social media companies have been insufficiently transparent about what their internal research shows regarding platform effects on adolescent mental health. The algorithmic recommendation systems that keep young users scrolling, serving them increasingly extreme or emotionally destabilizing content because it maximizes engagement, are design choices, not natural phenomena. Choices can be regulated. Choices can be changed.


What is needed is not a legal shakedown, but a coherent policy framework built on several clear pillars.


First, we need age verification tools that actually work. Requiring users to be thirteen is meaningless if any child can create an account with a false birthdate. Platforms must be held to a higher standard of verification, one that uses technology, parental consent mechanisms, and third-party authentication to make minimum age requirements real rather than nominal. This is technically achievable. It has simply not been made a legal requirement with sufficient teeth.


Second, meaningful parental controls are the need of the hour. Parents cannot protect children from harms they cannot see. Platforms should be required to provide robust, easy-to-use parental oversight tools, including screen time limits, content filtering, and activity transparency, as a default feature, not an optional add-on buried in settings menus. Families who want to grant their children more autonomy can do so; the default should protect children, not expose them.


Third, we need algorithmic accountability. The specific design features that plaintiffs' attorneys describe as “traps” (infinite scroll, push notifications engineered to interrupt sleep, recommendation systems that prioritize emotional provocation) should be subject to regulatory scrutiny. Independent audits of recommendation algorithms, with findings reported to regulators, would bring a level of transparency that currently does not exist.


Finally, schools and parents need real resources to help young people understand how social media is designed to influence their behaviour, how to recognize when use is becoming compulsive, and how to maintain a healthier relationship with technology. This is not a substitute for platform accountability, but it is a necessary complement to it.

 

Shared Responsibility, Not Selective Blame

The Los Angeles verdict will not be the last. The social media litigation wave will continue regardless of whether it ultimately produces good law or good outcomes for young people.


What society should resist is the temptation to treat this moment as an either/or proposition: either corporations are entirely to blame, or they bear no responsibility at all. Neither extreme reflects reality, and neither produces workable solutions.


K.G.M.'s pain is real. The mental health struggles of millions of young people who have grown up on algorithmically curated feeds are real. These harms deserve serious, sustained attention. But serious attention means grappling honestly with complexity: with the failure of age-verification systems, with the role of parental oversight, with the genuine difficulty of drawing causal lines between platform use and mental illness, and with the limits of what tort litigation can accomplish when what is truly needed is legislation.


Social media companies should be required to do more, much more, to protect young users. That obligation is not in dispute. What is in dispute is whether the courtroom, driven by novel legal theories and the incentives of the plaintiffs' bar, is the right arena in which to work that out.


The children caught in the crossfire of this debate deserve better than a shakedown. They deserve policy that actually protects them: one that holds platforms accountable through clear rules, enforces age limits with real mechanisms, empowers parents, and treats young people as something more than future plaintiffs.


That is the harder work. It is also the more important one.

 
 