Opinion | Meta’s safety ratings for teens on Instagram are not what they appear to be

It’s a symbolic gesture in an effort to delay actual accountability.

October 21, 2025 at 9:59 a.m.
The Meta logo at its corporate headquarters in Menlo Park, Calif., on Nov. 1, 2021. (JIM WILSON/The New York Times)

Opinion editor’s note: Strib Voices publishes a mix of guest commentaries online and in print each day.

•••

Meta, the parent company behind Instagram and Facebook, recently announced that it would adopt a “PG-13” safety rating for Instagram Teen Accounts, modeled on the Motion Picture Association’s film-rating system. It isn’t progress. It’s propaganda.

The announcement is the latest example of Big Tech trying to convince the public that voluntary self-policing can replace enforceable rules. It is the digital equivalent of letting the fox design the locks on the henhouse.

Even the Motion Picture Association quickly distanced the film industry from Meta’s claim, saying it had nothing to do with the idea. Imitation isn’t always the sincerest form of flattery. In this case, it’s the sincerest form of manipulation.

For decades, social-media companies have followed a familiar script. When pressure builds for real accountability, they offer symbolic gestures. This time, the gesture is a ratings system that suggests a complex, algorithm-driven environment can be reduced to a label once applied to a two-hour movie.

History shows how that strategy works. Cornered by public-health advocates, tobacco companies offered “safer” cigarettes and alcohol companies promised voluntary guidelines. The goal wasn’t reform. It was delay.

The same playbook is at work here. Someone in or around the tech industry realized that rating apps the way we rate movies would sound responsible, look proactive and postpone serious legislative debate. It was also a quick way to create full-time employment.

The organization I lead (SAVE — Suicide Awareness Voices of Education) does not take money from Big Tech. We hope others will join SAVE and like-minded organizations in refusing funding from companies that actively work against common-sense laws designed to protect children from the very products those companies create.

A movie rating might warn parents about language or violence. It cannot tell them whether the film will follow their child home, track their movements or sell their personal data. Yet those are the very risks social-media platforms impose every single day.

No rating can capture the reality that these products expose children to predators who coerce them into sending explicit photos under threat of public humiliation, or lure them into sex trafficking through manipulation and trust-building. No safety rating can convey the relentless bullying that drives adolescents to self-harm or suicide. No rating can show how algorithms promote choking “challenges” that end in tragedy. It also cannot expose the drug dealers using encrypted apps to sell fentanyl-laced pills.

SAVE has worked with too many parents who learned about these dangers only after losing a child. There is no recovery from that kind of loss and no algorithmic fix that can make it right. These are not abstract risks. They are daily, documented, measurable harms, enabled by design and sustained by profit.

SAVE is not neutral in this fight. For nearly two years, we have been advocating for state and national laws to protect children’s safety online and have joined more than 400 organizations across the country in calling for passage of the Kids Online Safety Act.

The truth is that this ratings initiative is a preemptive strike against regulation.

Faced with the possibility of real accountability, the industry has reached for its favorite defense: self-regulation. A “PG-13 Instagram” gives lawmakers cover to pause, parents reason to exhale and companies space to keep doing business as usual.

Minnesota has taken genuine, evidence-based action. Earlier this year, our state became one of the first in the nation to require mental-health warning labels on social-media products linked to harm among youth. That was real progress, public-health policy rooted in data, transparency and accountability, not public relations.

Minnesota’s law was enacted through legislation, not marketing. It was created to inform and protect, not to shield an industry from scrutiny.

If we are serious about protecting young people online, we need more measures like that: laws, not loopholes. We need policies that require transparency about algorithms, limit data collection and impose real penalties when companies put profit ahead of protection. We need independent oversight funded by the public, not by the platforms themselves.

The Motion Picture Association’s rating system was never designed for a medium where the audience also creates the content. It was meant to classify finished films, not to monitor an interactive, ever-changing feed driven by algorithms. Applying it to social media is like slapping a “PG-13” label on a loaded gun and calling it safe.

If Big Tech were serious about protecting children’s well-being, the companies would stop inventing distractions and start supporting enforceable standards. Until that happens, every new safety rating should be seen for what it is: an expensive illusion that protects shareholders, not children.

Erich Mische is the CEO of SAVE — Suicide Awareness Voices of Education. It is a Minnesota-based national suicide prevention nonprofit that was founded in 1989. The organization’s website is save.org.
