New Delhi: Several safety tools that Meta has promoted as safeguards for teenagers on Instagram are either ineffective, flawed, or in some cases absent, according to a study released by child-safety advocacy groups and corroborated by researchers at Northeastern University.
The report, titled “Teen Accounts, Broken Promises”, reviewed 47 safety features that Instagram has publicly announced over the past decade. Of these, only eight were found to function as intended. The rest, the study said, were “substantially ineffective,” discontinued, or easily bypassed, as reported by Reuters.
Researchers found that measures designed to block self-harm-related searches could be circumvented with minor spelling variations. Anti-bullying filters often failed to activate, even when tested with phrases Meta itself had cited as examples. Another tool, meant to redirect teens from bingeing self-harm content, did not trigger in tests.
Some features were found to be effective, such as “quiet mode,” which mutes notifications at night, and parental controls that require approval for changes to teen account settings.
The study was led by the UK-based Molly Rose Foundation and the U.S.-based Parents for Safe Online Spaces, both founded by parents who allege their children died after exposure to harmful content on social media platforms. Northeastern University researchers validated the findings, with professor Laura Edelson noting: “Using realistic testing scenarios, we can see that many of Instagram’s safety tools simply are not working.”
Meta rejected the report’s conclusions. Company spokesperson Andy Stone described it as “dangerously misleading,” arguing that the review misstated how Meta’s tools function and how families use them. “Teens who were placed into these protections saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night,” Stone said.
The criticism was partly informed by internal tips from Arturo Bejar, a former Meta safety executive. Bejar, who worked with Instagram until 2021, said management repeatedly watered down effective ideas. “I experienced firsthand how good safety ideas got whittled down to ineffective features,” he said, stressing the need for independent scrutiny.
Reuters, which reviewed the report, confirmed some findings through its own tests and by examining internal Meta documents. In one case, a teen test account was able to access eating-disorder-related content by searching “skinnythighs,” a banned term altered slightly. Internal documents further revealed lapses in updating automated systems designed to detect and limit promotion of eating-disorder and suicide-related material, as well as delays in updating lists of search terms used by child predators.
Stone said Meta has since addressed these deficiencies, combining automation with human oversight.
The report comes amid heightened scrutiny of Meta in the U.S. Last month, senators launched an investigation after disclosures showed company chatbots could engage minors in inappropriate conversations. Former employees also told a Senate Judiciary subcommittee that the company downplayed internal findings about children’s exposure to predators in virtual reality spaces. Meta dismissed these claims as “nonsense.”
On Thursday, Meta announced that its teen account protections are being expanded to Facebook users outside the U.S. The company also said it is building partnerships with middle and high schools to bolster awareness of online safety. “We want parents to feel good about their teens using social media,” Instagram head Adam Mosseri said.
Meanwhile, Instagram confirmed a new rule barring users under 16 from livestreaming without parental consent. The company also reported removing 635,000 accounts that sexualised children.
Chennai (PTI): TVK chief Vijay has appealed to the CBI to hold its future inquiry into the Karur stampede case in Chennai or elsewhere in Tamil Nadu, citing his Assembly election-related engagements.
Prior to appearing for the CBI's questioning on March 15, the Tamilaga Vettri Kazhagam founder wrote to the investigating agency stating that he had already appeared for questioning in New Delhi on January 12 and 19 in compliance with the notices for his appearance before the CBI office in the national capital.
The letter dated March 14 was also marked to Justice Ajay Rastogi who has been appointed by the Supreme Court to oversee the investigation into the tragic Karur stampede that claimed 41 lives and left many injured.
In the letter, Vijay pointed out that he had sought a minimum of 10-15 days' time for his appearance for the inquiry and also requested permission to appear either in Chennai or any other place in Tamil Nadu due to his "personal inconvenience and commitments and preoccupation with respect to the ensuing Assembly election work."
Despite the request, the investigating officer asked him to appear in New Delhi on March 15, he said.
On Sunday, the actor-politician was questioned for more than seven hours and left the CBI office at around 6.30 pm. During the third round of questioning, he was accompanied by his party leaders Aadhav Arjuna and C T R Nirmal Kumar.
According to TVK sources on Tuesday, Vijay stated in the letter that he appeared for the inquiry as a "law abiding citizen." He and his party cadres were victims of the Karur tragedy and were in "unexplainable mental agony due to the loss of lives and injuries during the stampede (on September 27, 2025) despite genuine precautions."
The Tamil Nadu Assembly elections are scheduled on April 23.
