Instagram Struggles with Disturbing Rise of Pedophilic Accounts Using Coded Hashtags and Emojis: WSJ, Stanford, UMass Study

Originally published June 7, 2023 10:23 am PDT

Instagram, Meta’s popular social media platform, is wrestling with a deeply concerning problem, researchers at Stanford University and the University of Massachusetts (UMass) have found: an alarming presence of pedophilic accounts that use coded hashtags and emojis to advertise and sell underage sex content.

The findings were first reported in a detailed Wall Street Journal exclusive.

Prompted by these findings, Instagram is in the process of banning search terms such as #pedobait and variations of #mnsfw.

These accounts, according to UMass researcher Brian Levine, adopt a cryptic language to conceal their illegal activities.

Common examples include a map emoji, shorthand for “minor-attracted person,” and a cheese-pizza emoji, which shares its initials with “child pornography.”

Accounts that sell this offensive material even denote the child’s alleged age using veiled references like being “on chapter 14,” or “age 31” followed by a reverse arrow emoji.

The worrisome activity extends further: some profiles brazenly display indications of sex trafficking, while others offer self-produced content with faces concealed to avoid stalking or blackmail.

Many of the users bear the scars of self-harm, citing past sexual abuse.

Alarmingly, even brief engagement with one of these accounts can trigger Instagram’s algorithm to recommend joining similar communities.

The revelation triggered a wave of criticism, with Instagram’s enforcement procedures and recommendation algorithms under the microscope.

Sarah Adams, a Canadian mother who discusses child exploitation on Instagram, reported that after she briefly interacted with an account featuring pro-incest content, some of her own followers were recommended that same account.

Instagram admitted that such recommendations violate its rules and vowed to address the issue through its newly formed child safety task force.

However, Instagram’s actions to combat the issue have often been insufficient or delayed.

Several instances were reported in which user reports of child nudity went unanswered for months, with the platform telling some reporters, “Because of the high volume of reports we receive, our team hasn’t been able to review this post.”

Responding to these allegations, a Meta spokesperson admitted that due to a software glitch, a significant portion of user reports were not being processed, and that their moderation staff wasn’t correctly enforcing the platform’s rules.

The company said it has fixed the bug and is retraining its content moderators.

Nonetheless, the platform’s enforcement actions aren’t always permanent.

Instagram’s guidelines allow users to operate multiple linked accounts, enabling violators to simply switch to a “backup” account listed in their bio if their primary account is removed, thereby evading meaningful enforcement.

Moreover, Instagram’s recommendation system, driven by AI, seems to work against its safety measures.

Although Instagram blocks searches for a certain encrypted file-transfer service infamous for transmitting child sex content, its autofill feature suggested related terms that led to similar material.

Despite initial measures taken by Instagram to remove reported accounts, UMass’s Levine found that viewing even one such account triggered the platform’s algorithm to suggest new ones, helping to reestablish the very network Instagram was trying to dismantle.

“Pull the emergency brake,” Levine urged. “Are the economic benefits worth the harms to these children?”
