GLAAD Report Reveals Record Lows in LGBTQ Safety Across Major Social Platforms

GLAAD says LGBTQ safety scores hit new lows on Meta, YouTube, X; TikTok holds steady

A new report from GLAAD says LGBTQ safety continues to erode across major social media platforms, with most companies posting their lowest scores to date in its annual assessment of policies affecting queer users.

The sixth annual Social Media Safety Index (SMSI) evaluates six platforms (TikTok, YouTube, X, Facebook, Instagram and Threads) on their policies related to LGBTQ safety, privacy and expression. The 2026 findings show declines across the board, with TikTok the only platform to hold steady compared with last year.

Researchers say the results point to a growing gap between stated community guidelines and real-world protections for LGBTQ users.

Scores Hit Historic Lows for Most Platforms

According to the report, X continues to rank last with a score of 29 out of 100, reflecting ongoing concerns about hate speech and harassment. YouTube followed at 30, down 11 points from last year, the steepest decline among the platforms measured.

Meta’s platforms also saw decreases. Instagram scored 41, Facebook 40, and Threads 39, all slipping from 2025 levels. TikTok held at 56, the highest score in the index but showing no improvement.

GLAAD researchers say the declines reflect policy rollbacks, reduced transparency, and weakening protections for LGBTQ users, particularly transgender and gender non-conforming people.

Policy Changes Drive Concerns

The report highlights several recent changes at major tech companies that it says have contributed to the drop in safety scores.

Meta has faced criticism for modifying its hate speech rules in ways that critics argue allow more anti-LGBTQ rhetoric on its platforms. The company has also scaled back diversity, equity and inclusion initiatives and made changes to its content moderation approach, including ending its fact-checking program in the United States.

YouTube, meanwhile, removed gender identity from its list of protected characteristics in hate speech policies, a shift GLAAD says places LGBTQ users at greater risk of harassment and abuse.

The report argues that both companies are moving away from previously established best practices for online safety.

Key Findings Point to Broader Risks

Beyond platform-specific policies, the report raises concerns about how artificial intelligence is shaping content moderation. It warns that automated systems may disproportionately suppress LGBTQ voices while failing to consistently remove harmful content.

Researchers also flag concerns over data privacy, noting that major platforms increasingly use user-generated content to train AI systems, often without clear consent frameworks.

The index further highlights a decline in transparency, including limited reporting on moderation practices and workforce diversity data.

GLAAD says these trends make it harder to evaluate whether platforms are adequately protecting vulnerable communities.

Offline Harms Reflected Online

The report links online safety concerns to broader real-world trends, citing more than 1,000 anti-LGBTQ incidents reported in 2025. It also references FBI data showing that anti-LGBTQ bias accounted for more than 20% of reported hate crimes in 2024, marking the third consecutive year at that level.

Researchers argue that online harassment and misinformation often contribute to offline harm, particularly as extremist content spreads across digital platforms.

Advocacy Calls for Accountability

GLAAD President and CEO Sarah Kate Ellis said major platforms are failing to meet basic standards for safety and transparency.

She called on advertisers and users to reconsider their relationship with platforms that do not adequately protect LGBTQ communities.

“Social media companies do not meet basic best practices in content moderation, transparency, data privacy, and workforce diversity,” Ellis said in a statement included in the report. “They continue to prioritize profit over safety.”

Ellis added that LGBTQ creators and users are often left to deal with harassment, threats and misinformation without meaningful platform support.

What Comes Next

The SMSI recommends stronger content moderation systems, improved transparency around enforcement, and renewed investment in diversity and inclusion programs. It also urges platforms to better protect LGBTQ users from targeted harassment while avoiding the suppression of queer content and expression.

As debates over online safety, regulation and free expression continue, the report suggests LGBTQ users remain disproportionately affected by policy shifts at major tech companies.

For now, TikTok stands out as the only platform to maintain its previous score while the others continue to decline, raising fresh questions about how social media companies balance growth, moderation and user safety in an increasingly polarized digital landscape.