Social media rewards bad news, study finds

According to Digital Trends, Cornell University researchers analyzed nearly 11 million posts across seven social media platforms and found a consistent pattern. The study included Bluesky, Mastodon, LinkedIn, Twitter/X, Truth Social, Gab, and GETTR, platforms spanning a range of political leanings. On every single platform, news from lower-credibility sites received 7% more engagement than posts from higher-credibility outlets. This held even when the same user posted both types of content, and the trend appeared on both left-leaning and right-leaning platforms. Sensational headlines and emotional framing seem to drive the clicks, and even AI systems struggle with news accuracy in this environment.

The engagement problem

Here’s the thing that really worries me about these findings. When people consistently reward poor journalism or dramatic content, platforms have zero incentive to boost reliable information. Basically, misinformation gets a free algorithmic ride while higher-quality journalism loses reach and influence. We’re talking about a system where clicks chase chaos, and engagement-based feeds amplify bad content by design.

But here’s what makes this particularly troubling: it’s not just a “bad algorithm” problem. The researchers tested this with the same posters and audiences, and lower-quality news still pulled more engagement. So we can’t just blame the platforms. Sometimes people simply choose the louder link. When was the last time you clicked on a sober, factual headline versus something outrageous?
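
To make that within-poster comparison concrete, here’s a minimal sketch of what such an analysis could look like in Python with pandas. The column names and toy numbers are my own assumptions for illustration; the actual PNAS methodology is considerably more rigorous than this.

```python
# Toy within-poster comparison (illustrative only: the data and column
# names are hypothetical, not taken from the Cornell/PNAS study).
import pandas as pd

posts = pd.DataFrame({
    "user":        ["a", "a", "b", "b", "c", "c"],
    "credibility": ["low", "high", "low", "high", "low", "high"],
    "engagement":  [120, 100, 55, 50, 210, 200],
})

# Average engagement per user, split by source credibility, so each
# poster serves as their own control.
per_user = posts.pivot_table(index="user", columns="credibility",
                             values="engagement", aggfunc="mean")

# Relative lift of low-credibility links within the same poster's audience.
per_user["low_vs_high_lift"] = per_user["low"] / per_user["high"] - 1

print(per_user)
print(f"Median within-user lift: {per_user['low_vs_high_lift'].median():.1%}")
```

Holding the poster constant like this is what lets the researchers rule out the explanation that certain accounts simply post junk; the engagement gap shows up even within a single account’s output.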

Breaking the political myth

One of the most important findings here challenges the narrative that misinformation only spreads on platforms with certain political leanings. The study found this pattern everywhere, from left-leaning spaces to right-leaning ones. Users reward outrage, not accuracy, regardless of their political affiliation. Good reporting often loses to viral drama across the entire spectrum.

This really weakens the argument that if we could just fix one “side” of the political divide, we’d solve the misinformation problem. The data suggests it’s much more fundamental than that. It’s about human psychology and what captures attention in a crowded digital space.

What happens now?

Platforms are already experimenting with credibility signals and AI-driven fact verification. We might see more prompts or labels nudging users toward reliable sources in the future. But the bigger question is whether social platforms should prioritize credible sources over whatever drives attention.

Some platforms are testing tools that give users more control over what they see, which could help. But honestly, I’m skeptical about whether most people will choose the “healthy” option when the junk food equivalent is right there and tastes so good. The full research is available in PNAS, and Cornell has more details about their methodology.

So where does this leave us? We need platforms to rethink their recommendation systems, not just content moderation. But we also need to acknowledge that this is as much about human behavior as it is about technology. Until we address both sides of that equation, we’ll keep seeing reliable news lose out to whatever gets the most clicks.
