Debunk, don’t ‘prebunk,’ and other psychology lessons for social media moderation

If social networks and other platforms are to get a handle on misinformation, it isn't enough to identify it; they also need to understand how people respond to it. Recent research from teams at MIT and Cornell offers some surprising findings that could shape how Twitter, Facebook and others handle this problematic content.
The findings from MIT challenge conventional wisdom. It might seem logical to flag potentially false headlines with a warning before a user encounters them, alerting readers to a disputed claim. However, the research indicates this approach isn’t the most effective.
The study had nearly 3,000 participants evaluate the accuracy of headlines, with warning labels shown before the headline, alongside it, after it, or not at all.
“Initially, I expected that providing a correction in advance would be the most beneficial, allowing individuals to approach the false claim with a degree of skepticism. Surprisingly, our results demonstrated the opposite,” explained David Rand, a co-author of the study, in an MIT news report. “Presenting the debunking information after exposure to the claim proved to be the most impactful method.”
Participants who received a warning before viewing the headline showed a 5.7% improvement in their ability to accurately assess its veracity. When the warning accompanied the headline, this improvement increased to 8.6%. However, when the warning was presented after exposure, accuracy improved by a significant 25%. This clearly demonstrates that debunking misinformation after exposure is considerably more effective than attempting to prevent belief in it beforehand.
The researchers theorize that this fits with how people handle corrections more generally: we are more likely to incorporate feedback into a judgment we have already formed than to adjust that judgment while we are still forming it. They also stress that misinformation is not a problem any single tweak will solve.
“There isn’t a single, straightforward solution to the problem of misinformation,” stated co-author Adam Berinsky. “Systematic investigation of fundamental questions is a vital step towards developing a comprehensive set of effective strategies.”
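To make the timing difference concrete, here is a minimal, purely hypothetical sketch of how a moderation pipeline could act on the finding: rather than stamping a warning on a disputed post before the user reads it, it records the exposure and queues the debunking note to surface afterward. Every name here (Post, ExposureLog, record_view, flush_corrections) is invented for illustration and does not describe how any platform, or the study itself, delivers corrections.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch only: a feed service that, instead of pre-labeling a
# disputed post, records that the user actually saw it and surfaces the
# debunking note afterward. Names and structure are invented for illustration.

@dataclass
class Post:
    post_id: str
    text: str
    disputed: bool = False   # flagged by fact-checkers
    correction: str = ""     # debunking text to show after exposure

@dataclass
class ExposureLog:
    seen: List[str] = field(default_factory=list)                 # post IDs viewed
    pending_corrections: List[str] = field(default_factory=list)  # queued debunks

def record_view(log: ExposureLog, post: Post) -> None:
    """Mark a post as seen and queue its correction for after exposure."""
    log.seen.append(post.post_id)
    if post.disputed and post.correction:
        log.pending_corrections.append(
            f"About a post you saw ({post.post_id}): {post.correction}"
        )

def flush_corrections(log: ExposureLog) -> List[str]:
    """Deliver queued debunks once the user has moved past the post."""
    notes, log.pending_corrections = log.pending_corrections, []
    return notes

if __name__ == "__main__":
    post = Post("p1", "Miracle cure announced!", disputed=True,
                correction="Fact-checkers found no evidence for this claim.")
    log = ExposureLog()
    record_view(log, post)               # the post is shown without a label
    for note in flush_corrections(log):
        print(note)                      # the debunk arrives after exposure
```

The only point the sketch makes is one of ordering: the correction travels with the exposure log rather than being attached to the post up front.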
The Cornell University study offers both encouraging and concerning results. Individuals evaluating potentially misleading information were consistently influenced by the opinions of larger groups, regardless of whether those groups shared their political affiliations.
That's encouraging because it suggests people accept that if a lot of others, even those on the other side of the political aisle, think a story is questionable, there may be good reason to doubt it. But it's also worrying because it shows how easily opinions can shift just because a large group is said to hold a particular view.
“Practically speaking, our research demonstrates that people’s perspectives can be altered through social influence, independent of political alignment,” said Maurice Jakesch, a graduate student and lead author of the paper. “This creates opportunities to leverage social influence in ways that could reduce polarization and foster greater unity in online environments.”
Partisanship still matters: people were about 21% less likely to be swayed by the group's opinion when that group was identified as politically opposed. Even so, the group's overall judgment carried considerable weight.
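As a back-of-the-envelope illustration of that figure, here is a toy calculation that scales an assumed baseline persuasion effect by the reported attenuation; everything except the 21% is an assumption, not data from the paper.

```python
# Illustrative arithmetic only: a toy model of the reported ~21% attenuation.
# The baseline value is made up; only the 21% figure comes from the study.

def expected_influence(baseline: float, group_is_opposed: bool) -> float:
    """Scale a baseline persuasion effect down by ~21% for an opposing group."""
    return baseline * (0.79 if group_is_opposed else 1.0)

same_side = expected_influence(1.0, group_is_opposed=False)
other_side = expected_influence(1.0, group_is_opposed=True)
print(f"same-party group: {same_side:.2f}, opposing group: {other_side:.2f}")
# Even at 0.79 of the baseline, the opposing group's judgment still moves people.
```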
Part of the reason misinformation spreads so readily is that we understand so little about why it appeals to people and what blunts that appeal, among other basic questions. Until social media companies can move beyond a trial-and-error approach, a real solution will remain elusive, but every study like these adds to our understanding of the problem.