Facebook Vaccine Policy: A Pattern Emerges - Analysis

August 11, 2021
Facebook Confronts Disinformation and Scrutiny

Recently, Facebook disclosed the removal of hundreds of accounts from its Facebook and Instagram platforms. These accounts were linked to Russia-based operations spreading anti-vaccination disinformation. One campaign disseminated false claims, including the assertion that the AstraZeneca COVID-19 vaccine could induce chimpanzee-like characteristics in recipients.

In May, according to Facebook’s report, the same network circulated a purportedly leaked AstraZeneca document that questioned the safety profile of the Pfizer vaccine. The company emphasizes that these actions are part of its ongoing efforts to identify and eliminate deceptive campaigns globally.

Ongoing Concerns About Misinformation

Despite these efforts, a recent New York Times investigation suggests Facebook continues to struggle with addressing misinformation effectively. This includes concerns surrounding vaccine safety and other critical issues. The report highlights a potential disconnect between the company’s stated goals and its actual performance.

Sheera Frenkel, a cybersecurity correspondent for the New York Times and co-author of “An Ugly Truth: Inside Facebook’s Battle for Domination,” discussed these issues. The conversation, lightly edited for brevity, sheds light on the complexities of Facebook’s operations.

NYU Researcher Account Shutdown and Facebook’s Approach

TC: Facebook recently deactivated the accounts of NYU researchers whose advertising study tools were deemed to violate company rules. Many observers consider the company’s objections unfounded, and Democratic senators have sent a letter to the company questioning the decision. How does this situation align with your understanding of Facebook’s operational style?

SF: This action mirrored a pattern we observed in our book – a seemingly inconsistent and fragmented approach to problem-solving within Facebook. The decision against NYU was surprising, given that numerous other entities, including commercial firms, utilize data in similar ways, often with limited transparency.

The NYU academics were openly transparent about their data collection methods, informing both journalists and Facebook itself. Therefore, Facebook’s targeting of them, particularly as they prepared to publish potentially critical research, raises questions about the company’s priorities and its handling of data concerning its users.

Accountability and Past Apologies

TC: Do you anticipate increased demands for accountability from Senate or Congressional investigators, particularly regarding recent events like January 6th? Historically, Facebook has often issued apologies following public controversies, but substantive change has been limited.

SF: Following the release of our book, a lawmaker expressed concern that Facebook’s apologies often lack meaningful follow-through. They suggested the company believes it can resolve issues with a simple apology and superficial changes, without addressing the underlying problems.

Regarding January 6th, lawmakers are now taking a broader perspective, examining how Facebook allowed groups to organize and proliferate for months prior to the event. They are investigating how the platform’s algorithms directed users toward these groups and how its inconsistent moderation policies contributed to the spread of the “stop the steal” movement.

Data Sharing and Investigative Limitations

TC: If Facebook remains reluctant to share data comprehensively, how effective can these investigations be?

SF: We reported that Facebook was unable to provide the White House with prevalence data on COVID misinformation when requested. This was because the company had not allocated resources to track such data, despite requests from its own data scientists over a year ago. Lawmakers could pressure Facebook to prioritize data tracking and establish clear deadlines for data provision.

Internal Reporting and Transparency

TC: Based on your reporting, do you believe there are internal reporting issues within Facebook, or are these information gaps intentional? Your book details Russian activity on the platform leading up to the 2016 elections, and the then-chief security officer, Alex Stamos, formed a team to investigate it. However, Mark Zuckerberg and Sheryl Sandberg reportedly expressed surprise after the election, claiming they were unaware of Stamos’ findings.

SF: Our investigation aimed to determine whether Mark Zuckerberg and Sheryl Sandberg were genuinely unaware of the Russian interference or were deliberately kept in the dark. Ultimately, only they can definitively answer that question.

However, shortly after the 2016 elections, Alex Stamos informed Zuckerberg and Sandberg about evidence of Russian interference. Despite this alarming revelation, Zuckerberg did not request regular updates on the security team’s progress. While he had many responsibilities, it seems reasonable to prioritize an investigation into a potential threat to democratic processes.

Future Regulatory Focus

What regulatory developments are you monitoring most closely?

SF: Over the next six to twelve months, two issues are particularly noteworthy. First, COVID misinformation remains a significant challenge for Facebook, with deep roots and a predominantly domestic source. It tests the company’s free-speech principles and its ability to distinguish between protected expression and harmful content.

Second, upcoming elections in various countries, particularly those with populist leaders, will test Facebook’s ability to manage political activity on its platform. These leaders may emulate tactics previously employed by Donald Trump, and Facebook’s response will be closely scrutinized.

Tags: Facebook, vaccine, policy, misinformation, content moderation, New York Times