Facebook’s ‘oversight’ body overturns four takedowns and issues a slew of policy suggestions

Facebook’s self-styled ‘Oversight Board’ (FOB) has released its initial set of rulings on challenged content moderation choices, approximately two months after selecting its first cases for review.
Developed over a considerable period, the FOB represents Facebook’s public relations effort to separate its operations from the repercussions of contentious content moderation choices—by establishing a review organization to manage a limited number of the complaints resulting from its content removals. It began accepting submissions for consideration in October 2020—and has encountered criticism for its initial operational delays.
In announcing these first decisions, the FOB indicates that it has affirmed only one of the content moderation decisions previously made by Facebook, while reversing four of the tech company’s determinations.
Decisions on each case were reached by five-member panels, each of which included at least one member from the region concerned and a mix of genders, as outlined by the FOB. A majority vote of the full board was then required to approve each panel’s conclusions before a final decision could be issued.
The single instance in which the board supported Facebook’s decision to remove content is case 2020-003-FB-UA, where Facebook had removed, under its Community Standard on Hate Speech, a post that used the Russian term “тазики” (“taziks”) to describe Azerbaijanis, whom the user claimed have no history compared with Armenians.
In the four other cases where the board reversed Facebook’s removals, it rejected prior evaluations made by the tech giant regarding policies on hate speech, adult nudity, dangerous individuals/organizations, and violence and incitement. (Details regarding these cases are available on its website.)
Each decision pertains to a specific piece of content, and the board has also issued nine policy recommendations.
These include suggestions that Facebook:
- Develop a new Community Standard addressing health misinformation, consolidating and clarifying existing regulations into a single resource. This standard should clearly define key terms like “misinformation.”
- Employ less restrictive methods for enforcing its health misinformation policies when the content does not pose an immediate threat of physical harm.
- Enhance transparency regarding its moderation of health misinformation, including the publication of a transparency report detailing the enforcement of Community Standards during the COVID-19 pandemic. This recommendation is based on feedback received from the public.
- Guarantee that users are consistently informed of the rationale behind any enforcement of the Community Standards against them, including the specific rule being applied. (The board issued two identical policy recommendations on this matter concerning the cases it reviewed, also noting in relation to the second hate speech case that “Facebook’s lack of transparency created the possibility that the company removed the content because it disagreed with the user’s expressed viewpoint.”)
- Clarify and provide examples illustrating the application of key terms within the Dangerous Individuals and Organizations policy, including the meanings of “praise,” “support,” and “representation.” The Community Standard should also offer guidance to users on conveying their intent when discussing dangerous individuals or organizations.
- Maintain a public list of organizations and individuals designated as “dangerous” under the Dangerous Individuals and Organizations Community Standard or, at a minimum, provide a list of illustrative examples.
- Notify users when automated enforcement is utilized to moderate their content, allow users to appeal automated decisions to a human reviewer in certain situations, and refine automated detection of images containing text-overlay to prevent posts raising awareness of breast cancer symptoms from being incorrectly flagged for review. Facebook should also improve transparency reporting on its use of automated enforcement.
- Update Instagram’s Community Guidelines to explicitly state that female nipples can be displayed to promote breast cancer awareness and clarify that, in instances of conflict, Facebook’s Community Standards supersede Instagram’s Community Guidelines.
In cases where it has overturned Facebook takedowns, the board stipulates that Facebook must reinstate the specific removed content within seven days.
Furthermore, the board states that Facebook will also “assess whether identical content with comparable context associated with the board’s decisions should remain on its platform.” And that Facebook has 30 days to publicly respond to its policy recommendations.
Therefore, it will be particularly interesting to observe how the tech giant responds to the extensive list of proposed policy adjustments, especially the recommendations for increased transparency (including the suggestion to inform users when content has been removed solely by its AI systems), and whether Facebook is willing to fully align with the policy guidance issued by this self-regulatory entity.
Facebook established the board’s structure and charter and appointed its members, but it has promoted the idea that the FOB operates ‘independently’ of Facebook, despite also providing the organization’s funding (indirectly, through a foundation created to administer it).
While the Board asserts that its review decisions are binding on Facebook, there is no corresponding requirement for Facebook to implement its policy recommendations.
It is also important to note that the FOB’s review efforts are exclusively focused on content removals—rather than on content Facebook chooses to host on its platform.
Considering these factors, it is difficult to determine the extent of Facebook’s influence on the Facebook Oversight Board’s decisions. Even if Facebook adopts all of the aforementioned policy recommendations—or more likely issues a public statement welcoming the FOB’s “thoughtful” contributions to a “complex area” and states it will “consider them as it moves forward”—it will be doing so while maintaining maximum control over content review by defining, shaping, and funding the “oversight” process.
TL;DR: This is not a true supreme court.
In the coming weeks, the FOB will likely be closely watched over a case it recently accepted, related to Facebook’s indefinite suspension of former U.S. president Donald Trump after he incited a violent attack on the U.S. Capitol earlier this month.
The board notes that it will be soliciting public comment on this case “shortly.”
“Recent events in the United States and globally have underscored the significant impact that content decisions made by internet services have on human rights and freedom of expression,” it states, adding that: “The challenges and limitations of current approaches to content moderation highlight the value of independent oversight of the most critical decisions made by companies such as Facebook.”
However, this “Oversight Board” is inherently unable to be entirely independent of its creator, Facebook.
Update: In a genuinely independent response to the FOB’s decisions, the unofficial “Real Facebook Oversight Board”—comprising individuals not selected by Facebook—issued a critical assessment, stating that the rulings are riddled with “deep inconsistencies” and establish a “concerning precedent for human rights.”
“The Oversight Board’s rulings confirm Facebook’s worst-kept secret—it lacks a coherent moderation strategy and clear, consistent standards,” the Real Facebook Oversight Board added.
Update 2: In a public response to the FOB’s initial decisions, Facebook stated: “We will implement these binding decisions in accordance with the bylaws and have already restored the content in three of the cases as mandated by the Oversight Board. We restored the breast cancer awareness post last year, as it did not violate our policies and was removed in error.”
It added that it would consider the “numerous policy advisory statements” issued by the FOB, noting that it has up to 30 days to “fully consider and respond.”
“We believe that the board included some important suggestions that we will take to heart. Their recommendations will have a lasting impact on how we structure our policies,” it added.
Facebook’s initial response to the FOB’s first decisions does not directly address the latter’s decision to overturn a hate speech takedown and order the reinstatement of a post by a user in Myanmar—which had suggested a flaw in the mindset of Muslims.
According to the FOB, the post featured two widely circulated photographs of a Syrian child of Kurdish ethnicity who drowned while attempting to reach Europe in September 2015, accompanied by text questioning the lack of response by Muslims generally to the treatment of Uyghur Muslims in China, compared with reactions to cartoon depictions of the Prophet Muhammad in France. The post concluded that recent events in France had diminished the user’s sympathy for the depicted child, and seemed to imply the child might grow up to be an extremist.
“The Board considered that while the initial portion of the post, viewed in isolation, might appear to make an insulting generalization about Muslims (or Muslim men), the post should be interpreted as a whole, considering the context,” the FOB wrote, explaining its decision that the post did not incite hatred against Muslims.
However, the Board’s decision to overturn the takedown—in a region where Facebook’s platform has been implicated in accelerating ethnic violence for years—was met with disbelief on social media and strong condemnation from the independent Real Facebook Oversight Board.