Glossary

Glossary of terms commonly used in the Facebook Papers:

3PFC - Third Party Fact-Checkers – The network of independent fact-checkers upon which Facebook relies for labeling inaccurate or misleading posts.

ACDC - An algorithmic service that classifies and remediates problematic "clusters" of Facebook accounts or other entities.

ANSA - Adult Nudity and Sexual Activity

ARCs - At-Risk Countries – A designation for nations with elevated levels of unrest in which Facebook ostensibly deploys additional measures to mitigate harms. "Tier 1" ARCs include countries like India, Myanmar, Sri Lanka, Philippines, and Ethiopia.

B&H - Bullying & Harassment

B2V - Barrier To Vaccination – A classifier referring to content that may deter users from getting the COVID vaccine. An earlier iteration of the B2V tag was VH (vaccine hesitancy).

BI - Business Integrity

Blackhole - An internal service used by Facebook researchers to blacklist problematic URLs, domains, and IP addresses.

Blame Tool - An internal service used to determine "which sources and triggers recommended a bad post" to users.

Blue - A reference to the main Facebook app.

BTG - Break The Glass – Emergency measures implemented around volatile moments such as the Capitol insurrection.

BWC - Build With Care

CAU - Cares About Us

CEI - Child Exploitation Images

CI - Central Integrity – After a reorganization in late 2020, "Central Integrity" was formed, encompassing 3 main verticals: Problems (focused on mitigating specific harms like misinformation, objectionable content, etc.), Foundation (focused on building the infrastructure and platforms used across Integrity teams), and Ecosystem (Integrity mission control, focused on prioritization, goal-setting, coordination, etc.). NB: Before late 2020, "CI" often referred to Community Integrity.

CIB/IO - Coordinated Inauthentic Behavior / Influence Operation

CIX - Community Integrity Experience

CO - Community Operations

CPI - Community Products Integrity

CN - Child Nudity

CORGI - According to Gizmodo, CORGI is a "complex mathematical model that Facebook’s researchers came up with internally in order to find 'clusters of users' that might be operating in an inauthentic way—like users that might be commenting a bit too frequently on each other’s posts."

CR - Community Review

CVI - Civic Integrity – The now-defunct Civic Integrity team was established after the 2016 election to tackle misinformation around elections and other civic processes like the Census. In December of 2020, it was dissolved as a stand-alone entity and folded into the new Central Integrity hub, a move that Frances Haugen has cited as a key motivator for her whistleblowing.

DAP / DAU - Daily Active People / Users

DIO (occasionally cited as DOI) - Dangerous Individuals & Organizations – Under its high-stakes DIO policy, Facebook maintains a fluid list of individuals and organizations that are banned from the platform. This classification process includes consideration of government designations, such as groups on the US Foreign Terrorist Organizations list, as well as off-platform behavior, among other factors.

Drebbel - The Drebbel project was launched by the Dangerous Content team to "measure, monitor, and prevent long-term adverse effects of recommender systems on people, particularly with respect to integrity and fairness." Its work has focused largely on identifying and disrupting "gateway Groups" that Facebook's algorithms have promoted and that lead people to more explicitly problematic Groups dedicated to anti-vaxx content, QAnon, etc.

DVDE - Dedicated Vaccine Discouragement Entities

Eat Your Veggies - A vetting process whereby Facebook's public policy and communications teams are empowered to nix Integrity interventions that might have politically sensitive impacts, such as tweaks to make the platform more civil that may have disproportionately affected right-wing users.

EB - Engagement-Bait

FAI - Fake Account Index – A scoring system used across the Facebook family of apps that assesses the likelihood that a given account is inauthentic.

FRX - Feedback Report Experience

FUSS - Feed Unified Scoring System – A key classification system used by Facebook's integrity teams to assess the quality of entities and content; a red FUSS tag indicates low quality, while yellow indicates borderline and green indicates presumably high quality. FUSS signals are also utilized to tag specific categories of problematic content, such as anti-vaxx posts, and to limit posts' reach by removing them from recommendations, among other things.

GYSJ - Groups You Should Join – Facebook's Group recommendation tool, which has been shown to be a problematic vector of disinformation and radicalization. Ahead of the 2020 election, Facebook announced it would turn off recommendations for political Groups, although the enforcement of that policy has subsequently been called into question.

HERO - High-Risk Early Review Operations – An algorithm Facebook uses to identify potentially problematic posts that may gain virality.

HEx - Human Exploitation

IGWB - Instagram Wellbeing

IX - Integrity Experiences

M-Team - Mark's Team (or Management Team) – Zuckerberg's inner circle, key leaders and decision-makers across Facebook.

M&H - Misinformation & Harm

MRA - Misinformation Related Article – Third-party fact-checker (3PFC) articles displayed alongside false or misleading content.

MSI - Meaningful Social Interactions – A controversial engagement metric launched in 2018 that plays a key role in Facebook's content ranking algorithms and broader decision-making. The MSI metric and corresponding algorithm tweak was ostensibly introduced to make Facebook a healthier platform by placing greater weight on posts from users' family and friends, but in practice, it has rewarded and incentivized the most sensational and polarizing content, putting it at the center of many internal and external complaints about the platform's Integrity shortcomings.

MSM - Militarized Social Movements

N&P - Nudity & Pornography

NCII - Non-Consensual Intimate Imagery

NEQ - News Ecosystem Quality – NEQ is an internal system dedicated to evaluating the quality of journalism, and can be used to boost authoritative news and downrank untrustworthy sources. While NEQ typically plays a relatively small role in the overall Newsfeed ranking algorithm, it was given far greater weight as a break-glass measure around the divisive 2020 election. Despite making a considerable positive impact, the change was phased back out as Facebook executives stated it was always meant to be temporary.

NFX - Negative Feedback Experience

OC - Objectionable Content

OCQ - Objective Content Quality

PAC - Protect And Care team

Project Daisy - A pilot program in which Instagram tested the effect of removing 'like' counts from users' profiles to gauge whether it would improve well-being. While the test found hiding likes had negligible impact, Instagram rolled it out as an option nonetheless after executives argued it would signal that they care about their users.

PYMI - People You May Invite – A recommender system designed to encourage users to invite friends to join Groups.

PYMK - People You Might Know – A recommender system that encourages users to make additional connections based on their social graphs.

PYML - Pages You Might Like – A recommender system that encourages users to follow pages that the algorithm predicts they will find engaging based on their behavioral profile.

Reshare Depth - A measure of how far removed a user who reshares content is from the original poster. Internal Facebook research has shown that "deep reshares" are a significant contributing factor to the viral spread of objectionable content.

RO / ROPs / RODs - Repeat Offender / Repeat Offender Pages / Repeat Offender Domains

SEV - Site Event – Refers to a significant event or issue that has been escalated and typically requires an all-hands-on-deck response across teams within Facebook.

Soft Actions - Umbrella term for the wide range of content treatments that Facebook can apply that stop short of removal, including employing warning labels or other forms of friction and context, downranking posts, removing actors from recommendations, etc.

SRT - Single Review Tool – A core tool used by the Integrity team to generate labels for clickbait and other categories of potentially violative content.

SSI - Suicide and Self-Injury

SUMA - Single User Multiple Accounts – This is a catch-all term for individuals with multiple accounts, whether they are operating a troll farm or simply a secondary account. It appears repeatedly throughout the Facebook Papers, as inauthentic accounts are not only a vector for sowing harm and disinformation, but also prompt concerns that Facebook has improperly inflated its ad metrics to business users.

TRIPS - Tracking Reach of Integrity Problems – A key internal survey that was conducted to measure users' perceptions of the content they're seeing on the platform, the insights from which continue to influence a wide range of Integrity initiatives.

USI - Unwanted Social Interactions – Harassment and abuse on Facebook platforms

V&I - Violence & Incitement (also sometimes abbreviated VNI)

VH - Vaccine Hesitancy – Classifier for anti-vaxx content, subsequently renamed Barrier To Vaccination (B2V)

VHS - Very High Severity

VNSA - Violent Non-State Actors

VPV - View Port Views (impressions)

WYT - Worth Your Time – A metric derived from surveys and machine learning models that was designed to measure users' satisfaction from their time spent on the platform, as opposed to MSI-style engagement.

XCheck - Facebook's controversial Cross-Check system, ostensibly designed as a quality control measure for high-profile users, has insulated approximately 6 million VIPs from the standard policy enforcement to which everyone else is subject.

XFN - Cross-functional team

XI - Connections Integrity

YA - Young Adults (ages 18-29)