Right-Wing Sting Group Project Veritas Is Breaking Facebook's "Authentic Behavior" Rule. Now What? - 2019-06-11
A member of Project Veritas gave testimony in a federal court case indicating that the right-wing group, known for its undercover videos, violates Facebook policies designed to counter systematic deception by Russian troll farms and other groups. The deposition raises questions about whether Facebook will deter American operatives who use the platform to strategically deceive and damage political opponents as vigorously as it has deterred Iranian and Russian propagandists. But is the company capable of doing so without simply creating more problems?
Close observers of Veritas and Facebook, including one at a research lab that works with the social network, said the testimony shows the group is clearly violating policies against what Facebook refers to as "coordinated inauthentic behavior." The company formally defined such behavior in a December 2018 video featuring its cybersecurity policy chief Nathaniel Gleicher, who said it "is when groups of pages or people work together to mislead others about who they are or what they're doing." The designation, Gleicher added, is applied by Facebook to a group not "because of the content they're sharing" but rather only "because of their deceptive behavior." That is, using Facebook to dupe people is all it takes to fit the company's institutional definition of coordinated inauthentic behavior.
In practice, "coordinated inauthentic behavior" has become a sort of catchall label for untoward meddling on Facebook, snagging everyone from Burmese military officers to Russian meme spammers. But curbing such activity has become a very public crusade for Facebook in the wake of its prominent role as a platform for the spread of disinformation, propaganda, and outright hoaxes during the 2016 presidential campaign. This past January, Gleicher announced the removal of a network engaged in coordinated inauthentic behavior originating in Iran, whose operatives "coordinated with one another and used fake accounts to misrepresent themselves," thus triggering a Facebook ban. Similarly, in a 2017 update on Facebook's internal investigation into Russian online propaganda efforts, the company's then-head of security Alex Stamos assured the world's democracies that the company was providing "technology improvements for detecting fake accounts," including "changes to help us more efficiently detect and stop inauthentic accounts at the time they are being created."