How can jihadis and terrorists keep their profiles on Facebook while conservative activists in America are banned? That's a question Facebook has been trying, and failing, to answer. A recent opinion piece by Daniel Gallant in the WSJ suggests that Facebook censors at random. He writes:
If you used Facebook in late November, you probably saw a stream of fundraising campaigns for charities and cultural organizations. That’s because Facebook offered up to $7 million in matching donations for nonprofits that used its platform to raise funds on Giving Tuesday. But this gesture masks the negative impact Facebook’s newly adopted advertising policies have had on nonprofit organizations that rely on social media.
In response to public scrutiny stemming from the Cambridge Analytica scandal this year, Facebook has implemented enforcement measures aimed at improving election security and discouraging anonymous political messages. These measures have been poorly executed and inconsistently applied. They unfairly burden charitable organizations and small businesses, yet are easy for organized or well-funded actors to circumvent.
Several paid advertising campaigns run by my colleagues and clients have been inexplicably obstructed by Facebook’s policing in the past several months. Facebook refused to allow my New York cultural nonprofit, the Nuyorican Poets Cafe, to pay to promote a post encouraging people to vote in the midterms because our page was not “authorized to run ads related to politics.” A campaign promoting a lecture about sculpture at the Boston Museum of Fine Arts was blocked because Facebook’s censors mistakenly believed it was intended to influence an election in Ireland.
Similarly, Arts Japan 2020, an entity that highlights Japan-related cultural programs in the U.S., was unable to promote a post celebrating an award given by the emperor of Japan to an American arts curator. Facebook claimed the topic was of “national importance.” These harmless posts remain on Facebook in unpromoted form, but unpromoted content has a limited reach.
The problem is widespread. The Atlantic reported on Nov. 2 that Facebook’s election-security policies have caused it to block advertising campaigns from organizations including community centers, national parks and charities that serve wounded veterans.
Representatives of charities are often reluctant to register as political advertisers on Facebook because of privacy concerns. Facebook requires users to disclose significant personal information before promoting posts about politics or national issues. To be authorized to run such advertisements on behalf of my nonprofit organization, I would have to send Facebook my residential address, my Social Security number, and a photo of myself holding my passport or driver’s license. I’m loath to entrust any entity with all of that sensitive information—especially Facebook, which could use its facial-recognition software to match my personal information with photos of me that might appear online.
But suppose I did submit those items and was therefore allowed to promote political content. If I subsequently broke the rules, Facebook wouldn’t necessarily hold the nonprofit I represent responsible. Under Facebook’s policies, the person who operates an ad account is accountable for any ads placed by that account.
The only real protection Facebook’s identification requirements might provide is a guarantee that Facebook users can determine the true identity of the marketer responsible for a political advertisement. Or can they? A well-resourced advertiser with nefarious intent could simply hire a patsy (or use fake credentials) to pass Facebook’s screening process and establish a nominal presence at an American address.
As several reporters have recently discovered, Facebook allows many advertisers who pass its invasive screening process to run political ads under any identity they choose. A recent Vice article describes how Facebook approved political ads by reporters who pretended to be 100 different senators. And Business Insider was able to post political Facebook ads that purported to be from Cambridge Analytica.
In the name of election security, Facebook has implemented an opaque and shape-shifting definition of “issues of national importance” and an intrusive vetting process that is poorly enforced. These don’t protect users or the American public. Unless Facebook offers more transparency and accountability, determined marketers will remain able to circumvent the process. And despite benevolent gestures on Giving Tuesday, nonprofits and small businesses will continue to suffer under Facebook’s arbitrary restrictions.