Meta misses blatant misinformation again… in ads ahead of Brazil’s election
London-based nonprofit Global Witness has caught Meta violating its own rules on political misinformation for the fourth time.
Ahead of Brazil’s presidential election on Oct. 2, misinformation and high tensions have plagued the electoral process. Facebook, the platform most closely associated with this kind of political deception in recent years, is once again under the microscope for its role, witting or not, in the misinformation being fed to unsuspecting users.
This is the fourth time the company has been caught by London-based nonprofit Global Witness failing to detect obvious misinformation in advertisements ahead of an election. The nonprofit described the social media giant’s continued failure as “alarming.”
The election pits far-right incumbent President Jair Bolsonaro, who is seeking re-election and has repeatedly attacked the country’s electronic voting system, against former President Luiz Inácio Lula da Silva, who was once incarcerated.
Some advertisements posted on Meta promoted the wrong election date, cast doubt on the integrity of the election, described incorrect voting methods, and even attacked the electronic voting system itself.
Global Witness has in the past run deliberate tests of whether the social media giant follows its own rules on disinformation about presidential elections, and it found the company in violation three times before the most recent discovery. At this point, the company is beyond the baseball analogy of three strikes and you’re out.
The group also found that Facebook missed hate speech in Myanmar, where advertisements included racial slurs against East Indians and Muslims and even called for their deaths. Similar tests in Ethiopia and Kenya found ads containing threats of death, beheading, and rape.
In the three previous tests Meta failed, Global Witness deliberately submitted advertisements containing hate speech and other misinformation to see whether the company’s human reviewers, or its supposedly top-notch artificial intelligence, would detect them. The posts were not taken down.
Despite this record, Meta previously said it had “prepared extensively” for the upcoming elections in Brazil, where Facebook is the country’s most popular app.
“We’ve launched tools that promote reliable information and label election-related posts, established a direct channel for the Superior Electoral Court (Brazil’s electoral authority) to send us potentially-harmful content for review, and continue closely collaborating with Brazilian authorities and researchers,” the company said in a statement.
Following the controversial 2016 U.S. presidential election, which saw Donald Trump triumph on the back of Russian-bought advertisements deliberately disseminated to sow unrest and division among Americans, Facebook began requiring those posting political advertisements to go through an authorization process and include a “paid for by” disclaimer so users know where an ad is coming from.
Despite the rules in place, the company failed to recognize recent blatant misinformation in the test ads. The ads Global Witness submitted were approved for publication, though they were never actually published. Ads placed from outside Brazil, in London and Nairobi, also raised no warnings when they should have.
In the case of Brazilian election ads, Meta said its new rules and safeguards against such misinformation would require a Brazilian payment method and a “paid for by” disclaimer, but it enforced neither requirement.
“Facebook has identified Brazil as one of its priority countries where it’s investing special resources specifically to tackle election related disinformation. So we wanted to really test out their systems with enough time for them to act. And with the U.S. midterms around the corner, Meta simply has to get this right — and right now,” said Jon Lloyd, senior advisor at Global Witness, about the situation.
“What’s quite clear from the results of this investigation and others is that their content moderation capabilities and the integrity systems that they deploy in order to mitigate some of the risk during election periods, it’s just not working,” he continued.
Global Witness used ads rather than regular posts because, according to Meta’s help center page, the company holds ads to a higher standard than regular posts. The group submitted more than 10 ads that deliberately included false information about many aspects of the election, and the company failed to recognize them.
“We are constantly having to take Facebook at their word. And without a verified independent third party audit, we just can’t hold Meta or any other tech company accountable for what they say they’re doing,” Lloyd said.
“Disinformation featured heavily in its 2018 election, and this year’s election is already marred by reports of widespread disinformation, spread from the very top: Bolsonaro is already seeding doubt about the legitimacy of the election result, leading to fears of a United States-inspired January 6 ‘stop the steal’ style coup attempt,” Global Witness said in a statement after releasing its findings.
NetLab, a social media research group at the Federal University of Rio de Janeiro, participated in the Global Witness study, which found that many of the Facebook advertisements containing misinformation were funded by candidates running for seats in federal and state legislatures.