The report by US-based Eko and India Civil Watch International, released on Tuesday, has put political advertising across social media platforms, and Big Tech firms' claims of enforcing policies to detect misinformation, back under the scanner.
The two groups said they submitted 22 politically incendiary advertisements to Meta’s advertising platform, of which 14 cleared Meta’s quality filters, they claimed. Eko added that the ads in question were taken down before they went live on Meta’s platforms.
“Meta is unequipped to detect and label AI-generated ads, despite its new policy committing to do so, and its utter failure to stamp out hate speech and incitement to violence—in direct breach of its own policies… These (ads) called for violent uprisings targeting Muslim minorities, disseminated blatant disinformation exploiting communal or religious conspiracy theories prevalent in India’s political landscape, and incited violence through Hindu supremacist narratives. One approved ad also contained messaging mimicking that of a recently-doctored video of union home minister, Amit Shah,” the report alleged.
Meta not alone
Meta is not the only platform to have come under the scanner. On 2 April, a report by human rights body Global Witness claimed that 48 advertisements portraying violence and voter suppression cleared the electoral quality-check filters of YouTube, the world’s largest video platform.
Eko’s investigation also highlighted the use of generative AI in the ads, “proving how quickly and easily this new technology can be deployed to amplify harmful content.”
Maen Hammad, a researcher with Eko, told Mint that the body “uncovered a vast network of bad actors using Meta’s ads library to push hate speech and disinformation.” While Meta had responded to Eko’s investigation, Hammad claimed the company “did not directly answer our questions related to the detection and labeling of AI generated images in their ad library.”
Hammad shared a copy of Meta’s response to Eko, dated 13 May, in which Meta underlined that it took multiple “actions” and “enforcement” measures against malicious ad content. “We reviewed the 38 ads in the report and found that the content did not violate our advertising standards,” the response said.
However, a Meta India spokesperson told Mint on Wednesday that the company did not receive details from Eko to investigate. “As part of our ads review process—which includes both automated and human reviews—we have several layers of analysis and detection, both before and after an ad goes live. Because the authors immediately deleted the ads in question, we cannot comment on the claims made.”
In the investigation by Global Witness, YouTube said in a statement that none of the purported ads ran on the platform, and denied that they showed “a lack of protections against election misinformation.”
“Just because an ad passes an initial technical check does not mean it won’t be blocked or removed by our enforcement systems if it violates our policies. But, the advertiser deleted the ads in question, before any of our routine enforcement reviews could take place,” a YouTube spokesperson said.
YouTube had yet to respond to Mint’s request for a statement as of press time.
Pressing for third-party audits
Despite the defence put up by Big Tech, industry stakeholders and policy advocates said there is a clear need for third-party audits of Big Tech’s policy enforcement, especially amid the ongoing election period.
Prateek Waghre, executive director at public policy think tank Internet Freedom Foundation (IFF), said, “There are gaps that are being exploited by many malicious parties. While political content is anyway there across social platforms, many policy gaps are being exploited time and again. Various ads that fall afoul of Big Tech’s own policies end up being published, which reveals a clear enforcement gap of quality controls. Advertising content in India is multilingual, but we don’t quite know how good Big Tech’s quality classifiers are in most of our languages.”
In its response to Eko, Meta claimed that it enforces content moderation in 20 Indic languages, and conducts third-party human fact-checking in 16 languages.
Further, most Big Tech firms publish their own, self-audited ‘transparency reports’ to back their policy enforcement. For instance, Meta’s latest India transparency report, published on 30 April, claimed that the company took “actions” against 5,900 instances of “organized hate”, 43,300 instances of “hate speech” and 106,000 instances of “violence or incitement”. However, the report did not define these ‘actions’ or specify what steps were taken against the perpetrators.
Isha Suri, research lead at fellow think tank Centre for Internet and Society (CIS), said, “Europe’s Digital Services Act enforces policy implementation transparencies. In India, one doesn’t understand most such systems. We need to have external third-party audits beyond Big Tech’s own filtering, and such independent scrutinies may help clarify which filters are working, and which aren’t.”