Saturday, February 8, 2025

Amazon, Google and verification vendors among ad tech cohort under fire from U.S. senators over child safety shortcomings

Adalytics has been a thorn in the side of major ad platforms that have characterized its research as flawed, but now it has found an audience in the highest echelons of government. 

Members of Congress have sent letters to major tech companies, including Google and Amazon, expressing concern about ads served on websites known to host child sexual abuse material (CSAM).

Signed by U.S. Senators Marsha Blackburn (R-Tenn.) and Richard Blumenthal (D-Conn.), the open letters come after new research from watchdog group Adalytics showed examples of ad tech companies serving ads on websites known to carry CSAM.

The letters, sent today, detail “grave” and “profound” concerns after a new Adalytics report found evidence of ads on CSAM websites promoting major brands and other advertisers, including the federal government. The report was shared earlier with lawmakers in private and released publicly today. Letters to Amazon and Google say the companies’ “actions here—or in best case, inaction—are problematic.”

“The dissemination of CSAM is a heinous crime that inflicts irreparable harm on its victims,” senators wrote in the letter to Google CEO Sundar Pichai. “Where digital advertiser networks like Google place advertisements on websites that are known to host such activity, they have in effect created a funding stream that perpetuates criminal operations and irreparable harm to our children.”

Letters have also been sent to the CEOs of ad verification giants DoubleVerify and Integral Ad Science (IAS), as well as to the CEOs of the Media Rating Council (MRC) and the Trustworthy Accountability Group (TAG), which represent the industry’s self-governance efforts.

The ad tech companies and trade organizations mentioned in the public letters did not immediately respond to Digiday’s request for comment.

Additionally, the letters cast doubt on the ability of AI to perform the catch-all safety functions that ad tech companies claim it can, and they demand answers on just how vigorously those companies comply with child protection requirements.

Lawmakers also note that this activity violates the companies’ own policies and federal law, and they ask why each company continued to power ads on the problematic websites. Both websites in question have been flagged since 2021 in transparency reports released by the National Center for Missing & Exploited Children (NCMEC).

According to the Adalytics report, the examples were uncovered while the group was researching where ads for U.S. government agencies were served to bots and crawlers. Adalytics said multiple advertisers who checked their own records confirmed that brand safety vendors had marked 100% of ad impressions on the websites as “brand safe” or “brand suitable.”

The lawmakers’ letters to the tech companies raise similar concerns, but each also asks the company’s CEO to address specific questions about its activities observed in the findings. Adalytics declined to comment further on the report.

Amazon and Google quizzed on vetting

In particular, Amazon and Google, both of which have extensive ad tech operations that help fund vast networks of websites, are asked by Blackburn and Blumenthal to explain how they vet the third-party sites they help monetize. Lawmakers’ questions include how closely the companies adhere to their stated policies on censuring bad actors and what recourse exists for advertisers impacted by failures to enforce those policies.

The congressional letters could also put more pressure on brand safety partners to give advertisers URL-level reports. Advertising and safety experts who have reviewed the Adalytics report say the ad tech supply chain is still too opaque, which makes it harder to track and fix problems when ads are served on problematic websites.

Companies continue to rely on “legacy” brand safety tech even when it doesn’t work, said Rob Leathern, founder of the trust and privacy consultancy Trust2.

“The degree to which publishers and advertisers can be anonymous on these websites is a problem,” said Leathern, who previously led the product team for Google’s Privacy and Data Protection Office. “… People can hide behind anonymous web domains and I don’t think that’s something we should accept as society.”

Verification under the microscope 

The leadership of the two largest ad verification companies, DoubleVerify and IAS, have also been asked by Blackburn and Blumenthal to explain how much revenue they have generated from ads served on the offending websites, how they vet the supply chain, and how they adhere to child protection guidelines.

Meanwhile, letters have also been sent to the leadership of the MRC and TAG asking what plans they have to review the accreditation status of companies that measured ads on websites known to host CSAM. Inquiries include whether policies exist to review or revoke an entity’s accreditation when it fails to identify unlawful websites, and what immediate corrective actions are planned, including reviews of DV’s and IAS’s accreditation.

Adalytics’ findings 

According to Adalytics, ads placed on the two websites in question included ads for the U.S. Department of Homeland Security and dozens of major brands, including Starbucks, PepsiCo, Honda, and Audible.

The Adalytics report names nearly a dozen ad vendors whose systems served ads on the sites, including Amazon, Google, Criteo, and Microsoft.

According to Adalytics, researchers noticed the ads while observing a URLScan.io bot crawl a page from an IP address in Italy. While the bot was taking a screenshot, Adalytics said, it was served an ad for the U.S. Department of Homeland Security delivered through Google’s DV360 platform. After seeing that the URL contained CSAM, researchers reported the issue to the FBI, DHS, NCMEC, and other groups.

“This shocking report exposes the problem that it is simple for criminals and fraudsters to exploit the lack of transparency in the online advertising system to gain money from placement of adverts on sites with horrific criminal material,” said Ian Moss, Chair of the British group UK Stop Ad Funded Crime (UKSAFC).

Multiple media buyers have confirmed to Digiday that reports from their measurement providers label the problematic websites as “100% brand safe.” One media agency exec told Digiday their brand safety report showed that 75% of the impressions for the websites in the Adalytics report had pre-bid verification applied and 100% had post-bid verification applied.

In the letters addressed to the ad verification companies, lawmakers expressed doubt about the ability of their AI systems to properly identify and categorize content as brand safe. They also said advertisers rely on the companies’ tech without knowing where their ads are shown and deserve more transparency.

The letter to DoubleVerify mentions that the company’s code was found on the websites in question. Lawmakers note that DV should therefore have visibility into the full-page URL where ads render. “We understand that DoubleVerify generally withholds long-term, granular page-level data from its clients,” according to the letter.

“While Integral Ad Science’s failure to prevent advertisers from inadvertently subsidizing a website known to engage in illegal activity is unacceptable,” lawmakers wrote to IAS, “to withhold this data from advertiser customers that would give them more autonomy to prevent their ads from funding illicit activity is inexcusable.”

The report is just the latest of many released by Adalytics in the past few years. Another report, released last summer, raised questions about whether brand safety vendors’ AI tools accurately identify and categorize content as brand safe. Measurement firms like DV and IAS have defended their platforms’ capabilities, while some advertisers and brand safety experts have grown increasingly frustrated at what they see as industry resistance to substantial change.

The report comes as Congress considers new legislation to protect kids from a range of online risks, including CSAM and data privacy violations. One bill, the Stop CSAM Act, would require platforms to report CSAM and publish annual transparency reports. The bill would also create new legal liability for platforms that knowingly host, store, promote, or facilitate CSAM.

Another bipartisan bill, the Kids Online Safety Act (KOSA), has 70 co-sponsors. The legislation gained momentum last year and would require online platforms to exercise greater care to protect minors, depending on their age, from content related to online bullying, physical violence, and sexual exploitation.

The Adalytics report is a “wake-up call” for resetting the ways programmatic ads are contracted and regulated, said Nick Swimer, an advertising specialist at UK law firm Lee & Thompson.

“This is a seismic moment that should lead to a complete re-set in how programmatic advertising is contracted and regulated to create the right incentives to ensure that brand safety is meaningful and everyone in the supply chain knows who is plugging in and the exact nature of the content against which ads are being served,” Swimer said.
