![Getty Images A silhouette against a yellow light of a person about to press down on a phone screen (Credit: Getty Images)](https://ichef.bbci.co.uk/images/ic/480xn/p0kpxs6b.jpg.webp)
Some of the biggest tech companies in the world served ads on a website featuring images of child abuse, helping to fund its operations. The discovery shines a light on the dark corners of digital advertising.
Sometimes you come across an image online that’s so horrifying you can’t unsee it. For Krzysztof Franaszek, it happened at work.
Franaszek runs Adalytics, a US-based advertising research firm. Recently, he was studying where ads for the US Department of Homeland Security end up online, and the project took him to an image-sharing website called ImgBB. There, Franaszek uncovered something sickening: sexually explicit images of a very young child, with adverts for Fortune 500 companies running alongside them.
He immediately reported the content to the US Federal Bureau of Investigation (FBI), the Department of Homeland Security (DHS), and child safety organisations. The Canadian Centre for Child Protection – one of those Franaszek alerted – says it found at least 35 images flagged by Adalytics on the site that meet its classification of child sexual abuse material (CSAM). The Centre says it notified ImgBB, and the images were taken down. An FBI spokesperson says the bureau reviews all allegations of criminal conduct but does not comment on tips from the public. The DHS did not respond to questions.
The more Franaszek dug, the clearer the problem became – and his findings raise questions about how the adverts you see online may also be inadvertently pumping large sums of money into undesirable, and at times illegal, corners of the internet.
According to a new report from Adalytics, advertising systems run by companies including Google, Amazon and Microsoft have inadvertently funnelled money to the owners of a website hosting illegal images of child sex abuse. In addition to CSAM, Adalytics documented ads for more than 70 large organisations and Fortune 500 companies – including MasterCard, Nestlé, Starbucks, Unilever and even the US government – running alongside hardcore adult pornography. “Many advertisers whose ads appeared on this website probably had no idea that they were funding this kind of content,” Franaszek says.
![Getty Images Advertising money from Fortune 500 companies and even the US Government may have found its way to a website hosting illegal material (Credit: Getty Images)](https://ichef.bbci.co.uk/images/ic/480xn/p0kpvp98.jpg.webp)
On 7 February 2025, US Senators Marsha Blackburn and Richard Blumenthal sent letters to Amazon, Google and other ad tech companies mentioned in the report, demanding answers about whether this problem represents a widespread issue across the internet. “The dissemination of [child sexual abuse material] is a heinous crime that inflicts irreparable harm on its victims,” the letter to Google reads. “Where digital advertiser networks like Google place advertisements on websites that are known to host such activity, they have, in effect, created a funding stream that perpetuates criminal operations and irreparable harm to our children.”
The images of child abuse found on a single website have alarmed many both inside and outside the industry, but they also offer a glimpse of some of the wider problems afflicting the inscrutable world of digital advertising. Most people who use the internet will be familiar with the clamour of digital ads fighting for their attention. They are the product of a system so vast and complex that even the companies who run it don’t always know where their money is going. For years, critics have warned that, without serious regulatory oversight, the tech industry will unwittingly line the pockets of bad actors across the web. Lawmakers are still catching up.
Meanwhile, it is relatively easy for anyone to set up a website that can make money from ad networks. “It doesn’t cost much to operate a website that serves a few million images per month,” Franaszek says.
Google, Amazon and Microsoft insist they are committed to fighting online child sexual exploitation and abuse. All three companies say they have now banned ImgBB and its subsidiary site IBB from their advertising systems.
“We have zero tolerance when it comes to content promoting child sexual abuse and exploitation and both of the accounts in question are terminated,” a Google spokesperson told the BBC. “Our teams are constantly monitoring Google’s publisher network for this type of content and we refer information to the appropriate authorities.”
‘We have no idea where our ads are going’
Ads are the fuel that powers the internet. The best estimates say spending on digital advertising reached an all-time high of $694bn (£559bn) in 2024. The marketing industry brings in untold sums for its clients, and the majority of ads are served on legal, appropriate sites. Sometimes, advertisers have a direct relationship with the platforms that run their adverts and commercials. But most of the time, the process is far more complex.
Almost every time you see an ad online, it’s the result of a chain of dozens of platforms and services – some competing, others working together – in an automated process that plays out in fractions of a second. Advertisers usually don’t pick the websites that show their ads. Instead, advertisers pay an “ad network” whose business is to find the most suitable audience on the most suitable site.
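To make that process concrete, below is a minimal, purely illustrative Python sketch of how a programmatic auction might resolve a single ad slot. The network names, prices and URL are invented for this example, and real exchanges (built on protocols such as OpenRTB) involve many more intermediaries, but the core mechanic is the same: the highest bid wins, and the site hosting the page gets paid.

```python
import random
from dataclasses import dataclass

# Purely illustrative toy model of a programmatic ad auction.
# All names and numbers are invented; real auctions involve dozens of
# intermediaries and resolve in fractions of a second.

@dataclass
class Bid:
    network: str   # the ad network submitting the bid
    cpm: float     # offered price per thousand impressions, in USD

def run_auction(page_url: str, bids: list[Bid]) -> Bid:
    """Pick the winning bid for one ad slot on a page.

    Note what is absent: nothing here inspects what content actually
    appears at page_url before money changes hands.
    """
    return max(bids, key=lambda b: b.cpm)

bids = [Bid(f"network-{i}", random.uniform(0.5, 4.0)) for i in range(3)]
winner = run_auction("https://example-image-host.test/page/123", bids)
print(f"{winner.network} wins at ${winner.cpm:.2f} CPM; the site gets paid")
```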
Amazon, Google and Microsoft all run ad networks of their own, but Google’s is the largest by far. The company is so dominant, in fact, that it is currently fighting allegations of operating an illegal monopoly. Google disputes this and argues it faces steep competition in the digital advertising business.
These ad networks have untold millions of websites in their inventories. When a network serves an ad, the website gets paid. But anyone can plug their website into the ad networks and start making money. That means it’s up to Google and other tech companies to do their due diligence to ensure they don’t run ads on websites that fund criminal enterprises or damage brands’ reputations.
![Getty Images Senators Marsha Blackburn (left) and Richard Blumenthal are demanding answers about an apparent lack of due diligence from tech industry ad systems (Credit: Getty Images)](https://ichef.bbci.co.uk/images/ic/480xn/p0kpwmx2.jpg.webp)
But research by Adalytics and others suggests Google and the ad tech industry have sent advertising money from clients including US senators and multinational corporations to a long list of questionable websites, in ad campaigns that add up to tens of billions of dollars, and perhaps even more. The list includes websites featuring foreign propaganda, calls for racial violence, extremist political content and pornography, as well as sites based in countries facing US trade sanctions such as Iran, Syria and Russia.
According to Arielle Garcia, chief operating officer at the digital advertising watchdog group Check My Ads, incidents like this expose one of the key problems with digital advertising – that the systems running these adverts are so impenetrably complicated that it’s difficult for anyone to figure out what’s actually going on, especially from the outside.
“That isn’t a mistake, it’s intentionally opaque,” she says. “The ad tech industry weaponises complexity.” A growing chorus of advertisers complain that the big players in the ad tech business have been providing them with less and less information, Garcia says, and that lack of transparency makes it hard to detect waste and bad practices.
A Google spokesperson says the company has strict policies about what type of content can run adverts, and it uses “cutting edge AI-driven enforcement systems” as well as teams of human reviewers to enforce those policies at scale. The spokesperson says Google disables ads from running on sites where such content is detected.
But examples like those exposed by Adalytics have raised doubts among the companies who pay for the adverts in the first place.
“We have no idea where our ads are going. No confidence at all,” says a media executive at a major consumer healthcare company whose ads appeared on ImgBB via Google and Amazon’s ad network. The executive asked for anonymity because they were not authorised to speak on the matter. “Amazon and Google are responsible for what sits within their inventory. You would think removing content like this from the supply path would be the number one priority.”
A Starbucks spokesperson says the company has a robust approach to ensure that its ads run on sites that align with its social responsibility standards. They say Starbucks ensures that its media partners undergo bi-annual audits. A Unilever spokesperson says the company uses similar guidelines and standards and plans to investigate the report’s findings. MasterCard and Nestlé did not respond to requests for comment.
“We regret that this occurred and have swiftly taken action to block these websites from showing our ads,” an Amazon spokesperson says. “We have strict policies in place against serving ads on content of this nature, and we are taking additional steps to help ensure this does not happen in the future.” A Microsoft spokesperson says the company doesn’t allow advertising on content that violates its policies, which includes user-generated content that is not sufficiently regulated or moderated. The company says it takes immediate action when it detects violations.
Google, Amazon and Microsoft are not the only companies that served ads on ImgBB. Adalytics found that ad networks run by a number of other, smaller companies also ran ads on the site.
![Getty Images Experts say big tech ad systems are sharing money with websites that host a litany of noxious content (Credit: Getty Images)](https://ichef.bbci.co.uk/images/ic/480xn/p0kpvrx5.jpg.webp)
However, Google’s dominance over the ad industry means it should share an outsized proportion of the blame, says Laura Edelson, a computer science professor who studies the digital economy at Northeastern University in the US.
“No one is more responsible for this than Google,” she says. “Sure, this stuff might be hard to fix. That’s why Google gets paid so much money. This is merely an engineering problem – the kind of problem they solve every day. Google should be held accountable. This is harming our society.” ImgBB may also be an example of a much larger problem, Edelson says. “It is very unlikely that there’s nothing else like this that’s [able to] monetise in the same way.”
‘This website is clearly designed to facilitate bad things’
Ad industry insiders and child safety experts say a site like ImgBB should have been blocked from advertising systems long before now.
ImgBB lets people upload photos anonymously, without creating an account, and provides settings that hide images from search engines – features that are perfect for criminal activity, experts say. Details about ImgBB’s ownership have been scrubbed from public website registries, and that information is absent from the website’s terms of service and privacy policy, contrary to standard practices. ImgBB did not respond to a request for comment.
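By way of illustration, hiding a page from search engines is usually done with a “noindex” signal in an HTTP header or a robots meta tag; that is the general web mechanism and an assumption here, not a verified account of ImgBB’s internals. A rough Python check for such a signal might look like this:

```python
import requests

# Illustrative only: detects the standard "noindex" signals a page can
# send to search engines. This describes the web mechanism in general,
# not ImgBB's actual implementation; the URL is a placeholder.

def is_hidden_from_search(url: str) -> bool:
    """Roughly detect whether a page asks search engines not to index it."""
    resp = requests.get(url, timeout=10)
    header = resp.headers.get("X-Robots-Tag", "").lower()
    body = resp.text.lower()
    # Crude substring checks; a real crawler would parse the HTML properly
    return "noindex" in header or ('name="robots"' in body and "noindex" in body)

print(is_hidden_from_search("https://example.com/image-page"))
```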
“Anyone who spent any time looking at this site would have eliminated it [from their ad network inventories],” says Rob Leathern, a former executive at both Google and Meta who worked on advertising safety issues. “The mechanics of [ImgBB] mean that a greater level of diligence and care should have been taken by a variety of players in the ecosystem. It’s 2025. This is not a new industry. These things shouldn’t be getting through.”
This isn’t ImgBB’s first brush with hosting depictions of sexual abuse. Since 2021, the National Center for Missing and Exploited Children (NCMEC) has sent ImgBB 27 notifications about child sexual abuse material on the platform. The true number may be higher, as data for 2024 has not yet been released. NCMEC says the website removed the content in every case. “NCMEC would not qualify this hosting provider as a significant CSAM issue based on this data point alone,” says John Sheehan, who oversees NCMEC’s operations relating to sexual crimes against children. However, “any platform that allows individuals to anonymously upload images and videos will have a high likelihood for potential for abuse”, he says.
“It’s a shady platform,” says Josh Golin, executive director of the children’s advocacy group Fairplay. “This website is clearly designed to facilitate bad things. It’s crazy to learn that there are ad placements happening on this website in the first place.”
Amazon, Google and other ad tech companies could have used any number of methods to root out ImgBB from their inventories, according to Garcia of Check My Ads. Text analysis would have flagged descriptions of adult pornography on a page. Algorithms trained to spot child sex abuse material can identify illegal content. A manual review of ImgBB’s features might also have revealed potential risks. Alternatively, Google and other companies could just dedicate an extra layer of diligence to reviewing websites that show up on NCMEC’s disclosure reports, Garcia says.
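As a sketch of one of those methods: hash matching compares each uploaded image against a database of digests of known abuse imagery. Production systems such as Microsoft’s PhotoDNA or Meta’s PDQ use perceptual hashes that survive resizing and re-encoding; the toy version below substitutes exact SHA-256 digests and an invented blocklist, so it illustrates the workflow rather than any real algorithm.

```python
import hashlib
from pathlib import Path

# Toy sketch of hash-based matching. Real systems use perceptual hashes
# (e.g. PhotoDNA, PDQ) and hash lists from clearinghouses such as NCMEC;
# the blocklist entry below is a made-up placeholder.

KNOWN_BAD_HASHES = {
    "9f2b0000000000000000000000000000placeholder",  # hypothetical digest
}

def digest(image_path: Path) -> str:
    """Exact SHA-256 of the file's bytes (perceptual in real systems)."""
    return hashlib.sha256(image_path.read_bytes()).hexdigest()

def should_block(image_path: Path) -> bool:
    """Flag an upload whose digest matches the known-abuse database."""
    return digest(image_path) in KNOWN_BAD_HASHES

sample = Path("upload.jpg")  # hypothetical upload
if sample.exists() and should_block(sample):
    print("Match: pull the page from ad inventory and report it")
```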
![Getty Images Lawmakers warn that ad networks have created a funding stream for criminal enterprises (Credit: Getty Images)](https://ichef.bbci.co.uk/images/ic/480xn/p0kpvsh1.jpg.webp)
Google and Amazon did not answer questions about whether they use any of these specific methods to vet the websites in their systems.
It’s no surprise that sites like ImgBB are slipping through, according to Matt Stoller, who’s written extensively about the tech industry in his role as director of research at the American Economic Liberties Project, which opposes corporate monopolies. The more resources ad networks spend making sure adverts only end up next to reputable content, the lower their margins will be, he says.
Stoller doesn’t mince words. “We shouldn’t be asking about how we prevent this. We should be asking who is going to be held accountable. We’re not just looking at an ad tech system, we’re looking at a crime scene. The way to fix this is handcuffs,” he says. “Someone at Google should go to jail. Someone at Amazon should go to jail.”
Brand ‘safety’
In addition to images of child sexual abuse on ImgBB, Adalytics says it found depictions of bestiality, other potentially illegal material and mountains of graphic adult pornography. Its report documented hundreds of examples of ads for multinational corporations and even the US government running alongside this type of content. The BBC independently verified a sample of Adalytics’ findings and observed ads for Fortune 500 companies running alongside adult pornography on ImgBB. The BBC did not review any child abuse material or other illegal content.
Google isn’t the only company drawing criticism. The BBC spoke to nearly a dozen marketers and advertising experts. Every one of them pointed to what they see as failings within the “brand safety” industry.
Brand safety is the industry term used to describe keeping ads away from content that can damage a company’s reputation. For many advertisers, showing up on a site like ImgBB is the worst-case scenario.
For extra assurance, some advertisers hire companies who make brand safety their entire business. The leaders in this industry, DoubleVerify and Integral Ad Science, say they use a host of artificial intelligence tools and other methods to ensure their clients’ ads don’t show up on the wrong sites. DoubleVerify, for example, says it uses computer vision algorithms and other tools to analyse the images, video, audio and text on a web page. “This meticulous visual analysis helps ensure that the visual components surrounding your ads are both suitable and consistent with your brand’s message,” according to DoubleVerify’s website.
![Getty Images Experts say Google's dominance in digital advertising gives it unparalleled control over where ads appear (Credit: Getty Images)](https://ichef.bbci.co.uk/images/ic/480xn/p0kpvswy.jpg.webp)
However, the Adalytics report and other investigations like it suggest these brand safety systems are not working. Computer vision software is highly effective at identifying naked bodies, for example, and a simple keyword search would have flagged that the pages spotted by Adalytics contained adult pornography. Yet advertising executives at companies whose ads appeared on ImgBB claimed to the BBC that both DoubleVerify and Integral Ad Science marked ImgBB as “100% safe”, including pages full of adult content.
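To show how low that bar is, here is a minimal sketch of the kind of “simple keyword search” described above. The term list and threshold are invented, and a real brand safety classifier would pair text signals with computer vision, but even a crude filter like this can catch a page whose text plainly advertises adult content.

```python
import re

# Minimal keyword-based brand safety check. The flagged terms and the
# threshold are invented stand-ins; real classifiers use far richer
# signals, including computer vision on the page's images.

FLAGGED_TERMS = {"explicit", "xxx", "nsfw"}  # illustrative stand-ins

def is_brand_safe(page_text: str, max_hits: int = 0) -> bool:
    """Mark a page unsafe if its text contains too many flagged terms."""
    words = set(re.findall(r"[a-z]+", page_text.lower()))
    return len(words & FLAGGED_TERMS) <= max_hits

page = "User gallery - xxx explicit content inside"
print("safe" if is_brand_safe(page) else "unsafe: withhold ads")
```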
“If people are putting their trust in these [brand safety] vendors, then they should be reconsidering it, because it clearly isn’t working,” Leathern says.
A DoubleVerify spokesperson says the company blocked tens of thousands of ads from appearing on ImgBB in the past 30 days. Ad impressions on ImgBB accounted for only 0.000047% of the total ads the company measured during that period, the spokesperson says, and over 80% of those ads were served on the homepage. DoubleVerify did not answer questions on the record about its methodology, or about why advertisers saw pages with adult pornography marked as safe. The company published a blogpost responding to other criticisms raised in the report.
Integral Ad Science did not answer questions for this story, but has described previous Adalytics research as “flawed” and “incomplete”.
Other industries have laws that require the kind of review process that might have flagged ImgBB. The finance and legal industries, among others, have so-called Know Your Customer laws, for example. Banks are required to scrutinise potential clients to ensure they aren’t facilitating money laundering, terrorism, or other crimes. No such laws exist for digital advertising. In fact, the industry is almost entirely unregulated in the US and many other parts of the world.
“Our experience has been that the tech industry in general has failed to take meaningful action,” says Jacques Marcoux, director of research and analytics at the Canadian Centre for Child Protection. “For this reason, our organisation, along with many others across the world, have been advocating for government regulation.”
Addressing these issues requires a layered approach that involves every stakeholder in the digital supply chain, Marcoux says. Ad networks who onboard websites should engage in much more robust review processes; advertisers should demand accountability to ensure their ads don’t appear next to harmful or illegal content; and the payment processors who handle transactions and compensation along the ad tech chain should apply more rigorous conditions and Know Your Customer practices as well, Marcoux says.
“We are not going to fix this problem without better regulation and actual, real, serious consequences for delivering ads that fund horrific companies and activities,” Edelson says. “It’s too profitable to just ignore this. It’s going to be impossible to solve without changing those incentives.”
Thomas Germain is a senior technology journalist for the BBC. He’s covered AI, privacy and the furthest reaches of internet culture for the better part of a decade. You can find him on X and TikTok @thomasgermain.