Tuesday, October 22, 2024

Anti-Bot Services Help Cybercrooks Bypass Google ‘Red Page’

Cybercriminals have found a new way to get around what has been an effective deterrent to phishing attacks, with novel anti-bot services sold on the Dark Web that allow them to bypass the protective “Red Page” warning in Google Chrome that alerts users to potential fraud.

The anti-bot services aim to prevent security crawlers from identifying phishing pages and blocklisting them by filtering out cybersecurity bots and disguising phishing pages from Google scanners, according to new research published today by SlashNext.

They do this by rendering ineffective the Red Page, a feature of Google Safe Browsing — which itself is a feature of Chromium-based browsers and other Google services — that aims to protect users from harmful websites by warning them of potential dangers, such as phishing attempts. The page is so named because it is displayed in red and warns that the site someone is navigating to may be deceptive, advising them to avoid it.

The warning can “severely” limit “the potential success of phishing attacks,” according to the post, presenting “a massive hurdle” to threat campaigns. That’s because these campaigns rely on high click-through rates, which drop significantly when Google’s detection flags a phishing page and adds it to a blocklist.

Now various anti-bot services found on the Dark Web, such as Otus Anti-Bot, Remove Red, and Limitless Anti-Bot, “threaten to undermine this line of defense, potentially exposing more users to sophisticated phishing attempts,” according to the post.

How Anti-Bot Services Work

Though each service has its own unique features, they are all based on a combination of several techniques that allow malicious content to bypass Google’s Red Page feature. Most rely on bot detection mechanisms that analyze user-agent strings and IP addresses to filter known security bot traffic that would otherwise be blocked, according to SlashNext.

“Public lists of cybersecurity crawlers are widely available (for example, Shodan), making it easy to filter known security bot traffic,” according to the post. “Once an IP address or user-agent is flagged as a security crawler, it is blocked, ensuring the page remains accessible to real users but hidden from cybersecurity entities.”
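The filtering logic described above can be sketched in a few lines. This is an illustrative model only, not code from any of the named services; the crawler names, IP address, and page labels are hypothetical placeholders:

```python
# Hypothetical model of user-agent / IP filtering: known scanners get
# benign content, everyone else gets the lure. Entries are placeholders.
KNOWN_CRAWLER_AGENTS = ("Googlebot", "Google-Safety", "VirusTotal")
KNOWN_CRAWLER_IPS = {"66.249.66.1"}  # example published crawler address

def select_page(user_agent: str, ip: str) -> str:
    """Decide which page a visitor is served based on its fingerprint."""
    if ip in KNOWN_CRAWLER_IPS or any(bot in user_agent for bot in KNOWN_CRAWLER_AGENTS):
        return "benign_page"    # scanner sees harmless content
    return "phishing_page"      # ordinary browser traffic sees the lure
```

Because the decision rests entirely on self-declared user-agent strings and published crawler addresses, scanners that rotate residential IPs and spoof browser user-agents defeat this check.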

The services also use cloaking techniques such as context-switching or JavaScript obfuscation to serve different content based on the visitor’s profile. These techniques effectively redirect security crawlers to benign content while directing a user to a phishing page.

Another common feature of the anti-bot services is to introduce CAPTCHA or challenge pages to filter out automated scanners that typically would analyze a webpage for malicious content. “Since most bots cannot solve CAPTCHAs, this technique effectively blocks them while allowing real users through,” according to the post.

Some anti-bot services might even introduce a time delay, which further confuses security bots by making them “time out” before they can scan the page and thus warn users of a potential security threat.

They also can bypass the Google Red Page by delivering region-specific content and blocking foreign traffic, according to SlashNext. For example, if a phishing campaign is targeting a Korean bank, the service might allow only Korean traffic to visit the site while blocking foreign IP addresses, the researchers noted. Moreover, these methods can get extremely specific in terms of geography, even narrowing campaigns down to the city level, which would prevent international cybersecurity services from detecting the page entirely, according to the post.
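The geo-fencing behavior, down to the city-level scoping SlashNext describes, reduces to a set-membership check. The country and city values here are illustrative placeholders:

```python
# Sketch of city-level geo-fencing: only in-scope visitors reach the lure.
# The allowed locations are hypothetical examples.
ALLOWED_LOCATIONS = {("KR", "Seoul"), ("KR", "Busan")}

def gate(country: str, city: str) -> str:
    """Serve the lure only to traffic geolocated inside the campaign scope."""
    if (country, city) in ALLOWED_LOCATIONS:
        return "phishing_page"   # targeted visitor sees the lure
    return "blocked"             # foreign or out-of-scope traffic is dropped
```

A crawler scanning from outside the targeted cities never sees the malicious content, which is why such pages can evade international scanning services entirely.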

Not Completely Foolproof

While these anti-bot services can significantly blunt the effectiveness of Google's Red Page, they do have limitations, the researchers noted. The malicious services work best in less sophisticated phishing campaigns because they rely on identifying and blocking known crawlers by their user-agent strings, where many security vendors declare their bots and crawlers.

“This allows cybercriminals to filter out bot traffic, prolonging the lifespan of phishing campaigns,” according to the post. In more sophisticated phishing operations, however, manual analysis will eventually detect the page, leading to its inclusion on blocklists.

Still, anything that limits end users' ability to detect phishing is a threat to the overall security not just of individuals but also of enterprises. That's because, despite being one of the oldest forms of cybercrime, phishing remains one of the primary ways attackers gain initial entry to corporate networks to carry out other malicious activities, such as ransomware attacks.

Moreover, the rising availability of phishing kits that make it easy for attackers to create campaigns, the growing sophistication of phishing tactics, and now the emergence of anti-bot services all make detection by individuals and defenders more complex.

The best defense against the use of anti-bot services to bypass Google's Red Page is to use security platforms that can detect threats in real time across email, mobile, and messaging apps with as much accuracy as possible, according to SlashNext. The aforementioned manual analysis of phishing pages and the subsequent addition of malicious sites to blocklists also can prevent these services from being effective.