Sunday, January 5, 2025

New Gmail, Outlook, Apple Mail Warning—2025 Hacking Nightmare Is Coming True

Forget everything you thought you knew about staying safe online. No more telltale signs, no more clumsy pretense, no more laughable promises. Imagine if the next email that's clearly from your friend, family member or colleague is actually fake, but it's so good you simply cannot tell.

This is the stuff of security nightmares and it's already coming true; it will shape the new threat landscape. "AI is giving cybercriminals the ability to easily create more personalized and convincing emails and messages that look like they're from trusted sources," McAfee warned ahead of 2025. "These types of attacks are expected to grow in sophistication and frequency." And for Gmail, Outlook, Apple Mail and the other leading platforms, the defenses are not yet in place to stop this.

And so with 2025 barely a few days old, here's the first news story of the year reporting exactly that. According to the Financial Times, "an influx of hyper-personalized phishing scams generated by artificial intelligence bots" is hitting inboxes. These attacks are already a security nightmare and will only get worse. The newspaper says major companies including eBay now warn "of the rise of fraudulent emails containing personal details probably obtained through AI analysis of online profiles."

Check Point warned this would happen in 2025: “Cyber criminals are expected to leverage artificial intelligence to craft highly targeted phishing campaigns and adapt malware in real-time to evade traditional detection mechanisms. Security teams will rely on AI-powered tools… but adversaries will respond with increasingly sophisticated, AI-driven phishing and deepfake campaigns.”

“AI bots can quickly ingest large quantities of data about the tone and style of a company or individual and replicate these features to craft a convincing scam,” says The FT of these latest attacks. “They can also scrape a victim’s online presence and social media activity to determine what topics they may be most likely to respond to—helping hackers generate bespoke phishing scams at scale.”

McAfee's warning highlights enhanced phishing: the lures are the same, but the presentation is much improved. As such, the defense when you "receive an email that looks identical to one from your bank, asking you to verify your account details" is to ensure the usual security hygiene factors are in place: 2FA, strong and unique passwords (or better yet, passkeys), and never clicking links.
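That last rule is the one worth automating for yourself. As a minimal sketch of the idea (not a production filter, and nothing any vendor ships), the Python snippet below extracts URLs from a message and flags any whose hostname isn't on a personal allowlist; the TRUSTED_DOMAINS set and the sample message are illustrative assumptions.

```python
# Illustrative sketch of the "never click links" rule: flag any URL whose
# hostname is not an exact match or subdomain of a domain you already trust.
import re
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"mybank.com", "ebay.com", "google.com"}  # hypothetical allowlist

URL_PATTERN = re.compile(r"https?://[^\s\"'>]+")

def flag_untrusted_links(body: str) -> list[str]:
    """Return URLs in the message body whose host is not on the allowlist."""
    flagged = []
    for url in URL_PATTERN.findall(body):
        host = (urlparse(url).hostname or "").lower()
        # Lookalikes such as mybank-secure-login.com fail both checks,
        # so they get flagged for manual navigation instead of a click.
        if not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
            flagged.append(url)
    return flagged

sample = "Please verify your account at https://mybank-secure-login.com/verify today."
print(flag_untrusted_links(sample))  # ['https://mybank-secure-login.com/verify']
```

The safer habit the sketch encodes: when a link is flagged, don't click it at all; type the bank's address into your browser yourself and log in there.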

But new phishing lures, especially in the corporate world, might just be seeking information, trusted access elsewhere within the enterprise, or a way to kickstart a larger, more complex fraud to divert funds or trick an exec into giving their finance team the nod to okay a transaction. Check Point says the rapid advances in AI now give attackers "the ability to write a perfect phishing email."

eBay’s cyber crime security researcher Nadezda Demidova told The FT that “the availability of generative AI tools lowers the entry threshold for advanced cyber crime… We’ve witnessed a growth in the volume of all kinds of cyber attacks,” describing the latest scams as “polished and closely targeted.”

ESET’s Jake Moore agrees. “Social engineering,” he says, “has an impressive hold over people due to human interaction but now as AI can apply the same tactics from a technological perspective, it is becoming harder to mitigate unless people really start to think about reducing what they post online.”

Such is the fear of these attacks that the FBI issued a specific advisory last month: "Generative AI takes what it has learned from examples input by a user and synthesizes something entirely new based on that information. These tools assist with content creation and can correct for human errors that might otherwise serve as warning signs of fraud… Synthetic content is not inherently illegal; however, synthetic content can be used to facilitate crimes, such as fraud and extortion."

“Ultimately,” Moore told me, “whether AI has enhanced an attack or not, we need to remind people about these increasingly more sophisticated attacks and how to think twice before transferring money or divulging personal information when requested – however believable the request may seem.”

"Phishing scams generated using AI may also be more likely to bypass companies' email filters and cyber security training," The FT says. And with human error still the key factor in most compromises, such a convincing lure at stage one is a security nightmare. Once that first email lands, the follow-up messages in the thread are likely genuine, and it's unlikely anyone will check back to the original source. The circle of trust has been broken.

"AI is transforming how the Gmail team protects billions of inboxes," Google says, with new "ground-breaking AI models that significantly strengthened Gmail cyber-defenses [to] spot patterns and respond rapidly." But AI can break such patterns, making every email unique and specifically avoiding the step-and-repeat telltales of the past, at least for the most sophisticated campaigns.
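To see why uniqueness is so disruptive to pattern-based filtering, consider a toy near-duplicate check. The sketch below is illustrative only (Gmail's real models are not public and are far more sophisticated): it measures word-shingle overlap between messages, so a templated copy of a known lure scores high and gets caught, while a bespoke AI rewrite shares almost nothing with the template and slips past. The threshold-free scoring and the sample messages are assumptions for the demo.

```python
# Toy "step-and-repeat" detector: near-duplicate scoring via the Jaccard
# similarity of k-word shingles. Bulk campaigns that reuse a template score
# high against a known lure; a unique, personalized rewrite scores near zero.
def shingles(text: str, k: int = 3) -> set[tuple[str, ...]]:
    """Return the set of k-word shingles in a message."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: shared shingles over total shingles."""
    return len(a & b) / len(a | b) if a | b else 0.0

known_lure    = "Your account has been suspended. Click here to verify your details now."
template_copy = "Your account has been suspended. Click here to verify your info now."
ai_rewrite    = "Hi Sam, following up on Tuesday's invoice. Can you confirm the payment link?"

base = shingles(known_lure)
print(jaccard(base, shingles(template_copy)))  # high overlap: flagged as a repeat
print(jaccard(base, shingles(ai_rewrite)))     # near zero: the pattern is gone
```

That second score is the whole problem in miniature: when every lure is generated fresh for one recipient, there is no repeated pattern left for this class of defense to match.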

And this will only get worse. “AI has increased the power and simplicity for cybercriminals to scale up their attacks,” Moore warns. “Present phishing emails are fed into algorithms and analyzed but when such emails sound and feel genuine, they go under both the human and technological radars.”
