The $40 Billion Threat: How AI Transformed Fraud


In the time it takes to read this sentence, a single attacker armed with AI can launch what once required a criminal enterprise.

    Two years ago, launching a sophisticated phishing campaign required a team: someone fluent in the target’s language, a web developer to build convincing fake sites, a social engineer patient enough to research and personalize each approach. That division of labor imposed natural limits on how fast fraud could scale.

    Those limits no longer exist. What once required teams of specialists can now be accomplished by a single operator armed with the right prompts.

    Deloitte projects that AI-enabled fraud losses in the United States will climb from $12.3 billion in 2023 to $40 billion by 2027, a compound annual growth rate of 32%. The FBI reports that business email compromise alone drove nearly $3 billion in losses in 2024. And according to Hoxhunt research, AI-generated phishing emails now achieve a 54% click-through rate, compared to just 12% for traditional attempts.

    These aren’t incremental changes. They represent a fundamental shift in how fraud operates.

How AI changed the attacker’s playbook

    Before generative AI, sophisticated phishing required genuine expertise. Crafting a convincing business email meant understanding corporate communication norms. Building a believable fake website demanded web development skills. Executing a voice-based social engineering attack required an actor who could improvise convincingly.

    AI eliminated these barriers in months, not years.

    IBM security researchers found that AI can generate an effective phishing campaign with just five prompts in five minutes, work that would take human experts 16 hours. The cost to launch an AI-assisted spear phishing attack has dropped to roughly $50 per week.

    The transformation extends beyond email. Voice cloning now requires just three seconds of audio. Deepfake video costs less than two dollars to produce. Large language models generate fake e-commerce stores by the thousands, complete with product descriptions, customer reviews, and professional layouts.

    “AI technologies have given new superpowers to bad actors,” Aaron Painter, CEO of fraud prevention firm Nametag, told Cybersecurity Dive. “It’s a perfect storm.”

    The $25 million wake-up call

In early 2024, an employee at British engineering firm Arup received what appeared to be a routine request from the company’s chief financial officer. The message described an urgent, confidential transaction. Sensing something might be off, the employee requested a video call to verify it, exactly as security protocols recommend.

    The call was arranged, and multiple executives joined, including the CFO. Everyone looked authentic: facial movements matched speech patterns, voices sounded exactly right, and nothing suggested the participants were anything other than who they claimed to be.

Across 15 transactions, the employee transferred $25 million to Hong Kong bank accounts. Every person on that call was an AI-generated deepfake.

    “What happened at Arup, I would call it technology-enhanced social engineering,” Rob Greig, Arup’s global chief information officer, told the World Economic Forum. “It wasn’t even a cyberattack in the purest sense. None of our systems were compromised.”

The Arup incident wasn’t an outlier. Ferrari’s CEO was targeted with a convincing voice clone, thwarted only when an executive asked a personal verification question the scammers couldn’t answer. In each case, attackers trained their models on publicly available videos and conference recordings, part of a growing wave of executive impersonation that security teams are struggling to contain.

    Why traditional defenses fall short

    Traditional digital risk protection tools rely heavily on known patterns: blocklists of malicious domains, signature-based detection, and manual analyst review. These approaches worked reasonably well when attackers moved at human speed.
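To make that limitation concrete, here is a minimal sketch (all domains hypothetical) of the exact-match lookup at the heart of blocklisting. It catches only what has already been reported; a freshly registered variant passes untouched.

```python
# Toy illustration (hypothetical domains): exact-match blocking only
# catches infrastructure that has already been reported.

blocklist = {"acme-login.com", "acme-support.net"}  # previously reported fakes

def is_blocked(domain: str) -> bool:
    """Signature-style defense: exact lookup against known-bad domains."""
    return domain.lower() in blocklist

print(is_blocked("acme-login.com"))   # True  -- already on the list
print(is_blocked("acme-logins.com"))  # False -- one new character, zero friction
```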

    Against AI-powered threats, the mismatch is severe. The average time to detect fraud now stretches to nine hours or more, and in that window, attackers can register lookalike domains, build convincing phishing sites, deploy fake social media profiles, and harvest credentials from thousands of victims. All of this happens before the first alert fires.

    Modern attackers can generate thousands of unique phishing variants with a single prompt, each slightly different, each evading pattern-based detection. Training employees to “look for red flags” becomes increasingly futile when AI-generated content contains no spelling errors, no grammatical mistakes, and no obvious inconsistencies.
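A toy example of the mismatch, assuming a “signature” is simply a hash of a known message body (both messages below are invented): an AI paraphrase of the same lure produces a different hash and slips past, even though a crude word-overlap score still ties the two together.

```python
import hashlib

def signature(text: str) -> str:
    """Signature-style detection: hash of the exact message body."""
    return hashlib.sha256(text.encode()).hexdigest()

def word_overlap(a: str, b: str) -> float:
    """Crude similarity: Jaccard overlap of lowercase word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

known = "Your invoice is overdue. Log in to the portal to avoid suspension."
variant = "The attached invoice is past due; sign in to the portal or risk suspension."

known_signatures = {signature(known)}
print(signature(variant) in known_signatures)  # False -- the paraphrase defeats the hash
print(round(word_overlap(known, variant), 2))  # 0.41  -- shared wording still links them
```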

    Gartner identified “disinformation security” as a Top 10 Strategic Technology Trend for 2025. The category encompasses technologies organizations need to protect themselves from synthetic media, brand impersonation, and coordinated deception campaigns. According to Gartner, 50% of enterprises will invest in disinformation security solutions by 2028, up from less than 5% today.

    What distinguishes this emerging category from legacy tools? AI-native platforms can analyze billions of URLs daily, identify brand impersonation in real time, and initiate automated takedowns through direct API integrations. This compresses response times from days to hours.
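As a rough sketch of one signal such a platform might compute (the brand name, candidate domains, and threshold here are hypothetical, and real systems combine many more signals): fold common character substitutions back to plain letters, then score each new registration’s similarity to the protected brand.

```python
from difflib import SequenceMatcher

def normalize(label: str) -> str:
    """Fold common look-alike substitutions back to plain letters."""
    label = label.lower().translate(str.maketrans("0135", "oles"))
    return label.replace("rn", "m")  # 'rn' renders like 'm' in many fonts

def lookalike_score(domain: str, brand: str) -> float:
    """Similarity of the domain's first label to the brand, after normalization."""
    label = domain.split(".")[0]  # crude: ignores TLD and subdomains
    return SequenceMatcher(None, normalize(label), normalize(brand)).ratio()

BRAND = "acmebank"  # hypothetical protected brand
for candidate in ("acrnebank-login.com", "acm3bank.net", "weatherblog.org"):
    score = lookalike_score(candidate, BRAND)
    print(f"{candidate:22} score={score:.2f} flagged={score >= 0.7}")
```

Run continuously against the daily feed of new domain registrations, even a signal this simple surfaces lookalike candidates for takedown review within minutes rather than days.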

    The Bottom Line

    The $40 billion projection isn’t speculation. It’s a trajectory already visible in quarterly loss reports. AI hasn’t created new fraud categories so much as it has industrialized existing ones, making sophisticated attacks accessible to anyone with basic technical literacy.

Organizations that continue relying on human vigilance and pattern-matching defenses are betting that attackers won’t exploit the tools already available to them. The evidence from Arup and countless less-publicized incidents suggests it’s a losing bet.

    Key Takeaways

    How much will AI-powered fraud cost businesses by 2027?

    Deloitte projects AI-enabled fraud losses in the U.S. will reach $40 billion by 2027, up from $12.3 billion in 2023. That represents a compound annual growth rate of 32%.

    Why is AI-generated phishing more effective than traditional phishing?

    AI-generated phishing emails achieve a 54% click-through rate compared to just 12% for generic attempts. AI eliminates telltale signs like spelling errors while enabling personalized messages that reference real projects and colleagues.

    What happened in the Arup deepfake fraud case?

    In early 2024, an Arup employee transferred $25 million after attending a video call where the CFO and multiple executives were all AI-generated deepfakes. The employee had requested the call specifically to verify an unusual request.

    How quickly can attackers launch sophisticated fraud campaigns?

IBM researchers found that AI can generate an effective phishing campaign with just five prompts in five minutes. That work previously took human experts 16 hours. The cost has dropped to approximately $50 per week.

    What is disinformation security?

    Disinformation security is an emerging category identified by Gartner as a Top 10 Strategic Technology Trend for 2025. It encompasses technologies protecting organizations from synthetic media, brand impersonation, and coordinated deception campaigns.

    See the threats targeting your brand right now

    Get a customized assessment showing active impersonation, phishing infrastructure, and exposed credentials specific to your organization. No commitment required.