AI giving scammers new tricks, enhancing old ones

AI has made all sorts of people more efficient, productive and effective, greatly increasing both their capacities and the scale at which they can be applied. While this includes professionals like accountants, lawyers and finance executives, unfortunately it also includes scammers, fraudsters and cyber-criminals. 

Processing Content

Data from invoice processing solutions provider Stampli found that these bad actors have been applying artificial intelligence to attacks far more sophisticated and subtle than before, though certain patterns can be deduced. By examining tens of billions of invoices processed through its platform, Stampli refined six emerging AI powered invoice fraud attack vectors targeting finance teams. The analysis shows attackers are increasingly weaponizing generative AI to enact fraud that looks and sounds legitimate, which serves to circumvent traditional controls and exploit moments when finance teams are under pressure. 

One tactic uses AI-generated synthetic entities posting as vendors, which come with convincing-looking AI-generated websites and other digital footprint signs. After establishing themselves as a totally-real-business-we-swear, they then submit AI-generated invoices as a newly onboarded vendor. The target, perhaps stressed out and in a rush, decides to approve it without checking to see if they even work with that vendor. Teams need to control vendor onboarding and flag first-time invoices for additional review. 

robot AI scammer fraud
Lisa Haney - stock.adobe.com

Even if the vendor is real, though, fraudsters can still strike. Another emerging attack vector is AI-generated phishing attempts that involve stealing legitimate vendor credentials then sending fraudulent invoices or changing payment details. Teams need to verify payment changes through pre-approved, out-of-band channels and monitor deviations from vendor patterns. 

Other times the scammers might and instead decide to clone invoice formatting, metadata and line items from a legitimate vendor to directly submit fake invoices. In this case, teams should flag invoices that deviate from a vendor's historical behavior and enforce controls on unusual invoice activity. 

Scammers might also decide to skip the invoices entirely and simply collect information via an AI-generated link that mimics real vendor portals, which tricks teams into entering their legitimate credentials or payment details for later use. Teams should require out-of-band verification and refuse to enter vendor information through unsolicited links. 

Considering these impersonation scams all involve written documents, one's instinct might be to call someone on the phone or schedule a video call to confirm they're legitimate. While that may have worked well a few years ago, today it is a poor safety measure. Another emerging vector is deepfaked vendor requests where people use AI to generate the voice and image of actual specific people in order to push urgent fraudulent requests. Now teams should enforce approval workflows and vendor-change controls that cannot be bypassed via email or phone. 

If an organization implements all these recommended defensive measures, they will be less likely to be fooled by an impersonator. Unfortunately, though, at least one emerging vector more resembles old school Trojan attacks that target systems versus people. Specifically, fraudsters are beginning to embed invisible text into PDFs sent to the target which, knowing their target uses AI, manipulates their system into approving fake invoices or revealing sensitive data. Teams need to use softwares that do not execute invoice-embedded prompts and flag suspicious document generation.

"AI has dramatically lowered the barrier to committing sophisticated financial fraud," said Eyal Feldman, Co-founder and CEO of Stampli. "We're seeing attacks that  convincingly mimic real vendors, real employees, and real documents. Finance teams can no longer rely on manual vigilance alone. That's why Stampli brings automated fraud defenses directly into their workflows." 

While organizations may not necessarily be thinking of these specific attack vectors, they are aware that, to remain secure, they must adapt to the ways AI has changed the cybersecurity landscape. A recent survey from Big Four Firm KPMG revealed that teams are both seeing more attacks and plan to spend more to defend against them. 

The survey of 310 cybersecurity leaders found that 83% reported a rise in cyberattacks over the last 12 months, with the three most common being phishing (51%), denial of service attacks (49%) and ransomware (39%). Behind them all, though, is AI. Leaders point to AI as the primary culprit, citing social engineering attacks, polymorphic malware, automated attacks, deepfakes and adversarial attacks on other AI models. The only non-AI item named was "cyber-actors targeting cyber-physical systems for disruption and extortion." 

Leaders, however, are not taking this laying down. Increasingly, cybersecurity experts are seeking help from partners like their IT services provider or outside consulting firms; only 1% are not reaching out to partners at all on cybersecurity matters. 

They're also ramping up spending. In particular, 42% of leaders plan to make investments in identity and access management solutions to counteract bots, and this will likely continue for the next two to three years. They're also accelerating new hires: 98% of leaders plan to grow their headcount at least somewhat; of these, 19% plan to do so by 20% or more. They are also turning to managed services, with 45% having engaged with providers over the last 12 months. 

Overall, 98% have increased their cybersecurity budget in the past 12 months and 99% anticipate an increase in cybersecurity budget over the next two to three years.

For reprint and licensing requests for this article, click here.
Technology Practice management Artificial intelligence Cyber security Fraud
MORE FROM ACCOUNTING TODAY