AI-guided impersonation attacks have become both more numerous and more sophisticated as the technology improves, and will likely not abate anytime soon. This is according to a series of polls that point to the rise of deepfakes and other tactics to craft extremely specific and personalized social engineering attacks.
For example, recent data from cybersecurity solutions provider Ironscales shows which roles attackers are most likely to single out.
Those working in finance functions were seen as most at risk of being targeted by such attacks. The poll found 50% of respondents rated them as a priority target with high concern about readiness, followed by IT personnel (46.9%) and HR employees (38.3%).

These attacks take many different forms, and none were seen as abating; the only difference was how quickly each was growing.
At the top was the use of AI to create highly personalized, targeted attacks against employees, with 39.1% of respondents saying such incidents have increased significantly or moderately. Similarly, 23.6% reported a significant or moderate increase in attackers imitating vendors, 32.7% in the use of deepfake audio to trick people into taking malicious actions, and 26.1% in the use of deepfake videos to join meetings. Good old-fashioned text also remains relevant, with 31.2% reporting a similar rise in false or misleading reports about their company on social media platforms.
Leaders also cited other trust-undermining attacks like credential theft, hosting malicious files on sharing platforms, installing infostealers on endpoints, and even trying to get hired in order to gain access to privileged data.
Years ago, such attacks were often crude, full of outrageous claims written in bad grammar and riddled with typos.
No more. Today's attacks, guided by generative AI, focus on imitating trusted sources, like an email from Google asking you to confirm your identity. The Ironscales poll found it is becoming increasingly difficult to determine what is and is not real now that AI can easily generate authentic-looking communications. The poll found a large proportion of respondents either agreeing or strongly agreeing that it is getting harder to distinguish truth from fiction on social media (60.9%), detect phishing emails (59.4%), detect fraudulent requests via phone call (57.1%), know if a job applicant is who they say they are (57%), or keep threat actors out of online meetings (54.4%).
Deepfakes were of particular concern among leaders. When asked which specific types of social engineering and impersonation attacks worried them most over the next 12 months, respondents rated deepfake-driven attacks highest at 19.5%. This was followed by simply "impersonation" (18.8%), which is often aided by deepfakes, then phishing (13.3%), which may or may not involve deepfakes, then "offensive AI" (10.9%), the standalone category of "deepfakes" (8.6%) and, finally, business email compromise (7%).
This lines up with analyses from other security researchers who have tracked the growing use of deepfakes in attacks.
This also echoes the World Economic Forum's recent Global Cybersecurity Outlook 2026 report.
Similar to the Ironscales data, leaders believe the threat is growing. The WEF poll found 87% of leaders saying the risk of AI-related vulnerabilities in general has increased over the past year, as has the risk of cyber-enabled fraud and phishing (which often involve AI), with 77% reporting an increase. As one might imagine, the chief threat named by CEOs went from ransomware attacks in 2025 to cyber-enabled fraud and phishing in 2026.
CISOs, on the other hand, named ransomware attacks as the chief threat in both years. They have good reason to be wary.
Still, according to the WEF, leaders are not taking these risks lying down. Within just one year there has been a dramatic increase in awareness of AI threats. For example, while in 2025 only 37% of respondents said they had a process in place to assess the security of AI tools before deploying them, in the most recent poll that proportion grew to 64%.
The majority (77%) are also deploying AI-enabled tools to enhance their cybersecurity. Most common, at 52%, is using them for phishing and email threat detection, followed by AI to detect and respond to intrusions or anomalies (46%) and to automate security operations (43%).
"There are reasons for optimism," the report concluded. "Organizations that embed resilience into leadership agendas, proactively manage supply chain and AI risks, and engage their broader ecosystems are better positioned to withstand shocks and adapt to uncertainty. The shift toward intelligence-driven collaboration, scenario-based testing and regulatory harmonization signals a maturing approach to collective defense."





