While organizations are pouring incredible amounts of money into AI, a lack of strong governance and coordination among leaders has blunted the effectiveness of these investments, to the point where only a minority are realizing any tangible financial gains.
Recent data from Top 10 firm Grant Thornton bears this out.
Despite this awareness, however, 75% of organizations still approved major AI investments, even though 48% admitted to not setting AI governance expectations and 46% have not integrated AI risk and opportunity into ongoing board or committee oversight. Similarly, while nearly three in four organizations are currently piloting, scaling or running autonomous AI, only one in five has actually tested a response plan for when something goes wrong.

Part of this might be due to a lack of unified vision about AI governance. GT found that, within the C-suite, 39% of CIOs and CTOs said their workforce is fully ready to adopt AI, compared to only 7% of COOs. Further, the study found that 54% of COOs are concerned about regulatory and compliance uncertainty related to agentic AI, versus 20% of CTOs.
"This is no longer a disagreement about whether the workforce is ready. It is a disagreement about how exposed the organization is to regulatory risk from autonomous systems," said the report. "When two leaders who share accountability for AI deployment disagree by this margin on regulatory exposure, the organization cannot produce a coherent account of its risk posture. As agentic AI deployment scales, that gap in alignment around accountability and risk tolerance makes it harder to maintain control, harder to respond to regulatory scrutiny and harder to prove the organization has its autonomous systems governed."
GT is not the only one to find hesitation around AI governance: a recent Zuora survey uncovered similar doubts among finance leaders.
"In finance, that level of doubt doesn't just slow adoption, it helps explain why AI investment isn't yet translating into measurable impact," said the report.
This is part of a wider disillusionment about AI: Zuora found that 87% of respondents said there were gaps between AI promise and reality in finance. Only 28% of organizations have seen measurable financial gains from their AI investments, compared to 67% who have not seen any at all.
Indeed, far from making money, the majority of organizations have yet to see any return at all on their AI spending.
Part of why AI governance and oversight might be difficult is that leaders may not be entirely aware of what is happening within their own organizations. Beyond the differing perceptions between different parts of the C-suite, the GT poll also found deep perceptual rifts between the top and bottom layers of the organizational hierarchy.
GT said its polling data suggests that the people closest to AI in daily operations, the ones responsible for translating leadership's ambitions into actual workflows, are also the ones with the least support in doing so. Combined, frontline employees (37%) and middle managers (30%) represent roughly two-thirds of where organizations say support is most needed. In contrast, senior leadership and functional leadership are each in the single digits, at 8% and 9%, respectively.
"The people closest to AI in daily operations need the most support to implement it, yet they are getting the least. Middle managers are diminishing in numbers while the workload for those remaining has accelerated rapidly. The layer of the organization most responsible for making AI work in practice is simultaneously being thinned and overloaded," said the GT report.
At the same time, leadership may also be unaware of just how much, how little, or what kinds of AI their frontline staff are using. Digital adoption solutions provider WalkMe found in its most recent survey a stark gap between how much AI use leaders believe is happening and how much actually is.
"From the outside, AI looks like it is being used. From the inside, it's being avoided," said the report.
Alternatively, workers might still be using AI, just not the company's AI. The WalkMe report also found widespread use of "shadow AI": 45% of workers used unapproved AI tools in the past 30 days, and 36% used them with confidential company, customer or employee data. The report said they are not doing this in a fit of pique but, rather, in response to the inadequacy of their company's own solutions: 26% said that with better guardrails to ensure the AI follows company rules and policies, they would find the approved tools much more effective.
"They're not asking to go rogue. They're asking for approved tools that actually work within the rules they're supposed to follow," said WalkMe's report.
The problem, according to the poll, is that workers do not trust the tools to do this. Only 12% said they were fully confident that their AI tools understood the specific context of their work. When they abandoned the use of AI in a task, 30% said it was because the AI didn't really understand what they were trying to do, 29% said they lacked guidance on the task, and 40% said different tools gave them conflicting results.
With this in mind, the survey found 55% of respondents saying they only trust AI with simple, non-critical tasks, versus 9% who trust it for high-impact work. Their leaders, in contrast, are very confident: 61% of executives said they trust their AI tools enough for complex work, a gap of 52 percentage points.
This is just one of many large gulfs between frontline workers' experience of AI and that of their leaders. When it came to the overall efficacy of their tools, only 21% of frontline workers said the tools were adequate for their tasks, compared to 88% of executives. Similarly, 29% of frontline workers said AI made them more productive, versus 88% of executives. And while only 29% of frontline workers said they received sufficient AI training, 91% of executives said they did.
When it came to overall satisfaction, 40% of frontline workers were satisfied with AI versus 81% of executives. WalkMe said the two sides are almost describing two entirely different organizations.
There is a possibility this represents growing pains more than anything else, as organizations are still in the process of adjusting to AI. The Grant Thornton report noted that, in contrast to the majority who said they were not confident in their ability to pass an independent audit, 75% of those who have fully implemented AI were very confident. Such organizations were also the most likely to report revenue growth (58% vs. 15% in the pilot stage), accelerated innovation (59% vs. 24%), increased efficiency (81% vs. 58%) and higher-quality outputs (64% vs. 35%).
"The pattern is what matters: fully integrated organizations are not excelling in one area and lagging in others," said the GT report. "They are outperforming across the board. That consistency is the signature of governance as a performance system. These organizations built the infrastructure to make AI provable and defensible across everything it touches, and the results followed."