While artificial intelligence governance and controls are very important, organizations should not apply them in a one-size-fits-all manner, especially when distinguishing between what is a matter of governance and what is a matter of technology.
This point was raised several times by speakers at a recent Grant Thornton webcast hosted by Vikrant Rai, managing director, and Alex Hinkebein, senior manager, both in risk advisory, internal audit and cybersecurity.
Rai said that AI adoption has been very fast, and adoption of agentic AI even more so, driving dramatic change throughout the business world. But with this change have come both new risks and old risks adapted for the AI era. Going back to the way things were is not an option, according to Rai, so organizations need to familiarize themselves with the potential risks the technology brings.

He pointed to a number of highly public AI failures: deepfake impersonations of corporate executives, AI-drafted legal briefs citing fabricated cases, an AI mistakenly deleting an entire production database, AI hiring tools accused of furthering discrimination, and chatbot logs accidentally leaked to the public. Such cases are only a fraction of the ways AI can produce decidedly sub-optimal outcomes, and all demonstrate the importance of controls.
"The rise of agents is here to stay, but the way we do that needs to be in a way that takes care of the AI principles whether safety, security and ensuring responsible AI systems are being developed as we move into the future," Rai said.
This has led to a number of new regulations, frameworks and standards centered around AI. Hinkebein noted that, as of now, 38 states have passed laws regulating AI in some form or another, such as New York's RAISE Act, California's Generative AI Training Data Transparency Act, Colorado's AI Act, Utah's AI Policy Act and Texas's Responsible AI Governance Act. While the specifics can vary greatly, all of them are concerned with enacting responsible AI practices, as well as requiring organizations to explain and demonstrate them.
"When it comes down to it, it is all about how we manage governance and be able to explain models and show the appropriate security controls are in place … A lot of different security controls [are about] demonstrating how organizations are using them within their models," she said.
Beyond actual laws and regulations, there are also frameworks and standards for managing AI risks, such as the NIST AI RMF, ISO 42001, the OECD AI Principles and more. While they all concern AI, each emphasizes different aspects, serves different purposes and accounts for different risks and users. Having a framework in place is just as important as tailoring it to the needs and risks of the specific organization.
"You will see a combo of standards and frameworks," Rai said. "You don't have to use all of them, but it is important to understand what regulations apply to you and what principles and risks you want to deal with and manage more effectively. What framework best aligns with your requirements?"
While organizations should generally tailor frameworks to their individual needs, this becomes even more important with AI, which comes with risks against which traditional governance methods may be less effective.
While some governance risks are matters of policy, there are many technical risks that must be accounted for as well. For instance, the risk of an AI making things up out of whole cloth, known as hallucination, is rooted in the technical design of the model itself. On the other hand, an agent that suddenly starts asking for permissions to systems and platforms it absolutely should not have looks more like an AI-flavored variant of the traditional access-management problem of who can access what, and under which circumstances. It is important to know the difference.
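To make the access-management side of that distinction concrete, below is a minimal sketch of what an allowlist-style guardrail for agent permission requests might look like. All names here, such as ALLOWED_SCOPES and PermissionRequest, are hypothetical and not drawn from any particular platform.

```python
# A minimal sketch of an allowlist-style guardrail for agent permission
# requests. Every name here is hypothetical and illustrative.
from dataclasses import dataclass

# Scopes this agent was approved for when its use case was vetted.
ALLOWED_SCOPES = {"crm.read", "reports.read"}

@dataclass
class PermissionRequest:
    agent_id: str
    scope: str          # e.g. "prod_db.write"
    justification: str

def review_request(req: PermissionRequest) -> str:
    """Grant in-scope requests; escalate everything else to a human.

    An out-of-scope ask is never silently granted: it is logged and
    routed for review, which is exactly the signal auditors look for.
    """
    if req.scope in ALLOWED_SCOPES:
        return "approved"
    print(f"ESCALATE: {req.agent_id} requested {req.scope}: {req.justification}")
    return "escalated"

if __name__ == "__main__":
    print(review_request(PermissionRequest("agent-42", "crm.read", "sync contacts")))
    print(review_request(PermissionRequest("agent-42", "prod_db.write", "cleanup job")))
```

The point is not the code itself but the control pattern: in-scope requests proceed automatically, while out-of-scope asks leave an audit trail and require a human decision.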
"We will likely see agents making decisions," Rai continued. "Why governance is important is to manage that. How we manage risk with AI in decision-making capacities is in how AI platforms operate. There's principles about fairness and transparency and safety but [it is] important to make sure that, as these systems are designed and integrated into traditional applications to make them faster and better, we make sure it is not hallucinating, we want to make sure if an end user does a prompt injection it stays the course."
He discussed the difference between what he called functional risks, which concern governance, strategy and business value, and technical risks, which concern models, infrastructure and security. Functional risks might include the absence of enterprise-wide AI policies, AI initiatives launched without ROI modeling, rapid deployment risk, or the use of unvetted AI tools. Technical risks might include adversarial manipulation of training data, model drift and deterioration, prompt injection attacks, or insufficient technical controls. While professionals have typically left such risks to the IT department, he said it is important for auditors to understand them as well, not because they will be digging into the AI's code themselves but because it helps clarify the nature of the risks and how they can play out if left unaddressed.
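As a rough illustration, here is a minimal sketch of how that functional/technical split might be encoded as a simple risk register. The categories and example risks come from the discussion above; the data structure and names are purely illustrative, not a prescribed audit tool.

```python
# A minimal sketch of a risk register using the functional/technical split.
# Categories and example risks come from the webcast; the structure and
# names are illustrative only.
from enum import Enum

class RiskCategory(Enum):
    FUNCTIONAL = "governance, strategy and business value"
    TECHNICAL = "models, infrastructure and security"

RISK_REGISTER = {
    "Absence of enterprise-wide AI policies": RiskCategory.FUNCTIONAL,
    "AI initiatives launched without ROI modeling": RiskCategory.FUNCTIONAL,
    "Use of unvetted AI tools": RiskCategory.FUNCTIONAL,
    "Adversarial manipulation of training data": RiskCategory.TECHNICAL,
    "Model drift and deterioration": RiskCategory.TECHNICAL,
    "Prompt injection attacks": RiskCategory.TECHNICAL,
}

def risks_in(category: RiskCategory) -> list[str]:
    """List every registered risk that falls into one category."""
    return [name for name, cat in RISK_REGISTER.items() if cat == category]

if __name__ == "__main__":
    for category in RiskCategory:
        print(f"{category.name}: {risks_in(category)}")
```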
"We talk about model bias or prompt injections or back doors, these are all concepts that need to be understood and, from an auditor standpoint, it's important to see how these are broken down. We need to keep it simple but also categorize these as functional and technical risks. Once we break it down, it can be easier to put them in a box. 'Oh, we're talking about governance gaps, or business value, or unvetted use cases,'" he said.
This all depends, though, on whether the organization is thinking about AI risk at all. Hinkebein said it is important for organizations to integrate AI risk into their broader risk framework so that everyone is aware of threats in the AI landscape. Selecting AI safety frameworks and standards that fit the organization's specific needs can help a great deal here; some are more focused on technical risks and others on governance risks. Like Rai, she said professionals need to be aware of both, and that internal auditors in particular are well positioned to manage this.
"Internal audit has always been well positioned to navigate new changes and AI is no different. We are here to provide that independent perspective that reveals potential risks but also allows for innovation with having those appropriate guardrails," she said.