AI governance more about humans than bots

Questions of how accounting firms should use AI almost inevitably call forth further questions of how it should be governed so as not to become the latest horror story of AI gone off the rails. Such questions are especially germane to this year's Best Firms for Tech, all of which have deeply incorporated AI into their practices, sometimes building entire workflows around it.


Among such firms, over and over again, there is a single animating principle behind their approach to AI oversight: the human is in control. AI is never used as a substitute for human judgment, and it is never given anything remotely resembling authority. CLA, a Top 20 Firm, is but one example. 

"CLA's AI oversight model is grounded in the principle that AI augments professional judgment; it does not replace it," said James Watson, CLA's chief solutions officer. "Our governance approach is closely aligned with existing data governance and quality standards, ensuring responsible use across the firm."  

Human in the loop (Photo: StockPhotoPro - stock.adobe.com)

This broad ethos finds expression in many specific ways. For one, human validation is a must. At minimum, it is a strongly encouraged practice inculcated in training and enforced by management. Much more commonly, it is a fundamental part of the workflow itself, as with Top 50 Firm Crete Professionals Alliance, where tasks are not considered complete until this all-important step is performed.

"Verification is enforced both by policy and in our tools through the review experience itself," said Tucker Haas, Crete's chief technology officer. "We have a written AI policy that is actively enforced by management, and we design our products so that human verification is a required step in the workflow, not an optional one."

It also means keeping humans accountable. While some may be tempted to blame the AI for mistakes, many firms stress that the buck ultimately stops not with the bot but with the human who directed it. For Iowa-based MJD Advisors, this comes through in both formal policy and informal culture. 

"One of our founding principles is that the team is not permitted to blame automation for an error, which extends to the use of AI," said CEO Mike DeKock. "This is embedded in our quality control system, and employees acknowledge that expectation upon hire and on an annual basis."

Because humans remain so central to the process, AI governance and oversight at these firms always involves training those humans not just in how to use the technology, but in how to do so responsibly and safely. Sikich, a Top 50 Firm, goes further, requiring continuous AI education for everyone rather than a one-off session.

"All employees are required to complete AI training, ensuring that everyone who interacts with AI-powered tools understands both their potential and their limitations," said Scott Sanders, chief information officer for Sikich. "This isn't a one-time exercise; it reflects our commitment to building a workforce that is AI-literate and appropriately skeptical."

Such requirements are usually part of a broader AI policy governing acceptable use, approved tools, safety and privacy protocols, and other matters that set clear expectations for how the firm expects staff to interact with these solutions. Rather than relying on a loose set of principles and guidelines, most tech-forward firms, like Tennessee-based JMT Consulting Group, set formal governance policies that are actively enforced by management.

"We maintain internal policies that guide the responsible use of technology, with a focus on oversight, accountability, and alignment with established workflows," said Samantha Tiso, the firm's director of finance. "We also work with an external IT partner to regularly review our technology environment and ensure it remains secure and aligned with best practices."  

Several firms also cited the use of unsanctioned AI tools as a concern, given the uncontrolled risk they pose. Firms like California's Sensiba take active measures to prevent it.

"We actively monitor for unsanctioned AI use through endpoint visibility and network‑level controls, allowing us to detect and block high‑risk or unauthorized AI services," said Nick Lew Ton, chief growth officer at Sensiba. "This prevents firm data from being routed through unapproved tools without detection."  

However, AI oversight is not all about humans. Governance and oversight of the technology itself is also required to make sure models stick to their prescribed functions while avoiding proscribed ones. Often this involves stringent data controls, as is the case with Aprio, a Top 25 Firm.

"Our firm's approach to AI governance focuses on security, ethical use and accountability," said Brent McDaniel, Aprio's chief digital officer. "We thoroughly review all AI tools to ensure they protect client data, and we do not use any AI that compromises data security. We ensure our internal data does not train larger AI models, keeping client information safe."

Beyond privacy and security, data controls are also needed to make sure an AI model sticks to its assigned function. As these functions can vary from firm to firm, making sure an AI stays within its bounds is vital to ensuring it is only used in the ways practitioners intend.

"We maintain clear guardrails around AI usage across our platform and internal processes," said Brielle Ferrante, chief marketing officer at Atlanta's Synexus Tax Solutions. "AI is leveraged for tasks such as data organization, pattern recognition and efficiency improvements. However, interpretation of tax laws, application of accounting principles and final compliance decisions remain the responsibility of experienced professionals."

Data control practices do not only mitigate risk. Given that an AI is only as good as the data it is fed, oversight of that data is also important to ensure consistent quality.

"We design our systems so that AI operates within structured data and defined workflows," said MJD's DeKock. "It is not operating in isolation. That context improves the quality of outputs and makes it easier for our team to evaluate whether something is correct."  

At the end of the day, though, while AI is certainly a novel development, the practices and procedures around it are much like those for any other firm technology. YHB, based in Virginia, noted that AI governance is but one part of its broader technology framework, which emphasizes a controlled approach to adoption that prioritizes specific use cases.

"We also take a controlled, use-case-driven approach to adoption," said Jeremy Shen, chief strategy officer at YHB. "We prioritize areas where AI can safely improve efficiency, such as summarization, content generation and analysis, rather than areas that require definitive compliance outputs. Governance is embedded into our broader technology framework. AI tools go through the same review process as any other system, including security, data handling and integration considerations. We also invest in training so our teams understand both the capabilities and the limitations of these tools."

