Accountants well positioned to meet growing demand for AI assurance

A joint report authored by the AICPA and Chartered Professional Accountants Canada said the rapid rise of AI throughout the global economy opens up new opportunities for accounting professionals to provide independent assurance of these systems to help build trust and confidence in their functions. 

In just a few years, AI has wormed its way into virtually every business sector, but with this new technology has come new risks. The report points out the "black box" nature of many AI models, which limits understanding of how AI systems make predictions and reach their decisions, and in turn creates operational risks for end users from possible errors and inconsistencies. This has created demand among organizations for ways to manage and report on various aspects of their AI development, deployment, use and oversight.

The joint report said the accounting profession is ideally positioned to meet this demand. Indeed, many firms are already offering AI-related services ranging from impact and risk assessments to evaluate the potential effects of AI deployment on various stakeholders to model validation testing to evaluate whether AI systems meet specific performance and compliance criteria. 


"As the demand for transparency and accountability for AI systems grows, it is anticipated that more CPA firms will expand their assurance service offerings to include AI, but factors such as the challenges discussed below will play a role in how quickly this may happen," said the report. 

Still, while many are colloquially using terms like "AI audit" or "AI assurance," the report said these terms are often used to refer to a variety of different types of engagements and assessments. The report noted that some of the services described as assurance services are performed by entities, such as technology consultancies or internal audit teams, that may not follow the same professional standards as assurance engagements performed by CPAs. 

The report clarified that by assurance it means an engagement in which an assurance practitioner designs and performs procedures to obtain sufficient appropriate evidence, based on the practitioner's consideration of risk and materiality, in order to express an opinion or conclusion about the subject matter in the form of an assurance report. The two organizations see great opportunity in this area, though not without challenges. 

Professionals today face a number of issues when it comes to providing assurance over AI systems, with one of the more prominent being the lack of suitable criteria for such engagements. The report noted that trustworthy AI systems often require characteristics such as explainability, interpretability and fairness, but without a frame of reference provided by suitable criteria, any conclusion is open to individual interpretation and misunderstanding. Another major assurance challenge is the fact that many of these systems evolve and adapt, which calls into question the relevance of evidence surfaced at specific points in time. 

These kinds of issues mean that while engagement protocols are similar to those of other assurance work, they need to be adapted to the particularities of AI systems. For instance, practitioners may need to set the span of the assurance period so it covers the essential activities and transactions of the AI system. The report addresses design effectiveness within a specific span of time, perhaps six months or a year, with the responsible party determining the period of coverage. 

Similarly, in response to the lack of suitable criteria, either the responsible party or the engaging party could select the criteria, with the engaging party determining that such criteria are appropriate for its purposes. These criteria should be relevant, neutral/objective, reliable/measurable, complete and understandable. 

In terms of understanding roles and accountabilities, the report suggested that the assurance process would involve the collaboration of several key parties, including the organization that developed and/or deployed the AI model, the party responsible for the subject matter (if different), relevant third- or fourth-party vendors, the report user(s) and the assurance provider. 

Meanwhile, the user and practitioner will consider several factors: the organization's readiness for an assurance engagement; whether the responsible party will evaluate the subject matter against the criteria in addition to the work performed by the practitioner, or whether it will be a direct engagement; the need for independence; the level of assurance (reasonable or limited); and the cost vs. benefit of such an engagement. Management determines the type of engagement it needs, and practitioners determine whether they expect to be able to obtain the evidence to support their opinion or conclusion and provide a meaningful level of assurance. 

Finally, the report noted that, depending on the nature and complexity of the AI system, the expertise of the assurance team may extend to understanding AI algorithms, data analytics and AI management systems. In some cases, the CPA-led team may need to engage additional specialists, such as data scientists or AI engineers. 

The report said that, in anticipation of growing demand for AI systems assurance, CPAs should support education and training in the technology, consider collaborations with AI experts and data scientists, as well as leverage their expertise and influence to shape AI governance and assurance procedures. 

"As AI assurance evolves, it is important that CPAs play an active role in shaping the criteria and assurance requirements for AI," the report concluded. "Whether they are operating within industry as a developer, deployer or user of AI, or in public practice, CPAs bring valuable expertise and perspective to the table. With robust professional standards and expertise in delivering assurance and advisory services to meet the needs of organizations and users, CPAs are uniquely positioned to provide valuable services to build trust and confidence in AI systems, leveraging the long-established standards and frameworks of the profession."
