Voices

Should your firm have an AI use policy?

OpenAI, the maker of ChatGPT, the artificial intelligence-based tool that has taken the world by storm, recently shared that a "bug may have caused the unintentional visibility of payment-related information." As a result, the company took the tool offline to fix the issue.

GPT-4 recently showcased the ability to compute taxes, and AI has been steadily making inroads into the technological solutions accounting firms use. With the advent of publicly available generative AI tools, it is easy for anyone to turn to them for quick help. At accounting firms, such use will most likely involve the day-to-day client situations the firm's people encounter.

It all raises some critical questions:

  • Should accounting firms have AI use policies?
  • Even if firms have AI use policies, how will they measure and monitor what happens when someone at the firm uses AI, and what happens after that use?
  • Does the accounting profession need an AI-governance regulation?

My recent interactions with accountants indicate that they are worried about potential client data confidentiality issues and about the difficulty of understanding how AI will work in accounting.
Why?

AI tools can significantly increase efficiency, accuracy and productivity, and as a result can help deliver new and powerful benefits to clients. But they also raise concerns about clients' confidential information, reliability, accuracy and transparency. Some examples of AI-related challenges:

  • Data. Because of the fundamental ways AI tools are built, learn and work, they are only as good as the data they are trained on. That means the data used for learning may be stored somewhere, ideally without any personally identifiable information (PII). Accountants therefore need to recognize the confidentiality risks of an increasingly AI-powered profession, and to carefully review an AI tool's privacy policy and terms of service to understand how client data will be handled. What if PII and clients' financial and tax information get stored, referenced or shared in unintended ways?
  • Biases. Because AI needs data to learn from, it is prone to absorbing, and sometimes creating, biases in the data used for initial training and continuous learning. As AI systems become more sophisticated, they can make decisions and take actions that significantly impact individuals and society. They can also "hallucinate," producing responses that sound factual but are not. And as AI systems become more autonomous, the question of who is accountable for the technology's actions must be considered. AI therefore raises ethical questions, including issues of privacy, autonomy and accountability.
  • Transparency and explainability. As AI systems become more complex, it can be difficult for humans to understand how they make decisions, which makes detecting and correcting anomalies or errors in a system harder. For example, an AI system used for financial projections may be biased toward applying large-corporation regulations to midsized businesses. Without proper review, such outcomes can become burdensome for firms and clients.
  • The black box. Understanding how an AI system arrives at a specific decision or prediction can be difficult, which makes detecting and correcting bias challenging. This is known as the "black box problem." In the accounting profession, it can pose severe challenges if an AI's outputs, forecasts, recommendations, audit observations and the like are not adequately reviewed by humans.

Is there a need for an AI-use policy?

In practice, using AI in accounting and auditing can give the impression that the process is more objective and reasonably controllable. AI systems can quickly analyze large amounts of data and detect potential errors and fraud. However, the harder it becomes to understand how AI delivers the outcomes you rely on, the easier it becomes to overlook the biases and other concerns associated with AI.

To mitigate the inherent risks of using AI, accounting firms may want to start considering whether they need to implement an AI-use policy.

Some factors to consider are:

  • Data protection. How could confidential or sensitive client information be exposed? Consider data breaches, improper storage or disposal of data in or by AI systems, the likelihood of misuse of data, unintended sharing of data and so on. Treating AI as a "third party" can bring better clarity about the data protection measures you would want to implement.
  • Data-handling procedures. The AI-use policy could outline how the firm will collect and analyze data using any AI tool, with procedures aimed at minimizing the risk of confidential client data being fed into such tools (see the sketch after this list).
  • AI is not a replacement. Given the biases, the black box problem and the explainability issues discussed above, firms may need processes that provide for expert human review of client reports, regulatory filings and the like. Avoiding detrimental client impacts means balancing reliance on AI against human expertise when making decisions. AI should be used as a tool, not as a replacement for accountants' accountability and their responsibility to apply professional expertise and human judgment.
  • Disclosures. AI will become more and more pervasive, so including appropriate disclosures to clients when your firm uses AI to produce client work and advice may be a prudent practice. At the same time, clients may need to understand how your firm applies its experience, expertise and intellectual property to ensure they are not unduly exposed to the challenges associated with AI.
  • Regular risk assessments. The AI-use policy could require regular risk assessments to identify potential vulnerabilities and areas for improvement.
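
To make the data-handling procedures concrete, here is a minimal sketch in Python of what a pre-submission redaction step might look like, assuming a firm routes prompts through an internal helper before any public AI tool sees them. The patterns, names and sample text are illustrative assumptions, not features of any particular product.

    import re

    # Illustrative PII patterns only; an actual policy would define a vetted,
    # regularly updated list. Names and free-form identifiers need more than
    # regexes (e.g., a named-entity recognition step plus human review).
    PII_PATTERNS = {
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "EIN": re.compile(r"\b\d{2}-\d{7}\b"),
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace common PII patterns with labeled placeholders before
        the text is sent to any external AI tool."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    prompt = "Client (SSN 123-45-6789, jane@example.com) asked about estimated payments."
    print(redact(prompt))
    # Client (SSN [SSN REDACTED], [EMAIL REDACTED]) asked about estimated payments.

Even a step like this reduces rather than eliminates exposure, which is why the human-review and risk-assessment points above still apply.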

The silver lining is bigger than the cloud

Within the accounting profession, the tools and software firms use are created for accountants and their clients, not the general public. Although several apps share information with such software through APIs, it remains a "closed community" of users, and most software used by the profession limits the data elements exposed through its APIs.

The biases mentioned above are likely to shrink as AI finds its way into smaller-business accounting, because the tools will be trained on data more relevant to the client work most firms handle. AI tools can provide numerous benefits to accounting firms, benefits that far outweigh the concerns that come with them. And the technology industry is acutely aware of the likely pitfalls and risks. The silver lining is bigger than the cloud.
