The rising adoption of AI solutions at accounting firms has come with a growing need for open and transparent conversations with clients about how the technology is used, and what measures are taken to address their concerns about it.
AI has been a core part of not just accounting software but technology in general for years, to the point where it may be harder to find where it's not used than where it is. While the general public may be vaguely aware of this, many would be surprised to learn its true extent. Jeanne Hardy, founder and CEO of New York-based Creative Business Inc., which specializes in art industry clients, noted that the reach of AI today means anyone trying to avoid it entirely will have a very difficult time.
"Intuit has been using AI for years now. Google and Microsoft, they're using AI. They're using AI in restaurants. All the vertical industry software uses AI. Your banks are using AI, that's how they know what credit cards to ask you to get, when they should cut you off, what lines of credit you're eligible for," said Hardy, who is also the founder and CEO of finops platform Levvy.

Starting the conversation
Given this, accountants are making an effort to ensure their clients are aware not just that they're using AI but how and why they're using it, usually through conversation. Richard Jackson, global artificial intelligence assurance leader for Big Four firm EY, noted that clients often initiate these talks.
"We are having this conversation around the use of AI with almost every client—whether it's senior management or at the board level or the audit committee level, people are asking questions like 'what are you seeing, what is my organization doing, how do we compare, help us understand what you're doing,'" he said. He added that if clients don't bring it up first, his firm usually will. "In instances where it's not client-initiated, we're actually raising it ourselves to have some of those very transparent conversations around what we're seeing with the use of this technology, what risks and challenges it raises, whether in the client environment or the audit itself, and then how do we address those?"
Thomas DeMayo, who leads the cybersecurity and privacy advisory group for Top 25 firm PKF O'Connor Davies, shared a similar experience, saying that while no one has specifically asked for a formal AI disclosure, the use of the technology tends to come up organically in the course of the client's due diligence talks.
"We do periodically get due diligence where they're making sure our systems are safeguarded. They ask questions and those questions have evolved to where they do [talk] about AI components. They ask [if we are] using a public model, do we keep AI private, those types of things," he said.
Far from viewing these discussions as a chore, practitioners like Michelle Voyer, who leads the software solutions group for Top 25 firm CohnReznick, enjoy having these talks, as they allow them to demonstrate their technological sophistication.
"We're open with clients about the tools we use, particularly when they contribute to accuracy and efficiency. It's about reinforcing trust and demonstrating that technology enhances the quality of our work, without replacing professional judgment or accountability."
While the topic is often brought up conversationally, some firms also put additional language in their engagement letters that addresses how technology, including AI, may be used. Voyer, for instance, noted that this can help provide an additional layer of clarity.
"We structure our engagement letters to reflect how technology may be used in the course of our work. When clients express preferences around the use of automated tools, we document those choices and clarify how that may impact timelines and costs."
Others, like Hardy's firm, don't include such language yet but may do so in the future. Currently they're planning to send an email update on how they use AI and protect client data. From there they might decide to put a paragraph in their engagement letters, using that email as a foundation.
Regardless, she felt AI use should be an ongoing conversation, considering the rapid pace at which the technology advances.
"Because of the way that it's changing, and new models coming and new applications happening, I feel like it should be a regular conversation to normalize it. Maybe it's quarterly, or maybe you send a newsletter 'we want to update you,' because people are reading the news [about new developments]," she said.
Similarly, DeMayo's firm is also considering adding an AI section to its engagement letters, and he has already drafted language for that specific purpose.
"It talks about the fact that we don't use any public models. People within the firm can only use approved products: you cannot just go download an AI tool or go to Gemini. It won't work. You have to use what we've specifically invested in, what we've specifically vetted as being secure and being private to us. Anything that goes into [our AI], we have strict assurances that that particular provider is not going to use our data to train their models, so that's a very big important part we convey to our clients," he said.
While PKFOD is still deciding whether to add this language, DeMayo predicted that firms overall will start taking more initiative to keep clients informed on how they use AI, since clients will probably ask anyway.
So, what do you talk about?
Regardless of how the conversation starts, Jackson from EY said it tends to center on the particular use case for the technology and what the firm is doing to ensure it's used safely. On this point, he said they make sure to emphasize the role of the human in the loop, ensuring no one comes away with the impression they're letting AI do all the work.
"We talk a little bit about the testing procedures that every one of our tools has to go through before we put it into production. But what it also then drives is the conversation around the importance that the human who reviews the output is able to adequately understand [it]. But everything I described there is with the net benefit of an improved quality output, because now you're no longer just solely relying upon the human's ability to understand it. You're actually supplementing with capabilities from technology," he said.
Firms also spend time going over specific client concerns about AI. When asked what clients' chief concerns were, every practitioner said data privacy and confidentiality came first. Practitioners take pains to set their clients' minds at ease, as they would with any other concern about the engagement, and oftentimes they are successful.
"I wouldn't want to suggest in any way that every conversation is always rainbows and unicorns," said Jackson. "I think that clients have an understandable and a real set of questions that they want to understand. 'Well, where are you using it? Have you?' And then you get into the conversations of, 'well, are you using my data or using technology with a large language model? How do I know that you're not training back my information and insights to the large language model' and so you absolutely go into these conversations." He added that they ultimately "see it as hugely beneficial to what the auditor is trying to achieve."
But sometimes even that is not enough. Hardy, from Creative Business, likened it to the days when people were still hesitant about online banking; to this day, some clients insist on paper checks. When a client truly and genuinely refuses to allow the accountant to use AI, Hardy said it may be time to refer them to another firm that is a better fit. Still, this is rare. Usually all that's needed is some gentle diplomacy.
"I think if you start small, kind of tailor it to them and bring them along with you, then you'll probably have more success changing their mind than if you send them a six-page document outlining all the AI that you're using everywhere," she said.