While the specific impacts are still being determined, professional liability insurers that cover CPA firms are reaching a consensus that AI is a source of risk that must be controlled by strong governance. These risks have yet to affect how much firms pay for insurance, but experts say they almost certainly will in the future.
Stan Sterna, senior vice president and risk control lead with insurance company Aon, which administers the AICPA Member Insurance Program, said there has yet to be any substantive claim activity linked to the use of AI at accounting firms. He said this wasn't surprising, since it always takes a few years to fully understand the impact of a new risk. There has been no major litigation against accounting firms related to their AI usage, so there is no real understanding yet of what such litigation will focus on or what its financial impacts will be.
"There really hasn't been a lot of claims or large dollar amounts paid on claims," he said. "It hasn't reached a level where [insurers] can make judgments like 'the majority of claims deal with hallucinations' or 'the majority of claims deal with software issues' or something like that."

He noted there was a similar issue with cybersecurity: While insurers early on understood it to be a risk, it took a few years of observation before specific procedures began emerging.
"The cyber underwriters, it took them a while to get up to a point where they are now, where they actually have got a list and they ask questions from that list with regard to data security. It's not like that yet with AI, at least on the professional liability framework," said Sterna.
However, this does not mean insurers today are ignoring AI entirely. While they have yet to systematize AI risk the way they have other types of risk, they do want to know that a firm is aware of the risks and has taken at least some steps to address them.
"They're going to ask a firm about their AI policy and procedures. They want to see firms are approaching the use of AI with the same basic risk management protocols that they would have in place for engagement letters or client acceptance and continuation or documentation. They're asking firms, 'Do you use AI? Do you police it? Do you have protocols in place?'" he said.
John Raspante, director of risk management for professional liability insurance firm McGowan, made a similar point. While insurers have yet to land on specific protocols and procedures for AI, he said they broadly recognize the technology as something that carries risk rather than mitigates it. As such, insurers want to see firms taking it seriously; despite the public excitement, AI remains a novel development.
"The confidentiality issue of exposing client data on the [internet] is a concern. While most firms have procedures in place to protect client data, it's still nonetheless a concern," he said in an email. "The reliability of AI as a tax preparation tool and a research search engine is still being tested. CPAs should be vigilant in not allowing AI search results to be shared with clients until they are corroborated with traditional tax research methods."
He said McGowan highly recommends inserting language in engagement letters disclosing to clients that AI is being used; because many clients are opposed to AI as a tool to assist CPAs, disclosure in a signed letter is a prudent best practice. He also recommended a clause that lets clients opt out of AI altogether. Such practices can be part of a larger AI governance and oversight program, and while the particulars may vary, he said a strong compliance program around AI might actually lower premiums.
Gary Florian, senior vice president of underwriting and policy services for insurance firm Camico, said the company wants to see firms taking the matter seriously. For Camico, the question is not so much whether a firm uses AI but how it does so, and whether it has done so with intention and planning.
"Data security is critical. Additionally, accountants should treat AI output with caution and avoid relying on it without verification. We are paying close attention to how AI is being adopted across the profession as part of our ongoing risk assessment and loss prevention efforts," he said in an email.
All of this indicates that while insurers are still in wait-and-see mode when it comes to AI, they are learning fast. AI has yet to be fully formalized into professional liability insurance risk analysis, but Sterna said that won't be the case for much longer.
"I'm not going to be a bit surprised if, in a year or two, they come up with more detailed questions and maybe even come up with some guidelines and protocols," he said. "And it may be just a similar framework to what we have now in place, which is have a policy, communicate the policy, train people on the policy, have oversight of it and maintain and monitor it. And then what is that policy going to look like? To me, the main thing is human review. It is the scariest risk with this, the lack of human review and just reliance on AI."





