Implementing AI at a CAS practice is hard enough, but it becomes even harder when people don't fully understand what AI can and cannot do.
Speaking during the Information Technology Alliance's spring collaborative in Memphis, Jessica Barnas, the partner leading the finance and accounting solutions advisory group for Top 25 firm Wipfli, lamented that public discourse around AI has given people the impression it's some sort of magic wand that can fix anything, which then leads to unrealistic expectations around its capabilities.
"I talked to a lot of clients," she said. "They think that AI is like an elf that jumps out of the box and does things magically. They just say, 'Can't AI do that?' I even had one of our partners [tell me this recently]. We're working on a five year revenue prediction. He said, 'Well, can't you just upload that to Copilot and have it spin up the business plan and everything?' And I'm like, 'Do you have any idea how generative AI works? It doesn't do that.' But I think that there's just this misconception [that], oh, technology it is just this magic wand that's going to make all of my accounting problems disappear."
Chris Gallo, director of outsourced business accounting services with Kansas-based firm Creative Planning, who was on the same panel, made a similar point, saying that it's important to be realistic about what technology can do. While it can do a lot, he echoed Barnas in saying that some people seem to think it is magic.
"If we believed everything that everybody told us, you would be flying around in flying cars right now. We need to take it with a grain of salt at some point. Because why wouldn't we just say 'ChatGPT, build me a flying car,' and then the bot people that Tesla's building will just go do that. Right? It becomes a little bit ridiculous at some point too. ... There's a lot of unaligned expectations," said Gallo.
Misconceptions about AI capabilities also drive fear among accountants. Barnas said a big part of the change management process when implementing AI is allaying staff fears that the firm will fire everyone and replace them with bots. While there have been major improvements in AI over the years, she does not believe it is in a position to replace human accountants just yet. Instead, it has become a great way to augment those humans and make them more competitive against peers who are not using AI.
"They think 'AI will eliminate my job!' So we talk about our philosophy," Barnas said. "We're looking to adopt these tools to help you get bigger and better and embrace the advisory role, but the only way AI will replace you is if a person using AI will replace you. You need to give that level of comfort to your teams so that everyone knows we're just trying to get better. We're just picking up new tools. This is not a replacement for you."
There is a similar fear when it comes to billable hours, a topic explored in another panel.
"I took this process down from seven hours to half hour every week. Now what? Teach me how to do advisory. Because being a CFO, doing modeling and projections, it is not something [you learn] from reading a book or sitting in on one webinar. We would all be doing that if that were the case. So how can we train our teams on what to do next? All of that is involved in change management: being a guide and providing the safety for each step," she said.
Gregg Landers, the last panelist and managing director of client accounting and advisory services and internal control services with Top 10 firm CBIZ, said many of the misunderstandings and misconceptions regarding AI can be dispelled by people simply experimenting with it themselves. That not only gives them a better sense of its current capabilities but also trains them to use those capabilities to their fullest potential.
"I've been encouraging some of my teams to use their personal generative AI a little Black Mirror-like, [where you] keep talking to it, and it talks back. You get accustomed to how to give a context, how to get better answers," said Landers. "Sometimes, if you're nice to it, [you get] a tighter answer than if you're not. So experiment around with it."
He gave an example from his own life, when he needed to learn more about digital services taxes. Through an extended conversation with an LLM, he came to understand what a DST is, how it works and how accountants manage it. He was able to get good outputs from the model, though, only because previous experience had taught him to provide ample context and information for a decent answer, as these models can get tripped up by ambiguity. He compared a vague prompt to a fortune cookie that could be interpreted in many ways; people should be clear and concise when prompting AIs.
"We've become a society of fortune cookies," he said. "I may ask, 'How is that project going' and you tell me, 'It's going good,' but what I mean is 'Is it on time?' and what you might mean is 'I had this hiccup that put me two weeks behind, but now it is resolved so it is good.' We can't have fortune cookies when interacting with generative AI. You need clear, concise, contextual communication."