The AI-human connection

2020 could be the year humans merge with artificial intelligence. Elon Musk, modern-day patron saint of audacious tech dreams, promised last year that his secretive company Neuralink would someday implant extremely thin threads into human brains to “achieve a symbiosis with artificial intelligence.”

The idea is for these threads to enable humans to use their brains to control external electronic devices, like phones or computers, a capability especially of interest to people living with physical limitations and disabilities.

It remains to be seen how quickly this dream of a human-machine merging will be realized. But metaphorically speaking, it’s already here: The meaningful implementation of AI in accounting certainly depends on human partnership, a piece that is often missing from the fear-based approach many accountants take when considering AI. Artificial intelligence doesn’t work without human input, but more importantly, it doesn’t work without human trust. What good is technology that can process millions of transactions in minutes if the anomalies it finds have to be double-checked by an accountant? And how do we build this trust between human and machine?

Aaron Harris, chief technology officer at Sage, maker of ERP and financial software, has been developing an approach to AI that assigns anomalies a risk score. “We’re at the stage [in AI development] where we have to develop trust,” he said. “Sage is working on technology that can test every journal entry in a ledger. If for every 1 million entries tested, the AI misses 1 percent, that’s 10,000 missed entries — there’s no value in that. But if the AI misses even one single bad transaction, an accountant won’t trust any of the other data points checked. There is no value in that either. So there must be trust.”

The way Sage is building that trust is by taking a step back and allowing the AI to assign each suspicious journal entry a risk score. With a risk score, human users can observe that the technology has “intelligently” considered the transaction, which increases trust. If this sounds more like psychology than technology, it is: Harris constructed this plan by speaking with trusted accountant partners of Sage, focus-group style.
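To make the idea concrete, here is a minimal sketch of what a risk-scoring pass over journal entries might look like, assuming a simple statistical outlier measure. Sage's production model is proprietary and far more sophisticated; the data, column names and scoring formula below are purely hypothetical.

```python
# Illustrative only: a toy risk-scoring pass over journal entries.
# Sage's actual model is proprietary; this sketch flags entries whose
# amounts sit far from their account's norm (a simple z-score).
import pandas as pd

entries = pd.DataFrame({
    "entry_id": [1, 2, 3, 4, 5, 6],
    "account":  ["travel", "travel", "travel", "payroll", "payroll", "payroll"],
    "amount":   [120.0, 135.0, 4_800.0, 5_000.0, 5_100.0, 5_050.0],
})

# How far does each entry's amount sit from its account's norm?
stats = entries.groupby("account")["amount"].agg(["mean", "std"])
entries = entries.join(stats, on="account")
entries["z"] = ((entries["amount"] - entries["mean"]) / entries["std"]).abs()

# Map the z-score to a bounded 0-100 risk score rather than a hard
# pass/fail verdict, so a human reviewer can triage by severity.
entries["risk_score"] = (entries["z"] / (entries["z"] + 1) * 100).round(1)

print(entries[["entry_id", "account", "amount", "risk_score"]]
      .sort_values("risk_score", ascending=False))
```

The key design choice mirrors the one Harris describes: the output is a graded score for a human to review, not a binary verdict the accountant is asked to accept on faith.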

These days in the profession, AI is most often understood as a tool for reviewing data sets so large that humans can only sample them rather than review them completely. This makes AI ideal for audit, for instance. But of course AI is capable of much more than simple data review and anomaly detection. It can produce examples, perform data analysis, and demonstrate correlation. Companies like Adaptive Insights, which provides a budgeting, forecasting and reporting solution, use AI to create business stories — narratives about where a company is going, based on analyzed data.
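As a toy illustration of this kind of data storytelling, the sketch below turns a monthly revenue trend into a one-sentence narrative. The figures and phrasing are invented; Adaptive Insights' actual approach is far richer, and this only shows the basic idea of turning numbers into words.

```python
# Hypothetical example: derive a plain-English "business story"
# from a small series of monthly revenue figures.
monthly_revenue = {"Jan": 410_000, "Feb": 432_000, "Mar": 397_000, "Apr": 451_000}

months = list(monthly_revenue)
values = list(monthly_revenue.values())
change = (values[-1] - values[0]) / values[0]

direction = "grew" if change > 0 else "declined"
worst = min(monthly_revenue, key=monthly_revenue.get)  # softest month

print(f"Revenue {direction} {abs(change):.1%} from {months[0]} to {months[-1]}, "
      f"with the softest month being {worst} at ${monthly_revenue[worst]:,}.")
```

For founder Rob Hull, therefore, the way humans should relate to AI goes beyond trust. It has to do with judgment.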

“There will always be a role for people in the business process,” Hull said. “The role of humans will always be to apply judgment, and this is really difficult to automate. Humans must judge whether an identified trend is in fact an error or an anomaly; whether the process has in fact been executed smoothly; and they must review data that doesn’t seem to fit into whatever process needs to be cleaned up.”


The human element

The type of technology that Musk wants to create and embed inside the human brain is miles away from the type of computer-based AI that accountants are concerned with. Machine-based AI is a sophisticated type of automation that learns as it is used and trained. All of the Big Four firms have AI programs that they have implemented with extensive training — i.e., data input — to make the systems work for their needs.

KPMG is a good example. The firm partnered with IBM in 2016 to apply Watson to its audit service, the area seeing the most robust use of AI today. At the time, CEO Lynne Doughtie declared this the “Cognitive Era,” saying in a statement, “KPMG’s use of IBM Watson technology will help advance our team’s ability to analyze and act on the core financial and operational data so central to the health of organizations and the capital markets.”

Since then, KPMG has applied the AI technology to more and more complex tasks, such as R&D tax relief for clients. According to the firm, tax professionals and research and development staff spend thousands of hours manually reviewing materials to identify relevant evidence to claim all eligible R&D tax credits, and missing a single detail in such a high-stakes area can mean missing out on millions of dollars of tax relief. Chief innovation officer Brad Brown estimates the government stands to return $148 billion in R&D tax relief dollars over the next decade, provided companies file for all the credits they are due.

This is where IBM Watson comes in. But the AI program doesn’t arrive fully formed. It must be trained. KPMG trained it using 10,634 documents, five different R&D models, and 13 tax professionals. After this, Watson reportedly recommended the correct tax treatment about three out of four times. This reminds us, of course, that human judgment is still very much needed at this stage to assess AI-driven recommendations, but it also shows that AI is nothing without human teaching and training. This is how it learns.
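Watson's internals aren't public, but the general pattern the KPMG effort describes (train a model on labeled documents, then measure how often it is right) can be sketched with generic tools. The document snippets, labels and pipeline below are entirely invented for illustration.

```python
# Illustrative only: the "train on labeled documents, then measure
# accuracy" pattern, using a generic scikit-learn text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical snippets labeled by whether they support an R&D credit.
docs = [
    "developed a novel compression algorithm to reduce storage costs",
    "routine quarterly bookkeeping and invoice processing",
    "prototyped an experimental sensor array for field testing",
    "annual staff holiday party catering expenses",
    "iterative testing of new alloy formulations in the lab",
    "standard office supply purchases for the quarter",
] * 10  # repeated so the toy model has enough samples to split
labels = ["eligible", "ineligible"] * 30  # alternates with the docs above

X_train, X_test, y_train, y_test = train_test_split(
    docs, labels, test_size=0.25, random_state=0)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)

# The headline metric: how often the trained model is correct,
# analogous to Watson's reported "three out of four times."
print(f"accuracy: {model.score(X_test, y_test):.2f}")
```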

To put this into perspective, a product on the market right now that firms of any size can implement is BQE Core Intelligence. It is a voice-enabled dashboard that accountants use to keep track of all their clients’ businesses, and that their clients can use to keep track of their own. The virtual assistant responds to spoken questions: “Who is my best customer?” or, “Why was my revenue down this month?” These questions require complex analysis on the part of the software. A client’s “best” customer might mean the one that bought the most product, or the one that has been the most consistent; revenue could be down for myriad reasons. The assistant learns each client’s business as it is used, and it remembers who the user is asking about several questions into a conversation, so they don’t have to remind Core Intelligence which client or issue they’re referring to. The more a firm uses the platform, the better it understands users’ speaking and usage habits.
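The context-carrying behavior is the interesting part, and its core can be sketched in a few lines. BQE's real pipeline (speech recognition, natural-language understanding, analytics) is far more involved; the class and data below are hypothetical.

```python
# A toy sketch of conversational context: the assistant remembers
# which client the user is asking about across follow-up questions.
class MiniAssistant:
    def __init__(self, revenue_by_client):
        self.revenue = revenue_by_client
        self.current_client = None  # the conversational state

    def ask(self, question, client=None):
        # A named client updates the context; otherwise reuse the last one.
        if client is not None:
            self.current_client = client
        if self.current_client is None:
            return "Which client are you asking about?"
        rev = self.revenue[self.current_client]
        return f"{self.current_client}: revenue this month is ${rev:,}."

bot = MiniAssistant({"Acme LLC": 82_000, "Birch & Co": 45_500})
print(bot.ask("How is revenue?", client="Acme LLC"))
print(bot.ask("What about now?"))  # no client named: the context carries over
```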

Another example is Introhive, which provides an intelligent way for firms to maintain client relationships. The platform tracks client communication and will do things like prompt an accountant if they haven’t sent or received an email from a particular client for a certain amount of time.

“When you have a lot of different relationships like partners at large and midsized firms do, it’s easy to let relationships lapse accidentally or not remember to reach out to people as often as you would,” said CEO and co-founder Jody Glidden. “The greater your relationship gap gets, the more difficult it gets to manage it the way you would like. AI software can bridge that gap.” To create this technology, Glidden and co-founder Stewart Walchli “had to be able to tap into interactions people and employees have with the outside world, to make sense of who knows who and how do they know them.”
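The core of such a lapse alert can be sketched in a few lines, assuming last-contact timestamps have already been collected. Introhive's real system mines mailboxes and CRM data to build those timestamps; the clients, dates and threshold below are purely illustrative.

```python
# Hypothetical relationship-lapse check: flag clients with no email
# contact in the last N days, oldest gaps first.
from datetime import date, timedelta

last_contact = {
    "Acme LLC":   date(2019, 11, 2),
    "Birch & Co": date(2020, 1, 20),
    "Cedar Inc":  date(2019, 9, 15),
}

LAPSE_THRESHOLD = timedelta(days=60)
today = date(2020, 2, 1)

for client, last in sorted(last_contact.items(), key=lambda kv: kv[1]):
    gap = today - last
    if gap > LAPSE_THRESHOLD:
        print(f"Reach out to {client}: no contact in {gap.days} days.")
```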

So, AI requires human training as a matter of course. Just through normal use, the software will get more “intelligent,” become easier to use, and provide more value to the accountant. There is no real downside to this when dealing with numbers, ledgers and key performance indicators. So when does training AI require more careful planning?

[Image: IBM’s Watson computer system competes against Jeopardy! champions Ken Jennings and Brad Rutter in a practice match at IBM’s Watson Research Center in Yorktown Heights, N.Y., on January 13, 2011.]


Ethics in recruitment

The capabilities of AI outlined thus far show that within the accounting profession, the technology is well suited to audit and, when applied strategically, to process analysis and business analytics as well. But it also has potential for one of the foremost issues in firm operations — recruitment.

EY, as an example, hired over 22,000 people last year in the Americas, and projects hiring 15,000 in the U.S. for fiscal year 2020. For such high volume, AI has become an indispensable tool. The Big Four firm has also implemented 1,300 bots with clients — bots being pieces of software programmed to do repetitive tasks that require intelligence when humans perform them, like selecting applicants. The firm has also adopted the IBM Watson Candidate Assistant solution, accessible through the EY website, to assist candidates seeking positions with the firm.

“The way we look at our recruiting process is we want to focus on objectives,” said Larry Nash, Americas director of recruiting for EY. “We want to improve the candidate experience, which is paramount in any day and age, but particularly in a time when talent is well sought after like it is today; we want to expand our candidate pipeline; and we want to increase recruiter productivity.”

The recruitment roadmap is simple and logical: sourcing talent, assessing talent, interviewing, and making decisions on an offer. The goal, Nash said, is to increase efficiency in any or all of those steps in any way possible, whether it be through AI-driven technology or otherwise.

Implementing AI, however, has the added benefit of reducing unconscious bias in the hiring process. Many studies have shown that unconscious bias operates across gender, race, perceived cultural background, age and myriad other factors, and that it can end up hurting not just candidates but hiring companies as well. To make sure biases don’t show up in AI recruitment processes, the training that goes into teaching the program must be thoughtful and precise.

For this article, I tried Candidate Assistant on EY’s website myself. I am not an accountant, but I wrote in the chatbox that I was interested in jobs that involve writing. The bot immediately pulled up jobs that seemed like a match, and prompted me to either upload or paste my resume if I was interested in more matches. I uploaded my resume easily, and more targeted jobs came up, ranked by how strong a match they were for me. It even took my address into consideration, and found jobs that made geographic sense for me. Overall, it was a very easy experience, at least for these first few steps of job-seeking. Compared to my past experience, it was a huge relief not to have to play a guessing game of what search words to try, or to comb through thousands of offerings.
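The matching-and-ranking behavior can be illustrated with a toy sketch. Watson Candidate Assistant's actual matching is proprietary; the resume text, job descriptions and overlap-based score below are invented for illustration.

```python
# Hypothetical resume-to-job matching: score each posting by how much
# of its description overlaps the resume, then rank by match strength.
resume = "technical writer with editing publishing and content strategy experience"

jobs = {
    "Content Strategist": "writing editing content strategy communications",
    "Tax Senior":         "tax compliance cpa accounting returns",
    "Marketing Writer":   "writing publishing campaigns brand voice",
    "Audit Associate":    "audit sampling controls testing accounting",
}

resume_terms = set(resume.lower().split())

def match_score(description):
    terms = set(description.lower().split())
    return len(terms & resume_terms) / len(terms)  # fraction of job terms matched

ranked = sorted(jobs, key=lambda title: match_score(jobs[title]), reverse=True)
for title in ranked:
    print(f"{match_score(jobs[title]):.0%}  {title}")
```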

“Watson reads a candidate’s profile and matches it to job openings instead of having them guess, ‘Where do I fit in?’” Nash said. “This is saving job-seekers a lot of time.”

Just as KPMG trained Watson for R&D tax relief, EY trained Watson on its specific recruitment needs. Out of the box from IBM, Candidate Assistant is pre-trained on 60 commonly asked questions from job-seekers. EY was able to modify or add to the questions based on reports that Candidate Assistant provides as it’s used. In a very natural way, this process reduces unconscious bias, because bias is not being consciously fed into the training. An example would be name bias: Watson doesn’t have a cultural reaction to a name based on perceived gender or race; rather, it only looks at qualifications.
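One concrete way a matching pipeline can keep names out of the equation is to ensure demographic-adjacent fields never become model features at all. The schema below is hypothetical, not EY's or IBM's actual design.

```python
# Illustrative only: demographic-adjacent fields are stripped before
# a candidate record ever reaches the matching model.
EXCLUDED_FIELDS = {"name", "photo_url", "date_of_birth", "home_country"}

candidate = {
    "name": "A. Candidate",
    "date_of_birth": "1990-05-04",
    "skills": ["python", "financial reporting", "sql"],
    "years_experience": 6,
}

features = {k: v for k, v in candidate.items() if k not in EXCLUDED_FIELDS}
print(features)  # only qualifications reach the matcher
```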

This doesn’t mean bias doesn’t exist at all. The way job descriptions are written, for instance, matters when it comes to how the program matches potential applicants.

“We’ve got to be really careful,” Nash said. “A lot of the matching we’re seeing relates to the job description and how you describe the role you’re advertising. What we’ve been doing globally over the last two years is rewriting our job descriptions, using gender-neutral terms, taking anything out that may have a bias on gender or other identities. ... Once you have a sound job advertisement, that reduces the bias in matching and screening.”
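A minimal sketch of that kind of job-ad screening, in the spirit of the rewrite effort Nash describes, might look like the following. The word list is a tiny illustrative sample, not EY's actual style guide.

```python
# Hypothetical gender-coded language check for job descriptions.
GENDER_CODED = {
    "rockstar": "high performer",
    "ninja": "expert",
    "aggressive": "proactive",
    "dominant": "leading",
}

ad = "We need an aggressive rockstar with a dominant presence to lead our audit practice."

for word, neutral in GENDER_CODED.items():
    if word in ad.lower():
        print(f"Consider replacing '{word}' with '{neutral}'.")
```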

Unforeseen consequences

It’s important to note that AI doesn’t promise a bias-free paradise. While EY works hard to make sure bias doesn’t enter the hiring process, there is still much we don’t know about the ramifications of inserting AI into tasks previously performed only by humans.

Gabriel Fairman, founder of Bureau Works, became something of an expert in AI when he developed artificially intelligent software to better manage his content delivery company. Bureau Works provides localized branding and content strategy for global companies, and as such has a large staff of translators. When Fairman introduced AI to streamline the translation job assignment process, he thought he would be able to shed staff by gaining efficiencies. “I thought project managers would be out of a job because in our organization, project managers spend 80 percent of their time placing jobs with translators,” Fairman said. “Now that the AI platform took that over, what would they do?”

He found that instead of replacing project managers, AI displaced them. They now had a lot more time to spend on tasks that were previously deprioritized in favor of task assignment: checking work, scoping opportunities, and communicating with staff. Bureau Works didn’t save on payroll, but it became a much better company.

Implementation of AI is bound to raise similar questions, and that is where projects such as Armanino’s new AI Lab Resource Center come into play. The California-based Top 100 Firm debuted the center last year to provide collaboration and educational opportunities for members to learn how to apply practical AI solutions in organizations using robotic process automation, predictive analytics and virtual assistants. The education piece is key: While accountants and business owners don’t have to know how AI code works, it’s imperative that if they plan to use AI, they understand what it materially means for their staff and clients.

“When you ask CEOs, ‘What are you doing around AI?’ there’s not really a well-articulated strategy,” said Tom Mescall, partner-in-charge of consulting at Armanino. “We want the AI Lab to demystify what AI really is from a business perspective, and to get companies out of the starting gate. Our members can participate in real-world examples on how AI is being utilized, ask questions, and explore different business situations. We’re trying to remove all the hype and headline buzz around how impactful AI will be.”

AI is buzzy indeed. But it’s in the software accountants are using, and it’s increasing efficiencies across the board. Learning more about it can only position firms for success in the long run — until the next innovation, of course.

Blockchain, anyone?
