Demystifying AI

Randy Johnston AI podcast screen

There is going to be a big gap between accountants who use artificial intelligence and those who don't, says Randy Johnston, executive vice president at K2 Enterprises, who shares ideas for leveraging this transformative technology.

Transcription:

Dan Hood (00:03):

Welcome to On the Air with Accounting Today, I'm editor-in-chief Dan Hood. It's hard to describe how we all feel about artificial intelligence. We're terrified, we're optimistic, we're confused, we're maybe a little bit hungry now and then. That's really not surprising, given how confusing a subject it really is and how many conflicting messages we're getting about it, from wild claims that it will inaugurate the rapture to equally wild claims that it will initiate Armageddon. Here to demystify AI a little bit, particularly as it relates to accounting, is Randy Johnston. He's the executive vice president of K2 Enterprises and one of the foremost thinkers and experts on accounting technology in the field. Randy, thanks for joining us.

Randy Johnston (00:34):

Dan. Thank you and welcome to you and all the listeners. 

Dan Hood (00:37):

Excellent. Well, first off, we should probably start by asking how do you define AI? Everyone has a slightly different definition. Everyone you talk to describes it slightly differently. How do you define it? 

Randy Johnston (00:47):

Yeah, I'm pretty straightforward on this, Dan, but you have to remember, I have a technical background. I have a computer science degree in addition to my other talents, so I tend to be very fussy about a true definition of artificial intelligence. But the whole field of artificial intelligence, which was first defined in 1950, has evolved to where today people are talking about artificial intelligence in terms of generative AI, ChatGPT and so forth. But artificial intelligence is actually a collection of algorithms that simulates human intelligence. It's really that simple.

Dan Hood (01:22):

I am always fascinated by it, because at one point when people say, well, it's going to simulate or it imitates or it attempts to reproduce human intelligence, I say, first off, which human intelligence? But two, I'm sort of always wondering, and I realize this wasn't a question we talked about, but I always like to dive into this a little bit: Why do they try to reproduce human intelligence? I mean, why was the thinking, let's reproduce human intelligence, as opposed to saying, let's make something that's intelligent in a very different way? Computers can, in theory, think very differently from human beings. Why do we choose human intelligence as the model there? Do you know?

Randy Johnston (01:57):

Well, we're really trying to leverage the talent that we have as people, and if you think about how you come up with new ideas, there is this goal to make artificial intelligence sentient. In other words, for it to be able to have kind of a personality, a consciousness, if you will. And that's a long-term goal that's been in AI for decades at this point. I figure it's going to be at least another 10 to 15 years before we get close. One of the tests for AI right now is being able to have a conversation like we do and not being able to figure out it's a machine on the other end. And the goal,

Dan Hood (02:32):

That's the Turing test, right? 

Randy Johnston (02:33):

Yeah, it is the Turing test, being able to have that conversation for 20 minutes where you can't figure out that you're actually responding to a machine or conversing with a machine. But the human brain is fascinating. If you have not been watching the developments in neurology and neuroscience, the way our brains work is just stunning: the amount of energy that goes in and the output that comes out. And that's the issue related to computing as well. See, today our computers, which are all von Neumann, Turing types of concepts, they're ones and zeros. We're getting ready to make a big jump with quantum computing, and I suspect the way quantum computing is going to be applied to artificial intelligence will give us even more capability than we see today.

Dan Hood (03:18):

Alright, well, good answer. That makes sense. Now I understand. Like I said, I always sort of wondered about that. Well, with that, let's narrow it down a bit. This is a huge field and it's going to have a huge impact on humanity and the economy and all kinds of broader things like that. But let's narrow it down a little bit to accounting. How do you see it impacting accounting? And I kind of want to break this into a couple of different time periods. How do you see AI impacting accounting today? And then maybe we can take a look a little further out, a couple of years or maybe five years, whatever you're comfortable doing. But let's start with how it's impacting us today.

Randy Johnston (03:49):

And I am happy to take those additional windows, and I think I've got pretty good guesses for you. But today we can leverage it for a lot of routine tasks in accounting. Now, I'm going to frame a lot of my discussion around the privacy concerns, because the way it works right now is if you provide a prompt, an input if you will, to AI, the prompt and anything that you put in becomes the property of the company that's providing the AI platform. So if you're using ChatGPT, it actually becomes the property of OpenAI. So you don't want to put client confidential information inside these systems. You don't want to put in financials, you don't want to put in names. But doing simple things like responding to emails or perhaps even creating proposals or scheduling people, many of those types of things are quite doable today. One of the things that you'll want to learn is how to create prompts. This whole area right now is called prompt engineering, and there are techniques and tools to get that job done. In fact, a recent column I wrote was about how to do a business plan, strategy and tactics, using AI. I actually used AI engines to generate those strategies and tactics, and for the record, it took less than 45 seconds to do the initial draft.
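To make that kind of routine-task prompting concrete, here is a minimal sketch assuming the OpenAI Python client (openai 1.x); the model name and prompt text are illustrative placeholders rather than a recommendation of any particular product, and the privacy caveat above applies to anything placed in the prompt.

```python
# Minimal sketch of routine-task prompting, assuming the OpenAI Python client
# (openai >= 1.0). Model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Caution: everything in the prompt is sent to the provider, so keep client
# names, financials and other confidential details out of it.
prompt = (
    "Draft a polite reply to a client asking whether we can move their "
    "quarterly review meeting to the following week. Keep it under 120 words."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```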

Dan Hood (05:12):

Wow. But is it safe to assume you're pretty good at the prompting? I mean, this is a little bit like Google used to be: you used to have to know Boolean operators and stuff like that, and that was a really useful skill. Is this a similar kind of thing, knowing how to use the system to ask it the right questions to deliver the answers you need?

Randy Johnston (05:30):

That's exactly right. And it turns out creativity here is really the key. If you can dream of it, there's a good chance that AI can produce an answer to it. And I've been asking accountants to generally think about the things that are time consuming for them and ask the generative AI engines to help them respond. And see, I think there's going to be a major difference between accountants that use AI and accountants that don't, and good accountants will be able to leverage their skills and be able to check the AI results. But realistically, I don't think I start any of my tasks now without asking an AI engine for some guidance. And it really eliminates that blank paper syndrome where you're starting with nothing at all. You get to start with something. Now, I worry about that a little bit, Dan, because we know that human intelligence generates creative new ideas seemingly out of thin air, but the fact of the matter is AI is actually going to narrow and channel your thinking, and that's a risk of using AI for this starting point.

Dan Hood (06:37):

But then there are plenty of things where we talk about human creativity, and it is a beautiful and great thing and we want to maintain it, but there are plenty of tasks that people do every day that really don't require it. It's not Michelangelo painting the ceiling of the Sistine Chapel; it's a marketing email, or it's an introduction to a request for proposal, where you just need, as you say, something other than the blank page. I need some words to start this, and they don't need to be perfect. They don't need to be the preamble of the Constitution. It just needs to be adequately correct English and get my point across. So there's a lot of room for that.

Randy Johnston (07:10):

A lot of room for that. In fact, I usually prompt accountants to just think about that mundane and boring stuff that they have to get done and assume that AI is going to be an intelligent assistant that you can train to become even better over time. And as you use AI more and teach the AI engine how you work and think, the AI responses will be even better for you. So the simple way for most of you is to just think about the top five or top 10 things that you kind of wish you didn't have to do, and start using AI to do those things, and you'll find you can do 'em in much less time. Now, Dan, you know us from being public speakers and writers and that type of thing, but I can tell you, this year when I wrote my CPE courses, I did not write a single review question.

(07:59)

And in the CPE world, you actually have to generate five review questions per hour and all that stuff and match 'em to objectives. I've written over 500 review questions this year, all in AI, and I only had to edit one of them. Other than that, they were frankly better than I could write. And I'm actually trained in educational behavioral psychology, so I actually know how to write objectives correctly and how to write test questions correctly. And it's like, this stuff is stunning. Now, you've met my wife too, and she's also trained in education. I showed her how these bloody questions were being generated and she said, oh my, oh my, oh

Dan Hood (08:42):

My. 

Randy Johnston (08:43):

It was 

Dan Hood (08:44):

Great. So basically I'm assuming you're feeding in your course materials to it and saying, give me some questions based on this. 

Randy Johnston (08:52):

And actually, in many cases, if I'm going to write a course, I'll say: I'm going to write a course on this topic. I'm going to focus it on accounting professionals. I have a description that looks roughly like this. Give me behavioral objectives that meet the criteria that we have to use. And it'll generate the objectives. And I'll look at those and say, yeah, not quite, we'll fix these a little bit. Okay, now I'm going to use these objectives. Now write 10 multiple-choice questions that meet these objectives, identify the objectives, identify the right answer. And the way they come back, it is stunning how good it is. But more to the point of day-to-day accounting tasks: if you begin to realize that things that are thrown onto your plate, or that you delegate to others in your organization, you could actually delegate some of those things to a generative AI tool, and it will come back with an answer in almost real time.
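As an illustration of the two-step chain Johnston describes (objectives first, then questions tied to those objectives), here is a rough sketch assuming the OpenAI Python client; the course topic, wording and model name are placeholders.

```python
# Rough sketch of a two-step prompt chain: draft objectives, review them, then
# generate review questions mapped to those objectives. Assumes the OpenAI
# Python client (openai >= 1.0); course details are placeholders.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send one prompt and return the model's text reply."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Step 1: draft behavioral learning objectives for the course.
objectives = ask(
    "I'm writing a CPE course on data security for accounting professionals. "
    "Draft five behavioral learning objectives in measurable terms."
)

# In practice, the author reviews and edits the objectives before step 2.

# Step 2: generate review questions tied to the (edited) objectives.
questions = ask(
    "Using these learning objectives:\n" + objectives + "\n\n"
    "Write ten multiple-choice review questions. For each one, identify the "
    "objective it maps to and the correct answer."
)

print(questions)
```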

(09:51)

And that part of it could even be more efficient than a delegation to another staff member. Or if you've got staff members that are trained in using AI, they can come up with far better draft results for you to review in a very, very short period of time. And let's just use the proposal example that you had. There are a lot of proposals that are generated for different activities, and you can write proposals in the style of your business or firm by simply providing a prior proposal and saying: I need to write a proposal. Here's an example of one I've written in the past. This is Randy writing again, here's my writing style, or this is firm A, B or C, and here's our style. Write a proposal for this situation with these parameters in the same style. It'll fill it out.
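The proposal example boils down to style conditioning: give the engine a prior proposal as a model and describe the new engagement. A hedged sketch of that prompt, with made-up file names and engagement details, might look like this; the assembled text could then be sent through the same kind of call shown above.

```python
# Sketch of a style-mimicking prompt: supply a prior proposal as an example and
# ask for a new one in the same voice. The file name and engagement details are
# hypothetical placeholders.
with open("prior_proposal.txt") as f:  # a proposal the firm has already sent
    prior_proposal = f.read()

style_prompt = (
    "Here is a proposal our firm wrote in the past. Match its structure, tone "
    "and writing style:\n\n"
    + prior_proposal
    + "\n\nNow write a new proposal for a monthly outsourced bookkeeping "
    "engagement for a ten-employee retail client, with a fixed monthly fee "
    "and a 12-month term."
)
# style_prompt can then be sent to a generative AI engine, for example with the
# same chat-completion call sketched earlier.
```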

Dan Hood (10:41):

That's amazing. Now, we could spend a lot of time on a lot of things here. There's a lot of rabbit holes I could go down, but one quick one I want to go with on this one is, right now everyone's big model of AI is ChatGPT. And correct me if I'm wrong, I could well be, but it seems like a lot of the function of ChatGPT is language generation. As you say, it can take your style, your writing style and your presentation style, and mimic it. I've seen people say, write me this speech in the style of a cockney urchin from the 1890s, and it did a pretty good job of bringing in cockney slang, that sort of stuff. But it's not necessarily, for instance, I don't know how good it is at math. I know it's very good at breaking down language and reproducing it and finding patterns within language and that sort of stuff. But is ChatGPT really the model we should be looking at, or should it be one model of something that would apply in a lot of different ways?

Randy Johnston (11:41):

Yeah, it is clearly poor at math and accounting. Okay, so the current generative models you should not use in that way. Now, I want to fork people's thinking in this, because what you'll discover is predictive analytics can be an AI that is useful in these types of financial pieces. But because of the way the large language models that are behind these generative AI products work, they really work in four separate steps. They do tokenization first, then they do embedding, then they have what's known as an attention model to select the words that they're going to respond with, and then they do completion. It's a four-step process in most of these current models. And the issue is it's just a game of statistics: how often does this word come up in the context of these other words? So just like you and I are talking here, Dan, if you... and notice, you probably completed the word before I said it, and I paused with intent for our listeners on that.

(12:41)

But notice you have a pretty good idea of what's going to be said next, because you understand the English language in the context of our sentence. Well, that's what's happening in AI. But it is smart because it has crawled a predefined section of the internet. The large crawl that was used between 2016 and 2019 to feed most of the generative AI engines right now has extracted a lot of the content of the internet. And notably, it is English-speaking biased because of the crawl itself, but it is limited in scope in that way. So you have to watch it in many of the models. ChatGPT in specific has not been supplied with things past September 2021. The application of ChatGPT into Bing, the Microsoft search engine, is completing some of the more current pieces. But you have to recognize that until we get full internet search and completion capability that's current, you always have a little bit of a historical bias.
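To show what "a game of statistics" means in the simplest possible terms, here is a toy next-word predictor that just counts which word most often follows another in a scrap of made-up text. Real large language models add tokenization, embeddings and attention over billions of parameters, but the underlying next-word idea is the same.

```python
# Toy next-word predictor: count which word most often follows each word, then
# predict the most frequent follower. A crude stand-in for the statistical
# next-word completion that large language models perform at vastly larger scale.
from collections import Counter, defaultdict

sample_text = (
    "the auditor reviews the ledger the auditor signs the report "
    "the partner reviews the report"
)

following = defaultdict(Counter)
words = sample_text.split()
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the sample text."""
    return following[word].most_common(1)[0][0]

print(predict_next("auditor"))  # a word that most often followed "auditor"
print(predict_next("the"))      # the most common word after "the"
```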

Dan Hood (13:52):

Right? Well, and among other things, as you say, if it's only 2021, it won't know the latest accounting standards or the latest tax regulations or that sort of thing. And then even if it does know them, it might just make them up. It might just hallucinate them, which is a whole other terrifying, terrifying issue. But I want to move on from now, because I think that's a great picture of where we are now, and maybe start to look forward. And as we look forward, because I could speculate wildly and you could give informed speculation as to the future in a wide range of things, maybe we focus on a couple of areas. One would be the issue of ownership. You talked about, if you put stuff in ChatGPT, it's effectively owned by OpenAI, in terms of how that's going to work out.

(14:39)

Maybe we talk a little bit about the hallucination issue and whether that's going to be fixed, or how that might be fixed, in part because, as you say, when you're putting in your courses to get your questions, you know your courses, so you can look at those questions and go, oh yeah, that's terrible, why did it put that in there? That makes no sense. Whereas there's the famous story of the lawyer who had AI do a brief for a case, and it made up a case out of whole cloth. Not even misinterpreted it, made it up, which is bizarre, but also suggests that the computers are dreaming. So maybe those two in particular, but then also maybe what happens when we move beyond ChatGPT as our main idea of what AI is.

Randy Johnston (15:20):

Yeah, so let's take it out three to five years, as you had suggested earlier. And there are already indications that the current large language models have reached their limits. So there are going to have to be new models built, and they are being built on a regular basis. The last numbers I saw, Dan, were that there have been 29 new commercial models this year and three new educational models, and I think there's probably more than that; those are just the ones I'm aware of. Now, what you can expect is that the precision of the models will become better. You can expect that there will be more private models. And of course, Microsoft Copilot 365 pricing is finally available, $30 per user per month on top of the E3 and E5 platforms. So many of us will have it as part of our Microsoft Word, Excel, PowerPoint and OneNote suite.

(16:14)

So you could expect that if you're writing a document in Word, you can prepare a presentation from it just by saying, go do it. And that has actually been demonstrated and is live today in the Microsoft 365 platform. But what I believe will happen is the privacy will begin to kick in, and there are a number of models that are available: the Alpaca model from Stanford, the LLaMA model from Facebook, which has said that it's going to keep its AI engine public, and so forth. You'll be able to install your own AI engine for your firm or for your business and keep the data private. And that's actually a big step forward. Now, the other part of these models, Dan, is tuning them up. Today in ChatGPT there is a feature known as reinforcement learning from human feedback, or RLHF.

(17:09)

It's actually presented with a little thumbs-up or thumbs-down, and if it's right, you're supposed to actually click that and it improves the model. But a lot of people aren't doing that right now. And there is concern about alignment problems. That's really what you're talking about when we're dealing with hallucination and making stuff up from whole cloth. We know that techniques used for war gaming, or red teaming, are being applied, but there is concern from the very best of the developers that it's going to be hard to make the models more accurate. And Sam Bowman from New York University and Anthropic has said it's going to get harder as the systems get better. So I'm actually concerned about that. And even with the newly formed AI group that's trying to set AI standards right now, the computer scientists behind these models just don't know how they can do some of the things that they're doing. And that's another one of those, like, huh, how is that possible?

Dan Hood (18:18):

Well, then stop doing it if you don't know how you're doing it. This is how we got dinosaurs in Jurassic Park. I'm just saying.

Randy Johnston (18:25):

Yeah, I think so. And those talents are called emergent capabilities. But here's what I'd like for you to think about from an accounting standpoint: You can expect that the models will become predictive, that you'll be able to do things that are routine, like forecasting, in confidence and in private. So that'll become very natural. I'm a big fan of advisory services. I believe that you'll actually be able to analyze businesses and apply appropriate strategies to make them more profitable, make 'em run easier, improve their processes. Things like scheduling will become easier to do. So what I want you to do is start thinking about processes that you're doing today and how they can be optimized by AI.
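As one very small example of the routine forecasting Johnston expects these tools to handle, here is a straight-line trend projection on made-up monthly revenue figures; real advisory work would use richer models and real client data, kept private.

```python
# Bare-bones trend forecast: fit a least-squares straight line to a revenue
# series and project the next period. The figures are invented for illustration.
def linear_trend_forecast(values: list[float], periods_ahead: int = 1) -> float:
    """Fit a straight line to the series and project it `periods_ahead` steps out."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + periods_ahead)

monthly_revenue = [42_000, 44_500, 43_800, 46_200, 47_100, 48_900]
print(round(linear_trend_forecast(monthly_revenue)))  # projected next month
```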

Dan Hood (19:10):

Excellent. Alright, there's so much more to talk about, but we need to take a quick break and then we'll come right back. Alright, and we're back. We're talking with Randy Johnston of K2 Enterprises all about artificial intelligence, what it is, where it's going, and we've got a great sense of its direction. And I'm encouraged by some of it. He talked about things like better security, or being better able to control your data when you work with AI. Being able to have your own private AI is encouraging from that perspective. I'm a little discouraged about the whole "we're not sure how we do this and we're not sure how to fix it" kind of thing, but hopefully they're working on that. Now let's go a little bit even more narrowly into accounting, and I want to talk specifically about accounting software, because it seems like every day some vendor comes out and says, hey, we've got AI.

(20:00)

We're using AI, we're all about AI. And you're never sure how much AI they're actually using and what they're applying it to. And there are a gazillion vendors, and some of them genuinely are right out at the cutting edge or the bleeding edge of AI research, and then there are others who've basically bought a license to ChatGPT and integrated that into their thing. So it's a huge spectrum, I realize that. So this is a wildly unfair question, but in general, do you have a sense of where the vendors in the accounting space are? Have they really wrapped their heads around AI? Obviously, like I said, I know there'll be outliers on either end, but where are they in terms of this?

Randy Johnston (20:36):

Well, I appreciate the wildly unfair question, Dan, because I think I can actually give you an opinion on it, and I think some basis in fact, which is probably the more important thing here. So first, for our listeners, just to get 'em thinking about this: the quantity of algorithms developed behind AI is pretty stunning. Early on there were only about 30 to 50 AI algorithms, and today there are well over 500. Many of these are in the public space and can be used and reused openly. On the other hand, a number of vendors have been developing their own private AI algorithms. Now, when it comes to all of this AI, there's actually a chain: neural networks, which work kind of like our brains; machine learning, which a lot of the vendors are using and calling AI, but I don't consider machine learning AI.

(21:36)

It's a branch of it, but it's not the real deal. And then there are the AI algorithms. So the question really becomes, when I'm vetting products, Dan: how many AI algorithms are you using, number one, and number two, how many have you developed yourself? Okay, now with that as the framework, we can go back to the providers to the profession, the big boys, the Wolters Kluwers, the Intuits, the Thomson Reuters, and we can ask what algorithms are inside their platforms. And we do see evidence of AI being applied, let's say inside Checkpoint Engage. We also see that inside some of the Wolters Kluwer tools. We see much more machine learning inside the Intuit platforms. Now, I know in another podcast we are discussing accounting software, but I'll just throw accounting software vendors into a pool together, and most accounting software products that are out there don't really have AI. Everybody's trying to hitch their wagon to the AI moniker. And I politely just call that fake AI, like we heard of fake news or fake advisory in the past. It's really just marketing BS, and you have to be very, very cautious when they say they've got AI: it might be machine learning, and it might be some predecessor, but it may not be true AI.

Dan Hood (23:07):

Gotcha. So that's something, but in theory that's something we should expect them to be doing more and more. I mean, would you expect over time a lot of that fake AI to go away and be replaced by genuine AI, or are they just never going to get it?

Randy Johnston (23:21):

No, they actually will get it. And this is under nondisclosure, so I can't speak about it directly, but I am aware of a new platform that is completely AI-driven that will support the tax marketplace. It'll be announced later this year. But I'm also aware of a number of products that you would know of and their AI efforts. And the best way to think of it is the developers are looking at ways to apply AI with their engines, and they're basically peeling off one of the public domain models, they're plugging their product against it, and then they're using that AI engine in a private fashion to generate responses. So as an example here, Dan, the Corvee tax planning product actually has connected a true AI engine to produce tax planning guidance. Well, that's kind of interesting. Now, is it still early in the process? Sure, but I could probably now name 10 or 12 products that have good, solid beta tests running of applying AI to do specialized items inside their software platform.

Dan Hood (24:36):

Excellent. Good. And as you say, we can expect more and more of them to be adding this, right? So it's not just fake AI over here and real AI over there forever; they'll all converge on something real.

Randy Johnston (24:47):

A good way for our listeners to think about that, Dan, is to think about the evolution of the web: think about what websites were early on and how they got better, and how mobile apps were clunky and got better, and so forth. We're at a very, very early stage of AI, and right now many of the development teams were caught flat-footed. I think the reason we saw the letter that was signed by tens of thousands of developers was that they actually got caught flat-footed and wanted six months to be able to catch up. Well, we're in a period where a lot of these companies now get it and they're trying to catch up. You've got some innovators, you've got some laggards. And in fact, I refer to that as type one and type two AI: type one is enhancing your product, and type two is new, revolutionary types of results.

Dan Hood (25:40):

And we can expect to see more and more of those as more and more people get up to speed on it, once those six months elapse. So it's interesting you bring up who's developing AI internally; there are accounting firms, mostly the Big Four firms, that are developing their own proprietary AIs internally. And this leads me to my next question, which is, when accountants think about this, do accountants need to be building AI themselves? Is this a thing they need to be focusing on, or is it the kind of thing where they can just expect that it will be built into a lot of the tools they're going to use, and maybe there'll be AI engines that they can sort of buy off the shelf to supply other needs? I mean, does a firm need to have its own proprietary AI? I'm leaving aside the Big Four.

Randy Johnston (26:23):

Understood on that, and I think my simple answer is most of us do not need to build these platforms ourselves. I believe that you will find enough high-quality AI tools to use, that you can train on your own methodologies, that they'll produce favorable results for your firm or business. And we can go back to the simple Microsoft Copilot 365 example, but we can also go to these private generative tools that we might host. So today, if you use the OpenAI GPT-3.5 ChatGPT model, you could put up a private database and load it with all of your proposals and your methodologies and so forth, and it will begin acting like your firm acts. And that's a very simple installation. At the risk of sounding a little too technical, it's only like a 600-gig library to install, and it runs very, very straightforwardly, and you can privatize it.
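What Johnston describes, a model that starts "acting like your firm acts" once you load it with your own documents, is commonly approximated with a retrieval-then-prompt pattern: find the most relevant firm document for a request and include it in the prompt. Here is a deliberately simplified sketch using keyword overlap instead of the vector embeddings and locally hosted model a real private deployment would use; the documents and request are invented.

```python
# Simplified retrieval-then-prompt sketch: pick the most relevant firm document
# for a request by crude keyword overlap, then build a prompt around it. A real
# private setup would use vector embeddings and a locally hosted model.
def relevance(query: str, document: str) -> int:
    """Count how many words from the query appear in the document."""
    doc_words = set(document.lower().split())
    return sum(word in doc_words for word in query.lower().split())

firm_documents = {
    "proposal_bookkeeping.txt": "Our monthly bookkeeping engagement includes ...",
    "methodology_audit_planning.txt": "Audit planning at our firm starts with ...",
    "proposal_tax_planning.txt": "Our tax planning proposals are priced ...",
}

request = "Draft a proposal for a monthly bookkeeping engagement"

best = max(firm_documents, key=lambda name: relevance(request, firm_documents[name]))

prompt = (
    "Use the following firm document as context and match its style:\n\n"
    + firm_documents[best]
    + "\n\nRequest: " + request
)
# The assembled prompt would go to a privately hosted engine, keeping the
# firm's documents inside its own environment.
print(best)
```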

(27:28)

So I expect some of it to be done that way, but the vendors themselves will each have an offering and they'll be responsible for controlling the privacy. But just recognize that privacy is still an issue. There's currently, I would call it a rumor, but it's actually got confirmation, that even Microsoft is having trouble controlling the AI content across instances of 365. So I have my K2 instance of 365, you have yours at Arizent, you're running there, you put something in your engine, and all of a sudden I can see it over in my engine. Not good, right? Yeah. Now, it's not real common, but there are situations like that. So this whole issue of privacy, and hallucinations, those two are the biggest threats on these platforms. But I expect in three to five years a lot of those will get worked out.

Dan Hood (28:26):

And in the meantime, there's plenty you can do without putting in your private information or private client information, as we talked about: all kinds of marketing things, or just general writing tasks that you do every day that don't require any proprietary information or data or anything like that. But it makes things a lot easier.

Randy Johnston (28:41):

Dan, it's very straightforward on this one, by the way. Everyone should have an AI initiative, regardless of your size, even if you're a sole proprietor. But if you're a small firm, you need to have a little bit of time set aside to understand what the tool can do for you. And I believe that you will be rewarded with time leverage, because the time you invest in it will come back to you in results that you can use. So it's time to be learning, and I can't imagine any business that shouldn't have a little bit of an AI initiative, regardless of size.

Dan Hood (29:16):

No, it certainly will. Just in the same way that, as somebody recently said, all businesses are now technology businesses, to a certain extent all businesses are going to be AI businesses one way or another. It's going to touch everything. I want to just sort of get a final thought on how accountants should be thinking about it. I think you've given us some clear clues, right? One is, it sounds like you don't need to be building your own AI, you don't need to know how to build an AI, but you do need to know how to use it, whether that's learning prompt engineering now. And I always wonder about prompt engineering, whether it'll be sort of like I said, like Boolean operators for Google: no one needs to use any of those anymore because Google's gotten a lot better at finding things for you, and so a lot of that early stuff went away. But for now, prompt engineering; and then for later on, it's things like, what can it do for us? How can this impact your business? What can we hand off to AI, given the war for talent and all that sort of stuff? Are there other things you think accountants should be thinking about, just as a final takeaway, when they think about AI?

Randy Johnston (30:08):

Yeah. So the first thing I'll do is just respond by saying, think about your own personal situation. Think about all the things that are problematic. We've kind of identified this earlier in the discussion, but I would suggest you go after your top three time-consuming items, whatever those are. So I would personalize that first. But once you get past that and you begin to learn how the platform works, then I would turn it towards your firm's needs, or perhaps downstream to your clients' needs if you're in public practice, or your customers' needs if you're in industry, and say, what is it that our clients could benefit from? And I would begin building tools and methodologies to support your clients or your customers. To me, that's where it really works. Now, I'll finish up with just one other little piece. We know that talent is short, and I have taken a much more team-first, employee-first mentality on this. Look around at your team and say, how could I make all of my team members' efforts easier? And I think you will discover right away that AI can do that. It doesn't necessarily have to be a young, tech-savvy person. You can almost go person by person and say, what could I use AI for with that person, and help build that and help teach them that.

Dan Hood (31:36):

Wow. Very cool. Excellent. I hadn't even thought about, when you throw out the idea of what your clients and your customers can do with it, that's a whole advisory service right there, right? We'll come in and tell you how AI can improve your business. And it's only going to be a bigger and bigger opportunity as time goes by. Well, this has been a great conversation. I could pump you for information and ideas and thoughts for days and days and days, because you've got a lot of 'em, and it's always fascinating to talk to you, but unfortunately, we have to go. So I want to say, Randy Johnston, awesome stuff. Thank you for joining us.

Randy Johnston (32:07):

Very pleased to do so, Dan, and I look forward to speaking with you and your listeners again. 

Dan Hood (32:11):

Cheers. Alright, and thank you all for listening. This episode of On the Air was produced by Accounting Today with audio production by Kevin Parise. Rate or review us on your favorite podcast platform and see the rest of our content on accountingtoday.com. Thanks again to our guest, and thank you for listening.