Track 4: Getting Started With Artificial Intelligence

Transcription:

Jacob Sperry (00:09):

Hello everybody.



(00:12):

So I'm looking forward to introducing the session today. My name is Jacob Sperry. For those of you that have heard me say something similar in the last session, I apologize for the repetition, but I'm the Vice President of Customer Experience at Trullion. We are an AI-enabled platform bringing accounting solutions to audit firms and controllers. I'm excited to introduce our esteemed speaker for this session, and like last time, the remainder of this introduction was written using ChatGPT-4. Artificial intelligence is a hot topic, but many professionals still find it mysterious. If you're unsure how to start with tools like Bard, ChatGPT, and Copilot, you're in the right place. Our session, Getting Started With Artificial Intelligence, will help you understand and use AI tools to boost your efficiency and productivity. We're honored to have Randy Johnston with us today. Randy has over 40 years of experience in technology and was inducted into the Accounting Hall of Fame in 2011. He's been named by Accounting Today as a top 25 thought leader in accounting from 2011 to 2024, and is a regular contributor to the CPA Practice Advisor, Journal of Accountancy and other publications. He owns multiple businesses, including K2 Enterprises and Network Management Group, Inc., the largest managed service provider for CPA firms in North America. Randy's wealth of experience as a college instructor, management consultant and technology advisor will shine through in today's presentation. Please join me in giving a warm welcome to Randy Johnston.



Randy Johnston (01:47):

Thank you very much, and it is a pleasure to spend some time with you again this afternoon. So I do appreciate your time this morning, and Hitendra's presentation just a moment ago, he gave you lots of good, solid advice. I thought that was quite well done too. The topic is so broad, we could spend eight hours on this and maybe hardly scratch the surface. So I was trying to be selective about the things I'd want to know if I was in your shoes running a firm. And I pulled a few different things in additionally here over the last couple hours. But I have a policy, I always show a picture of the grandchildren before I present. So you can see I started this in about 2014, about 14 years ago, sorry, in about 2010. And so we do spend a lot of time with our family, and we have always tried to leverage the technology that we use to do more of the things that we want to do in life.



(02:40):

So we have the pleasure of spending a week with all of them in Orlando starting Saturday, so we're really happy about that. In any case, you've heard enough about all the businesses, so I'm just going to skip past that at this point, other than to say there are websites that are CPE sites for CPA technology, accounting software technology, paperless and so forth that we maintain. So we're going to really step through a variety of topics where I want to be very practical now about what I think you could do today as far as some of these AI services go. And you've heard enough definitions, I suppose, but all we're really trying to do here is get to where computers can do tasks that humans would otherwise do. Now, general artificial intelligence, I figure, may still be 25 years away, 2050-ish is about what I'm thinking on that, the sentience and consciousness and all that, and I follow all that stuff.



(03:39):

But there is practical stuff you can do right now. And a lot of this evolved out of the machine learning world and the neural networks world. Most of the technology that we're using right now evolved from the 2010 neural networks, so we're really using practically about 14-year-old technology. Now, Hitendra just explained language models, and I had actually set that aside, but large language models, medium language, small language and narrow models are actually important because they run different things in different ways. Most of the chatter is about the large language models, but there are practical applications, let's say a small language model doing things like forms recognition for expense management. So the vendors right now are going to be developing all sorts of models along the way. And realistically, machine learning, when people are talking about it, is a subset of AI. And we could get into a lot of the weeds here, but I do want you to note that generative AI is just coming across the peak.



(04:52):

If you follow the Gartner methodology, it is heading for a trough, the way I'd see it. Now, I'm not dissing AI, I just want you to understand that with these wild explanations and everything that people were talking about, there are going to be some expectations that become more real as we see it. Now, the future is very, very bright on AI, but just recognize that's where we're at in the hype cycle. So the recent buzz has clearly been around generative AI since the wide release of ChatGPT in November of '22. But let me just give you a quick tour, of which there are hundreds of these large language models. When ChatGPT came out, it was superb in the 3.5 model, but by the time we got to future releases, the 4 model improved on it, and then we got to Microsoft's Copilot model, which was an improvement; by the time we got Bard updates in December, now named Gemini, Gemini did a better job on accounting-related tasks, and then when we got Claude 3, it did a better job.



(06:02):

So we've got a horse race with lots and lots of players out there, and I worry about the accounting profession and regulatory compliance and lots of other things. But I am going to run most of these models with you today live, and hopefully the internet access here will cooperate, so you can see how some of this works. But the net here is it's all been about generative AI, but there are lots of other AIs that are equally as important. So the star that generated a lot of this show, frankly, was ChatGPT. And we're going to run the most recent model, ChatGPT-4o, little o, Omni, and that's what I'm going to use as my model basis, but we'll run several different models with you today. So as Hitendra was showing there towards the end, it's just a statistical model to generate answers, and really, there's a lot of complex computer science behind it, but it's just statistics, and I'm really using them today as a turbocharged search engine.



(07:05):

These models have been trained on lots of data globally, but as Hitendra was trying to lay out for you, there's bias in the models because much of the worldwide web is in English and much of it is US-centric. And I can go into all sorts of things along that line, but the Google model Gemini is using about six terabytes of data and 1.6 trillion parameters to create its model, whereas GPT-4 Turbo, for example, has about seven terabytes of data and 1,760 billion parameters and so forth. These models take a lot of horsepower to create. Nvidia had their earnings announcement today with the 10-for-1 split. And I've been watching Nvidia, and many of you have seen me present through the years; I've got a whole pot full of Nvidia content in Tech Update this year about their engines. But I'm going to tell you that I don't believe that Nvidia's graphics processors are the way forward, which is actually a little bit of a warning there on that one.



(08:11):

But anyway, notice quantum computing, to me, is part of where this is at, because I'm staying in the saddle of working on accounting technology because I can see the promise of AI running on quantum computing, and I actually care less about AI running on neural processing units and so forth, because of the capability that's there. So again, I'm not going to try to teach you quantum today; I could spend a couple hours on that, and it's great fun to talk about. But I thought what I would do is just throw a couple of things out here in a hurry for you, because there are hundreds of accounting-centric applications that have working AI in the models today, and I wanted to put them up in these categories so you could have a picture of them or we could talk about them. But there are advisory tools out there, like Impact Data or Ada, that are using AI in their basis.



(09:13):

You have some CAS tools out there, like Digits or Keeper, that can do some of this type of work. Of course, you've got extraction tools like Makers Hub, who happen to be out in the hall, that have some very fascinating capabilities there. And you've got other reporting tools like Periodical. Now, if you're playing the tax and audit games, there are business tax AI tools that are beginning to work pretty well. I noticed Dwight was here yesterday, so you might've talked to him, but the Additive CEO was onsite here yesterday. Another tool that I've recommended frequently into CPA firms is Black Ore Tax Autopilot. The firms that used this tool this year told me that they were getting more than a fivefold return on it, and the idea is very simple: it does all the work of a 1040 workpaper tool like CCH Scan with AutoFlow, SurePrep or Copanion GruntWorx, and it does all the labor, so no outsourcing is needed, and it produces a partner-reviewable tax return.



(10:23):

You pump the documents in, you get a prepped return out, and every practitioner that I've talked to that I put on the platform has said it's worked brilliantly. Okay, now, is it early? Sure. Is it going to scale? We don't know, but there's lots of promise in that type of tool. And when I think about labor, that's pretty different. And then you've got other products that have these time, sorry, these AI tools in them, like Digilance or like Laurel, which has a very interesting AI timesheet mechanism. There were questions about audit guidance, and Materia actually has some of that in AI. And of course you've got technical memos in time credit, and then even a few standard players like Wolters Kluwer's TeamMate Document Linker have it, and the Zoho team, which is out in the hall, also has some AI-powered things in lots of different portions of their models.



(11:23):

I think the team out there should know that pretty well. So I wanted to call out this guy right here, the Generative AI Toolkit from the AICPA. It has flaws in its background, it's not perfect, but it's a pretty good starting point, and it's downloadable and you can read through it without much effort, and it has a follow-on piece as well. Over the last seven years I've helped with the AICPA's accelerator program; this is the seventh year. To qualify to be in the accelerator pool this year, you had to have an AI component. There were 70 products that would be related to accounting that were vetted down to the ones that were selected, but you can see some of them listed here. So one other thing that I thought after listening to the presentations this morning, I said, you know what? I want to tell you why I think things are happening the way that they are.



(12:21):

And my traditional recommendations for buyers of Windows and Mac PCs have looked like this, but with Intel stopping production of their Core i5 through i9s on May 24th, and now that the M4 iPad is out and Apple's releasing their M4, what I recognized was that neural processing units are the thing to buy at this point. So in 2016, I recommended accountants buy GPUs to support AI activity. At this point, I'm changing that: as soon as the products are released, everything that you buy ought to have neural processing units in it. And I'll try to explain why, but the neural processing units in our phones and in our laptops and desktops can run the models locally, and the models are being built so they can be run in the cloud, or they can be run on the device, the computer if you want, or on the phone.



(13:20):

That's the strategy that I'm seeing unfold here. So my technology recommendation is going to change to Core Ultra, and it's predicted that 19% of all computers purchased this year will be AI-powered, and I think that number actually might be a little low. So for example, yesterday Microsoft announced the new Surface Pro Copilot+ PC. What's important to hear in here is they have a feature called Recall, and what you can do is run AI locally on the Surface, and it can remember the AI transactions for a period of days and reproduce them, because as Hitendra correctly said earlier, most of the AI models cannot reproduce the same result because they're a statistical model. This thing can, and that's important. And if I had a suggestion for you, if you're a risk-averse accountant, which some of you might be, our core recommendation in AI right now is that you stick fairly close to the Microsoft Copilot family of AI.
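[Editor's note: to make the reproducibility point concrete, here is a minimal sketch, assuming the OpenAI Python SDK, of how you can nudge a hosted model toward more repeatable output; the seed parameter is best-effort, so, as Randy notes, identical results still are not guaranteed.]

```python
# Sketch: asking a statistical model for more repeatable output.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
# "seed" is a best-effort hint, not a guarantee of identical responses.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o",
    temperature=0,  # reduce randomness in token sampling
    seed=42,        # best-effort reproducibility across identical calls
    messages=[{"role": "user",
               "content": "Summarize ASC 842 lease accounting in three bullets."}],
)
print(resp.choices[0].message.content)
print(resp.system_fingerprint)  # changes when the backend configuration changes
```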



(14:28):

Copilot's not the best, but it indemnifies you, and there are some reasons why I send you down that path. Now, I'm not against ChatGPT, I'm not against Claude, I'm not against Gemini, but you've got a little less liability, plus many of you have Microsoft 365 licensed. You're using it with your Wolters Kluwer and Thomson Reuters and Intuit products. And gosh, it's integrated, and there are some safety factors in there; that's just kind of a little separate piece. But I also wanted you to see that this product uses the Snapdragon X Elite, and it is the first product to be released with this, and you can see it ships on June 18th. So we'd been watching for the Snapdragon X Elite, the NPUs that are going to be done by that family. So I didn't want to go too far on a rant, but I realized many of you are making buying decisions.



(15:23):

I would not buy a bloody computer this year that didn't have a neural processing unit in it. And many of you are going to be replacing computers, because you bought computers for the pandemic and you're about at that three-to-four-year cycle. No computers without neural processing units is my new moniker. So just again, I want you to see that I was listening to the things unfolding this morning; I thought that was important. Okay, now, all of that said, getting to the core topic here on the gen AI tools, of course ChatGPT has a lot of the traction, and I follow that company and so forth. It's been around since November of '22. Microsoft put $11 billion in this company alone. The free version, 3.5, is quick, but it hasn't been updated for a while, January of '22 the last time I looked. Of course, the version 4 product you can license; it's about $20 a month.



(16:17):

There's a Teams version, and an Enterprise product that's a little more sophisticated, at $30. And of course GPT-4 Omni was new on May 13th, and that's what I'm going to run with you today. Of course, in the Google world you've got the Gemini product, formerly known as Bard; I feel like it's that "formerly known as Prince" thing. It became available in March of '23. You can use it for free, you can sign up for a subscription; actually the no-charge Gemini is pretty good, but there's a $20 version of that as well. And of course, Copilot was officially launched in February of '23, broadly available in November, and scaled to full general availability early this year, and loosely speaking, it's $30 a month to add it to your Microsoft 365 plans. There are some situations where you can get it for $20 with Business Premium plans. And of course Microsoft has been investing globally in AI and lots of other initiatives on this.



(17:22):

But on April 23rd, they introduced the new Phi-3 model, which runs on mobile phones, both iOS and Android. So that's kind of an interesting play to watch them do this, because they see the same vision: centralized computing of AI in cloud data centers like Azure, Amazon, Google, Oracle and others; the local computers, desktops or laptops; and they're expecting a lot of this to run on phones. Then of course you've got Claude. Now, the Anthropic model I like a lot because of the guardrails that are in here. See, the basic design of this product has a constitution which tries to make it more ethical, and trying to keep these AI models in between barriers is hard; it's a technically hard problem. Amazon's put $4 billion in this particular product. Again, no charge for the regular Claude model, which is probably good enough for most of you, but if you want to use a more advanced Claude model, you can pay $20 a month to run the Opus model.



(18:28):

Now, myself, I don't do any of that. I run it all in Chatterbox, which allows me to pull all these models into a single instance, and I can just switch between any model I want to use to run them, and you can avoid some of the charges. But that's the way their pricing is today, and I'm okay with that; if they want to charge me for the access the other way, I'm cool. But those are your four biggies, and as professional accountants, I believe that you will get acceptable results for many of the queries that you might make with any of these four models. And it's a leapfrog. It looks like some of you are old enough that you remember when there used to be a VisiCalc, Lotus 1-2-3, Excel war, and we were evolving through technology like that and they were leapfrogging each other. I really like it when there's at least three technology competitors in the market, but we don't get many choices like that today, although we kind of have a little of it.
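[Editor's note: here is a minimal sketch of the kind of single-pane, multi-model setup described above, assuming the publicly available OpenAI and Anthropic Python SDKs with API keys set in the environment; the model names are illustrative, and this is not the specific tool Randy uses.]

```python
# Sketch: route one prompt to whichever vendor's model you want to try.
# Assumes the openai and anthropic Python packages are installed and
# OPENAI_API_KEY / ANTHROPIC_API_KEY are set. Model names are illustrative.
from openai import OpenAI
import anthropic

openai_client = OpenAI()
claude_client = anthropic.Anthropic()

def ask(model: str, prompt: str) -> str:
    """Send the same prompt to different engines based on the model name."""
    if model.startswith("gpt"):
        resp = openai_client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
    if model.startswith("claude"):
        resp = claude_client.messages.create(
            model=model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text
    raise ValueError(f"No client configured for {model}")

prompt = "Draft a brief 1040 engagement letter for an individual tax client."
for name in ["gpt-4o", "claude-3-opus-20240229"]:
    print(f"--- {name} ---")
    print(ask(name, prompt))
```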



(19:25):

Alright, so I wanted to set it up like that because we're going to start with ChatGPT as the reference engine, and I'm going to run some of those. But the idea here is I'd like for you to think about what it is you're trying to get done in your firms. Now, you can follow my advice or not on using Copilot, but you're going to see I will use the same prompts against various engines to see how that works. Now, prompts are simply the requests that you make of these AI engines, and what I would recommend you consider doing is building a prompt generator that you share among your business, among your firm. So I had actually pulled that aside just so you could see this, but fundamentally, what I've done for my K2 business is I've built a prompt generation engine and I put controls around this.



(20:20):

So I've got parameters that I'm passing into ChatGPT: how big I want the output to be, how long I want the sentences to be, what I want titles to be. And in effect, our K2 business tends to write about 70 new courses every year among a very small team of about a half a dozen of us, and we have to write crisp content in a hurry, and we can generate PowerPoint slides and we can do course planning and we can create demo files and we can pump things to newsletters and we can pull SQL commands and so forth. You kind of get the idea here. So I'm going to tell you that I've built these with CPA firms to help them standardize their use within the firm, because if somebody's learned it, why don't we share it with everybody else? It seemed pretty straightforward to me.
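[Editor's note: as a hedged illustration of the shared prompt generator idea, here is a minimal sketch that merges firm-standard parameters into a template before calling a model through the OpenAI Python SDK; the parameter names and template wording are placeholders, not the actual K2 engine.]

```python
# Sketch of a shared "prompt generator": parameters the firm standardizes on
# are merged into one template before the request goes to the model.
# The template and parameter names are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

PROMPT_TEMPLATE = (
    "You are drafting material for a CPA firm.\n"
    "Title: {title}\n"
    "Audience: {audience}\n"
    "Keep it under {max_words} words, with sentences under {max_sentence_words} words.\n"
    "Task: {task}"
)

def generate(task: str, title: str, audience: str = "practicing CPAs",
             max_words: int = 400, max_sentence_words: int = 20) -> str:
    """Build a standardized prompt from shared parameters and run it."""
    prompt = PROMPT_TEMPLATE.format(
        task=task, title=title, audience=audience,
        max_words=max_words, max_sentence_words=max_sentence_words,
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(generate("Outline a one-hour session on AI tools for tax practices.",
               title="Getting Started With AI"))
```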



(21:11):

Further, we can teach the engine to configure itself, and we're going to suggest a few options like that to you in just a minute. So if we try to get a little bit further here, you can do this in the free model. I think you'll get a little bit more with the paid model if you're going down the GPT road, because you have access to the 4 and the 4o models and so forth, you get faster response times, and you get some other features there, so it's probably worth doing that. You'll get the image generator like DALL-E and so on. But the first thing I'm going to caution you, regardless of which engine you use: you need to go in and do the customization work. So you'll see here that I've got a customization that says I'm a CPA, I live in this location, I do these things, I have these interests and so forth.



(22:06):

Make sure you go to the back end of the engines and tell 'em who you are and what you look for and what it's like. Okay? So make sure you get that done. And you know what happens when you configure these engines like this? This is partner Tommy Stephens out of Woodstock, Georgia, out of Atlanta. It basically says, look, certainly, Tommy, I can tell you about data security and artificial intelligence. That would be a fairly common response given those customizations. Now, when you're working with Gemini, the Google platform, it's very similar, but Gemini has a very current tool set, a very current data set. So as Hitendra was showing with his Venn diagram, there's the data that's known today and the data that's known about your client and so forth. We're going to see much more current responses in Copilot and in Gemini and in Claude than we do in the ChatGPT models generally.
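[Editor's note: as a hedged sketch of the "tell the engine who you are" customization, here is how the same effect can be carried in code as a system message via the OpenAI Python SDK; in ChatGPT itself this lives in the custom instructions settings, and the persona details below are placeholders.]

```python
# Sketch: carrying "who I am" customization as a system message, analogous to
# the custom-instructions screen in the chat products. Persona text is a placeholder.
from openai import OpenAI

client = OpenAI()

CUSTOM_INSTRUCTIONS = (
    "I am a CPA and firm partner based in Woodstock, Georgia. "
    "I focus on accounting technology and data security. "
    "Write in plain professional English and cite sources for factual claims."
)

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user",
         "content": "Give me a short client briefing on AI and data security."},
    ],
)
print(resp.choices[0].message.content)
```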



(23:07):

And again, there's kind of a free plan available here. So all of these products will do citations. Most of the time, if I'm going to try to use something to minimize the hallucinations, I will ask for the citations: where did this come from? So at least I can back-check it that far, and you'll find that that will help too. Alright, so that's more than enough setup; what I want to do is go into demo mode here as time allows, and we'll just do a few things, because I was trying to figure out what type of prompts would make the most sense to you. So I decided to stick over in the tax side of the bucket, but I could have been doing audit or CAS or any of these other things too. So for example, maybe I want to generate a tax return engagement letter.



(23:59):

Now, we know that many of you get those from your liability carrier, and I won't go into all of the mechanics behind it. But what I decided to do, just for convenience, because frankly you don't want to watch me type, alright? So I have the various typing that I would've typed into these models, just so you don't have to watch my belabored typing. And what I'm going to do here is I'm going to come to ChatGPT first, and you can see that in ChatGPT I can choose the Omni model, I can choose the 4 model and I can choose the 3.5 model. I'm actually going to start with the 3.5 model and do a new chat. And what I'd like you to see is just how quickly this particular engine responds and what the response is like. And you can see it's not a bad generation.



(24:49):

Now, just so you know, I do not write any of my articles using AI. However, I have configured AI so I can submit articles and say, write it in my style, and it's uncanny. It sounds just like me and it uses the same words I would use, and it's like spooky. Okay? So you can be very specific about how you configure things to tell the engines who you are and so forth. But see, if we back up on this, it generated that response fairly quickly, and it's a pretty good letter. Is it perfect? No, but is it better than starting with a blank piece of paper? Well, to most of us it is. And since I write so many CPE courses, which requires so bloody many questions to be authorized for self-study: a year ago I wrote courses and generated 600 questions, of which I had to fix two; this year I generated almost 800 questions, of which I had to fix zero.



(25:48):

Do you know how long it used to take me to write one good CPE question which aligned with the learning objective, with the correct answer and why the other answers are wrong and so forth? I'd be doing well if I could get one done every 20 minutes. You do the math and figure out what that did for that particular part of my job. Well, okay, so you've seen a response here with ChatGPT 3.5. Let's try it again with GPT-4. So let's just do a new chat, and I'm going to put that same item in here. And one of the things that you'll note is graphics can be analyzed nowadays, so I generally remove that graphic element when it goes; so we're just doing a plain response. Look at the response time on this. See, it is processing each token; it's a statistical exercise to figure out what the next word should be.
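[Editor's note: to make the token-by-token generation visible in code, here is a minimal streaming sketch, assuming the OpenAI Python SDK; chunks print as the model predicts each next piece of text.]

```python
# Sketch: stream a response so you can watch the text arrive piece by piece,
# much like the on-screen demo. Assumes the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-4o",
    stream=True,
    messages=[{"role": "user",
               "content": "Draft a brief 1040 engagement letter for an individual client."}],
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)  # print each piece as it is generated
print()
```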



(26:42):

And even though I'm on the paid premium version, you can see that the 4 response is notably slower than the 3.5. It is better, but not always better; so that's another one of the twists here. You'll notice that this letter looks pretty good. We're going to let it finish generating here, but you can see it's still in process. Last time I could talk a little bit and it was done, right, and blah, blah, blah. Okay, so it's built a pretty good letter, and again, we could be very specific about making it Microsoft Word formattable, or Google Docs formattable, or Zoho Writer formattable; all those things are good. And then one last little piece here: we're going to do the Omni version now, and again, I'm going to do a new chat and do that same request one more time and turn it loose.



(27:38):

Notice the response is actually a little better than the 4 in this situation, and you'll find that the model is actually a little more refined. Now, Omni has lots of other voice interaction pieces and things that I won't talk about today, but I want you to be aware: multiple models, single company. You can interface on the back end with API calls so you can get to the models directly, but you do have security risks, because the license agreement from OpenAI for ChatGPT says when you put something in the model, they own the intellectual property; therefore you should never, ever put client data in here at this point. That's my guidance. And I have watched professional demonstrators to CPAs say, let me upload this client bank statement and show you how it works. It's like, what are you doing? Okay, but maybe it's because I've been a little more conservative and I try to protect my clients' confidential information.
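[Editor's note: given the caution just stated about client data in public models, here is a hedged sketch of one way a firm might scrub obvious identifiers before a prompt leaves the building; the patterns are illustrative only and no substitute for a real data-loss-prevention policy or the guidance above.]

```python
# Sketch: scrub obvious client identifiers (SSNs, EINs, long account numbers)
# from text before it is sent to a public model. Patterns are illustrative and
# are not a substitute for a data-loss-prevention policy.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),   # Social Security numbers
    (re.compile(r"\b\d{2}-\d{7}\b"), "[EIN]"),          # Employer ID numbers
    (re.compile(r"\b\d{9,17}\b"), "[ACCOUNT]"),         # long digit runs, e.g. account numbers
]

def scrub(text: str) -> str:
    """Replace identifier-looking strings with placeholder tokens."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

prompt = "Client John Doe, SSN 123-45-6789, EIN 12-3456789, asked about Section 179."
print(scrub(prompt))
# -> Client John Doe, SSN [SSN], EIN [EIN], asked about Section 179.
```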



(28:39):

I don't know, but it's also a regulatory problem. Anyway, so you get the idea. So you've seen three different ChatGPT resolutions on this. I have Copilot running here on the side; I'm going to do a new Copilot prompt. I think this is the easiest way to say it: when you look at Copilot in this free model, you notice that you can be more creative, more balanced or more precise. As accountants, I'm going to recommend that you always choose the more precise mode; I think it'll give you better results. And let's just turn this one loose and we'll let it respond. Now, it depends on the day; Copilot is usually a pretty good responder. In the materials that you'll see as I proceed, it's been pretty consistent in its responses. And you'll notice that the style of the ChatGPT model, sorry about that, and the Copilot model are pretty similar.



(29:39):

The structure is much the same. Why? Well, Copilot 365 is licensed from OpenAI, and in November, when they had the big fiasco and the OpenAI employees nearly went to Microsoft, they knew that they wanted to have that model, and they're using it. Okay, now, I can show you the technical diagrams behind Microsoft's strategy, which says we're going to isolate the data of Copilot in Azure, but we know in fact Copilot is leaking data from one Microsoft 365 instance to another, right? Now, let me say it a different way: my data is getting mixed with your data, and Microsoft can't figure out how it's happening. Okay? That's a little problem, and particularly if I'm putting anything that I'd consider confidential out there, that's a real problem. But there is also retrieval data that's controlled with what's known as Graph, Microsoft Graph data. Meta, Facebook to most of you, released their AI model-generating software 18 months ago or so for a lot of the artsy things, arts, music and so forth.



(30:55):

And lots of people have built on the Meta model, but so did Microsoft, and they specifically modeled a new Graph engine that they interface to. So when you log into Microsoft, some of the data is protected behind the Graph engine and some of it's protected by the Copilot engine, or if you use this open one, really none of it's protected in the same way. Did I say that simply enough so you could get it? I'm not trying to talk over your head; I'm trying to be straightforward in the way we're talking about this, to help. Okay, so you can see the relationship here. Again, not a bad letter, not quite as useful, but you'll notice one of the things that I like about the Microsoft way of doing things is it actually has references here for where the data was being pulled from. So they're not perfect citations, but they do show you what on earth it came from, and it gives you at least a sense, and I like that part of Copilot. And I can force that to happen in ChatGPT and I can force it to happen in Gemini and so forth, and I usually do, but I was trying to just do these kind of native models along the way.



(32:05):

Okay, so I should have Gemini sitting up here ready to go. Okay, so this is the Google engine right now, and if we just turn this loose... and again, I'm going to do the same thing in terms of trying to paste that in. Why is it not pasting properly? I do not know. Let's see if we can pick it up one more time



(32:34):

And let it go that way. Now, one thing I admire about Google's Gemini approach is its responsiveness. You'll notice in the other models we were kind of watching it go by; this is bam, it's there. And I also noticed up here that since this morning Gemini was updated, so who knows what that update was, but it basically is talking about some of the data analysis and so forth. Whenever I'm going to do a demonstration, I always try to just check things out beforehand, and that one ran about 7:00 AM this morning; things could go a little different. But again, it's a pretty good engagement letter. Does that make sense? Now, what I'd like to do next is I'm going to flip back and forth between the presentation, and you should have this presentation available to you; SourceMedia, er, Accounting Today has that, and you're welcome to it, and you can try these things, and we try to explain why things are working the way they do. But again, you could copy this text in, you could actually tell it to put it in Word format or whatever. But if you didn't notice it in the Gemini version, it actually provided three different versions of the engagement letter up there at the top.



(33:51):

So let's just flip back over so you don't miss that, because you'll note that there's this show-draft control, and you can see that there's draft one, two and three of the engagement letter. So it not only did one for you, it did three at the same time, and I like that feature. Again, the citations feature in here is nice, so Google has actually done a pretty good job on some of those types of items. Now, we recognize that you have research tools for lots of reasons, but we know a lot of your young professionals don't use 'em. You don't license as much research as you might for the firm because of the licensing costs and blah blah blah; I get all those things. And we know, if you look at the statistics out there, the most common research tool today has been Google. So that's what a lot of professionals are using.



(34:42):

And what I found for me, where I used to use the more secure search engines like DuckDuckGo and so forth, once the AI models came out, I hardly ever do search in Google or DuckDuckGo or anybody like that anymore. I just don't do it, because the results are so much more useful out of the gen AI tools for the things, in the style, in which I want to use them. And it made perfect sense to me that Google's search share dropped from 70 down to 52% in about a nine-month window. I'm thinking, I'm not using it; it makes perfect sense to me. Alright, so next we're just going to do: does computer software qualify for Section 179 expensing? And I'm not going to actually run that in all the different models like might be suggested there, but I am going to create a new chat, and just so you don't have to watch me type, I'm going to pick up that question and drop it in here and say go.



(35:57):

And again, I'm just asking you to consider: is that technically accurate? Does it seem like it's reasonable? We can change the tone of the response and so forth. That's probably a better result than if I'd have come over here, and I actually should just do this, just for giggles and grins, because I didn't do this in practice for the session. I'll just go to Ms. Google and we get a response. And by the way, Google has announced they are now putting Gemini capabilities into their Google search engine. Alright? So that's way better than it was 30 days ago, 45 days ago, whatever; it was pretty recent. But I want you to know that that was happening as well. Well, okay, we did it that way, but could we do this over in ChatGPT more or less the same way? So I'm going to use the Omni model here and do a new little chat. We're going to turn it loose and say go. And I don't think it will affect performance in a big way, so we'll let that run. So I'm going to also do a new chat over here in Copilot and likewise turn that loose



(37:16):

And let it do its thing. Again, you'll see similarities here. I wish it would stop doing that. You'll see similarities here between the two models, but they are not exactly alike. Notice the citations again, notice just the order; there are several variances between these. Now, again, I don't know that my conclusion is spot on for you, but I will tell you, if you played with the models, you asked questions that didn't contain client data that you had interest in, and ran 'em across the models and saw the results, you would probably favor one of these models over another one. But if you decided that



(37:59):

the first week of June, you better go back and check it again, maybe the first week of July or whatever, because these things are changing fast enough that you may be working on a model that was great the day you chose it, the best of all, and a few weeks later it's different, just like that Gemini update today. Now I'm going to go back and second-guess myself and try to understand why. Okay, so in any case, are you getting the drift of what's possible here? Let's go a little bit further. Again, I want you to see some of the capabilities here, but notice that you can be very specific about things like: if you're researching the SECURE Act 2.0 on 401(k)s, give me a bullet list about the most important items. Okay, so let's go try to grab that. Are these helpful?



(38:51):

If not, I will not do more. Alright? But I want you to just, again, see what's going on. So we're just going to do a new 4o chat and turn that loose and let it answer that question. Now again, picture yourself in the office: you've been asked this question by a client, you're trying to figure out a response, and this guy's going out doing the work for you. I think you might've seen it got results from six different sites to get this answer. Again, I probably should go ahead and prompt it to do citations, but the net here is, technically that's a pretty accurate answer if you go back and look at it. Okay, and how long would it have taken you to generate that? So the claim that I've been consistently making, and I'm watching other presenters say the same thing: an accountant using AI will outperform an accountant that doesn't use AI, and it just can make your life so much easier. But I want you to be real conscientious about client confidential data and other things; I've got those cautions coming up towards the end of our time together, which is closing in quickly. If we again do the same type of thing, let me just switch over to Gemini and pump that through Gemini and let you see the result.



(40:16):

Again, it's coming out fairly quick, but notice the pause there; it took us a little while. Oh, look at that. That's kind of interesting, because that worked this morning on the old Gemini model and it's not working this afternoon on the new Gemini model. That's pretty cool. Okay, the problem is, when you're trying to get productivity in your firm and that happens to one of your users, then what? Okay, well, you're getting the drift. I can continue doing that, but let's just look at some of the other options here, because we can get the tax research results and they're pretty good. And notice that we can also do all sorts of other types of correspondence, for example, responding to tax notices, which I think will be at far higher volume because of some of the consequences of the last few years. And notice that we could actually create a very specific letter with very specific parameters.



(41:16):

Here's a better way to think about it: the more specific you are in the prompt engineering, what you put in the engine, the better answer you will get, and the more practice you get at putting in better prompts, the better answers you'll get over time. It's part of the reason I like to use a shared sheet; at least for now, actually, I'm doing it in Lists in many cases, Microsoft Lists, to get that done. But think about the types of things that you're asked to do on a regular basis in your firm. And again, you can do a pretty nifty job with this technology. Now, as I've already stated, I think Microsoft Copilot is a little safer place for you, and Copilot, if you're running Windows 11, is built into the bar at the bottom; that's the way I'm using it here, in Windows 11.



(42:07):

And of course it's in the Edge browser. So many of you use the Chrome browser or some other browser, but I actually switched over to Edge just to have Copilot available there, and I found the switch was absolutely worth doing. So that's just me. And again, we can go on for more examples along the way, but you will discover that if you're inside Copilot, the usefulness for putting it into email, into Word documents, into PowerPoint presentations, because of the integration into the Microsoft suite, is pretty helpful, and that may be worth the investment. However, to deploy in your firm, I would suggest you just license a few seats with a few people to develop your standards. I do have an AI policy that I can ship you; I actually have public domain ones as well. You should have an AI policy in place before you start, and you're welcome to any of mine or any other ones that you find out there.



(43:06):

But generally, Copilot right now is the safer maneuver; it can be part of your Microsoft 365 subscription, it's been around for a while, and really the January tipping point, when it became a general release for all sizes of firms, was a pretty big deal. Alright, now there are other AI tools out there that you could consider. Examples would be Grammarly or Originality, or frankly, if you're an Excel user, Microsoft has put AI capabilities, including Copilot, down into Excel. There's MindBridge in the audit world and others. But I wanted to start with all those other tools on the front side. Now, are there dangers? Absolutely: biased outputs, copyright infringement, data privacy. We do a whole hundred-minute CPE session on data privacy and regulations related to AI alone right now; it was really onerous trying to explain it, and then we realized a hundred minutes probably wasn't enough. Data security, the deepfakes.



(44:10):

A year ago I built the Blues Brothers deepfake video: associate Brian Tankersley and I decided we'd assume the roles of Belushi and Aykroyd, and we actually substituted ourselves in. It took us less than 30 minutes to build the deepfake video, so we are very concerned about deepfakes. We're no longer allowing people to do wire transfers with voice confirmation because of deepfake audio; it takes less than 12 seconds to capture a voice and be able to reproduce it. Further, companies like Yamaha have real-time translation for vocalists, to be able to sing and have it translated in real time into other languages while they're singing. I mean, there's very cool stuff going on there, and you've got the hallucinations; I'm going to try to touch on a few of these things in a few minutes. Here's a picture of Michelle Obama, and in the AI systems you'll notice that she is identified as a young man wearing a black shirt with a high confidence level, and is wearing a hairpiece with a high confidence level.



(45:17):

See, the bias in some of these systems is pretty darn stunning. Further, copyright infringement suits, and there was another lawsuit filed this morning, are very prominent. The New York Times is involved with these, the artists are involved with these, and of course ChatGPT and Google's Gemini are all trying to defend their positions. And of course the New York Times has actually sued Microsoft and OpenAI and others for copyright infringement. So this is going to be a little bit of a battle as this unfolds. Further, today there are five states with data privacy regulation rules. We're forecasting that over the next 18 months there will be around 24 states with data privacy rules. I would prefer a national regulation, but that probably isn't in the cards. But you need to know there are going to be data privacy regulations like Maryland's, like California's, in lots of states. And of course data security is also a problem, because frankly, most of the engineers behind these large platforms don't actually know how they work.



(46:20):

These models actually have skills, called emergent skills, that just surface, that the engineers didn't program into the large language models. And technically the engineers behind it actually don't exactly know; they've got a pretty good idea, but not a perfect idea. So just be aware of that, because data security leakage is high. And of course we've already identified deepfakes; in my materials there's a super good five-to-seven-minute video to take a look at on deepfakes. It really identifies the top 10 deepfakes. And of course the hallucinations are a big deal too. I like to cite that in February of '23, Gemini basically described Webb space telescope pictures as something that's not what they were at all. I mean, there's weird stuff like that that happens. So my net here is that generative AI is a game changer for everyone, not only you in public practice, but all of your clients.



(47:21):

And it's going to be very interesting to watch this unfold, because everybody's going to use it. There are going to be a lot of people that think they're experts at it that frankly just aren't. And AI has changed a lot in the last two years. But just for the record, I wrote AI code in Lisp in 1975; AI has been around since 1959, and I was fortunate to do some of the first voice recognition through the years too. So everybody's talking like it's brand new, and it's like, this is really old stuff. But what is new is the use of the large language models based on the neural networks, which is a 14-year-old technology. There are going to be radical breakthroughs, I figure, in these technologies in the very near term, because the computer scientists are still trying to learn how all these models work. So unfortunately, I am just at the point where I was supposed to be letting you ask questions, and I'm out of time actually. But I am going to take a question or two if we could. So, any questions that you have? Yes, ma'am. I hope a mic will be coming to you, but I'll try to hear you in the meantime and I'll repeat for the recording as need be.



(48:37):

Oh, in comes Vanna with the mic. Beautiful.



Audience Member 1 (48:43):

Thank you, Randy, for this really insightful session. I use GPT a lot, and what I have been feeling, of course things have been improving, but still I feel the accuracy level with respect to the technical outcomes that come back still needs to be verified. So is it right to use it for technical results, rather than spending more time and actually finding out whether it's technically accurate or not, and to use it just for maybe a language model or letter generation and stuff?



Randy Johnston (49:12):

At this point, I would not assume that technical results are correct. You might've noticed I cited Materia; they're actually using a more restrictive model, which you could use for technical results related to audit citations. So we see vendors trying to build products like that. We do know other companies that are building private models, but I think the risk is still too high. So another piece of guidance that's very clear: nothing should go to a client that isn't reviewed by a professional, and the probability of there being errors generated by AI models is still pretty high when it comes to technical accuracy. I think the easiest way to say it: most AI models can't do math. So as long as you don't do balance sheets, income statements or cash flows, no problem. But I know, or tax returns or audits, or, yeah, it doesn't work. Yes, ma'am, over here.



Audience Member 2 (50:09):

A funny comment related to what she just said. I am the Director of Accounting Programs at UNBC, and one of our professors accused some students in her management class of using ChatGPT or artificial intelligence for their project. It turns out she was right, but they wrote an apology letter, and the apology letter started with "Dear [insert professor's name]." So make sure you always proofread anything you use with ChatGPT.



Randy Johnston (50:40):

That is a wonderful story and I admire it greatly. And by the way here I'm a little odd because I actually do want students using these AI models. Most professors do, but the problem is we have a learning curve kind of like we had when we had calculators and computers and all these new technologies we need to leverage 'em.



Audience Member 3 (51:06):

The thing was they completely falsified the fact that they interviewed this person.



Randy Johnston (51:11):

Yeah



Audience Member 3 (51:12):

That was the total fault, that was the issue.



Randy Johnston (51:15):

I hate those types of things, which I called situational ethics this morning. Other questions? It looks like we've got one on this side and one back there, please. I might be able to hear you, and I'll repeat as need be.



Audience Member 4 (51:24):

The examples that you had, letters, are pretty well known; they have a fairly well-defined structure.



Randy Johnston (51:30):

Yes sir.



Audience Member 4 (51:31):

As I've played with Copilot, as Microsoft has integrated it into various products, one of the things that I've found is, if you try and make original content, if you want to write an email or a paragraph or something and you give it the instructions and let Copilot write it, it's not in your language. But if you write something, even if you butcher it, and you tell it to edit, then it'll clean it up, and it'll still be in your language, but it'll be kind of nice and crisp and concise. Has that been your experience?



Randy Johnston (52:05):

Yeah, and I will repeat that. Basic good advice: when you're doing something that's original, you can write it in your own language, hand it over to any of these AI engines, and it will clean it up well. And that is absolutely correct. I actually have a technique that if I have something that's similar to what I want, I'll upload that first and say, here's an example of what I'm looking for, and then put it out. And that's why many times I'll take prior writings that I've done, if I'm going to write a new article, and put it up there and let it generate something. And like I said, it's stunning how good it is. So that is a good use for the AI. Thank you. And I thought the other one was back over here. Yes, sir.



Audience Member 5 (52:51):

So, quick question. Are you able to configure Gemini and Copilot the way you configure ChatGPT?



Randy Johnston (52:58):

Yes is the short answer. Both Gemini and Copilot have similar configuration pieces on the back end, like you do on ChatGPT. Remember that the inheritance from ChatGPT into Microsoft would lead you to believe that's probably the case, and there are actually pieces in Gemini that I think are more configurable. But here's another strategy if you want to actually get the tone right. You can tell the way I speak; I'm from Kansas, so I don't have real good English. So what I do is I actually feed these engines parameters about me, about what I want it to be and how. So that's why those models that I copy and paste from actually tune up the engine. I actually care less about some of that backend configuration than about giving it, in a new session, the parameters I'm working from. Does that make sense? So I can actually tell the engine, keep it this long, and so forth. Make sense? Alright, well, it looks like it is time for me to exit, but I am here all through the evening and you are certainly welcome to approach me on anything. Thank you for your time this morning and this afternoon. See you later.