Voices

Don't be a victim of AI hallucinations

A lawyer is facing a sanctions hearing for trusting artificial intelligence hallucinations and presenting AI-produced fake citations in court! According to a recent news report, the lawyer used an AI tool for the first time as a legal research source and did not know that the content produced by AI could be false. This is despite the lawyer asking the chatbot if the cases cited were real! 

This lawyer ended up in this situation because he trusted the AI's "hallucination." Yes. AI can and does hallucinate at times. Not knowing how an AI tool is created, how it works, and how it can hallucinate can be quite damaging.

Hallucination in AI, according to Wikipedia, "is a confident response by an AI system that is not justified by its training data." It is an AI response that can seem factual but is not true. It can simply be an answer "made up" by the AI.

So, why does AI hallucinate?

When asked, "Give me five first names of males that start with the letter H and end with the letter A, with each name between 7 and 10 letters long," the following was the output:

1. Hamilton
2. Harrison
3. Horatio
4. Humphrey
5. Humberto

Note that although all names started with the letter H, none of the five in the first output ended with the letter A. 

On prompting further with shorter, simpler sentences, asking, "Give me five male first names. Each name must start with the letter 'H' and end with the letter 'A.' Each name must be between 7 and 10 letters long," it gave the following response:

1. Harrisona
2. Hamiltona
3. Humphreya
4. Harlanda
5. Hawkinsa

Now, all names start with the letter H and end with the letter A. But in real life, are any of these actually used as male names?

This was easy to spot. But as the lawyer mentioned above learned, confident-sounding but incorrect AI responses can be hard to detect, and relying on them without additional research resources can be a real risk.
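In this toy case, the prompt's constraints are mechanical enough that a few lines of Python can catch the miss automatically. (This is an illustrative check written for this article, not a feature of any AI tool; the name list is the first output above.)

```python
# Check the AI's first list of names against the prompt's three rules.
names = ["Hamilton", "Harrison", "Horatio", "Humphrey", "Humberto"]

for name in names:
    ok = (
        name.startswith("H")            # rule 1: starts with H
        and name.lower().endswith("a")  # rule 2: ends with A (all five fail)
        and 7 <= len(name) <= 10        # rule 3: 7 to 10 letters long
    )
    print(f"{name}: {'meets' if ok else 'fails'} the constraints")
```

Notice that the second list (Harrisona, Hamiltona and so on) would pass this mechanical check while still failing the common-sense test of being real names, which is exactly why human judgment remains essential.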

Why did AI create such responses?

Generative Pre-trained Transformer, or GPT, tools contain a "transformer." A transformer is a deep learning model that uses the semantic relationships between words in a sentence to produce text through an encoder-decoder (input->output, or prompt->response) sequence. Transformers create new text from the large repository of text data used in their "training." They do this by "predicting" the next word in a sequence based on the previous words. If the AI model is not trained on data adequately relevant to the prompt, is not reasonably equipped to handle a complex prompt, or is given a vague prompt, it may not interpret the prompt accurately. But it is designed to provide a response, so it will try to predict and give an answer anyway.
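To make that "predict the next word" idea concrete, here is a deliberately tiny sketch in Python. It is not how a real transformer works (real models use neural networks trained on billions of words), and the mini-corpus is invented for this article, but it shows the core behavior: the model always produces an answer, even when its training data gives it no real basis for one.

```python
from collections import Counter, defaultdict
import random

# A tiny invented "training" corpus. Real models train on billions of words.
corpus = "the deduction applies if the taxpayer files the return on time".split()

# Count which word follows each word (a bigram model, the simplest
# possible stand-in for next-word prediction).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word seen in training. If the word
    was never seen, still answer by guessing: a 'hallucination' in
    miniature, confident output with no basis in the training data."""
    if word in following:
        return following[word].most_common(1)[0][0]
    return random.choice(corpus)

print(predict_next("the"))    # grounded in the training data
print(predict_next("audit"))  # never seen in training, so it makes something up
```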

More important, how can you tell if an AI tool is hallucinating?

I wish there were foolproof ways to tell if your AI tool is hallucinating. The only dependable approach, currently, is this: Trust (AI), but verify (outside of AI).

Still, there are some preventive measures you can take to help (somewhat) protect yourself from falling prey to, and incurring damage from, hallucinated AI responses.

1. Watch out for context: It all boils down to "context." In the example of male names above, it was instantly evident that the AI's response was not entirely relevant to the context of the question asked. Complex prompts can make AI produce (concoct?) complex responses, and it may not always be easy to spot non-factual ones. Fortunately, accountants are in the habit of always putting data and information in context. That "instinctive" habit should help you sense when something is amiss. So always examine the output in the context of your input to AI tools. Human judgment is precious, especially in the AI world.

2. Balance the risk-reward ratio: Imagine the IRS sending a notice to your client, and you realizing it happened because you relied on questionable AI output. In accountants' work, depending on what you are looking to accomplish with public AI tools, a certain degree of risk is attached to using AI responses to make business decisions. The higher the risk of a given objective, the more comprehensive your research outside the AI tool should be. Note that AI tools can still immensely narrow your research effort and save you precious time.

3. Who told you? I asked ChatGPT a tax question, and it gave an answer with a detailed explanation. I then asked ChatGPT, "Who told you?" Here's what it replied: "As an AI language model, my responses are generated based on a mixture of licensed data, data created by human trainers, and publicly available data. I have not been directly told by a specific individual or organization, but rather I have been trained on a diverse range of data sources, including books, websites, and other texts, to provide information and answer questions to the best of my knowledge and abilities."

Now, if you were to tell the exact same thing to your client(s), how would they react?

Any AI model is only as good (or bad) as the data it is "trained on." Wait for the day when the prominent vendors in the profession train their own private AI models on accounting, tax, and audit data (surely coming soon!). AI embedded in your day-to-day software tools may not give you much room to verify its outputs, but being mindful of the possibility of incorrect AI output is the starting point.

4. Corner the AI tool: The wider or more generic the scope of your prompt (question), the higher the chance that the output will miss the intended question or be inaccurate. Asking more detailed questions, providing "boundaries," telling the AI "to act like an accountant," and even instructing, "If you do not know the exact answer, say, 'I don't know,'" can significantly improve the chances of getting accurate responses, as the sketch after this list illustrates. (Have you heard of the new job title, "prompt engineer," that pays crazy salaries?)

5. Learn what to expect from AI: To know this, one must know how AI is created, how it learns on its own, and how it works. You do not need to be a programmer or have any previous knowledge of AI technology to get your AI foundations right. You don't need to learn it in technical ways, either. 
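To make item 4 concrete, here is a minimal sketch of "cornering" a chatbot programmatically, assuming the OpenAI Python client. The model name, the role instruction, and the tax question are illustrative assumptions, not recommendations.

```python
from openai import OpenAI

client = OpenAI()  # reads your API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        # Give the model a role and firm boundaries.
        {
            "role": "system",
            "content": (
                "Act like a U.S. tax accountant. If you do not know the "
                "exact answer, say 'I don't know' instead of guessing."
            ),
        },
        # Ask a narrow, detailed question rather than a generic one.
        {
            "role": "user",
            "content": (
                "For tax year 2023, what is the standard deduction for a "
                "single filer? Give the dollar amount only."
            ),
        },
    ],
    temperature=0,  # less randomness means fewer "creative" (made-up) answers
)

print(response.choices[0].message.content)
```

None of this guarantees a correct answer; it only narrows the room the model has to improvise, so verifying outside the tool still applies.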

These are just some starting points to get you thinking about AI in ways that go beyond simply using (and being amused by) the new-age AI tools. Also note that we did not touch on how AI will become more infused into your day-to-day software tools, or how much ability you will have to actually interact with the AI components of such solutions.

Do you now feel this is all too scary? Relax! When we come to know what we did not know before, we take one step forward in our quest for knowledge and greater accomplishments.

Gaining a comprehensive understanding of any new technology, AI included, is the starting point for making it one of the most powerful tools you have ever used. As they say, you cannot outrun a powerful machine (can you race a car speeding at 100 miles an hour and win?), but you can drive it to your intended destination.
