AI risk concerns don't stop people from trusting AI

People may be suspicious of AI and cognizant of its risks, but that hasn't stopped them from using the technology anyway, or from trusting it.

A recent survey of small business owners from Xero found that 80% are concerned AI development and adoption are outpacing regulation. The survey also found that sensitive information disclosure (41%) and data privacy violations (41%) are tied as the biggest ethical challenges related to AI use in their businesses.

Despite these concerns, however, those who are using AI seem to trust it a great deal. More than half (51%) of small businesses said they trust AI with identifiable customer information, while 45% trust AI with their sensitive commercial information. Xero said that by being too comfortable sharing personally identifiable information with AI tools, many small businesses are putting their data at risk.

The poll noted that small business owners using AI are implementing various controls to try to mitigate these risks. These include creating policies and guidelines for employees (26%), providing training to employees on identifying biases or inaccuracies (25%), and seeking written consent from clients and customers before using AI tools (23%).

Another survey, this one from cloud-based analytics platform Alteryx, found a similar pattern: suspicion of AI on one hand, and use of and trust in it on the other.

For organizations currently using generative AI, three of the top four concerns related to data: data ownership (29%), data privacy (28%) and IP ownership (28%). The concerns for organizations not using generative AI were similar, with data privacy concerns coming in at 47% and a lack of trust in the results produced by generative AI at 43%.

But despite these concerns, people are forging ahead anyway. The poll found that 70% of generative AI users said they trusted AI to "deliver initial, rapid results that I can review and modify to completion." Almost as many showed far less skepticism: 69% of those who use generative AI at work said they would "always trust the answers given by generative AI."

Some may question the wisdom of this, as generative AI can sometimes "hallucinate" or, in plainer terms, simply make things up. In January, ChatGPT, for example, was estimated to fabricate information 15-20% of the time. While GPT-4 features improved accuracy, OpenAI has warned that it can still produce hallucinated answers.

Knowing that many people are unaware of how AI works and how it is applied, which may lead them to place too much trust in its outputs, Xero also released a guide to AI for accountants and bookkeepers. The guide features advice on what AI is and what it can do; how it can be applied in an accounting practice; current stances on AI in the profession; opportunities and challenges for small businesses; and using Xero to build AI capabilities.

"We wanted to cut through the hype and fear-mongering and answer a really simple question: what does AI mean for your typical practice?" said Mark Rees, chief technology officer at Xero. "The Xero guide is intended to help accountants and bookkeepers make well-informed choices when it comes to using AI tools, to help manage the risks and realize the benefits for them and their small business customers."
