Five obstacles to AI ROI

Jonathan Richard Schwarz, Thomson Reuters' head of AI research, pointed out that while there is a great deal of enthusiasm about artificial intelligence, many organizations still struggle to get meaningful returns from the technology. While there is no single reason for this, Schwarz pointed to a number of factors — technological, conceptual and organizational — that have stymied efforts to successfully implement AI.

Horizontal versus vertical

Schwarz said that there have been two general approaches toward AI seen over the last few years: horizontal, as typified by big public models like ChatGPT or Gemini, and vertical, highly specialized models built for narrow applications. Each approach, however, has major limitations. 

The broad horizontal approach, he said, is fine for general purposes but lacks the depth for more complex tasks that go beyond just answering emails and summarizing documents. Conversely, the vertical approach has a great deal of focus but lacks the "decades of content" needed to properly contextualize problems and often lacks the infrastructure to be run at scale. 

"Right now, businesses have tried both approaches," said Schwarz. "They've tested them. They realize neither can really deliver the ROI that professionals actually need, and that's why we've built CoCounsel differently from the start."

Autonomous systems not autonomous enough

Real ROI, he said, requires AI systems with a high degree of autonomy and capability. However, many of the products on the market today are better thought of as conveniences that deliver small, incremental improvements. What organizations need to truly realize a return on investment, he said, are professional-grade systems with transformative capabilities.

He cited the example of self-driving cars. For a long time, car manufacturers have added features such as lane assist, blind spot monitoring and adaptive cruise control. While these are nice, he said, "you still had to put your hands on the wheel to drive." Full autonomy, on the other hand, is a transformative development that completely rethinks what it means to be in a car. "This is where I think we are right now," he said. "Within professional AI, we have a lot of tools out there which are conveniences like those features in the car, but very few have been able to cross the chasm towards full autonomy, where you can take your hands off the steering wheel."

Accuracy still lags

Schwarz added that AI models have, over the years, gotten better at writing as well as following directions. There's still room for improvement, he said, but they're trending in the right direction. On the other hand, accuracy and correctness remain a major challenge. Too often, models still get things wrong or completely make up information. Such an error rate is still too high for serious automation in many areas. 

"On correctness, in particular, on emphasis on evidence, the models are really struggling despite years of talk about hallucinations and the importance of citing correctly, and we're at this point where, indeed, what we're getting are models that can sound legal and the reports look professional, but when you really dig into it, you're getting quality that is citing random blog posts, that is making claims that are not supported by citations, and work that just isn't at a point where you could really even imagine real automation happening," he said.

Data quality is more important than quantity

When AI first broke into the mainstream, the conventional wisdom was that models need as much data as possible to function well. The more data you could feed an AI, the smarter it got, and the more effective it became. Schwarz said this has largely been disproven. People have since come to understand that quality of data is much more important than quantity. 

"There's a scientific consensus that data quality is substantially more important than data quantity, that expert feedback, content, data is critical for progress. ...  This idea that you could train on the web and the model solves everything is a nice story we used to tell ourselves, but doesn't hold with what we're seeing in reality," he said. 

More computing power does not equal higher quality

Similarly, the belief that throwing more computing power at an AI model would make it perform better has also largely gone by the wayside. Schwarz said he has seen too many organizations, once their AI implementations start hitting snags, simply throw hardware and power (and, therefore, money) at the problem until it goes away. But this is solving the wrong problem. Just as more data does not mean better results, better hardware does not necessarily translate into better performance. At best, it leads to a "jagged" intelligence, where the model performs admirably at some tasks and abysmally at others.

"My personal view is that if you're in a situation where you have this jagged intelligence where, in these economically valuable domains, you don't see the productivity growth, then rather than throwing more hardware and more compute at the same sort of approach, you should aim and try and bring in this domain expertise into the training process," said Schwarz.