Jonathan Richard Schwarz, Thomson Reuters' head of AI research, pointed out that while there is a great deal of enthusiasm about artificial intelligence, many organizations still struggle to get meaningful returns from the technology. While there is no single reason for this, Schwarz pointed to a number of technological, conceptual and organizational factors that have stymied efforts to implement AI successfully.
Horizontal versus vertical

The broad horizontal approach, he said, is fine for general purposes but lacks the depth for more complex tasks that go beyond just answering emails and summarizing documents. Conversely, the vertical approach has a great deal of focus but lacks the "decades of content" needed to properly contextualize problems and often lacks the infrastructure to be run at scale.
"Right now, businesses have tried both approaches," said Schwartz. "They've tested them. They realize neither can really deliver the ROI that professionals actually need, and that's why we've built CoCounsel differently from the start."
Autonomous systems not autonomous enough

He cited the example of self-driving cars. For a long time, car manufacturers have added features such as lane assist, blind spot monitoring and adaptive cruise control. While these are nice, he said, "you still had to put your hands on the wheel to drive." Full autonomy, on the other hand, is a transformative development that completely rethinks what it means to be in a car. "This is where I think we are right now," he said. "Within professional AI, we have a lot of tools out there which are conveniences like those features in the car, but very few have been able to cross the chasm towards full autonomy, where you can take your hands off the steering wheel."
Accuracy still lags

"On correctness, in particular, on emphasis on evidence, the models are really struggling despite years of talk about hallucinations about the importance of sort of citing correctly, and we're at this point where, indeed, what we're getting are models that can sound legal and the reports look professional, but when you really dig into it, you're getting quality that is citing random blog posts, that is making claims that are not supported by citations, and work that just isn't at a point where you could really even imagine real automation happening," he said.
Data quality is more important than quantity

"There's a scientific consensus that data quality is substantially more important than data quantity, that expert feedback, content, data is critical for progress. ... This idea that you could train on the web and the model solves everything is a nice story we used to tell ourselves, but doesn't hold with what we're seeing in reality," he said.
More computing power does not equal higher quality

"My personal view is that if you're in a situation where you have this jagged intelligence where, in these economically valuable domains, you don't see the productivity growth, then rather than throwing more hardware and more compute at the same sort of approach, you should aim and try and bring in this domain expertise into the training process," said Schwarz.
