AI can, in theory, make people more productive, efficient and smarter at work. But the results of a recent study show this is not guaranteed to be the case, as a significant number of people report receiving low quality AI "workslop" outputs that paradoxically increase the amount of work they have to do.
The authors of the study polled 1,150 U.S.-based full-time employees across industries; 40% report having received such content in the last month. Typically, those who receive it then have to spend time verifying information, correcting errors, and otherwise doing work the sender should have already done. One subject said they wasted time following up on the information and checking it against their own research, then wasted more time setting up meetings with other supervisors to address the issue, and finally had to redo the work entirely. Another said that while an email they received was nicely written, it was so unclear that they had to spend time tracking down the relevant people for clarification.

Employees said such low-effort, low-quality content makes up about 15.4% of all content they receive at work. The researchers noted that people spend an average of one hour and 56 minutes dealing with each instance of workslop. Based on participants' estimates of time spent, as well as on their self-reported salary, the researchers found that these incidents carry an invisible tax of $186 per month per person.
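The study's per-person figure follows from simple arithmetic: time lost per incident, times incidents per month, times an hourly rate derived from salary. Only the 1 hour 56 minutes comes from the study; the incident count and hourly wage below are illustrative assumptions chosen to land near the reported figure, not numbers from the research:

```python
# Back-of-the-envelope sketch of the "invisible tax" calculation.
# Only hours_per_incident is from the study; the other inputs are
# hypothetical, picked for illustration.
hours_per_incident = 116 / 60          # 1 hour 56 minutes, per the study
incidents_per_month = 2                # assumed frequency
hourly_wage = 48.0                     # assumed salary-derived rate, USD

monthly_tax = hours_per_incident * incidents_per_month * hourly_wage
print(f"${monthly_tax:.0f} per month")  # ≈ $186 with these assumed inputs
```

With these assumed inputs the sketch reproduces a figure close to the study's $186 per month; real costs would vary with each employee's actual salary and workslop exposure.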
As one might imagine, people do not like receiving this content, and they think less of those who send it. When asked how it feels to receive workslop, 53% reported being annoyed, 38% confused, and 22% offended. When asked about their feelings toward those who generated the content, about half viewed them as less creative, capable and reliable than they did before receiving the output; 42% said they were less trustworthy and 37% said they were less intelligent. Further, 32% said they are less likely to want to work with the sender again in the future.
Where does this content come from? Mostly between peers, at 40%, but not entirely. The study found 18% flowed from direct reports to managers, and 16% from managers to their team members or from even higher up the chain. While workslop occurs across industries, the researchers found that professional services and technology are disproportionately impacted.
The insidious effect of workslop, said the researchers, is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver. It was described as passing the cognitive buck, not so much doing work but shifting it around the organization.
Part of the blame lies with organizations themselves, as the researchers believe the phenomenon is fed by unclear AI mandates: organizations advocate for people to use AI all the time, but few specify how, leaving little discernment in how the technology is applied. The researchers also pointed to a mindset difference in how people use AI, dividing users into what they called pilots and passengers. Pilots are much more likely to use AI purposefully, to enhance their own creativity and achieve their goals. Passengers, in turn, are much more likely to use AI to avoid doing work entirely.
The researchers said that leaders need to address both the mindset and the mandate issues if they want to get the most from AI.
"Workslop may feel effortless to create but exacts a toll on the organization. What a sender perceives as a loophole becomes a hole the recipient needs to dig out of. Leaders will do best to model thoughtful AI use that has purpose and intention. Set clear guardrails for your teams around norms and acceptable use. Frame AI as a collaborative tool, not a shortcut. Embody a pilot mindset, with high agency and optimism, using AI to accelerate specific outcomes with specific usage. And uphold the same standards of excellence for work done by bionic human-AI duos as by humans alone," said the paper.
AI strategy still work in progress
These findings call to mind recent survey data released by Wolters Kluwer.
The survey found that only 24% of respondents said their finance leadership is fully aligned on the strategic role of AI. A larger portion, 43%, reported partial alignment, with engagement varying across leaders. Meanwhile, 9% said leadership is misaligned, 8% noted no alignment at all, and 16% were unsure of the level of alignment. Wolters Kluwer said this highlights the need for finance leaders to develop and clearly communicate a unified vision of how they expect AI to shape their finance operations.
These mixed results could be a consequence of finance leaders' mixed feelings about AI. The survey found that while 14% said they are very comfortable with their organization's level of AI investment, a larger number, 38%, reported being only somewhat comfortable. Further, 17% felt AI investment was too low, and 28% were not sure, highlighting the need for more clarity around AI budgeting decisions.
"Finance leaders are clearly recognizing the potential of AI, but the journey from exploration to scaled deployment is complex," said Madhur Aggarwal, executive vice president and general manager of corporate performance management at Wolters Kluwer.
Seventy-nine finance leaders responded to this survey, conducted on Sept. 17, 2025, during the North America CCH Tagetik inTouch25, in Houston, Texas.