Gen-AI: Good for profits, bad for society?

Surveys from Big Four firms Deloitte and PwC, as well as finance and HR platform Workday, indicate that while respondents are largely optimistic about how generative AI can make their own organizations more profitable, they are far less optimistic about how it will affect society and long-term job security.

Overall, business leaders express strong support for and optimism about generative AI. The Workday survey found that 62% of business leaders welcome AI, which aligns closely with the Deloitte finding that 62% of leaders say generative AI sparks feelings of excitement. The Deloitte survey also found that 91% of organizations expect generative AI to improve their productivity.

This enthusiasm may be rooted in predictions that the technology will bolster their businesses. In the PwC survey, 58% of CEOs globally believe it will improve the quality of their products or services, while 44% say it will increase profits and 35% think it will increase revenues. With 52% of CEOs saying their No. 1 priority over the next three years is finding new revenue streams, generative AI is viewed as one way to do so.

Business leaders generally expect generative AI to transform their organizations; the question is less if than when. In the Deloitte survey, only 1% said it will never happen, a figure dwarfed by the 14% who say their organization is being transformed right now, the 17% who think it will take less than a year, the 48% who expect it to take one to three years, and the 20% who think it will take longer than three years.

At the same time, survey respondents were not blind to the risks. CEOs polled by PwC identified a cybersecurity breach as the biggest pitfall, cited by 77%, followed by the spread of misinformation in their company (63%) and legal or reputational damage (55%). At least some leaders appear to be taking these risks seriously: the Deloitte survey found 46% are currently establishing a governance framework for the use of generative AI tools, 42% conduct internal audits and testing of AI products, 37% train staff on AI risk and 36% ensure generative AI outputs are vetted by humans. About 26% use outside vendors to conduct their testing.

Societal risk

While many survey respondents were optimistic about AI's impact on their organizations, their predictions about the wider societal impacts were far less rosy. For instance, the Deloitte survey found that while 30% believe generative AI will ultimately distribute economic power throughout society, a much larger proportion, 52%, said it will further centralize it. Similarly, while 22% said generative AI would serve to reduce economic inequality, 51% predicted it would lead to an increase instead.

In contrast with business leaders, the Workday survey found that rank-and-file workers were far less optimistic even about their own organizations. Some 23% are not confident their organization puts employee interests above its own when implementing AI, 42% believe their company does not have a clear understanding of which systems should be fully automated and which require human intervention, and four in five say their company has yet to share guidelines on responsible AI use. Employees also lack confidence that their organizations will prioritize innovating with care for people over innovating with speed (compared to 17% of leaders), and are skeptical that their organizations will ensure AI is implemented in a responsible, trustworthy way (again, compared to 17% of leaders). Workday said these divergent results indicate a "trust gap" between organizational leaders and their employees, and addressing this gap will be vital to growing adoption of AI technology.

"We know how these technologies can benefit economic opportunities for people — that's our business," said Chandler C. Morse, vice president of public policy for Workday. "But people won't use technologies they don't trust. Skills are the way forward, and not only skills, but skills backed by a thoughtful, ethical, responsible implementation of AI that has regulatory safeguards that help facilitate trust."
