Leading adaptive transformation in the face of AI

In the last 60 days, how many times has someone in your firm mentioned artificial intelligence in a partner meeting, hallway conversation or client discussion? And how many of those conversations ended with a clear decision about what to do next?

For most firms, AI feels like it's everywhere and nowhere at the same time. Team members are experimenting, vendors are embedding it into core platforms and clients are asking questions. Meanwhile, leadership teams are trying to balance opportunity with exposure without fully knowing how fast this wave is moving.

This isn't like implementing new tax software or upgrading your audit platform. AI is changing how we produce and review work (and whether we can trust the output). It speaks confidently and works quickly. And if you're not careful, it can move faster than your policies, procedures and risk controls.

That's what makes this moment different. You don't have the luxury of waiting for the dust to settle, but you also can't afford to rush in without structure.

So let's talk about how to build guardrails that accelerate progress rather than slow it down.

AI is moving faster than our processes

One challenge firm leaders face is sheer speed: AI evolves faster than our governance structures, policies and review cycles can adapt.

Historically, accounting firms introduced new tools in a measured, linear way: pilot, evaluate, roll out, train and refine. AI doesn't wait for that. It's being introduced into firms organically, and sometimes without formal approval. A manager experiments with a generative AI tool to draft a client email. A staff member uses it to summarize the Tax Code. Someone pastes internal data into a public interface without fully understanding where that data goes.

The technology moves faster than our traditional change management models can handle. That's not a reason to panic, but we do need to rethink how we lead transformation.

Data leakage and overconfidence are the real risks

AI presents real risks. When people misuse generative tools, they can expose sensitive data. AI agents will do exactly what you ask them to do. If someone directs an AI agent to "analyze all client revenue data," it will attempt to crawl through whatever data it has access to. Without clear boundaries, there is a risk of data leakage.
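
To make "clear boundaries" concrete, here is a minimal sketch in Python of one such boundary: screening a draft for obviously sensitive identifiers before it ever reaches an external AI tool. Every pattern below, including the "CLT-" client ID format, is a hypothetical stand-in; a real firm would lean on its existing data loss prevention and classification tooling rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only -- a real firm would rely on its data loss
# prevention and classification tools, not a handful of regexes.
SENSITIVE_PATTERNS = {
    "Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Employer ID number": re.compile(r"\b\d{2}-\d{7}\b"),
    "internal client ID": re.compile(r"\bCLT-\d{6}\b"),  # hypothetical format
}

def screen_before_sending(text: str) -> list[str]:
    """Return labels of any sensitive patterns found in the text.

    An empty list means the draft passed this rough screen and may be
    forwarded to an approved AI platform under firm policy.
    """
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

draft = "Summarize Q3 revenue trends for client CLT-004821."
violations = screen_before_sending(draft)
if violations:
    print("Blocked: remove " + ", ".join(violations) + " before using an AI tool.")
else:
    print("No sensitive patterns detected; proceed under firm policy.")
```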

There's also the risk of hallucinations. AI systems can produce responses that sound highly authoritative even when they're wrong, and that confident tone can mask inaccuracies.

The technology is getting better all the time, but it's not perfect. And when client deliverables are involved, "mostly right" isn't good enough. Fact-checking and professional judgment are still non-negotiable. We can't allow AI's efficiency to erode the integrity of our work.

Don't let fear paralyze the firm

Many firms initially responded to AI's risks by overcorrecting. We scared people. In an effort to manage risk, some leaders essentially shut down experimentation. Intentional or not, the message was, "This is dangerous. Don't touch it."

The problem with that approach is that fear slows innovation more than guardrails ever will. If people are afraid to explore new tools, they will either avoid them entirely and fall behind competitors, or use them secretly without guidance.

Neither outcome is acceptable. Managing AI risk without killing innovation requires a different leadership posture.

Guardrails accelerate innovation

There's a misconception that governance slows things down. In reality, confusion slows things down. Team members hesitate when they don't know what's allowed, what's prohibited and what requires review. They wait, or they guess.

Clear guardrails remove that friction by answering questions like:

  • What types of data can we (and can we not) enter into AI systems?
  • Which platforms are approved?
  • What review process do we need to follow before using client-facing output?
  • Who owns oversight?

When we define those boundaries, people can innovate inside them. Innovation moves faster when the lane lines are visible.
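
One way to make those lane lines visible is to write the policy down in a form anyone can check. Here is a minimal, hypothetical sketch in Python; every platform name, data class and owner shown is an assumption, a placeholder for whatever your firm actually decides.

```python
# Hypothetical guardrails encoded as data, so the answers to the four
# questions above live in one reviewable place. All names are placeholders.
AI_POLICY = {
    "approved_platforms": {"FirmGPT", "VendorCopilot"},
    "prohibited_data": {"client PII", "unreleased financials", "engagement workpapers"},
    "review_required_for": {"client-facing output", "tax positions", "audit evidence"},
    "oversight_owner": "AI steering committee",
}

def requires_review(use_case: str) -> bool:
    """True if firm policy requires human review before this output ships."""
    return use_case in AI_POLICY["review_required_for"]

# A quick check anyone can run (or look up) before reaching for a tool.
print(requires_review("client-facing output"))              # True
print("PublicChatbot" in AI_POLICY["approved_platforms"])   # False: not approved
```

The point isn't the code; it's that the boundaries are explicit, centralized and easy to update as the technology changes.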

Leadership must be actively involved

We can't delegate AI entirely to IT or a small innovation committee. This is a leadership issue. Leaders shape how the firm thinks about risk, experimentation and accountability. If partners treat AI as a toy or a threat, the rest of the firm will follow that lead.

Adaptive transformation requires visible leadership involvement. Some examples include:

  • Talking openly about AI in meetings;
  • Asking how team members are using it in engagements;
  • Modeling responsible experimentation; and
  • Reinforcing that professional skepticism still applies to machine-generated content.

Culture forms around what leaders consistently emphasize. If you never discuss AI, it becomes a shadow activity. If you discuss it thoughtfully, it becomes a strategic initiative.

Ultimately, leading adaptive transformation in the face of AI is about mindset. We're moving from controlled, periodic change to constant acceleration. Your role is to manage risk without stifling initiative, encourage experimentation without tolerating recklessness and maintain professional standards while embracing efficiency.

AI will continue to evolve. When leaders build guardrails, train their people and stay actively engaged, AI will become a force multiplier, not a liability.

