For CPA firms, generative AI is no longer a "future" discussion—it's already reshaping how work gets produced, reviewed and priced. Enter the AI Architect: the person responsible for ensuring AI is used intentionally, consistently and safely across the firm, owning governance, tool evaluation, and the design and scaling of AI-enabled workflows.
The real question isn't whether your people will use AI; it's whether the firm will use it in a way that protects quality, confidentiality and clients' trust.
In an Allinial Global Firm Tech Community session on March 26, the CIOs from two large firms compared notes on a role many firms are circling: the AI Architect.
Why might you need an AI Architect?
I opened the discussion asking the question, "Should CPA firms have an AI Architect?" The panelists—Chris Morrow, CIO of Warren Averett, and Matt Huff, CIO of Tanner LLP—shared their experience with the evolution of this need in their firms.
Morrow offered a simple signal that the need is real: Once the firm rolled out Microsoft Copilot, usage climbed to roughly 95%. Adoption created an immediate second-order problem—everyone wanted direction at once. Questions about governance, product evaluation and implementation were consuming more than a full-time equivalent across leaders and teams, so the firm created a dedicated AI Architect position. Even then, the role quickly became a magnet for requests from every service line. To keep decisions consistent, the firm also formed a four-person AI committee to prioritize use cases and keep innovation moving without letting "busy season shortcuts" become permanent risk.
Huff framed the architect's value in CPA terms: professional skepticism. AI can draft narratives, summarize standards and suggest conclusions—but it can also hallucinate, omit context or mishandle sensitive data. Treating AI as a "magic black box," Huff warned, is how firms end up with undocumented assumptions in workpapers or client-facing errors that are hard to defend later. He described three maturity tiers—(1) consuming AI tools, (2) curating vendor AI and (3) constructing custom solutions—where each step up increases both the firm's capacity for growth and the need for controls to protect it. An AI Architect (or equivalent owner) should exist to keep the firm asking: What problem are we solving? Is AI necessary? How will we test outputs and document reliance?
Since AI is already here—how will you govern it?
Governance was the through-line. Huff described building an AI governance program intended to be legally defensible, including partnering with an attorney specializing in privacy and data security. That kind of structure can feel like friction—he acknowledged concerns that new "IT gates" could slow innovation—but he emphasized that governance done right accelerates adoption by giving the firm confidence to move faster on vetted tools and workflows. The panel's message to firm owners was pragmatic: In a regulated profession, speed without guardrails is a liability. The group also discussed data ownership and maintainability, including examples where moving from a SaaS platform to a more controllable environment improved visibility into data flows—while requiring firms to be more intentional about tech debt and long-term support.
On execution, Morrow explained why his firm keeps AI development tightly controlled. With a small software team and no broad "citizen developer" model, experimentation flows through defined channels. He described how a business analyst uses AI-assisted "vibe coding" to create prompts that can generate code that is 80–85% complete, which developers then review and finalize. The trade-off is some speed; the upside is greater consistency and supportability. The takeaway is that repeatability matters: If a workflow is going to scale across offices and service lines, it needs standards, review points and a clear answer to "who supports this when it breaks?"
The session ended with a decision firms are already facing: what to do with reclaimed time. As AI starts saving measurable hours, firms can slow hiring through attrition, redeploy capacity to higher-value advisory work or push into new compliance and assurance opportunities. The panel cautioned against "AI washing"—tools marketed as AI that are really outsourcing—and encouraged leaders to focus not just on how technology is priced, but on how AI-driven efficiency translates into greater throughput, improved client experience and sustainable revenue growth.
When do you need an AI Architect?
As with past technology waves, no standard org-chart definition for the AI Architect has yet emerged. Whether you need a dedicated AI Architect depends on firm size, risk tolerance and how far you intend to go—using off-the-shelf AI features, governing vendor tools, or building AI-enabled workflows and products. But regardless of structure, someone has to own the governance: deciding what's allowed, what's not, how tools are evaluated, and how results are validated before they touch a tax return, an audit workpaper or a client email.
Bottom line: You may not need an AI Architect this year, but you do need AI architecture ownership—governance, vendor evaluation, risk controls and an implementation roadmap that converts AI efficiency into advisory capacity, a better client experience and a firm that's both innovative and defensible.