AT Think

AI is changing infrastructure economics and accounting models

For the last decade, cloud architecture has been the default for large enterprises. 

The rationale was straightforward: Avoid building depreciating infrastructure, convert fixed costs into operating expenses, and rely on hyperscalers for scale and reliability.

That model remains appropriate for many systems; however, as AI becomes embedded in pricing, underwriting, operations and analytics, a new risk variable has emerged. These systems are increasingly trained on proprietary data and reflect the company's internal decision logic. If that information is exposed, the impact extends beyond customer data; it can affect competitive positioning and enterprise value.

As a result, some companies are re-evaluating where AI-linked data and processes reside. In certain cases, that means moving critical workloads back on-prem or into more controlled infrastructure environments.

For finance leaders, this shift carries significant governance, capital allocation and accounting implications.

Data exposure now means exposure of decision logic

Historically, most breaches were painful but contained. Customer data was exposed, regulators were notified, remediation costs were estimated, and the company moved forward. The core operating model usually remained intact.

Now, as companies deploy AI more broadly, training datasets increasingly reflect how the company prices risk, handles exceptions and makes operational judgments. Fine-tuned models quickly incorporate patterns derived from years of transactions and internal data.

If that information is exposed (whether through external breaches, weak vendor controls or poor internal governance), the loss is not limited to customer records. It can reveal how the company makes decisions and erode its competitive moat.

For businesses whose success depends on proprietary data, differentiated pricing models, unique underwriting criteria or specialized analytics, that risk can directly affect enterprise value over time.

Employee-driven AI usage is expanding the risk surface

Employees are using publicly available AI tools to summarize contracts, analyze financial data, draft strategy materials or generate code. In doing so, they may upload:

  • Confidential financial information;
  • Proprietary pricing data;
  • Customer lists;
  • Unannounced transaction details; and
  • Trade secrets embedded in technical documentation.

The intent is usually efficiency, but it creates data leakage risk.

Uploading proprietary information into external models, particularly those governed by evolving data retention and training policies, introduces uncertainty. Even if vendors provide representations around data usage, the company no longer controls the environment in which its information resides.

If proprietary data leaves controlled systems through routine employee use of AI tools, governance may no longer align with the economic value of the company's intellectual property. The more central AI becomes to operations, the more consequential this leakage risk becomes.

Trade secret protection is driving selective moves back on-prem

The renewed interest in on-prem or dedicated infrastructure is not about rejecting the cloud. It is about protecting assets that matter most.

When AI systems are trained on proprietary datasets, management must evaluate:

  • Whether data is commingled in multitenant environments;
  • What rights vendors have to logs, telemetry or derivative data;
  • Whether model outputs could indirectly reveal sensitive logic; and
  • How internal access is controlled and monitored.

In some cases, companies are concluding that certain AI-linked systems are too strategically sensitive to operate in broadly shared environments.

The response is segmentation. Commodity applications remain in the cloud while proprietary AI training data, model weights and decision engines move into environments with clearer physical, logical and contractual control.

That shift carries incremental cost and complexity, but for companies whose competitive advantage is embedded in data and decision logic, the tradeoff may be justified.

The accounting angle: cloud vs. on-prem is not neutral

Over the past decade, companies have become accustomed to accounting for cloud computing arrangements under ASC 350-40. Subscription fees are generally expensed as incurred, while certain implementation, configuration and customization costs associated with hosted software are capitalized and amortized over the term of the arrangement.
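As a rough sketch of that expense pattern, consider a hypothetical hosted arrangement (all figures illustrative, not accounting advice): the subscription fee runs through the P&L as incurred, while capitalized implementation costs are amortized straight-line over the arrangement term.

```python
# Illustrative ASC 350-40 expense pattern for a cloud computing arrangement.
# All figures are hypothetical.

annual_subscription = 120_000   # SaaS fee, expensed as incurred
implementation_costs = 300_000  # capitalizable configuration/customization costs
term_years = 5                  # term of the hosting arrangement

# Capitalized implementation costs are amortized over the arrangement term.
amortization_per_year = implementation_costs / term_years

for year in range(1, term_years + 1):
    total_expense = annual_subscription + amortization_per_year
    print(f"Year {year}: subscription {annual_subscription:,} "
          f"+ amortization {amortization_per_year:,.0f} = {total_expense:,.0f}")
```

Under these assumed figures, each year carries the recurring fee plus one-fifth of the capitalized implementation costs; no asset other than the capitalized implementation costs sits on the balance sheet.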

As companies shift critical AI workloads toward owned or more controlled infrastructure, several accounting dynamics may change:

  • Greater investment in internally developed AI platforms may fall under internal-use software capitalization guidance rather than cloud computing guidance. That changes what gets capitalized and how costs are amortized.
  • Increased ownership of hardware or dedicated infrastructure introduces fixed asset accounting and depreciation considerations. Useful-life judgments become more significant, particularly given the pace of technological change.
  • Long-term infrastructure commitments may also alter expense recognition patterns relative to pure SaaS subscription models. What was previously a recurring service expense may become a combination of capital expenditures, depreciation, and amortization.
  • Importantly, depending on how infrastructure is structured, lease accounting under ASC 842 may also become relevant.
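The shift in expense recognition can be sketched with hypothetical numbers: a pure SaaS subscription is recurring opex, while purchased infrastructure becomes capex recovered through depreciation, which sits below EBITDA.

```python
# Hypothetical comparison of annual P&L expense under two infrastructure
# strategies. Figures are illustrative only.

saas_annual_fee = 500_000   # recurring service expense (opex)

hardware_capex = 2_000_000  # purchased servers / dedicated infrastructure
useful_life_years = 4       # judgment call; AI hardware cycles may argue for shorter lives
salvage_value = 0

# Straight-line depreciation on owned infrastructure.
annual_depreciation = (hardware_capex - salvage_value) / useful_life_years

print(f"SaaS model: annual opex          = {saas_annual_fee:,}")
print(f"Owned model: annual depreciation = {annual_depreciation:,.0f}")
# Depreciation is excluded from EBITDA, so the owned model can lift reported
# EBITDA even when the total cash outlay over the period is similar.
```

Note how sensitive the owned-model expense is to the useful-life judgment: halving the assumed life doubles annual depreciation, which is why those estimates become more significant as infrastructure moves in-house.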

If an entity obtains control over specifically identified servers or data center assets (meaning the assets are explicitly or implicitly specified, and the company directs their use and has the right to substantially all of the economic benefits), the arrangement could contain a lease.

In that case, a right-of-use asset and corresponding lease liability may need to be recognized on the balance sheet, expense recognition would follow operating or finance lease treatment, and EBITDA, leverage ratios and covenant calculations could be affected.
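The initial measurement in that scenario can be sketched as follows, using hypothetical payments and an assumed incremental borrowing rate: the lease liability is the present value of the remaining fixed payments, and the right-of-use asset starts at that amount (before adjustments for prepaid rent, initial direct costs or incentives, none of which are assumed here).

```python
# Sketch of initial ASC 842 measurement for a dedicated-infrastructure
# arrangement that is determined to contain a lease. All inputs hypothetical.

annual_payment = 400_000  # fixed annual payment, assumed paid at each year-end
term_years = 5
discount_rate = 0.06      # assumed incremental borrowing rate

# Lease liability = present value of the fixed lease payments.
lease_liability = sum(
    annual_payment / (1 + discount_rate) ** t for t in range(1, term_years + 1)
)

# ROU asset begins at the liability, adjusted for prepayments, initial direct
# costs and lease incentives (assumed zero in this sketch).
rou_asset = lease_liability

print(f"Initial lease liability: {lease_liability:,.0f}")
print(f"Initial ROU asset:       {rou_asset:,.0f}")
```

Roughly $1.7 million lands on both sides of the balance sheet under these assumptions, which is the mechanism by which leverage ratios and covenant calculations can move even though the cash payment profile is unchanged.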

As infrastructure strategies evolve, particularly where companies move toward dedicated or isolated environments, finance teams will need to carefully evaluate whether arrangements remain service contracts or cross into lease territory.

This is not merely a technical accounting exercise. A strategic shift toward controlled infrastructure may change the balance sheet profile, key metrics and disclosure considerations. Infrastructure design, contract structuring and accounting analysis should be aligned from the outset.

A brief note for boards and CFOs

Boards and executive teams should treat AI infrastructure as a strategic asset, not a technical detail. AI systems are increasingly embedded in pricing, underwriting, operations and analytics. The data and models behind those systems may represent a meaningful portion of enterprise value.

Key questions include:

  • Which datasets and models underpin our valuation and competitive positioning?
  • Where do those assets reside?
  • Do we have effective controls preventing employee-driven data leakage to public AI tools?
  • How portable are our core systems if vendor relationships change?
  • Does our accounting treatment reflect the long-term nature of these investments?

These questions sit at the intersection of risk oversight, capital allocation, financial reporting, disclosure and enterprise value protection.

As companies selectively move AI-linked systems into more controlled environments, governance, infrastructure strategy and accounting treatment must move together.
