PwC offering assurance on AI systems

Big Four firm PwC announced it can now provide independent assurance of AI systems, giving clients confidence that those systems have been designed, deployed and operated responsibly and transparently and, in a growing number of situations, in line with regulatory expectations.

The firm's AI assurance services will be performed by teams of AI and machine learning specialists who also have proven expertise in risk management, internal controls, audit and attest services, and external standards. 

"As the first major professional services firm to bring the next evolution of AI assurance services to market — helping to advance the way organizations respond to the demand for transparency and trust in the AI systems they build — we are proud to continue to be profession-leading in a domain that will define the next era of business transformation," said Jenn Kosar, PwC US's AI assurance leader. "Assurance for AI builds on our leadership, where we have been early movers in developing methodologies, tools and guidance that help as organizations align their AI practices with core principles, like fairness, transparency, privacy, safety and explainability."


Broadly, an AI assurance engagement is intended to give the client an independent perspective on the integrity, robustness and fairness of its AI systems. That includes assessing governance frameworks and control environments; evaluating data sourcing and related management practices to address bias, drift and other common risks; assessing the design and effectiveness of the client's own monitoring and testing procedures; providing tested insights that support the explainability and interpretability of outcomes; and showing how the client's procedures compare to leading practices and emerging standards and regulations. 

Clients might need such services when preparing for new regulatory requirements, managing third-party exposure, upholding corporate values or simply obtaining trusted, decision-useful information. Deanna Byrne, PwC US's assurance leader, noted that as AI increasingly works its way into the global economy, the demand for assurance that these systems can be trusted has only grown. 

"As organizations increasingly adopt AI to drive innovation, transform operations and unlock new sources of value, it is becoming integral to decision-making across industries, and the need for trust has never been greater," she stated. "With this opportunity comes a parallel demand: stakeholders — including boards, regulators, customers and the public — want confidence that it is effective, fair, accountable and trustworthy. Assurance for AI can help deliver this confidence," she said. 

While accountants have been examining AI systems to some degree over the past few years, the black-box nature of many models has been a challenge. There have been recent calls for audits of AI algorithms themselves, but the more common approach has been to examine AI impacts and governance, as in standards such as ISO 42001, which, rather than scrutinizing the details of specific AI applications, aims to provide a practical way of managing AI-related risks and opportunities across an organization. 
