EU AI Act likely not a hassle for most use cases

While the EU AI Act, which officially entered into force about a year ago, creates a number of new obligations regarding AI, its more stringent mandates apply only to the minority of organizations involved in high-risk use cases; the rest can mostly get by with what they already do to comply with the EU's General Data Protection Regulation.

This is according to Dr. Rafae Bhatti, chief information officer of Thunes Financial Services and a speaker at the Governance, Risk Management and Control conference in New York, hosted annually by the Institute of Internal Auditors and ISACA (formerly the Information Systems Audit and Control Association). While navigating the EU AI Act might seem intimidating, he said that, for the majority of organizations, its requirements are better thought of as extensions of regulations with which many are already complying.

"It is not completely a situation where you need to start from scratch. You may already have certain cybersecurity controls, certain data privacy controls, and that is one of the important pieces of guidance that I'd like to share with you so that you can feel a little bit more comfortable about not having to start from scratch as it relates to cybersecurity and data privacy," he said. 


Before anything else, he said, understand the scope of the regulations, which varies based on the AI in question as well as the specific use case to which it is applied. Certain use cases, such as social scoring or real-time surveillance, are outright prohibited. Below that are high-risk use cases involving health care, employment, education and other sectors with major societal implications. After this come those that fall into the limited-risk category, which includes chatbots and image generators, followed by those deemed minimal risk, such as AI-enabled video games or spam filters.

Another way to think of these various risk levels and their consequences, said Bhatti, is in terms of "career ending, sleepless nights, committee meetings or PowerPoints." 

The good news, he said, is that very few organizations are involved in prohibited use cases, and even those involved in high-risk ones are relatively uncommon, considering such cases are restricted to specific sectors.

"If you are a company that is only creating an application which is a game, you are probably not subject to most of the requirements. If you're just a shopping website and not doing anything to do with employment or health care or education, there is going to be very minimal you're required to do," he said. 

Anything in the limited- or minimal-risk categories, he said, doesn't trigger most of the AI-specific requirements of the EU AI Act, meaning entities should simply continue doing what they're already doing to comply with existing regulatory frameworks, "and if you're doing it well you should be OK," said Bhatti, adding that generally, "the only thing you still have to worry about is GDPR principles."

If something is considered high-risk, however, it faces greater GDPR scrutiny. "If previously you were not taking it seriously, now is the time to take it seriously, because there will be a requirement for a conformity assessment," he said. Some of the AI-specific measures also kick in, including more stringent security requirements, such as controlling for AI-specific attacks like data poisoning and prompt injection, broadly referred to as "adversarial robustness" controls.

Beyond this, those involved in high-risk use cases must also consider fairness and nondiscrimination controls; transparency and explainability controls; and accountability and human oversight. What exactly counts within these categories, though, can be a matter of debate, starting with whether the use case is even high-risk in the first place.

"Is this AI high risk? The lawyer might say, 'legally yes.' The engineer might say, 'technically no.'  They are both at medium risk of losing their careers. This is going to be a back and forth. Just be prepared to have that argument," he said. 

Then there are the other controls that themselves can rest on slippery definitions. For instance, the fairness and nondiscrimination requirement is ostensibly meant to mitigate the effects of bias in AI models, but defining those terms can be tricky. An engineer might ask what exactly the definition of fairness is; a lawyer might answer, "whatever keeps us out of court," which he conceded was an unhelpful answer, but one that some will likely use.

Similarly, while explainability might seem like a simple enough concept at first glance, the detail and granularity of these explanations can be a point of contention. Some people may go into exhaustive detail about how their AI works, while others might try to say, "It works in mysterious ways." Such an answer is not necessarily in the spirit of the rule, but some try to use it anyway. However, he said only those with high-risk use cases are required to address such questions.

Transparency controls will be more common, as they are required even for limited-risk use cases. Generally, he said, people need to know that they're interacting with AI, whether through a privacy clause telling users the system uses AI to process their data or a note in the interface. However it's done, complying with this requirement entails documentation as well as a conformity assessment.

The last bucket is accountability. Who is responsible for the AI? He cautioned against taking a cavalier approach to this question. There needs to be real accountability, along with the ability to escalate further up the chain. 

"Your answer shouldn't be that it leads to a voicemail. A 1-800 number is not going to cut it, an email is not going to cut it," he said, though noted that only high risk cases require documentation. Still, even if it's not strictly required, he said it's a good idea to consider this anyway. 

He stressed that, most of the time, organizations will only need to account for transparency. This does not, however, mean they should ignore the other controls. While controlling for fairness and explainability may not be specifically required, he said it is likely still a good idea for any organization dealing with AI.

Bhatti said AI itself can be a valuable tool in complying with the EU AI Act, as it can do things like perform initial risk analyses and gap assessments, as well as monitor and retrieve vast stores of organizational data. However, he cautioned against letting AI agents perform actual remediation steps, as he felt there is still too much risk, noting, for example, how one agent accidentally deleted a company's entire codebase.

"If someone is trying to convince you that automatic remediation using [AI agents] is happening now, and that you should adopt it, proceed with caution," he said. He noted that a few years from now "we can get to a point where we have enough confidence with what automatic remediation is doing. But not today." 
