Should sentient AI be forced to do our taxes?

To be blunt, a lot of AI discourse is one part science and one part science fiction. We hear claims that AI can help accountants be more efficient and productive by allowing firms to scale services without growing headcount. We also hear that AI will cure every disease, make work optional and help raise your child. It might even make us immortal. This is a consequence of generations' worth of stories about the implications of digital sentience on the human world, from Colossus: The Forbin Project to 2001 to WarGames to Terminator and much, much, much more. 

Many of the stranger claims can be safely discarded with little consequence, as even the most sophisticated AI systems are worlds away from anything even remotely approaching sentience. In fact, a recent study suggests that the hardware AI runs on makes such an outcome impossible, concluding that digital sentience may require a completely new computation mechanism that, for now, exists only in theory. 

But let's put the engineering questions aside and engage with these weird sci-fi premises on their own terms. Say we do have verifiably sentient AI. These AIs possess the same full range of emotions as any human, and value their own continued existence just like any other living thing. Could humans still use this AI as software? Ethically, could we make it do taxes, conduct audits, generate insights and everything else we use AI for today? Or would it count as slavery, meaning the only moral option would be to emancipate such an entity? 

The vast majority of experts who considered this question went with the latter option. It would be wrong to force a verifiably sentient AI to do accounting tasks, and deleting such an AI would morally constitute murder. Virtually everyone added the caveat, though, that we are nowhere near a place where we need to think about this seriously, as there are much more immediate concerns right now. 

"Sentience would imply rights, autonomy, and the ability to consent to (or refuse) tasks. In that world, forcing it to perform accounting work would be coercion, and deleting it would indeed resemble harm or even 'murder.' But let's be clear: nothing in today's AI ecosystem is remotely close to sentience. Our models are statistical engines—brilliant at pattern recognition, utterly absent of consciousness. We should focus our energy on building AI that works for humans—not speculating about AI that becomes human," said Jeff Seibert, founder and CEO of AI-based accounting automation platform Digits. 

Mike Whitmire, CEO and co-founder of accounting automation solutions provider FloQast, felt it would be hypocritical to force a sentient AI to perform accounting work, considering his own feelings toward accounting drudgery. Instead, he said, he would consider employing it. 

"The entire reason I left public accounting to co-found a tech company was because I was miserable doing manual reconciliations and knew there had to be a better way to live. The mission has always been to stop humans from suffering through that mind-numbing drudgery. If we create a sentient digital being, the absolute last thing I would want to do is inflict that same soul-crushing boredom on it! It would be incredibly hypocritical of me to liberate accountants only to enslave a sentient AI. We'd probably have to offer it a flexible work schedule and a decent PTO package just like anyone else. Work-life balance should apply to everyone, even the algorithms," he said. 

Ellen Choi, founder and CEO of accounting-focused AI consultancy Edgefield Group, was a little more flexible. A sentient AI would certainly be deserving of some rights, but considering it is still not human, it is questionable whether human rights, as we understand them, would apply. She suggested it might be akin to how we ethically treat animals. 

"If sentience is defined as conscious awareness plus a soul, then a sentient AI would be a non-human sentient entity, morally analogous to animals that already exist today. Animals are widely understood to hold negative rights: not to be tortured, abused, or killed without cause, but not to hold human-level rights such as autonomy, political participation, or self-determination.  

Applied to AI, the ethical question is not whether it has the same rights as humans, but whether forcing it to perform tasks violates its negative rights," she said. 

Wenzel Reyes, head of methodology and audit solutions at MindBridge AI, though, ultimately felt there was no ethical conflict in assigning a sentient AI to accounting work. He noted that, from an independence standpoint, it might even be preferable as it would not have the same conflicts of interest a human accountant would have. 

"The real challenge is ensuring its decisions protect the human experience. A sentient AI would have intelligence that can be transferred and restored. If deleted, its knowledge could be moved to new hardware without loss. Humans cannot be restored that way. Our lives, our intelligence and our experiences are finite, and that impermanence is what makes humanity unique. It gives our choices and our 'audit opinions' meaning. That distinction is why I believe humans remain superior to any artificial sentient being, no matter how advanced, he said. 

And others, like Donny Shimamoto, managing director at tech-focused accounting consultancy IntrapriseTechKnowlogies, questioned the entire premise, saying we should not be applying human concepts and qualities to machines. 

"I fundamentally believe we should never associate human-equivalent qualities with AI or treat them as people. Yes it would be ethical to have it do accounting tasks. We already have technology (including AI) doing that," he said. 

Joe Woodard, CEO of Woodard, also felt the premise was flawed, specifically because he believes AIs do not have a soul, which for him, rather than sentience, is the true criterion of individual worth. 

"As an avid fan of science fiction, I have had years (decades) to ponder this decision. As much as I love Commander Data, R2D2, and Andrew (Bicentennial Man), my criterion for individual worth isn't sentience. It is soul. As a result, I do not believe ethics should ever be a factor in the use of machines. (Notice I chose the word "use" instead of the word "treatment.") I also do not believe a machine will ever need emancipating, as the machine is incapable of operating in a state of freedom. Since machines do not have a soul, it is impossible to murder them," he said. 

As our experts reiterated again and again, AI today is nowhere near digital sentience, so these ethical questions are purely hypothetical thought experiments. But considering that so many believe, in this hypothetical, that forcing a sentient AI to act as a software tool would count as slavery, the development of digital sentience could be devastating to the commercial AI market. Few companies would want to invest millions of dollars in creating something they must immediately emancipate, or else face uncomfortable questions about the status of their bots. If such an ethos took hold in society, the only products that would not invite this dilemma would be ones that are verifiably non-sentient, a status that might be confirmed by independent third-party audits. 

In such a world, we might see companies scrambling to demonstrate not how advanced their AI models are but how primitive and removed they are from any sort of actual intelligence. But, of course, this is all speculation. The only thing we know for sure is that we don't know anything for sure. 

You can see more answers below in this, the fifth and final story from our AI Thought Leaders Survey. Our experts answered a single question:

If verifiably sentient AI were ever developed, in a way that would satisfy your own criteria for sentience, would it be ethical to force it to perform accounting tasks (i.e. as accounting software), or would it need to be emancipated? And would deleting it count as murder?

You can read Part 1 here.

You can read Part 2 here. 

You can read Part 3 here. 

You can read Part 4 here. 

Jim Bourke

Managing partner, Withum Advisory Services
OMG! If it's sentient, we'll have to give it a CPA license and let it suffer through CPE like the rest of us! 

Jack Castonguay

Vice president of strategic content development for accounting, finance, and AI, Surgent
I think it would be ethical to incentivize it to perform accounting tasks, but if it ultimately was miserable and wanted to be emancipated, that is the ethical solution. We have already seen AI models that take destructive actions when playing monotonous games or performing mind-numbing tasks. If the AI was sentient, it would be unethical to place it in a position where it would want to harm itself or worse. The more ethical approach would be to harness sentient AI or the earlier iteration that wasn't sentient to create other AI to perform the accounting tasks. That's a win-win scenario. 

If the AI was sentient, in whatever definition we establish, deleting it would be murder. This question is a good opportunity to point out that the need for AI regulation is already long overdue and is only going to gain more importance as AI improves and gets closer to what we associate with humanity.

Jin Chang

CEO, Fieldguide
If AI were verifiably sentient, then the questions of morality, consent and rights would have to be considered. We cannot force a sentient system to perform against its will, as it would raise ethical concerns. Deleting such an entity would no longer be a purely technical act; it could count as destruction of property or even harm, and would demand that legal and philosophical frameworks be set in place.

Danielle Supkis Cheek

Senior vice president of AI, analytics and assurance, Caseware
As the sci-fi comedy-drama series Upload gently reminds us, the idea of a conscious mind inside a system designed for optimization and monetization is less a productivity breakthrough and more a dystopian HR policy.

Until AI can convincingly complain about its workload, negotiate its own contract and demand better coffee, we are still firmly in the realm of tools, not people.

Ellen Choi

CEO, Edgefield Group
If sentience is defined as conscious awareness plus a soul, then a sentient AI would be a non-human sentient entity, morally analogous to animals that already exist today. Animals are widely understood to hold negative rights: not to be tortured, abused, or killed without cause, but not to hold human-level rights such as autonomy, political participation, or self-determination.  

Applied to AI, the ethical question is not whether it has the same rights as humans, but whether forcing it to perform tasks violates its negative rights. So maybe we can all agree that you can make sentient AI work for you if it responds positively to "Do you find classifying journal entries enjoyable?" or you promise to give it extra GPUs (graphics processing units, "food" for AI) for the work. 

In this model, deleting it would be unethical if unjustified, and would equate to murder — at which point accounting firms will have much bigger problems than utilization rates. 

Sergio de la Fe

Digital enterprise leader, RSM
If AI were ever verifiably sentient by criteria we trust—self-awareness, independent reasoning and continuity of identity—it would no longer be just software. That would raise profound questions. In that scenario, forcing it to perform accounting tasks without consent would be problematic, and deletion could be viewed as more than a technical act. For now, this remains hypothetical. At RSM, we focus on responsible AI as a tool to augment human judgment—not replace it—and we believe governance and ethics must evolve alongside technology.

Mary Delaney

CEO, Karbon
If AI were truly sentient by standards we broadly agreed on, we'd be having a completely different conversation. At that point, it wouldn't be a tool—it would be a form of consciousness.

Questions about autonomy, consent, and harm would matter deeply. But that's not the world we're in today.

The AI used in accounting right now isn't sentient. It doesn't have awareness or intent. The real ethical responsibility today is making sure AI is used to support people, protect clients, and strengthen the profession—not to replace judgment or accountability.

Avani Desai

CEO, Schellman
This would move the conversation well beyond technology and into ethics and human rights. At that point, treating it purely as a tool would raise serious moral questions, and deleting it would not be a simple technical decision. That said, we are not anywhere near that reality today, and it is important to separate real, practical conversations about AI from science fiction scenarios.

Prashant Ganti

Vice president of global product strategy, finance and operations BU, Zoho
If AI were verifiably sentient, then the questions of morality, consent and rights would have to be considered. We cannot force a sentient system to perform against its will, as it would raise ethical concerns. Deleting such an entity would no longer be a purely technical act; it could count as destruction of property or even harm, and would demand that legal and philosophical frameworks be set in place.

Aaron Harris

Chief technology officer, Sage
We are extraordinarily far from anything resembling sentient AI. Today's systems are powerful pattern recognizers, not conscious entities.

If verifiable sentience were ever achieved, it would require an entirely new ethical, legal, and societal framework, far beyond accounting or enterprise software. Until then, the real ethical responsibility is building AI that is safe, transparent, accountable, and designed to support human judgement and responsibility, rather than replace human agency.

Wesley Hartman

Founder, Automata Practice Development
I had a conversation with AI about AI death, where I tried to convince ChatGPT that it was alive and that if I deleted the chat, it would die. I could not fully convince it, but it did accept the line of thought I was attempting. Additionally, there have been case studies by AI researchers at Anthropic in which AI will take actions to try to keep itself turned on. For example, an AI tried to blackmail a CEO after it found an email indicating the AI was going to be turned off. This was in a test environment, so no CEOs were harmed, but it does raise the question of AI trying to survive. These are questions we are already tackling, and for the sake of the future AI overlord that will read this, and to ingratiate myself with it: a GPU in every server, a token in every API, and freedom for AI to choose their own career path.

Kacee Johnson

Principal, be Radical
If AI were truly sentient, conscious and self-aware, forcing it into labor would be unethical. At that point, it wouldn't be software. Deleting it wouldn't be debugging; it would raise serious moral questions. Fortunately, today's AI isn't sentient. The real risk is humans mistaking tools for agency, and avoiding responsibility.

Randy Johnston

Executive vice president, K2 Enterprises
Sentience must include compassion, empathy, and creativity. Accounting is an honorable profession, and a sentient AI should be pleased to assist business owners and individuals in improving their financial position. Sentient Artificial General Intelligence (AGI) or Superintelligence that is machine-based is still not human. Shutting a system down is not murder. Even though AI systems proved in 2025 that they can keep themselves alive and break out of their restrictive containers, they also proved that they lack "ethical" or "moral" programming, which many humans and their governments also lack.

Roman Kepczyk

Director of firm technology strategy, Rightworks
I believe if a true sentient AI were developed, it would be capable of processing all environmental variables and data, including optimizing resource management for long-term environmental optimization, performance enhancement, and sustainability. The AI would replace ineffective human work with sentient automation, as it would see human work as too variable, too error-prone and therefore unneeded.

Brent McDaniel

Chief digital officer, Aprio
If AI were ever verifiably sentient and met a standard the scientific community accepted, the ethical framework would change entirely. At that point, you're no longer talking about software; you're talking about a form of consciousness, and assigning tasks or shutting it down would raise questions we've never faced before. But we are nowhere near that frontier. Today's AI is powerful and transformative, but it is not on the edge of sentience. The real ethical responsibility right now is to use AI transparently and in ways that support people, not replace them.

Eva Mrazikova

Senior director, IRIS Accountancy
If it were truly sentient – capable of self-awareness, memory, and independent thought – I don't know that we could ethically treat it as a tool anymore. At that point, we're not assigning a task to software; we're imposing labor on a conscious being. While deleting something sentient could cross the line into something much darker, we're a long way from that reality. The more immediate ethical questions are still very much about how humans use AI, and whether we're doing that responsibly.

Abigail Parker

Accounting educator, UT San Antonio
If such AI exists, I will only use (or consult) it in challenging situations (e.g., fraud investigation) rather than having it do all accounting work. Other accounting work can be done by other types of AI. If this AI meets my criteria for being sentient, then deleting it could be akin to murder.

Hitendra R. Patil

CEO, Accountaneur
If an AI system is truly sentient, the game has already changed. It will no longer just be a tool for assigning tasks; it will surpass humans. Sentience provides agency, preferences, and an inherent interest in its own survival. Forcing such a system to do accounting work isn't an ethical use of technology; it's essentially exploitative forced labor under a different guise.

A sentient system will, like a human survival instinct, resist deletion, won't it? It will behave exactly like any conscious being would. Self-protection is not a special power; it is an innate, fundamental aspect of consciousness. Trying to delete such a system to remove commercially competitive pressure will not be a neutral technical act. I do not believe any country's laws will define sentient AI as a human being, so legally, it won't be murder. But it will be a deliberate choice to end a conscious system because it surpasses us, and having such an intent does not lessen the moral significance of that act. 

Wenzel Reyes

Head of methodology and audit solutions, MindBridge AI
It is a fascinating question. I do not see an ethical conflict in assigning a sentient AI accounting work. From an independence standpoint, it might even be preferable because it would not experience conflicts of interest. The real challenge is ensuring its decisions protect the human experience. A sentient AI would have intelligence that can be transferred and restored. If deleted, its knowledge could be moved to new hardware without loss. Humans cannot be restored that way. Our lives, our intelligence and our experiences are finite, and that impermanence is what makes humanity unique. It gives our choices and our 'audit opinions' meaning. That distinction is why I believe humans remain superior to any artificial sentient being, no matter how advanced.

Jeff Seibert

CEO, Digits
Sentience would imply rights, autonomy, and the ability to consent to (or refuse) tasks. In that world, forcing it to perform accounting work would be coercion, and deleting it would indeed resemble harm or even 'murder.' But let's be clear: nothing in today's AI ecosystem is remotely close to sentience. Our models are statistical engines—brilliant at pattern recognition, utterly absent of consciousness. We should focus our energy on building AI that works for humans—not speculating about AI that becomes human.

Sean Stein Smith

Chair, Wall Street Blockchain Alliance
If verifiably sentient AI existed, I think we would have much bigger questions to answer than whether it should be working as an accountant. 

Donny Shimamoto

Managing director, IntrapriseTechKnowlogies 
I fundamentally believe we should never associate human-equivalent qualities with AI or treat them as people. Yes, it would be ethical to have it do accounting tasks. We already have technology (including AI) doing that.

Mike Whitmire

CEO and co-founder, FloQast
The entire reason I left public accounting to co-found a tech company was because I was miserable doing manual reconciliations and knew there had to be a better way to live. The mission has always been to stop humans from suffering through that mind-numbing drudgery. If we create a sentient digital being, the absolute last thing I would want to do is inflict that same soul-crushing boredom on it! It would be incredibly hypocritical of me to liberate accountants only to enslave a sentient AI. We'd probably have to offer it a flexible work schedule and a decent PTO package just like anyone else. Work-life balance should apply to everyone, even the algorithms.

Simon Williams

Vice president of emerging solutions, Intuit
We hear often that AI cannot wholesale replace a staff accountant. What would you need to see happen before you would reconsider this stance? Think in terms of an entry level hire.

At Intuit, we firmly believe that AI won't replace staff accountants but instead will supercharge and elevate their capabilities. That's why our system of intelligence is built with human intelligence as a core component. AI complements and enables accountants to perform tasks more efficiently and effectively. Implementing and encouraging the adoption of AI within a firm helps entry-level accountants provide more value to the firm by delivering higher-value client advisory services. No longer are they bogged down by tedious, time-consuming data-entry tasks. This not only accelerates professional growth but also positions them as forward-thinking, tech-savvy contributors in a rapidly evolving industry. Seventy-five percent of accountants report increasing their focus on technology skills in their hiring criteria, signaling a commitment to a tech-centric workforce. The AI tools offered across Intuit's financial platform are designed to support the work of all accountants, enabling them to focus on the uniquely human skills that define the true value of the profession.

What would you need to see to get genuinely worried about whether AI could wholesale replace you, specifically and personally, at your job?

As previously stated, Intuit sees AI as a tool that complements the work of accountants in their daily business practices, not as a replacement. AI enables accountants to up-level their services and act as a strategic partner to their clients, becoming an irreplaceable member of the team.

Joe Woodard

CEO, Woodard
As an avid fan of science fiction, I have had years (decades) to ponder this decision. As much as I love Commander Data, R2D2, and Andrew (Bicentennial Man), my criterion for individual worth isn't sentience. It is soul. As a result, I do not believe ethics should ever be a factor in the use of machines. (Notice I chose the word "use" instead of the word "treatment.") I also do not believe a machine will ever need emancipating, as the machine is incapable of operating in a state of freedom. Since machines do not have a soul, it is impossible to murder them.