Don’t Ask ChatGPT to Do Your Taxes (Or Your Legal Work)
You already know AI is impressive. You’ve used it to debug code, draft emails, summarize dense docs, and probably win an argument or two. So it’s tempting — completely understandable, honestly — to turn to it when you get a cryptic IRS notice or need to know if your freelance contract holds up. But here’s the thing: in legal and tax contexts, AI doesn’t just give you a bad answer. It gives you a confidently wrong answer, and that’s a very different kind of dangerous.
The appeal is obvious. Legal consultations are expensive. Tax professionals charge by the hour. And AI is right there, free or nearly free, available at 11pm when you’re panicking about a deadline. It speaks fluently, cites laws with authority, and never makes you feel foolish for asking a basic question. For people who grew up trusting Google, trusting AI feels like a natural next step. But this is where the analogy breaks down — badly.
The Hallucination Problem Is Real, and It Has Consequences
AI language models don’t look things up — they predict text based on patterns in training data. That means when they don’t know something, they don’t say “I don’t know.” They generate something that sounds right. In legal contexts, this has already caused real damage. Lawyers have submitted AI-generated briefs to courts citing cases that simply don’t exist — fabricated citations that looked completely legitimate on the surface. Some of those lawyers faced sanctions and public embarrassment. Their clients faced worse.
Tax law isn’t safer. A 2024 Washington Post investigation tested TurboTax’s AI assistant and H&R Block’s AI Tax Assist by posing questions about filing scenarios involving a child attending college out of state and cryptocurrency reporting. Both tools gave misleading and inaccurate responses, even for these moderately complex situations. The IRS Taxpayer Advocate Service has since issued an official caution, noting that AI tools “may encounter difficulties interpreting complex tax laws correctly or considering unique circumstances.” These aren’t edge cases — they’re the kinds of questions millions of Americans deal with every filing season.
Even When AI Gets the Law Right, It May Apply It Wrong
Here’s a subtler but equally dangerous failure mode: AI can cite real, valid laws and still give you completely wrong advice. In the UK tax tribunal case Bodrul Zzaman v HMRC [2025], AI-generated submissions contained legal citations that were real — but legally irrelevant to the actual dispute. The tribunal noted that the AI couldn’t distinguish between persuasive precedent and speculative argument, so even when it “got the law right,” it applied it in the wrong context. The taxpayer’s appeal was dismissed.
That’s the insidious part. You can’t easily fact-check bad legal reasoning if you’re not already a lawyer. The output looks polished. The logic sounds coherent. But underneath, it’s a language model doing what it does best: producing text that resembles the right answer, not necessarily the right answer itself. And in tax and legal matters, the difference between “resembles correct” and “is correct” can cost you thousands of dollars, or worse.
The Law Moves Faster Than Training Data
Tax law isn’t static. Regulations change. Court opinions reshape interpretations. The IRS issues new guidance constantly — from January through early May 2025 alone, it released 35 new Practice Units covering specialized topics like foreign tax credits, base erosion rules, and treaty provisions. AI models are trained on historical data with a cutoff date. They may simply not know about a rule change that happened six months ago. And they won’t tell you they don’t know — they’ll just answer based on what they’ve seen.
This is especially dangerous for anyone dealing with international tax issues, cryptocurrency, real estate transactions, or anything involving a recent life change like divorce, inheritance, or starting a business. These are exactly the situations where tax law is most nuanced, most frequently updated, and most likely to trip AI up.
Your Data Doesn’t Stay in the Chat Window
When you paste your tax documents, salary info, business contracts, or legal disputes into an AI chatbot, where does that data go? This is a question most people don’t think to ask. Legal and bar association guidance has flagged that AI tools vary widely in their data retention practices — including how long information is stored, whether it can be retrieved through legal discovery, and whether your inputs are used to train future models.
For a tech-savvy user, this should register as a serious red flag. You might be feeding your most sensitive financial and personal details into a system that stores them indefinitely, shares them with third parties, or surfaces them in ways you didn’t anticipate. Attorney-client privilege doesn’t apply to a chatbot. If your legal strategy or tax position ends up in a server log, it may not stay private — and it may even be discoverable in litigation.
So When Is AI Actually Okay to Use?
To be fair, AI isn’t useless here — it’s just misused. It’s genuinely helpful for building general literacy around legal and tax concepts. Want to understand what a 1099-K is and why you’re getting one? Ask AI. Curious what a non-compete clause typically covers before you meet with an employment lawyer? AI can orient you quickly. Need to draft a list of questions before a tax consultation? Perfect use case.
The rule of thumb: use AI to get oriented, not to get advice. Orientation means understanding concepts, vocabulary, and general frameworks. Advice means applying those frameworks to your specific situation with real stakes. The first is something AI can handle well. The second requires a licensed professional who understands your full picture, is legally accountable for their guidance, and won’t hallucinate a court case to fill the gap.
The IRS doesn’t accept “GPT told me so” as a defense. Neither does a judge. For anything with real financial or legal consequences, use AI to prepare — then talk to a human.
Written for tech-savvy readers navigating an AI-saturated world

