AI Hallucination Is a Real Risk. Just Not the One We’re Solving

The real challenge is building tax studies grounded in real data, documentation, and expert review.

Mark Stapleton | Director of Quality Control, Onshore

Mar 2, 2026

When people hear “AI in tax,” the first reaction is predictable: “What about hallucination?”

It’s a fair question.

Generative AI can produce answers that sound right and are completely wrong. If you are asking a model to invent explanations or draft positions, you should absolutely worry about that.

But that is not what we built.

At Onshore, we didn’t set out to build a better way to generate tax narratives.

We set out to fix a broken workflow.

And that distinction matters.

For decades, tax incentive work has been shaped by fragmented data, manual reconciliation, after-the-fact interviews, and assumptions that quietly carry forward year to year. The exposure in that system does not come from artificial intelligence.

It comes from opacity.

Hallucination is a generative problem. Opacity is a structural one.

We chose to solve the structural problem.

Automation Is Not Generation

Generative AI predicts text. It fills gaps based on probabilities. That is powerful, but it is also where hallucination lives.

Our system does not operate that way.

We use AI to extract structured data from source systems, reconcile it against financial records, classify activity within defined frameworks, flag inconsistencies, and build documentation tied directly to evidence.

The AI is not inventing.

It is processing.

Every output ties back to actual source data. Every study is reviewed by credentialed experts before anything is finalized.
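As an illustration only, the reconciliation step described above might look something like this minimal sketch. Everything here is hypothetical — the `Entry` type, the `reconcile` function, and the sample file names are invented for the example and are not Onshore's actual implementation — but it shows the core idea: outputs are either matched to evidence or flagged, never invented.

```python
# Hypothetical sketch: reconcile extracted entries against financial records
# and flag inconsistencies, with every output carrying its source reference.
from dataclasses import dataclass

@dataclass
class Entry:
    entry_id: str      # key shared by both systems
    amount: float      # dollar amount recorded
    source: str        # where this record came from (the evidence trail)

def reconcile(extracted, ledger, tolerance=0.01):
    """Return (matched, flagged): flagged entries disagree or lack a counterpart."""
    ledger_by_id = {e.entry_id: e for e in ledger}
    matched, flagged = [], []
    for item in extracted:
        counterpart = ledger_by_id.get(item.entry_id)
        if counterpart is None:
            flagged.append((item, "missing in ledger"))
        elif abs(item.amount - counterpart.amount) > tolerance:
            flagged.append((item, f"amount mismatch vs {counterpart.source}"))
        else:
            matched.append((item, counterpart))  # evidence pair preserved
    return matched, flagged

# Invented sample data: one entry reconciles, one is flagged for review.
extracted = [Entry("A1", 100.0, "timesheet.csv"), Entry("A2", 50.0, "timesheet.csv")]
ledger = [Entry("A1", 100.0, "gl_export.xlsx"), Entry("A2", 75.0, "gl_export.xlsx")]
matched, flagged = reconcile(extracted, ledger)
```

Nothing in this sketch generates text or fills gaps with probabilities: a discrepancy is surfaced for human review, not papered over.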

AI is applied at the point of substantiation, not at the point of storytelling.

That placement matters.

Most of the risk in tax does not come from a model fabricating an answer.

It comes from processes that make it difficult to trace how a claim was constructed in the first place.

So we built the system first.

AI is embedded inside a defined workflow. It reinforces structure. It surfaces edge cases. It reduces ambiguity.

Then humans apply judgment.

We believe AI should make work more accountable, not less. It should increase clarity, not introduce uncertainty. It should strengthen defensibility, not abstract it behind probability.

Hallucination is a headline problem.

Opacity is the real one.

We built Onshore to eliminate it.

Human-led. AI-powered. And an unwavering focus on delivering the best outcomes for our clients.

Ready to take a closer look at Onshore?

Find out if Onshore is a fit for your company in 15 minutes.

Walk through the process with our team

Ask questions about data, security, and compliance

See how much you could save by switching
