Everyone’s Talking About AI — But Regulators Want Something Else

Jul 1, 2025

If you attended the FINRA conference this year, you’d think we’ve entered the golden age of AI in compliance. It was the hottest topic in every hallway conversation, panel discussion, and vendor booth.

And yet, the numbers tell a different story. According to Thomson Reuters, less than 15% of legal and compliance professionals are actually using AI in their workflows.

Why the gap between hype and adoption? It comes down to one word: trust.

Ask the Harder Question: How Do You Ensure Consistency?

Lots of vendors claim to “use AI.” The real test isn’t whether a tool sounds intelligent—it’s whether it behaves consistently under pressure. Can it apply the same logic across thousands of pieces of content, over time, with the same risk calibration?

We recently tested one of our competitors that also claims to use AI for compliance reviews. We gave its platform and an off-the-shelf general-purpose model the same basic prompt and, unsurprisingly, got identical responses. That’s because many so-called “AI compliance tools” are just thin wrappers around general-purpose models.

These models can sound convincing, but we know from real-world testing, and from a growing body of academic research, that hallucination rates in legal and compliance applications still hover around 60%. The same model that gets it “right” today might invent rules or misapply standards tomorrow.
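One way to make that test concrete is to measure repeatability directly. The sketch below is a minimal harness, not any vendor’s actual API: review is a hypothetical stand-in for the tool under evaluation, and the sample strings are invented. The idea is to replay the same content several times and check whether the risk label ever changes.

```python
from collections import Counter

def review(content: str) -> str:
    # Placeholder for the vendor API call under evaluation.
    # This toy stub flags guarantee language so the harness runs end to end.
    return "flag" if "guarantee" in content.lower() else "approve"

def consistency_report(contents: list[str], runs: int = 5) -> dict[str, float]:
    """Replay each piece of content `runs` times and measure agreement.

    A tool that applies fixed rules should score 1.0 on every item;
    a thin wrapper around a sampling LLM often will not.
    """
    report = {}
    for content in contents:
        labels = Counter(review(content) for _ in range(runs))
        # Agreement = share of runs that produced the most common label.
        report[content] = labels.most_common(1)[0][1] / runs
    return report

if __name__ == "__main__":
    samples = [
        "Our fund guarantees 12% annual returns.",  # promissory claim
        "Read the prospectus carefully before investing.",
    ]
    for text, agreement in consistency_report(samples).items():
        print(f"{agreement:.0%} agreement: {text}")
```

Run a harness like this against a real endpoint over thousands of items, and the gap between a rules-anchored system and a raw generative model shows up as a number, not a marketing claim.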

And regulators are paying attention.

The Supervision Rule: You Must Be Able to Explain the Outcome

Under FINRA’s Supervision Rule (Rule 3110) and similar obligations under SEC and NFA rules, financial institutions are responsible for overseeing any system, human or AI, that touches their compliance program.

That includes being able to explain and defend how a review decision was made.

If a firm adopts an AI solution it doesn’t fully understand—or can’t explain in a regulatory exam—a fine is not a matter of if, but when. It’s no different from employing an algorithmic trading strategy with no risk controls. Lack of explainability is not a technical glitch; it’s a supervisory failure.

Why Surveill Focuses on “Almost Boring” Consistency

At Surveill, we didn’t set out to build the flashiest AI. We set out to build the most defensible, predictable, and regulator-aligned marketing review solution on the market.

That’s why we designed Surveill around explainable guardrails—not unstructured prompts. Every output is tied to a rule, policy, or known precedent. We don’t guess. We don’t improvise. We deliver “almost boring” levels of consistency, because boring is what compliance needs when the SEC or FINRA comes asking.

That means:

  • Every flagged issue can be traced to a rule or standard.

  • Every decision is repeatable across teams and time.

  • No surprises. No creative reinterpretations of regulatory language.
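To make “traceable to a rule” concrete, here is a minimal sketch of what a rule-tied review record can look like. The schema, the field names, and the promissory-language-v3 guardrail are illustrative assumptions, not Surveill’s actual implementation; FINRA Rule 2210(d)(1), the content standard requiring communications to be fair and balanced, is real.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ReviewDecision:
    """One flagged issue, tied to the guardrail that produced it."""
    content_id: str    # which piece of marketing content was reviewed
    rule_id: str       # the regulatory standard applied
    guardrail_id: str  # the internal check that fired, for repeatability
    excerpt: str       # the exact language that triggered the flag
    rationale: str     # plain-English explanation an examiner can read
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Hypothetical example: the same inputs always produce the same record,
# and every field an examiner might ask about is populated up front.
decision = ReviewDecision(
    content_id="newsletter-2025-07-01",
    rule_id="FINRA 2210(d)(1)",
    guardrail_id="promissory-language-v3",
    excerpt="guaranteed to outperform the market",
    rationale="Promissory claim; communications must be fair and balanced.",
)
print(decision)
```

A record like this is what turns “explain the outcome” from a scramble during an exam into a simple lookup.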

Final Thought

AI in compliance shouldn’t be about sounding smart—it should be about being right, every time. If you're evaluating an AI vendor, ask not just what the model says, but how the vendor ensures consistency, explainability, and audit readiness.

At Surveill, we’ve built a system that doesn’t chase headlines. We build trust—one accurate, consistent, and reviewable decision at a time.