From the Field

Why "Human in the Loop" Isn't Enough

A top New York law firm just apologized to a federal judge for AI-generated errors in a court filing. The fix isn't more oversight. It's better architecture.

A prestigious New York law firm made headlines this week for filing a document with a federal court that contained citations generated by AI — citations that turned out to be fabricated. Dozens of other U.S. lawyers have faced sanctions for the same thing.

The common response in the industry is some version of "we need a human in the loop." Have a person check the AI's work. Problem solved.

Except it isn't.

If the human has to re-verify every output the AI produced, the AI didn't save any time. If the human skips the verification — which is what keeps happening — you end up in federal court. "Human in the loop" only works when the human has a practical way to catch mistakes. With a language model that can fabricate a professional-sounding citation in half a second, most humans don't.

The real problem isn't oversight.
It's architecture.

A Different Kind of AI System

A well-built AI system shouldn't be able to fabricate information in the first place. It should only be allowed to report what it can verify from a real source. SAM is built that way — and these are the rules that make it work.

RULE #86 · SCRIPT-ONLY EXECUTION

No Improvised Actions

SAM cannot execute raw API calls or freeform commands. Every action — looking up a customer, pulling inventory, sending a draft — routes through a validated script that either returns real data or errors out.

The model can't invent a SKU because it can't generate the call that would return one. There is no path from "freeform text" to "live business action."

If the script returns nothing, SAM returns nothing. No guess. No fallback. No fabrication.
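The idea can be sketched in a few lines. This is a minimal illustration, not SAM's actual code: the action names, registry, and inventory data are all hypothetical. The point is the shape of the gate — the model emits only an action name plus arguments, and the orchestrator either finds a validated script for it or refuses.

```python
# Hypothetical sketch of script-only execution. The model's output is an
# action name and arguments; only registered scripts can run.
INVENTORY = {"FL-200": {"make": "Acme", "hours": 1200}}  # stand-in data source

SCRIPT_REGISTRY = {
    # Each entry is a validated script that returns real data or nothing.
    "lookup_sku": lambda args: INVENTORY.get(args["sku"]),
}

def execute(action, args):
    script = SCRIPT_REGISTRY.get(action)
    if script is None:
        # Freeform or unregistered actions have no execution path at all.
        raise PermissionError(f"no validated script for {action!r}")
    # An empty result stays empty: no guess, no fallback, no fabrication.
    return script(args)
```

Note the asymmetry: a lookup that misses returns nothing, but an action that was never registered doesn't even get that far.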

RULE #87 · READ-FIRST EXECUTION

The Procedure Comes First

Before SAM acts on any task, it is required to read the written procedure spec for that task. No improvisation. No "I think this is how it works." If the procedure doesn't exist, SAM stops and asks.

This is the structural difference between an AI that follows your business rules and one that politely ignores them.

Procedure absent → task halts. The architecture won't let SAM wing it.
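In sketch form, the gate looks like this. The task names and spec store are invented for illustration; the real mechanism is whatever spec store a deployment uses. What matters is that the halt is the default path, not an error handler bolted on afterward.

```python
# Illustrative sketch of read-first execution: a task runs only after its
# written procedure spec has been loaded. No spec means no task.
PROCEDURES = {
    "draft_listing": "1. Pull make/model/hours. 2. Flag gaps. 3. Queue for review.",
}

def run_task(task, worker):
    spec = PROCEDURES.get(task)
    if spec is None:
        # Procedure absent -> task halts and the operator is asked.
        return {"status": "halted", "reason": f"no procedure spec for {task!r}"}
    # The worker receives the spec it must follow; it cannot run without one.
    return worker(spec)
```

A task with a spec runs against that spec; a task without one comes back "halted" every time, which is the behavior the rule describes.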

SOURCE-BOUND DATA

Four Sources, No Fifth

For each customer deployment, SAM is bound to a specific, finite list of data sources. A spreadsheet. A manufacturer feed. A verified industry database. The customer's own website.

If a fact isn't in one of those sources, SAM doesn't fill in the blank. It flags the item and asks the operator. There is no fifth source called "make something up."

Retrieval returns empty → SAM returns empty. By design.
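A toy version of that lookup, using source names borrowed from the forklift-dealer example below; the data and function are assumptions for illustration. Retrieval walks the fixed allowlist, and a miss comes back as an explicit flag rather than generated text.

```python
# Hedged sketch of source-bound retrieval: four sources, no fifth.
SOURCES = {
    "spreadsheet":       {"FL-200": {"hours": 1200}},
    "manufacturer_feed": {},
    "industry_db":       {},
    "dealer_site":       {},
}

def get_fact(sku, field):
    for name, data in SOURCES.items():
        record = data.get(sku, {})
        if field in record:
            return {"value": record[field], "source": name}
    # No source had it: flag for the operator instead of filling the blank.
    return {"value": None, "flag": f"ask operator: {field!r} for {sku!r} not in any source"}
```

Every answer carries its source, and a missing fact carries a flag — there is no branch that synthesizes a value.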

HARDWARE-ATTESTED IDENTITY

The System Knows Who It Is

Credentials never exist as plaintext on the SAM appliance. Every API key, every login, every authorized endpoint is released only to a hardware-bound identity tied to that specific machine.

An attacker who copies the files off the disk gets nothing usable. An AI that tries to call an unauthorized service gets blocked at the credential layer, not the prompt layer.

Policy at the orchestration layer — not the prompt layer. Prompts are suggestions. Orchestration is enforcement.
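The enforcement point can be sketched as a credential broker. This is a conceptual stand-in only: real deployments bind the measurement to a TPM or secure element, and the vault here is a plain dict for illustration. The shape to notice is that the secret is released only to a caller whose measurement matches the enrolled machine.

```python
# Conceptual sketch of hardware-attested credential release. The enrolled
# measurement and vault contents are illustrative, not SAM's real values.
import hashlib
import hmac

ENROLLED_MEASUREMENT = hashlib.sha256(b"appliance-firmware-v1").hexdigest()
VAULT = {"inventory_api_key": "sealed-secret"}

def release_credential(name, presented_measurement):
    # Timing-safe comparison of the caller's measurement to the enrolled one.
    if not hmac.compare_digest(presented_measurement, ENROLLED_MEASUREMENT):
        # Copied files without the hardware-bound identity get nothing usable.
        raise PermissionError("attestation failed: credential withheld")
    return VAULT[name]
```

An unauthorized caller fails at this layer before any prompt is ever involved, which is the sense in which orchestration, not prompting, is the enforcement boundary.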

Case Example

A Forklift Dealer in Ohio

A forklift dealer in Ohio runs SAM as their daily inventory operator. SAM reads four — and only four — sources: the dealer's master spreadsheet, the manufacturer's direct feed, a verified industry database, and the dealer's own website.

When SAM drafts a new product listing, it pulls the make, model, hours, and condition from those sources. It does not generate a description from imagination. If a field is missing, SAM flags the item and asks the dealer. The dealer reviews and publishes.

This is what "human in the loop" looks like when the architecture is doing its job. The dealer isn't re-verifying every word against reality — because SAM was never allowed to invent a word in the first place. The human's role is to approve, not to fact-check.

That's the difference between an AI tool that saves time and one that creates a new category of risk.

Patent Pending

U.S. Provisional Patent Application

No. 64/022,921 — System and Method for Intelligent AI Orchestration via Dynamic Toggle Logic, Multi-Tiered Memory Persistence, and Hardware-Attested Identity Locking

Architecture Beats Oversight.

For small and mid-sized businesses, this matters more than it does for a big law firm.

You don't have an army of associates to re-check the AI's work. If your AI tool invents an invoice line item, a customer name, or a product spec, you are the one dealing with the fallout.

SAM is built on the assumption that the architecture has to do the work — not the overwhelmed human at the end of the chain.

See If SAM Fits Your Business