The Trust Gap
The AI Governance Gap Starts with What You Already Have
Most organizations I talk to think AI governance is a net-new problem. Something they'll get to eventually, once they have a better handle on what their teams are actually building. That instinct makes sense. It also happens to be wrong.
The gap doesn't start with what you don't have. It starts with what you already have and haven't governed yet.
I grew up around farming. When a farmer buys a new combine, he doesn't throw out the owner's manual for his tractor. He doesn't forget how to rotate crops. The new machine fits into a system that already exists, one built on years of hard-won knowledge about soil and weather and timing. If that system is broken, no piece of new equipment is going to fix it.
AI governance works the same way. If your SOC 2 controls haven't been revisited since before your team started shipping LLM features, the problem isn't the AI. The problem is the gap between what you said you do and what you're actually doing. Chapter one of AI governance isn't writing a new policy for foundation models. It's going back to your existing controls and asking whether they still reflect reality.
We ran a survey earlier this year, the AI Governance Trust Gap Index, across organizations of all sizes. When we asked people to rate how well their organization governs AI today, the average came back at about 3.5 out of 5. That sounds passable until you look at the distribution. A significant number of respondents scored themselves at a 1 or a 2. So you've got some organizations feeling confident and a whole lot feeling like they're guessing. The average hides the spread, and the spread is where the risk lives.
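A toy example makes that arithmetic concrete. The numbers below are invented, not the survey's actual responses; they just show how a middling average can sit on top of a worrying tail.

```python
# Invented self-ratings (1-5), not the survey's actual data. Both groups
# average 3.5, but one of them hides a cluster of organizations at 1 or 2.
steady  = [3, 4, 3, 4, 3, 4, 3, 4]  # everyone near the middle
bimodal = [1, 2, 5, 5, 2, 5, 5, 3]  # confident orgs next to guessing orgs

for name, scores in (("steady", steady), ("bimodal", bimodal)):
    avg = sum(scores) / len(scores)
    low = sum(1 for s in scores if s <= 2)
    print(f"{name}: average {avg:.1f}, rated 1 or 2: {low} of {len(scores)}")
# steady: average 3.5, rated 1 or 2: 0 of 8
# bimodal: average 3.5, rated 1 or 2: 3 of 8
```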
What was the most requested resource from those same respondents? A clear mapping of AI usage to SOC 2 and ISO controls. Not a new framework. Not a maturity model. Just help connecting what they're already doing to the standards they're already held to. That tells you something. The desire isn't to reinvent. It's to catch up.
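Here's a minimal sketch of what that mapping might look like. The control IDs reference SOC 2 and ISO/IEC 27001:2022, but the pairings are my illustrative assumptions, not an authoritative crosswalk. The point is the shape: every AI use lands on a control you already attest to.

```python
# Sketch of an AI-usage-to-control mapping. The pairings are illustrative
# assumptions, not an authoritative crosswalk -- substitute whatever your
# auditors actually hold you to.
ai_control_map = {
    "third-party LLM API integration": [
        "SOC 2 CC6.1 (logical access)",
        "ISO 27001 A.5.19 (supplier relationships)",
    ],
    "training data handling": [
        "SOC 2 CC6.7 (transmission and movement of data)",
        "ISO 27001 A.5.12 (classification of information)",
    ],
    "model outputs stored in the product": [
        "SOC 2 CC6.5 (disposal of data and assets)",
        "ISO 27001 A.8.10 (information deletion)",
    ],
}

for usage, controls in ai_control_map.items():
    print(usage)
    for control in controls:
        print(f"  -> {control}")
```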
And when we asked about leadership, the majority said their leadership had specifically requested clarity on AI governance. The ask is coming from the top. But the answer isn't coming from anywhere yet, because most teams are still treating AI governance like a someday project instead of a chapter-one priority.
Here's the thing about chapter one. It's boring. It's going back through your control matrix and asking, "Does this still hold?" It's looking at your access controls and wondering whether anyone reviewed them after you integrated that third-party AI API. It's asking whether your data classification policy accounts for training data or model outputs. None of this is glamorous. All of it is necessary.
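You can even automate that unglamorous pass in a crude way. The sketch below assumes you track a last-reviewed date per control, which most GRC tools and spreadsheets do; the cutoff date, record layout, and sample controls are all hypothetical.

```python
from datetime import date

# Flag controls whose last review predates the AI integration. Everything
# here -- the cutoff date, the record layout, the sample controls -- is
# hypothetical; adapt it to however your control matrix is tracked.
AI_INTEGRATION_DATE = date(2025, 6, 1)  # when the LLM feature shipped

controls = [
    {"id": "CC6.1", "name": "Access control review",      "last_reviewed": date(2024, 11, 3)},
    {"id": "CC3.2", "name": "Risk assessment",            "last_reviewed": date(2025, 9, 15)},
    {"id": "C1.1",  "name": "Data classification policy", "last_reviewed": date(2024, 2, 20)},
]

for control in controls:
    if control["last_reviewed"] < AI_INTEGRATION_DATE:
        print(f"{control['id']} ({control['name']}): last reviewed "
              f"{control['last_reviewed']}, before the AI integration -- revisit it")
```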
The organizations that get this right won't be the ones who bought the fanciest AI governance platform. They'll be the ones who went back to the beginning and made sure the foundation was solid before they started building on top of it.
Govern what you have. Then govern what's coming.
Sources
- AI Governance Trust Gap Index (Striv, February 2026)
- NIST CAISI RFI (Federal Register, January 2026)
- NIST AI Agent Standards Initiative (NIST, February 2026)
- CSA State of AI Cybersecurity 2026 (Cloud Security Alliance, April 2026)
- 2026 CISO AI Risk Report (Cybersecurity Insiders, January 2026)