A few weeks ago I published a piece about crystallized and fluid intelligence — psychologist Raymond Cattell’s framework for understanding two very different kinds of smarts.
Crystallized intelligence is what you know: the accumulated expertise, pattern recognition, and hard-won knowledge built over years. Fluid intelligence is how you think: the ability to reason through novel problems when the old playbook no longer applies.
The premise of the article is that as AI reshapes work, fluid intelligence becomes the differentiator. The environment has stopped being stable enough to coast on what you already know.
Then Madhu Mathihalli — our VP and GM of Product at Eightfold — put up a post on LinkedIn that made me see something I hadn’t connected before. Madhu wrote about watching the same question appear on every Infosec and vendor review questionnaire for 15 years straight: “Do you use our data to train your model?”
His point: that question was exactly right — 15 years ago. Today, it’s a machine learning-era question in a generative-era world.
And that’s when it clicked. What Madhu was describing wasn’t just an outdated question on a security checklist. It was crystallized thinking applied to a problem that has fundamentally changed shape. Years of knowledge built up around how AI worked: train the model, validate it, version it, protect the data. That playbook has kept running ever since, even as the technology moved on.
We didn’t just upgrade models. We changed what needs to be governed. The systems are no longer static artifacts that store and retrieve — they are active, adaptive, and continuously evolving. Which means the surface area of risk has expanded well beyond the data they were trained on. And almost none of our governance frameworks have caught up to that reality yet.
Leaders aren’t asking the wrong questions because they are careless. They are asking the wrong questions because they got very good at asking the right ones — and then the rules changed. That’s a fluid intelligence problem. And it’s one organizations are navigating in real time, whether they know it or not.

Why the old question made sense
In the early days of machine learning, AI was essentially a very sophisticated pattern-matching system. You fed it enormous amounts of data — millions of résumés, thousands of job descriptions, years of hiring outcomes — and it learned to recognize what “good” looked like based on the past.
The model was trained once, validated, versioned, and monitored. Relatively stable. More predictable. Far more deterministic than today’s systems.
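To make that lifecycle concrete, here is a minimal sketch of the “train once, validate, version” pattern, using scikit-learn with stand-in data. None of this is any vendor’s real pipeline; the features, labels, and file name are made up for illustration.

```python
# Minimal sketch of the classic ML lifecycle: train once, validate, version, ship.
# Assumes scikit-learn, numpy, and joblib are installed. Data is random stand-in data.
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Pretend features extracted from past resumes, with historical hire/no-hire labels.
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)   # trained once
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))  # validated

joblib.dump(model, "screening_model_v1.joblib")  # versioned, shipped as a static artifact
```

A static artifact like that file is exactly what the old governance question was built to protect: a fixed thing, trained on specific data, that behaves the same way tomorrow as it did today.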
In that world, “Do you use our data to train your model?” was exactly the right question. You were asking: are you putting our private records into a shared library that other organizations can access? That’s a completely legitimate concern. Data governance was the right framework. It matched the technology.
And here’s something important: that crystallized knowledge isn’t worthless now. Understanding how AI was built, what data it was trained on, and how it has historically behaved is still valuable context. You need that foundation to evaluate what you’re being told. You need it to recognize a non-answer when a vendor gives you one.
Crystallized intelligence is still the starting line. It’s just no longer the finish line because the technology changed, and our questions haven’t kept up.
Generative AI doesn’t just recall. It reasons.
Modern AI systems aren’t filing cabinets. They’re reasoning engines. They’re probabilistic, not deterministic — the same input doesn’t always produce the same output. They’re sensitive to prompts and system instructions in ways that can shift their behavior without any retraining at all. They update frequently. And with synthetic data becoming increasingly viable, the need for your specific data to build the underlying model is shrinking fast.
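You can see the probabilistic part first-hand. Here is a minimal sketch using the OpenAI Python SDK; the model name, prompt, and temperature are illustrative assumptions, not a reference to any particular product. Run the same request twice and you will usually get two different answers.

```python
# Minimal sketch: the same prompt, sent twice, usually yields different text.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

prompt = "In one sentence, describe what makes a software migration risky."

for attempt in range(2):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,       # above zero means sampling, i.e. probabilistic output
    )
    print(f"run {attempt + 1}: {resp.choices[0].message.content}")
```

Swap in a different system instruction and the behavior shifts again, with no retraining anywhere in sight.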
Because behavior is no longer fixed, governance can’t be either.
That sounds abstract, but it shows up quickly in real workflows — especially in hiring. Here’s a non-technical way to think about what that actually means in practice.
Imagine an interviewer speaking with a candidate who mentions they led a complex software migration at their last job. An old-school AI tool would scan that answer for keywords — “migration,” “led,” “software” — check them against a list of desired skills, and move on. It’s pattern matching. It’s looking for what it already knows.
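For readers who want the contrast in code, the old approach really is about this simple. A purely illustrative sketch, with a made-up skill list:

```python
# Old-school scoring: count how many desired keywords appear verbatim in the answer.
DESIRED_SKILLS = {"migration", "led", "software"}

def keyword_score(answer: str) -> int:
    words = set(answer.lower().split())
    return len(DESIRED_SKILLS & words)

answer = "I led a complex software migration at my last job."
print(keyword_score(answer))  # 3 -- full marks, zero understanding of the migration itself
```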
A reasoning-based agent does something different. It hears “software migration” and follows up: “You mentioned the migration was complex. What was the hardest part of keeping everything intact while the systems were moving?” It doesn’t need the answer to be on a keyword list. It understands context. It asks the next logical question the way a skilled interviewer would. That’s the shift from recall to reasoning, and it changes everything about what governance needs to cover.
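And here is a hedged sketch of the adaptive version, again using the OpenAI Python SDK purely for illustration; the model name and system prompt are assumptions, not anyone’s actual product. The follow-up question comes from the model’s reading of the answer, not from a list:

```python
# Minimal sketch of an adaptive follow-up: the system prompt sets the interviewer role,
# and the follow-up question is generated from the candidate's actual answer.
from openai import OpenAI

client = OpenAI()

answer = "I led a complex software migration at my last job."

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": "You are a skilled technical interviewer. "
                       "Ask the single most insightful follow-up question.",
        },
        {"role": "user", "content": answer},
    ],
)
print(resp.choices[0].message.content)
```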
That kind of interaction isn’t scripted — it’s adaptive. This is exactly the kind of fluid intelligence Cattell was describing: not stored knowledge, but the ability to navigate a novel situation in real time. And when these systems operate this way, protecting the data that trained the model is only one small piece of responsible oversight. You need enough crystallized knowledge to evaluate the answers you’re getting — and enough fluid thinking to know what questions you haven’t thought to ask yet.
“Security today isn’t just about protecting data. It’s about governing evolving intelligence.” — Madhu Mathihalli, VP & GM Product, Eightfold
The questions we should all be asking instead
This isn’t a vendor problem or a security team problem. It’s an everyone problem. HR leaders, TA leaders, and ops leaders are all approving, deploying, and championing AI tools right now — which means we are all responsible for asking better questions about how they work.
Data protection isn’t being replaced here — it’s being expanded. The shift isn’t about swapping one question for another. It’s about widening how we think about control, because the thing we need to control has changed.
Ready to move beyond outdated frameworks? Join us at Cultivate US or Cultivate Europe to explore what responsible AI governance looks like in a generative-first world.