You’re asking the wrong questions about AI — and it’s not your fault

AI has shifted from pattern matching to reasoning, but our governance questions haven't kept up. Here's what leaders should be asking instead.

Key Takeaways

  • AI has evolved from static pattern matching to adaptive reasoning, but governance frameworks haven’t kept pace.
  • Asking outdated security questions isn’t carelessness; it’s what happens when expertise outlasts the technology it was built for.
  • Responsible AI oversight now means governing evolving behavior, not just protecting the data models were trained on.

A few weeks ago I published a piece about crystallized and fluid intelligence — psychologist Raymond Cattell’s framework for understanding two very different kinds of smarts. 

Crystallized intelligence is what you know: the accumulated expertise, pattern recognition, and hard-won knowledge built over years. Fluid intelligence is how you think: the ability to reason through novel problems when the old playbook no longer applies.

The premise of the article is that as AI reshapes work, fluid intelligence becomes the differentiator. The environment has stopped being stable enough to coast on what you already know.

Then Madhu Mathihalli — our VP and GM of Product at Eightfold — put up a post on LinkedIn that made me see something I hadn’t connected before. Madhu wrote about watching the same question appear on every Infosec and vendor review questionnaire for 15 years straight: “Do you use our data to train your model?”

His point: that question was exactly right — 15 years ago. Today, it’s a machine learning-era question in a generative-era world.

And that’s when it clicked. What Madhu was describing wasn’t just an outdated question on a security checklist. It was crystallized thinking applied to a problem that has fundamentally changed shape. Years of knowledge built around how AI worked — train the model, validate it, version it, protect the data — and that playbook has been running ever since, even as the technology moved on.

We didn’t just upgrade models. We changed what needs to be governed. The systems are no longer static artifacts that store and retrieve — they are active, adaptive, and continuously evolving. Which means the surface area of risk has expanded well beyond the data they were trained on. And almost none of our governance frameworks have caught up to that reality yet.

Leaders aren’t asking the wrong questions because they are careless. They are asking the wrong questions because they got very good at asking the right ones — and then the rules changed. That’s a fluid intelligence problem. And it’s one organizations are navigating in real time, whether they know it or not.

Why the old question made sense

In the early days of machine learning, AI was essentially a very sophisticated pattern-matching system. You fed it enormous amounts of data — millions of résumés, thousands of job descriptions, years of hiring outcomes — and it learned to recognize what “good” looked like based on the past. 

The model was trained once, validated, versioned, and monitored. Relatively stable. More predictable. Far more deterministic than today’s systems.

In that world, “Do you use our data to train your model?” was exactly the right question. You were asking: are you putting our private records into a shared library that other organizations can access? That’s a completely legitimate concern. Data governance was the right framework. It matched the technology.

And here’s something important: that crystallized knowledge isn’t worthless now. Understanding how AI was built, what data it was trained on, and how it has historically behaved is still valuable context. You need that foundation to evaluate what you’re being told. You need it to recognize a non-answer when a vendor gives you one. 

Crystallized intelligence is still the starting line. It’s just no longer the finish line because the technology changed, and our questions haven’t kept up.

Generative AI doesn’t just recall. It reasons.

Modern AI systems aren’t filing cabinets. They’re reasoning engines. They’re probabilistic, not deterministic — the same input doesn’t always produce the same output. They’re sensitive to prompts and system instructions in ways that can shift their behavior without any retraining at all. They update frequently. And with synthetic data becoming increasingly viable, the need for your specific data to build the underlying model is shrinking fast.

Because behavior is no longer fixed, governance can’t be either.

That sounds abstract, but it shows up quickly in real workflows — especially in hiring. Here’s a non-technical way to think about what that actually means in practice.

Imagine an interviewer speaking with a candidate who mentions they led a complex software migration at their last job. An old-school AI tool would scan that answer for keywords — “migration,” “led,” “software” — check them against a list of desired skills, and move on. It’s pattern matching. It’s looking for what it already knows.

A reasoning-based agent does something different. It hears “software migration” and follows up: you mentioned the migration was complex — what was the hardest part of keeping everything intact while the systems were moving? It doesn’t need the answer to be on a keyword list. It understands context. It asks the next logical question the way a skilled interviewer would. That’s the shift from recall to reasoning — and it changes everything about what governance needs to cover.
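The contrast above can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor's actual implementation: `keyword_screen` mimics the old pattern-matching pass, while `follow_up` is a stand-in for the reasoning step (here just a template; in a real agent it would be a model call conditioned on the whole conversation).

```python
# Hypothetical sketch of the two approaches described above.
# Neither function reflects a real product implementation.

DESIRED_SKILLS = {"migration", "led", "software"}  # assumed skill list

def keyword_screen(answer: str) -> set[str]:
    """Old-school pass: report which desired keywords appear in the answer."""
    words = {w.strip(".,").lower() for w in answer.split()}
    return DESIRED_SKILLS & words

def follow_up(answer: str) -> str:
    """Stand-in for a reasoning agent: pick the next logical question.
    In a real agentic system this would be a generative-model call,
    not a hand-written template."""
    if "migration" in answer.lower():
        return ("You mentioned the migration was complex -- what was the "
                "hardest part of keeping everything intact while the "
                "systems were moving?")
    return "Can you tell me more about that project?"

answer = "I led a complex software migration at my last job."
print(keyword_screen(answer))  # the old tool stops at matched keywords
print(follow_up(answer))       # the agent asks an adaptive next question
```

The sketch makes the governance point concrete: the first function's behavior is fixed by its keyword list and can be audited once, while the second's behavior depends on whatever drives the follow-up logic, so oversight has to cover how that logic evolves, not just what data built it.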

That kind of interaction isn’t scripted — it’s adaptive. This is exactly the kind of fluid intelligence Cattell was describing: not stored knowledge, but the ability to navigate a novel situation in real time. And when these systems operate this way, protecting the data that trained the model is only one small piece of responsible oversight. You need enough crystallized knowledge to evaluate the answers you’re getting — and enough fluid thinking to know what questions you haven’t thought to ask yet.

“Security today isn’t just about protecting data. It’s about governing evolving intelligence.” — Madhu Mathihalli, VP & GM Product, Eightfold

The questions we should all be asking instead

This isn’t a vendor problem or a security team problem. It’s an everyone problem. HR leaders, TA leaders, and ops leaders are all approving, deploying, and championing AI tools right now — which means we are all responsible for asking better questions about how they work.

Data protection isn’t being replaced here — it’s being expanded. The shift isn’t about swapping one question for another. It’s about widening how we think about control, because the thing we need to control has changed.

Ready to move beyond outdated frameworks? Join us at Cultivate US or Cultivate Europe to explore what responsible AI governance looks like in a generative-first world. 
