You’re asking the wrong questions about AI — and it’s not your fault

AI has shifted from pattern matching to reasoning, but our governance questions haven't kept up. Here's what leaders should be asking instead.

Key Takeaways

  • AI has evolved from static pattern matching to adaptive reasoning, but governance frameworks haven’t kept pace.
  • Asking outdated security questions isn’t carelessness; it’s what happens when expertise outlasts the technology it was built for.
  • Responsible AI oversight now means governing evolving behavior, not just protecting the data models were trained on.

A few weeks ago I published a piece about crystallized and fluid intelligence — psychologist Raymond Cattell’s framework for understanding two very different kinds of smarts. 

Crystallized intelligence is what you know: the accumulated expertise, pattern recognition, and hard-won knowledge built over years. Fluid intelligence is how you think: the ability to reason through novel problems when the old playbook no longer applies.

The premise of the article is that as AI reshapes work, fluid intelligence becomes the differentiator. The environment has stopped being stable enough to coast on what you already know.

Then Madhu Mathihalli — our VP and GM of Product at Eightfold — put up a post on LinkedIn that made me see something I hadn’t connected before. Madhu wrote about watching the same question appear on every Infosec and vendor review questionnaire for 15 years straight: “Do you use our data to train your model?”

His point: that question was exactly right — 15 years ago. Today, it’s a machine learning-era question in a generative-era world.

And that’s when it clicked. What Madhu was describing wasn’t just an outdated question on a security checklist. It was crystallized thinking applied to a problem that has fundamentally changed shape. Years of knowledge built around how AI worked — train the model, validate it, version it, protect the data — and that playbook has been running ever since, even as the technology moved ahead.

We didn’t just upgrade models. We changed what needs to be governed. The systems are no longer static artifacts that store and retrieve — they are active, adaptive, and continuously evolving. Which means the surface area of risk has expanded well beyond the data they were trained on. And almost none of our governance frameworks have caught up to that reality yet.

Leaders aren’t asking the wrong questions because they are careless. They are asking the wrong questions because they got very good at asking the right ones — and then the rules changed. That’s a fluid intelligence problem. And it’s one organizations are navigating in real time, whether they know it or not.

Why the old question made sense

In the early days of machine learning, AI was essentially a very sophisticated pattern-matching system. You fed it enormous amounts of data — millions of résumés, thousands of job descriptions, years of hiring outcomes — and it learned to recognize what “good” looked like based on the past. 

The model was trained once, validated, versioned, and monitored. Relatively stable. More predictable. Far more deterministic than today’s systems.

In that world, “Do you use our data to train your model?” was exactly the right question. You were asking: are you putting our private records into a shared library that other organizations can access? That’s a completely legitimate concern. Data governance was the right framework. It matched the technology.

And here’s something important: that crystallized knowledge isn’t worthless now. Understanding how AI was built, what data it was trained on, and how it has historically behaved is still valuable context. You need that foundation to evaluate what you’re being told. You need it to recognize a non-answer when a vendor gives you one. 

Crystallized intelligence is still the starting line. It’s just no longer the finish line because the technology changed, and our questions haven’t kept up.

Generative AI doesn’t just recall. It reasons.

Modern AI systems aren’t filing cabinets. They’re reasoning engines. They’re probabilistic, not deterministic — the same input doesn’t always produce the same output. They’re sensitive to prompts and system instructions in ways that can shift their behavior without any retraining at all. They update frequently. And with synthetic data becoming increasingly viable, the need for your specific data to build the underlying model is shrinking fast.

Because behavior is no longer fixed, governance can’t be either.

That sounds abstract, but it shows up quickly in real workflows — especially in hiring. Here’s a non-technical way to think about what that actually means in practice.

Imagine an interviewer speaking with a candidate who mentions they led a complex software migration at their last job. An old-school AI tool would scan that answer for keywords — “migration,” “led,” “software” — check them against a list of desired skills, and move on. It’s pattern matching. It’s looking for what it already knows.
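For readers who want that contrast made concrete, here is a minimal sketch of the old-school keyword scan described above. The skill list and matching logic are illustrative only, not any vendor’s actual implementation — the point is simply that the system can match only terms it was already given.

```python
# Illustrative only: a toy keyword scan, not a real screening tool.
DESIRED_SKILLS = {"migration", "led", "software"}

def keyword_scan(answer: str) -> set[str]:
    """Return which desired skills appear verbatim in the answer."""
    words = {w.strip(".,").lower() for w in answer.split()}
    return DESIRED_SKILLS & words

answer = "I led a complex software migration at my last job."
print(keyword_scan(answer))  # finds only words already on its list
```

If the candidate says “we moved our platform to the cloud” instead, this scan finds nothing — it has no way to reason that the answer describes the same experience.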

A reasoning-based agent does something different. It hears “software migration” and follows up: “You mentioned the migration was complex — what was the hardest part of keeping everything intact while the systems were moving?” It doesn’t need the answer to be on a keyword list. It understands context. It asks the next logical question the way a skilled interviewer would. That’s the shift from recall to reasoning — and it changes everything about what governance needs to cover.

That kind of interaction isn’t scripted — it’s adaptive. This is exactly the kind of fluid intelligence Cattell was describing: not stored knowledge, but the ability to navigate a novel situation in real time. And when these systems operate this way, protecting the data that trained the model is only one small piece of responsible oversight. You need enough crystallized knowledge to evaluate the answers you’re getting — and enough fluid thinking to know what questions you haven’t thought to ask yet.

“Security today isn’t just about protecting data. It’s about governing evolving intelligence.” — Madhu Mathihalli, VP & GM Product, Eightfold

The questions we should all be asking instead

This isn’t a vendor problem or a security team problem. It’s an everyone problem. HR leaders, TA leaders, and ops leaders are all approving, deploying, and championing AI tools right now — which means we are all responsible for asking better questions about how they work.

Data protection isn’t being replaced here — it’s being expanded. The shift isn’t about swapping one question for another. It’s about widening how we think about control, because the thing we need to control has changed.

Ready to move beyond outdated frameworks? Join us at Cultivate US or Cultivate Europe to explore what responsible AI governance looks like in a generative-first world. 
