The Digital Transformation Playbook

Relying On AI Can Weaken Your Critical Thinking

17 min · 19 March 2026

How does frequent AI usage impact our critical thinking skills? A polished answer in three seconds feels like magic until you realise it might be dulling the very skills that keep you sharp. 

We use Google Notebook LM agents to dive into new research from SBS Swiss Business School that links frequent AI use to a measurable drop in critical thinking, and we unpack the mechanism driving it: cognitive offloading that skips the friction where learning happens.

The twist that surprised us both is who is most at risk and why authority bias makes smooth interfaces look like truth.

At A Glance / TLDR:

  • key findings from a 666-participant SBS Swiss Business School study
  • how cognitive offloading moves from storage to judgment
  • why authority bias and polish increase AI trust
  • why younger digital natives are more vulnerable than older cohorts
  • education as a protective buffer measured by the Halpern assessment
  • productive friction for AI UI design and human-in-the-loop norms
  • assessment redesign that grades thinking process, not just product
  • practical habits to keep the judgment phase with the human

We walk through the study’s methods and findings in plain language, from ANOVA-backed evidence to interviews that map how users hand over the “mental driving.” Younger digital natives show higher AI trust and dependence, while older cohorts carry a built-in scepticism forged in clunky tech eras. Then comes the good news: education functions as a cognitive shield. Using the Halpern Critical Thinking Assessment as a lens, we explore how training in reasoning, hypothesis testing, and probabilistic thinking lets people treat AI as raw material rather than an oracle.

From there, we get practical. For technologists, we argue for productive friction: interfaces that surface contradictions, show uncertainty, and prompt verification. For policymakers, we outline human-in-the-loop requirements where it matters and better guardrails for developmental settings. For educators, we propose assessment redesign that grades the thinking process—version histories, prompt audits, oral defences—so students learn to critique AI rather than copy it. Finally, we share daily habits to keep your edge: treat the model like a fast intern, verify sources, ask for counterarguments, and never surrender the judgment phase.

If you care about staying sharp in an AI-saturated world, this conversation offers evidence, frameworks, and tools you can use today. 

Subscribe, share with a colleague who leans on AI, and leave a review telling us your rule for when to trust the machine.

Support the show


Contact my team and me to get business results, not excuses.

☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ [email protected]
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK



The Digital Transformation Playbook with Kieran Gilmurray is available on multiple platforms. The information on this page comes from public podcast feeds.