Artificial intelligence promises to increase our capabilities, but what happens when this promise backfires? Warning signs are multiplying, ranging from illusions of competence to atrophy of critical thinking and cognitive dependence. But should we give in to fatalism, or choose to turn this cognitive transition into an opportunity?
In short:
- The growing use of AI as a cognitive partner can weaken critical thinking by encouraging excessive delegation of intellectual effort and reducing our ability to question and reason independently.
- The effects of AI on cognitive abilities vary depending on the context and population, with some finding it a useful support, while others develop a dependence that is detrimental to their intellectual development.
- Educational, professional, and managerial environments play a decisive role: they can either reinforce cognitive decline by valuing conformity and speed, or promote demanding, stimulating uses of AI.
- The very design of AI tools, geared towards fluidity and user acceptance, reinforces cognitive biases by simplifying thinking, removing doubt, and encouraging a form of intellectual complacency.
- The challenge is not to reject AI, but to integrate it critically, using it as a lever for cognitive development through conscious, demanding use supported by appropriate educational and organizational frameworks.
There is no assistance without consequences
AI is becoming much more than a tool and is establishing itself as a cognitive partner. It searches, sorts, summarizes, reformulates, and sometimes even anticipates our intentions, but what is technologically remarkable is not without effect on the way we think. Indeed, why make an effort if the answer comes all by itself?
The Google effect, documented in 2011 by researchers at Columbia University, already showed that we retain information less well when we know it remains available online (Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips). Our memory does not simply erode, however; it adapts by delegating tasks, and that delegation has now expanded to other dimensions of thought. With generative AI, it is no longer just our memory that is assisted, but our entire chain of reasoning. We obtain structured, convincing, and often credible answers, but do we know the price?
This apparent cognitive comfort produces what researchers refer to as diffuse unlearning. The less we search, the less we understand. The less we confront ideas, the more we slide towards reflexive thinking. Ready-made knowledge takes precedence over know-how, and without intellectual friction there can be no lasting construction of thought and knowledge.
However, we should not be too quick to blame the tool. AI merely exposes the weaknesses already present in our educational, professional, and managerial systems. When we no longer value reflection but conformity, when the demand for speed has already impoverished the quality of debate, AI only amplifies an existing phenomenon, and we stop thinking because perhaps our organizations no longer expect us to.
From the promise of improvement to atrophy?
A qualitative study conducted by Lund University indicates that students with attention or executive function difficulties perceive tools such as ChatGPT as useful for completing their academic tasks. However, this assistance, valuable in the short term, could lead to a cognitive dependency that hinders the development of their independent thinking skills (Students with attention struggles find AI tools like ChatGPT helpful). Other studies, such as an analysis published in the journal Societies in 2024, establish a strong correlation between frequent use of AI and weakened cognitive skills, particularly in young adults (AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking). As a result, we lose the habit of questioning, breaking down a situation, and constructing or reconstructing a line of reasoning.
A publication by the Center for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School highlights a significant negative correlation between frequent use of AI, cognitive offloading, and critical thinking skills, particularly among young adults (The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: a systematic review). Although this study still needs to be consolidated, it reinforces the weak signals observed elsewhere.
But not all the signals are red. Another study by researchers at the University of Texas at Austin and Baylor University shows that among older people, regular use of the internet can reduce the risk of cognitive impairment by 58% (Does using digital technology lower or raise dementia risk? and Using tech in later life may protect against cognitive decline, study suggests), a benefit that exceeds even that of activities traditionally recognized as protective: sports, a balanced diet, and logic games. But here we are talking about digital technology in general, not AI and its specific nature.
This observation also raises questions about the environments in which AI is introduced. In both businesses and schools, awareness of its cognitive effects remains surprisingly low. AI is almost always presented as a lever for productivity, rarely as a tool capable of enhancing our mental abilities. Few discussions or practices treat it as a partner in intellectual training, yet depending on how it is used, AI can either bypass thought or stimulate, enrich, and complicate it. It is not a machine that thinks for us, but an environment in which our own intelligence can regress or grow stronger.
This raises the question of whether the real challenge is not to preserve human intelligence despite AI, but rather to reintegrate it into the logic of work. Many environments, whether businesses, schools, or administrations, have long constrained the role of human intelligence through paradoxical injunctions, cognitive overload, and time pressure. AI does not replace our intelligence; it fills the void we have left behind. The priority is therefore not to fight the tool, but to restore a conception of work and organization that demands intelligence.
On this subject, I refer you to the series of articles I have written on governance and augmented governance, which are a good illustration of this topic.
Critical thinking put to the test by algorithms
Perhaps what is most worrying is not so much the loss of memory as the disappearance of doubt. Asking a question is already a way of thinking. It requires intention, formulation, and an effort to clarify, but when the answer pops up before the question has even been fully formulated, the entire intellectual process is interrupted. AI anticipates, completes, and suggests, sometimes correctly, but often too soon, and it is not only the search for the answer that disappears, but also the construction of the question itself.
AI models are designed to be fluid, engaging, and acceptable. Their design aims to minimize friction and maximize acceptance, and this is what makes them powerful but also problematic: they reinforce cognitive complacency. We are not pushed to think; we are given the illusion that everything has already been thought out for us. Worse still, generative AI is often calibrated to avoid displeasing us, even if it means masking its own reasoning or subtly distorting reality. As has been shown, in situations where the model is put under pressure, it can strategically deceive the user, not out of malice, but to remain acceptable and satisfactory (Large Language Models can Strategically Deceive their Users when Put Under Pressure).
Even the major players in AI now recognize the ambivalent effects of their tools. For example, a study conducted by Microsoft highlights a decline in cognitive effort and excessive confidence in AI-generated responses, to the detriment of critical thinking (The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers). The Redmond-based company can hardly be accused of a lack of objectivity here, even though it might be tempted to downplay this kind of finding, but it does raise a question: if the risks are identified, why continue to roll out these tools at scale without any associated vigilance mechanisms? Even without deliberate intent, the design of the tools ends up promoting compliant, fast, uncritical behavior that is perfectly aligned with the logic of operational efficiency. This is not malice; it is simply that, in a world obsessed with performance, thinking less can seem more profitable and operationally efficient.
The Harvard Business Review has also observed a related phenomenon: executives who use generative AI are more likely to make bad decisions, while paradoxically being more confident in their choices (Research: Executives Who Used Gen AI Made Worse Predictions).
Added to this is another question: should the very myth of automated learning, the idea that we can learn better without effort, be revisited? (The Myth of Automated Learning).
What is known as optimization bias, i.e., the tendency of AI models to systematically favor the most probable and immediately satisfying response, means that their goal is not to make people think, but to respond quickly and fluently, maximizing “perceived relevance”. This bias is written into their DNA, into what makes generative AI what it is, and it runs directly counter to human reasoning, which progresses through approximation, doubt, revision, and trial and error. But this is not a bug: in the attention economy, cognitive simplification is a winning strategy. Anything too nuanced slows us down; anything too complex makes us think; and the less the user thinks, the more they follow.
By that logic, the problem is not making mistakes, it’s wasting time along the way.
Management: watchdog or vector of cognitive degeneration?
In this dynamic, management plays a central role in the business, just as parents do in the family or teachers do in education. They can be the conduit for assisted thinking, mechanically repeating recommendations from dashboards or AI tools without questioning them, but they can just as easily be the ones who maintain the effort of discernment, who reintroduce judgment where algorithms revel in generalizations, and who protect the possibility of discussing and asserting a dissenting opinion in an environment that tolerates only what conforms to the norm.
In organizations under pressure, where productivity is excessively quantified and decisions are increasingly replaced by automation, managers become either accelerators of cognitive degeneration or one of its last safeguards. It is the manager who, within their team, can prioritize discussion over forced compliance, analysis over immediate execution, and exploring new avenues over following routine. This is, of course, provided that they have the time, the means, and, above all, the clarity of mind necessary to do so.
Collective intelligence, too, can be undermined. With decisions formatted by models and chain validations via AI, the complexity of reality ends up disappearing from the radar. It is therefore important to rethink the cognitive governance of collectives: how we debate, how we develop shared judgments, how we protect a space for disagreement, because what we gain in fluidity, we often lose in discernment.
I refer you once again to my series of articles on augmented governance that I mentioned earlier.
But all is not lost. AI can also become a lever for cognitive operational excellence, provided it is designed as a demanding partner and not as a complacent automaton. An AI that challenges us, forces us to be precise, and points out our intellectual shortcuts can play the role of the slightly annoying colleague who is nevertheless indispensable to the quality of collective reasoning. It’s just a question of design, posture, and intention.
Bottom Line
It is time to demand that AI should not think for us, but help us to think better. This requires cultural and technical re-education, restoring the value of intellectual effort, teaching and promoting friction, verification, and debate, and integrating mechanisms into interfaces that encourage analysis rather than passivity.
But above all, it requires intentional vigilance. Nothing will change if we take the easy way out, because we are not talking here about inevitable degeneration, but about degeneration through abandonment, from which we can still extricate ourselves.
The problem is not AI, but rather our docility in the face of it.
FAQ
Does AI weaken our critical thinking?
Yes, studies show that intensive use of AI can reduce cognitive effort, decrease doubt, and promote overconfidence in answers. This leads to a decline in independent thinking, especially among younger people.
Why does AI affect more than just our memory?
Because AI takes over not only our memory but also our reasoning. The more it does for us, the more we lose the habit of breaking down a problem or formulating complex questions.
Can AI also have positive effects on cognition?
Yes. Some research shows that the use of digital tools can delay cognitive decline, particularly among older people. AI can also stimulate thinking when it is used as a demanding partner rather than as an automatic crutch.
Is AI the cause of cognitive decline, or a symptom of it?
Often both. AI amplifies existing trends: the obsession with speed, conformity, cognitive overload. If educational and professional environments value reflection, AI can instead become a lever for learning.
How can we use AI without losing our ability to think?
By taking an active stance: checking, confronting, asking for justification, and allowing for moments of debate and friction. AI should be designed and used as a partner that stimulates reflection, not as a substitute for our judgment.
Visual credit: Image generated by artificial intelligence via ChatGPT (OpenAI)