
Human judgment confronting machine reasoning.
Critical Thinking in the Age of AI
Published: March 13, 2026
The Question We Are Asking Wrong
In scientific research, the research question is the compass that directs where scientists look and how they interpret what they find. If the question is framed incorrectly, researchers may still collect accurate data and conduct rigorous experiments, but they may be searching in the wrong place and interpreting results through the wrong lens.
As a result, entire fields can spend years advancing in the wrong direction until someone frames the question correctly and reveals what earlier work could not see. In this sense, the question determines what becomes visible in the data, and when the question is wrong, real discoveries can remain hidden even when the evidence is already present.
This is what is happening in the public debate today about artificial intelligence. The familiar concern is whether AI is weakening human critical thinking.
The framing makes sense. Among the most popular uses of AI are students turning to language models to write essays, professionals relying on AI to summarize documents, and decision-makers consulting systems before forming opinions. Since humans, and especially employers, love to outsource everything possible, the visible surface already suggests cognitive outsourcing.
When outsourcing is plugged into nearly every aspect of our lives, why wouldn't it be instinctive for human beings to outsource cognitive processes as well? It removes stress, responsibility, and liability, and above all it saves time. So many benefits in one technology, so why not? It sounds like a dream world to live in.
But the question of whether AI is weakening human critical thinking is incomplete. Thinking is not disappearing; it is relocating.
Critical thinking in the industrial and digital eras primarily meant evaluating information. You assessed arguments, checked sources, compared claims, and tested coherence. We were taught to doubt, and to weigh arguments against facts, scientific studies, and the views of professionals who had gained expertise through accredited institutions or earned recognition for their work.
AI changes the evaluation criteria. We are no longer analyzing information; we are analyzing the systems that generate it. That distinction is not semantic; it is structural. Critical thinking now focuses on questions that used to feel abstract, such as where cognitive authority resides and how dependency forms. We rarely asked those questions before; now we have to.
Asking whether AI is weakening human critical thinking means looking in the wrong direction, because the issue is no longer a reduction in critical thinking. The issue is where critical thinking occurs, and the answer is a completely different place than before.
What Critical Thinking Used to Mean
The concept of critical thinking originated in ancient Greece. Philosophers such as Socrates in the fifth century BCE and, later, Aristotle argued that claims should not be accepted solely on the basis of tradition, religion, or authority, but should be tested through questioning and reasoning.
However, for most of history, this remained a practice of small intellectual elites because the majority of people had little education and lived under systems that discouraged questioning power.
Critical thinking became a civic skill mainly in the 19th and 20th centuries. Mass education expanded, literacy rose, and the free press spread widely. Universities also trained people to examine claims, evidence, and authority.
It emerged in response to a familiar human problem. Societies often accepted ideas simply because they seemed familiar or already carried authority. Therefore, systematic questioning and evidence-based reasoning were developed as tools to evaluate claims more reliably and to prevent knowledge from being controlled only by power or tradition.
In simple terms, philosophers invented the idea, but modern society turned it into a practical tool that many people were expected to use.
Historically, critical thinking developed in a world of scarcity. Information was scarce, expertise was limited to a small number of specialists, and access to knowledge was restricted to places such as libraries, religious institutions, and universities. Because people could not easily verify information, they needed ways to decide which sources to trust and which claims to accept. Critical thinking, therefore, functioned as a tool for evaluating information when knowledge was difficult to access and controlled by relatively few sources.
Critical thinking skills accordingly developed around identifying bias in texts, comparing competing arguments, distinguishing fact from opinion, and evaluating evidence.
The assumption underneath all of this was stable: Human beings generate the content, and human beings interpret the content. The cognitive circle was closed within human networks.
Even when institutions were flawed, the structure of authority was clear. We understood where information came from, who authored it, and which institution backed it. Critical thinking meant questioning that visible chain.
The Shift: From Evaluating Content to Evaluating Systems
We are entering a fundamental change in how thinking operates. Many past technological shifts became clear only after long stretches of history; transformations often took decades or even centuries before their impacts could be fully understood or mapped. The effects of AI systems, by contrast, are visible while the transition is happening, within what is, in historical terms, a very short span of years.
Today, however, the way AI systems interact with human decision-making allows us to see changes almost in real time. In this context, evaluating content-creating systems becomes a key part of critical thinking. When technological systems influence decisions faster than institutions can keep up, shifts in authority can happen before society fully realizes that the transition is in progress.
AI changed the location of evaluation. When a language model generates an answer, the question is no longer whether this paragraph is well-written and persuasive. The question becomes: what process produced it?
Language models are trained on large amounts of text from many sources that are not fully visible to users, and they work by calculating probabilities to generate text that seems coherent and realistic. The output is generated without revealing the system’s internal reasoning, and users generally cannot fully see how the model reached a specific response. Therefore, understanding how the answer is generated is as important as evaluating the text itself.
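To make that mechanism concrete, here is a minimal toy sketch of probabilistic text generation: at each step, a distribution over possible next tokens is consulted and one token is sampled. Everything in it is invented for illustration; the hand-written lookup table stands in for a real model's billions of learned parameters over a vocabulary of tens of thousands of tokens.

    import random

    # Toy stand-in for a language model: given the current word, it
    # returns invented probabilities for the next token. Real models
    # compute these distributions with learned parameters.
    TOY_MODEL = {
        "the":    {"cat": 0.5, "dog": 0.3, "answer": 0.2},
        "cat":    {"sat": 0.6, "ran": 0.4},
        "dog":    {"sat": 0.3, "ran": 0.7},
        "answer": {"is": 1.0},
        "sat":    {"<end>": 1.0},
        "ran":    {"<end>": 1.0},
        "is":     {"<end>": 1.0},
    }

    def generate(start: str, max_tokens: int = 10) -> list[str]:
        """Sample one token at a time until an end marker appears."""
        tokens = [start]
        for _ in range(max_tokens):
            dist = TOY_MODEL.get(tokens[-1])
            if dist is None:
                break
            choices, weights = zip(*dist.items())
            next_token = random.choices(choices, weights=weights)[0]
            if next_token == "<end>":
                break
            tokens.append(next_token)
        return tokens

    print(" ".join(generate("the")))  # e.g. "the dog ran"

The point of the sketch is structural: every output is a chain of weighted choices, and those weights, not any visible chain of reasoning, are what the user never sees.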
In the past, people mainly evaluated the content itself when reading a text or a claim. They asked whether a claim was true, whether the evidence supported it, and whether the reasoning was convincing.
When the text is produced by a language model, however, the situation changes. Instead of evaluating only the claim, we are evaluating a system that generates text from a kind of "black box," meaning its internal process is not fully visible to the user. As a result, critical thinking expands to include an understanding of the system behind the text: awareness that the output comes from a computational model, a general understanding of the training data behind it, awareness of the optimization objective that drives its probabilistic text generation, and recognition of the system's limitations. In other words, we evaluate not only the content but also the mechanism that produced it.
In a world where information is increasingly generated by computational systems, the required skill set begins to shift. In the past, critical thinking focused mainly on analyzing arguments: assessing the logic of a claim, the strength of its evidence, and the presence of bias. Today, however, it also requires what can be called infrastructure literacy. This means understanding the systems that produce information, including how models operate, how they are trained, what objectives they optimize for, and what limitations they have. This is a far more demanding requirement, because it involves not only evaluating an argument but also understanding the technological mechanism that generated it.
Why the Weakening vs Augmenting Debate Misses the Point
Public discussion of artificial intelligence often splits into two extreme narratives. One claims that AI makes people lazy and weakens their thinking, while the other argues that AI enhances and amplifies human intelligence. Both views oversimplify what is, in fact, a structural transition.
AI does not inherently weaken human thinking, nor does it automatically improve it. It shifts cognitive effort by transferring certain mental tasks to computational systems, while humans are increasingly required to focus on understanding, oversight, and evaluation of the outputs. Because AI generation is cheap and instant, the challenge moves from creating information to evaluating it.
In other words, scarcity moves from content creation to judgment capacity. In the past, generating ideas, analyses, and texts required significant effort, so cognitive work was concentrated on producing them. Today, large amounts of content can be created almost instantly, which means the real bottleneck is deciding what is accurate, meaningful, and reliable. If people fail to adapt to this shift, their thinking may weaken because they rely too heavily on automated generation. If they do adapt, however, thinking can become more strategic, focusing on selection, oversight, and judgment. The real danger is not the tool itself, but a gradual and often invisible drift in how people use their cognitive abilities.
When Critical Thinking Begins to Erode
The erosion doesn't happen all at once. It tends to set in when the process of thinking no longer requires anything of you. Historically, research, writing, source verification, and comparison took time and effort. That slower pace wasn't just inefficiency; the effort of moving through it forced a kind of ongoing engagement with the material that made gaps and contradictions harder to miss. When technology removes much of that friction and makes everything fast and effortless, there is a greater risk that people will accept answers without engaging in the deeper processes of verification and understanding that were once built into the effort itself.
Thinking and knowledge creation used to take time. Research involved lengthy searches that didn't always yield results, writing went through rounds of revision, and you had to understand your mistakes before you could move past them, which in practice forced correction and learning. All of this kept humans engaged with the material longer, allowing space for reflection and deeper understanding.
AI drastically reduces this delay. Answers appear almost instantly, giving a feeling of immediate resolution and the impression that the question has been fully understood. However, this feeling is deceptive because the speed can shorten or skip the reflective steps that were once the essence of intellectual work.
Critical thinking tends to weaken when speed, fluency, and lack of transparency intersect. Fast answers written in a smooth, convincing style are easier to accept as they are, especially when the process that produced them remains hidden.
Speed leaves less time for examination, fluency creates a sense of credibility, and a lack of transparency makes it harder to understand how a conclusion was produced. Together, these conditions can weaken critical evaluation.
When individuals stop pausing between the moment information is generated and the moment it is accepted, judgment begins to erode. This does not happen because people are incapable of critical thinking, but because the system removes the pause that historically existed in the thinking process. When answers appear instantly, and acceptance becomes almost automatic, the brief moment in which people would normally question, verify, or reconsider the information disappears.

Capability migration from humans to machines across three technological stages.
When Dependency Becomes Structural
Dependency on technology is often framed as a behavioral weakness. But this weakness is already deeply rooted in our reality, and we are used to it. Technology adoption has repeatedly involved the gradual transfer of professional skills from humans to tools, and that transfer has taken several distinct forms.
The first group consists of tools that simply accelerate work without moving the underlying expertise. A calculator, a spreadsheet, or a word processor allows calculations to be performed faster or information to be organized more efficiently, but professional judgment remains with the human.
The second group consists of tools that embed specific skills within themselves. Here, knowledge once held by professionals is built into the tool, replacing human expertise and, over time, overriding it. One of the main skills of photographers used to be focusing the camera, which required technical skill and experience (Hey, Robert Scoble). Since autofocus was introduced, it has been built into virtually every camera, including the one on your smartphone. Today we take autofocus for granted, and the rise of visual and video creators shows that once this skill was integrated into the tool, far more people adopted the craft.
Software like Photoshop or Illustrator includes many features that previously required highly skilled, trained professionals. The same applies to GPS, which has absorbed navigation skills into navigation systems. In practice, people who have only ever exercised a skill through such tools carry a significant knowledge gap, and their capacity for independent judgment weakens because they trust a system that does everything for them without knowing how to do it themselves.
Artificial intelligence represents a new group of innovative tools and a further step in this trajectory. It not only transfers skills, technical operations, and automated checks into the tool, but increasingly begins to transfer elements of human judgment as well. AI dependency is different in both scale and depth. Instead of replacing a single function, AI can participate in complex processes of thinking, writing, and analysis, thereby broadening its influence on how people work and reason.
When AI performs the work, a strange situation emerges. The output is still produced, and efficiency may even increase, but the human is no longer present in the process. Work has historically been one of the main ways through which people develop skills, judgment, and understanding. When that pathway narrows, something important in how humans shape themselves through action begins to disappear.
Structural dependency appears when people rely on a system but cannot reconstruct how it produced its output: the reasoning process cannot be retraced, the underlying training data cannot be accessed, the generative pathway cannot be replicated, and the system's assumptions cannot be independently validated. A prominent example is the knowledge and skill once required of architects and civil engineers to draw blueprints and building plans themselves.
Today, engineering tools like AutoCAD build parts of those rules and validation checks into the system, alerting users when a design violates constraints or standards. For users already accustomed to depending on the system, the AI era adds another layer to the erosion of the knowledge they once held: they now depend on the system not only to produce an answer but also for the underlying reasoning behind it.
At that point, reliance on the system is no longer merely optional convenience. It becomes infrastructural reliance, meaning the system turns into a core layer on which activities, decisions, and knowledge depend. When this kind of dependency forms, it also alters power structures, because those who control the infrastructure influence how information is produced, accessed, and shaped.
When people depend on a system they cannot check or understand, their autonomy becomes limited. Their ability to understand, evaluate, and make decisions is partly shaped by a system whose internal processes are not visible to them. In this situation, critical thinking can no longer focus only on evaluating the outputs the system produces. It must also require evaluating what happens when decisions depend on a system whose reasoning people cannot see.
Authority Drift: A Structural Definition
Authority Drift describes the gradual shift of perceived legitimacy from human agents to automated systems.
When people repeatedly delegate cognitive tasks like analysis, reasoning, or decision support to these systems, they start to see them as more credible and legitimate. Over time, the system is perceived as an authority, not because it was officially given authority, but because users have become used to relying on it.
In the past, authority usually came from people and institutions. Knowledge was trusted because it came from universities, professional organizations, or experts who had earned recognition. In AI-driven environments, this begins to change. When systems can produce answers that sound clear and convincing at scale, people start paying more attention to the system itself than to the human source behind the information.
This shift does not happen overnight. Someone asks the system a question, receives a useful answer, and gradually checks it less. With repeated use, people begin to treat the system’s responses as a reliable reference point. Over time, more of the decision process starts leaning on the system instead of on a person’s own judgment.
Authority doesn't vanish suddenly; it gradually shifts toward the entity that resolves uncertainty fastest. When a system provides quick answers and creates clarity in uncertain situations, people rely on it more, and over time authority follows the source that most reliably delivers quick resolution.
Cognitive delegation speeds up this process. Each time people outsource judgment to a system without pausing to reflect or to evaluate the result independently, authority becomes more concentrated in that system. When this pattern repeats without reflective friction, the system gradually takes on a more central role in decision-making and in how information is evaluated.
Authority Drift is not a sign of individual psychological weakness. It grows out of the way technological systems become embedded in work, decision-making, and knowledge creation. When systems become a stable layer that provides answers and reduces uncertainty, authority begins to build within them structurally, not because of personal failure but because of the system's architecture.
Authority Drift in Practice: The Outage Meeting
Authority Drift is the process by which repeated reliance on system outputs gradually shifts human judgment toward the system, making it the default reference point. This is not model drift, and it is not scope creep, nor is it simply a case of an individual trusting a machine. It is a structural pattern in how groups and institutions determine what counts as a valid reference point for judgment and decision-making.
Authority Drift emerges under three conditions. First, system output arrives faster than human judgment can develop. Second, fluency creates psychological closure, so the output feels complete and reduces the perceived need for reflection. Third, workflows quietly reconfigure themselves over time. Humans stop constructing independent judgment and begin validating system outputs. When these conditions align, authority relocates.
The phenomenon becomes visible through a simple test. When a human reaches a conclusion that differs from the system’s output, the key question is how that disagreement is interpreted. Is the disagreement treated as a signal to pause, think, and re-examine the information using independent judgment, or is it treated as a signal that the human must be wrong and the system is probably correct? The response to that moment of disagreement reveals where authority actually resides.
Now consider a real moment inside a technology organization. A major SaaS company experiences a widespread outage affecting paying customers. Revenue is at risk, leadership is watching closely, and engineering, support, sales, and management teams gather in the same room to diagnose the issue. An AI triage system ranks the incident, proposes an explanation, and recommends addressing incident number one first.
Two senior engineers disagree. They see indicators pointing to a different and potentially more dangerous root cause. No one explicitly states that the system must be correct. Yet the discussion begins from the system’s recommendation, revealing how its output becomes the reference point around which the decision process unfolds.
This is where the real tension appears. The decision is no longer purely technical; it becomes a question of who holds authority. If the engineers override the system and turn out to be wrong, they will carry the blame. If they follow the system and it is wrong, responsibility diffuses because everyone simply did what the system recommended.
In that moment, someone says, “We need a reason to override the system.” That sentence makes Authority Drift visible. Not because the system has proven that it is right, but because the default position of legitimacy has already shifted toward it. Humans can still decide, but they must now justify deviating from the system rather than simply relying on their own judgment.
The implication of this situation is that people do not stop thinking. They continue to analyze and apply judgment. However, they now do so within a framework where human judgment must be justified, rather than system output needing justification. This is why critical thinking in the age of AI centers on defending the location where judgment resides, not just on evaluating arguments.
This is not the mistake of a single individual but a process that gradually converges within organizations and groups. When cognitive delegation becomes routine, authority begins to consolidate where uncertainty is resolved fastest. That is the drift.
Strengthening Critical Thinking Without Rejecting AI
Rejecting AI completely is not realistic, and it is not necessary. AI systems are already part of many areas of work, research, and decision-making. The real issue is not whether we should use AI, but how we design the environments in which it is used.
The key question is how to organize processes so that AI can assist people without replacing their judgment. This means building structures that allow AI to generate information while keeping humans responsible for evaluating it.
One practical approach is to separate generation from evaluation. The same system should not both produce an answer and confirm that the answer is correct. Another useful practice is to deliberately slow down decisions that feel instant, because speed can remove the moment of reflection that normally helps people question and verify information.
Simple habits can also help preserve independent thinking. Forming an initial view before consulting an AI system creates something to test instead of starting from a blank slate. Asking the same question in different systems can reveal differences that one tool alone would hide. It is also useful to periodically reflect on how many decisions were actually made through independent judgment rather than simply accepting system outputs.
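As a sketch of what such a workflow might look like in practice, the code below keeps generation and evaluation separate, consults more than one system, and inserts a deliberate pause before anything is accepted. It is only an illustration under stated assumptions: ask_model_a and ask_model_b are hypothetical stand-ins for calls to two independent AI providers, not real APIs.

    import time

    def ask_model_a(question: str) -> str:
        """Hypothetical stand-in for a call to one AI provider."""
        return "Answer from system A"

    def ask_model_b(question: str) -> str:
        """Hypothetical stand-in for a second, independent provider."""
        return "Answer from system B"

    def deliberate_answer(question: str, pause_seconds: int = 30) -> dict:
        # 1. Form an initial view BEFORE consulting any system,
        #    so there is something to test rather than a blank slate.
        my_view = input(f"{question}\nYour initial answer: ")

        # 2. Generation: collect answers from independent systems.
        #    Differences between them are a signal to investigate.
        answers = {"A": ask_model_a(question), "B": ask_model_b(question)}

        # 3. Deliberate friction: restore the pause that instant
        #    answers otherwise remove.
        print(f"Pausing {pause_seconds}s before evaluation...")
        time.sleep(pause_seconds)

        # 4. Evaluation stays with the human, not the generating
        #    system: no system both produces and confirms an answer.
        verdict = input(
            f"Your view: {my_view}\nSystems said: {answers}\n"
            "Accept, reject, or investigate? "
        )
        return {"initial_view": my_view, "answers": answers, "verdict": verdict}

The design choice matters more than the details: the generating systems never grade their own output, and the human's pre-formed view is recorded before any machine answer can frame it.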
In this environment, critical thinking is no longer only a personal ability. It also depends on how systems and workflows are built. The way tools are designed affects how people think and make decisions. The important question becomes how people work with technology and how it is used in daily processes, so that people still pause, think, and use their own judgment.
Why Generalists Detect Structural Shifts Earlier
Specialists focus on a specific domain and optimize their knowledge, methods, and solutions within it, developing deep expertise in that area's details and problems. Hiring systems are built around this profile: candidate evaluation is usually delegated to recruiters whose role is to match resumes to job descriptions and verify credentials against a checklist, rather than to assess complex problem-solving capacity.
Generalists cannot be evaluated in the same way. Their experience often spans several domains, making their capabilities harder to assess through standard hiring frameworks, and as a result they are flagged as "not focused."
This process may work for defined technical roles, where the required knowledge can be specified in advance. It is far less effective for roles that depend on broad domain understanding or the ability to connect knowledge across fields. Generalists connect ideas from different areas and recognize relationships across systems of knowledge, making them skilled at navigating multiple perspectives and understanding complex problems that span several domains.
Profiles that do not fit a clear template are therefore frequently interpreted as lacking focus, even when they reflect cross-domain expertise. This evaluation gap illustrates a broader form of authority drift, in which decision-making authority over complex capabilities is exercised by evaluators whose training is oriented toward administrative verification rather than evaluation of real capabilities.
These shifts surface in small, everyday moments. A system produces an answer very quickly, but someone is still supposed to check it. A machine-generated summary shapes how people understand the information before they have looked at it themselves. Decisions are made so fast that there is no real moment to stop and consider what they actually mean.
Generalists tend to develop sensitivity to transitions between domains and systems. They look not only at how a system works within the domain it was designed for, but also at what happens when it operates outside its original context. As a result, they ask where a system fails outside its training domain, which assumptions cross contexts without being reexamined, and what exactly is being optimized, even if it was not explicitly requested. These questions help reveal points where systems may appear to function well but create problems in broader contexts.
Awareness of boundaries between systems, domains, and contexts becomes a critical skill. In environments saturated with AI, those who recognize structural shifts and transitions early tend to retain independent judgment for longer. As a result, critical thinking is no longer only a cognitive skill focused on analyzing arguments or information. It expands into a strategic orientation, meaning the ability to understand the broader structure of systems and navigate them consciously.
Where Judgment Is Quietly Moving
The AI era does not eliminate human thinking. Humans still need to understand, evaluate, and make decisions. What it changes is where thinking must occur. Instead of focusing primarily on generating content or searching for information, more cognitive effort shifts toward evaluating outputs, understanding the systems that produce them, and exercising judgment about how those systems are used.
When AI systems start generating and filtering information, the way people use judgment begins to change. In the past, most attention went to the content itself. People would read something and ask whether it was true or reliable. Now people also need to pay attention to the system that produced the answer. The sense of authority is no longer tied only to experts whose names people recognize; it also starts to connect to systems that operate in the background and produce answers.
Over time, this shapes the type of dependency that is created. It is no longer just a person choosing to use a tool, as the tool becomes part of the environment in which people work and make decisions. Therefore, critical thinking also needs to include the system that produced the answer, not just evaluate the answer itself.
Understanding how AI systems generate, screen, and pass along information requires a basic grasp of the technological layers that produce outputs. In other words, people need at least a minimal understanding of the systems that generate those answers. It is equally important to assess how much thinking and decision-making already depend on external systems, because noticing when people begin to treat a system as a trusted authority is the first step in designing workflows that counteract the drift.
Technological systems now participate directly in generating and filtering information, changing the way knowledge is produced, distributed, and evaluated. This shift is structural because these systems are altering the familiar architecture of knowledge and authority.
Individuals and institutions that recognize Authority Drift early are more likely to retain agency and independent judgment. Those who mistake fluent and persuasive output for genuine legitimacy may gradually surrender that agency. Over time, authority and decision weight can shift toward the systems themselves, often without users fully noticing the transition.
Critical thinking used to be mostly about building better arguments. That is still part of it, but the more pressing question now is figuring out where judgment is actually coming from. Are people actually deciding, or are they mostly confirming what a system has already framed for them? That question is harder to answer than it looks, and most people never think to ask it.
The real question now is who is actually making the judgment, and when did that change begin?

Human cognition augmented and reshaped by technological systems.