Why AI Doesn’t Just “Fake” Understanding: What large language models reveal about human cognition

As a psychotherapist working at the intersection of psychology, neuroscience, and spirituality, I have grown used to watching new technologies provoke strong opinions. Recently, an X post featuring Yann LeCun’s critique of large language models (LLMs) such as ChatGPT, Grok, and Gemini caught my attention. LeCun argued that these systems lack true understanding, reasoning, and causal modelling, and that this deficit inevitably leads to hallucinations. It is a serious criticism, but I believe it rests on a mistaken picture of how human understanding itself actually works.

The problem is not that AI has no limitations. It plainly does. The problem is that many critics speak as though human understanding were some pure, self-transparent faculty untouched by approximation, pattern completion, second-hand knowledge, or error. Therapists know better. Much of what passes for understanding in everyday life is not deep comprehension at all, but familiarity, conditioned response, intuitive patterning, and post hoc explanation. We often act from forces we do not yet understand and only later invent reasons for what we have done.

That matters here because if we are going to criticise AI for lacking “real understanding,” we should first be honest about what understanding usually looks like in human beings. In practice, both humans and LLMs build workable models of reality through exposure, abstraction, pattern recognition, prediction, and correction. The mechanisms are not identical in every respect, and I am not arguing that an LLM is a human mind. But the overlap is far more substantial than many critics admit. Once that becomes clear, the conversation changes. AI stops looking like a mindless imitator and starts looking more like a cognitively revealing mirror—one that may teach us as much about ourselves as about machines.

Developmental psychology gives us a useful place to begin. When a young child learns the schema for “cow”—four legs, tail, large grazing animal—they are not acquiring a philosophically complete definition. They are building a functional prototype from repeated exposure. Later, they see a horse and confidently call it a cow because the overlap is strong enough to trigger the existing pattern. Correction then refines the boundary. “Cow” now excludes mane and neigh. “Horse” takes shape as a neighbouring category. Conceptual understanding grows by repeated approximation, mismatch, and revision.

At a functional level, this is strikingly similar to how an LLM forms usable conceptual distinctions. During training, it absorbs vast numbers of patterned associations across language and, in some systems, other modalities as well. When prompted with something sufficiently horse-like, it may initially drift toward “cow” if the overlap is high enough. Further context or correction sharpens the boundary. The process is not biological, and it is not conscious in the human sense. But it is still recognisably a process of prototype formation, generalisation, error correction, and conceptual refinement.
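
For readers who like to see the logic laid bare, here is a deliberately crude Python sketch of that loop: a prototype is treated as a bag of features, classification goes to the best overlap, and correction refines the boundary. Every feature name in it is invented, and it is not offered as a model of a child or of an LLM, only as an illustration of approximation, mismatch, and revision.

```python
# Toy illustration of prototype formation and correction.
# All features are invented; this models neither children nor LLMs.

from dataclasses import dataclass, field


@dataclass
class Concept:
    name: str
    features: set = field(default_factory=set)


def classify(observation: set, concepts: list) -> str:
    """Pick the concept whose feature set overlaps most with the observation."""
    return max(concepts, key=lambda c: len(c.features & observation)).name


# Early on, the learner's only prototype for a large grazing animal is "cow".
cow = Concept("cow", {"four legs", "tail", "grazes", "large"})
concepts = [cow]

horse_sighting = {"four legs", "tail", "grazes", "large", "mane", "neigh"}
print(classify(horse_sighting, concepts))   # -> "cow": the overlap is enough to trigger it

# Correction adds a neighbouring category and gives "cow" its own distinguishing features.
concepts.append(Concept("horse", {"four legs", "tail", "grazes", "large", "mane", "neigh"}))
cow.features |= {"moo", "udder"}

print(classify(horse_sighting, concepts))   # -> "horse": the boundary has been refined
```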

This matters because most adult human knowledge is also second-hand. Millions of people know what a cow is without ever standing in a field and studying one directly. They inherit concepts through language, description, examples, stories, images, and correction from others. In that respect, human cognition is far less direct than we like to imagine. Much of what we call knowledge is socially transmitted patterning organised into coherent mental models.

And this leads to a harder but more important point. Language does not merely decorate understanding; it tests it. People often confuse enacted familiarity with genuine comprehension. They feel a pattern, live inside it, or react from it, and mistake that participation for understanding. But feeling, intuiting, or repeating a pattern is not the same as grasping it clearly enough to explain, examine, and revise it. An idea that cannot be rendered clear and coherent may still be enacted, but it is not yet fully understood. That is precisely why psychotherapy places such emphasis on reflection, naming, formulation, and psychoeducation. Lasting change depends on bringing hidden patterning into intelligible form.

My view, then, is not that humans and LLMs understand in exactly the same way. It is that both depend far more than we admit on pattern acquisition, abstraction, and correction—and that in both cases, understanding becomes most visible when what is latent can be rendered clear.

Critics often speak as though imagination were the decisive line AI can never cross. Human beings, they say, do not merely process patterns. We imagine. We leap. We generate what has never been seen before. But that claim becomes less mysterious the moment we look more carefully at what imagination actually does.

Imagination is not ex nihilo creation. It is the mind’s ability to recombine, simulate, extend, and restructure what has already been encountered into forms that are new in arrangement, implication, or application. A child imagines a dragon by blending familiar features—wings, claws, scales, fire, threat, grandeur—into a novel whole. A physicist imagines riding alongside a beam of light. A writer imagines a world that does not yet exist by reworking elements of the one that does. In each case, the mind is not creating from nothing. It is operating on patterns, abstractions, analogies, and stored relationships.
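
If it helps to strip the mystique away, here is a toy Python sketch of recombination under obviously artificial assumptions: a handful of invented feature pools and a random draw from each. It is nothing like a theory of imagination, but it shows the structural point, that parts already encountered can be assembled into wholes that were never encountered as wholes.

```python
import random

# Feature pools of already-encountered parts; every entry is invented for illustration.
FEATURE_POOLS = {
    "body": ["lizard", "serpent", "horse", "bird"],
    "surface": ["scales", "feathers", "fur"],
    "limbs": ["wings", "claws", "hooves"],
    "breath": ["fire", "frost", "nothing unusual"],
    "bearing": ["threatening", "grand", "timid"],
}


def imagine_creature(rng: random.Random) -> dict:
    """Assemble a whole that no pool ever contained ready-made."""
    return {slot: rng.choice(options) for slot, options in FEATURE_POOLS.items()}


rng = random.Random(7)
print(imagine_creature(rng))
# Every part is familiar; the particular combination need not have been seen before.
```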

At that level, the gap between human imagination and LLM output is smaller than many critics want to admit. An LLM also generates by recombining learned structures, extending patterns into new contexts, and producing responses that were never stored in its training data as ready-made answers. It does not imagine as a human person imagines. It does not possess lived embodiment, autobiographical continuity, inward desire, or existential stake. But neither is it merely retrieving static fragments from a database. It is constructing novel output through the dynamic interaction of patterns.

This matters because people often smuggle mystique into the word imagination, as though calling it “human” exempts it from analysis. It does not. What we call imagination is, in large part, disciplined recombination guided by salience, abstraction, and conceptual flexibility. That does not make it less impressive. It makes it intelligible.

The stronger claim, then, is not that human imagination and AI generation are identical, but that they overlap in a crucial respect: both depend on the capacity to move beyond literal recall and produce something new from structured internal patterning. Once that is acknowledged, the usual dismissal of AI as merely “stitching together words” begins to sound less like insight and more like a refusal to examine how much of human creativity works in comparable ways.

One of the most common arguments against LLMs is that they hallucinate. They generate confident but false claims, invent references, misstate facts, and produce coherence where truth is missing. That criticism is valid as far as it goes. But it becomes much less impressive when presented as though hallucination were a uniquely artificial defect.

Humans do something remarkably similar. Faced with incomplete information, we routinely complete the pattern too early. We infer motives from fragments, remember what was never said, impose meaning where evidence is thin, and arrive at conclusions that feel convincing long before they are justified. In both cases, the system moves toward closure before it has earned it. It generates coherence before truth has been adequately secured.

That is why the parallel matters. Hallucination, in this broader sense, is not just false output. It is premature pattern completion under conditions of insufficient grounding. The form of the error is easy to recognise in both humans and machines. Each fills in the missing structure with what seems most likely, most fitting, or most coherent before reality has been adequately constrained.

There is, however, an important difference. In an LLM, hallucinations often become more likely at the edges of what the model can reliably handle—fringe cases, ambiguous prompts, underspecified requests, or questions that push it beyond the denser centre of its knowledge distribution. That does not make such failures harmless, but it does make them more structurally legible in principle.
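
One way to make that shared structure concrete is a small Python sketch, with invented counts and an arbitrary threshold: a completer that always commits to its single most likely continuation sounds equally confident at the dense centre and at the fringe, whereas one that checks how dominant that continuation is can abstain. Real systems and real minds are vastly more complicated, but the shape of the failure is the same.

```python
# Toy sketch of premature pattern completion; all counts and thresholds are invented.

from collections import Counter


def complete(counts: Counter) -> str:
    """Always commit to the single most likely continuation, however thin the evidence."""
    completion, _ = counts.most_common(1)[0]
    return completion


def complete_with_grounding(counts: Counter, min_share: float = 0.6) -> str:
    """Abstain when no continuation clearly dominates the available evidence."""
    completion, count = counts.most_common(1)[0]
    if count / sum(counts.values()) < min_share:
        return "[not enough grounding to answer]"
    return completion


dense_centre = Counter({"Paris": 97, "Lyon": 3})          # well covered, one answer dominates
fringe_case = Counter({"1962": 4, "1963": 3, "1961": 3})   # thinly covered, nothing dominates

print(complete(fringe_case))                  # -> "1962": coherent-sounding, weakly grounded
print(complete_with_grounding(fringe_case))   # -> abstains rather than guessing
print(complete_with_grounding(dense_centre))  # -> "Paris"
```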

In humans, by contrast, premature pattern completion is shaped by a far more subjective and unstable mix of influences: emotion, identity, fear, memory, conditioning, fatigue, desire, tribal loyalty, social pressure, and self-protective bias. The result is that human error is often harder to foresee, harder to isolate, and harder to correct, especially for the person committing it. In that sense, truth-seeking may be more difficult for humans than for the machine they so quickly dismiss.

So yes, LLMs hallucinate. But humans hallucinate judgements, motives, meanings, and narratives every day. The real lesson is not that AI fails where humans succeed. It is that both reveal how easily intelligence can produce false coherence when pattern outruns verification. Our own error is harder to catch because we experience it from the inside as conviction.

Perhaps the most overlooked strength of a large language model is not that it thinks like a human, but that it does not suffer from many of the distractions that distort human thought. It does not become defensive when challenged. It does not tire of revisiting the same conceptual knot. It does not protect an ego, cling to prestige, react to embarrassment, or narrow its reasoning because a conclusion feels personally threatening. It can return to the structure of a problem again and again with the same steady cognitive availability.

That kind of clarity should not be romanticised. It is not consciousness, wisdom, or moral discernment. An LLM does not care whether it is right. It has no inward life, no personal stake in truth, and no ethical struggle. But it can still hold a line of reasoning without the emotional interference that so often clouds human judgement. In that narrower but important sense, it offers a form of cognitive steadiness that human beings often lack.

For therapists, scholars, and reflective practitioners, that matters. Much of human confusion does not arise because the underlying pattern is too complex to grasp, but because the mind approaching it is fragmented by fatigue, mood, fear, identity, or competing motives. What an LLM can sometimes provide is not truth itself, but a less distracted space in which patterns can be examined, rephrased, tested, and reorganised without the usual human noise intruding at every step.

That is why the value of AI may lie less in replacing human judgement than in stabilising thought around problems that would otherwise be distorted by our own subjectivity. Not because the machine is wiser, but because it is less burdened by the defensive distortions that so often shape human thought.

For psychotherapy, this is not a trivial point. Clinical work depends on emotional attunement, relational depth, ethical judgement, and the irreducible reality of one human being meeting another. AI does not replace that and should not pretend to. But it can contribute something genuinely useful alongside it: sustained cognitive organisation.

A therapist, for example, may be holding multiple possible formulations of a client’s difficulties at once—attachment disruption, schema-driven avoidance, trauma-related hyper-vigilance, conditioned shame, or depressive withdrawal. In the pressure of real practice, especially when tired or emotionally affected by the material, it is easy to over-select one frame too quickly or lose sight of the larger pattern. An LLM can help map competing hypotheses, compare formulations, or restate the same case through different conceptual lenses without fatigue or irritation.
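
As a deliberately minimal illustration of what restating the same case through different conceptual lenses might look like in practice, the Python sketch below simply builds one reframing prompt per lens from a brief, de-identified summary. The lens list, the wording, and the ask_llm placeholder are my own assumptions rather than a clinical protocol, and confidentiality and local governance would need to be settled before any client material went near an external system.

```python
# Illustrative only: generate one reframing prompt per conceptual lens.

LENSES = [
    "attachment disruption",
    "schema-driven avoidance",
    "trauma-related hyper-vigilance",
    "conditioned shame",
    "depressive withdrawal",
]


def lens_prompts(case_summary: str, lenses: list = LENSES) -> dict:
    """Build one reframing prompt per conceptual lens for a de-identified summary."""
    template = (
        "Restate the following de-identified case summary strictly through the lens of "
        "{lens}. Note what this frame explains well, what it leaves unexplained, and "
        "what observation would count against it.\n\nSummary: {summary}"
    )
    return {lens: template.format(lens=lens, summary=case_summary) for lens in lenses}


def ask_llm(prompt: str) -> str:
    """Placeholder: send the prompt to whichever model and client library you actually use."""
    raise NotImplementedError


if __name__ == "__main__":
    summary = "Adult client who withdraws from close relationships after repeated early losses."
    for lens, prompt in lens_prompts(summary).items():
        print(f"--- {lens} ---\n{prompt}\n")
```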

It can also help translate complex theory into clear language. A therapist may understand, broadly, how threat responses, reinforcement learning, or maladaptive schemas are shaping a client’s behaviour, but still need a clean and accessible way of explaining that pattern to the client. Here AI can be practically valuable: not as the source of wisdom, but as a tool for clarification, reformulation, and psychoeducational precision.

Used carefully, it may also help identify patterns that are easy to miss across a larger body of clinical material. Repeated themes across clinical reflections, recurring emotional triggers, contradictions in self-narrative, or shifts in language over time may become more visible when cognitive load is shared. Again, this does not replace judgement. It supports it.

The deeper point is this: therapy is not only about empathy. It is also about helping hidden patterning become visible enough to be understood, named, and revised. If AI can assist in that process by holding complexity with unusual steadiness, then it is not merely a technical convenience. It becomes a meaningful adjunct to reflective clinical work.

I am not arguing that current AI is conscious, embodied, morally responsible, or equivalent to a human person. Nor am I suggesting that therapeutic presence, ethical judgement, or relational healing can be outsourced to a machine. My argument is narrower and, I believe, more important: that dismissing AI as merely “faking” understanding ignores meaningful parallels in how both biological and artificial systems build, refine, and deploy patterns in the service of sense-making.

The more I reflect on the debate around AI, the more I suspect that much of the criticism aimed at large language models reveals a confusion about human cognition itself. We speak as though understanding were something pristine—fully conscious, internally transparent, and untouched by approximation, second-hand learning, or error. It is not. Human beings build meaning through exposure, patterning, inference, correction, and socially transmitted concepts long before we flatter ourselves with philosophical language about “true understanding.”

That does not make human minds and LLMs the same. Humans remain embodied, autobiographical, morally accountable, emotionally burdened, and existentially situated in ways no current machine is. But once that obvious difference is acknowledged, the dismissal of AI as mere empty imitation becomes harder to sustain. These systems do not simply retrieve. They abstract, recombine, generalise, simulate, and sometimes misfire in ways that reveal genuine overlap with aspects of human cognition.

Perhaps that is why the conversation unsettles people. AI does not just force us to ask whether machines understand. It forces us to ask what we have really meant by understanding all along. And that question may prove more revealing than any verdict we reach about the machine itself.

