The Architecture of Intellectual Retreat

Blog Header "The search for certainty led philosophers to accept pseudo-solutions rather than sit with the discomfort of not knowing." — Hans Reichenbach

There is a moment in every intellectual exchange when discomfort arrives. A question lands that doesn't fit. A challenge emerges that threatens the architecture we've built. In that moment, we face a choice—perhaps the most important choice any thinker can make.

We can engage. Or we can retreat.

Today I watched a philosopher of mind choose retreat. And in that choice, she demonstrated everything I have been trying to articulate about the crisis of intellectual discourse in the age of AI.

Her name is Ellen Burns, PhD. She runs a Substack called "AI's Without Minds." Earlier today, she blocked me for asking questions about her arguments.

This essay is not revenge. It is a case study. Ellen's work, her claims, her responses, and ultimately her retreat provide a concrete example of a pattern I believe is damaging our collective ability to make progress on questions that matter.


Ellen Burns completed her PhD at Columbia University in 2022. Her dissertation was titled "Thinking with Chomsky about the very idea of the mind-body problem." She is now a visiting assistant professor at College of the Holy Cross.

Her Substack's stated mission: "Dissecting the illusion that AI's are conscious responsible agents." The title—"AI's Without Minds"—announces the conclusion before any investigation begins.

This week, she published a piece titled "Do LLMs have Language?" with the subtitle "exploring language as a biological object of the human mind/brain."

Notice what has already happened. Before any argument is made, the terms are set: language is biological, therefore the question of whether LLMs have it is already answered. This is not inquiry. This is definition as conclusion.


In her piece, Ellen writes: "If we understand language to be a computational system of the human mind/brain, LLMs certainly don't have it."

Read that again.

If we define language as biological, then non-biological systems by definition cannot have it. This is a tautology dressed as an argument. The conclusion is built into the premise. There is nothing to investigate because the investigation was foreclosed by the framing.

I pointed this out in a comment before her piece was even published, when she previewed it: "Ellen, you are begging the question here. You have already answered your own question and there is no need to read your argument as you have already made it impossible to counter. You are exploring if LLMs can have language (a non-biological entity) and have already defined language in this post as a 'biological object'. By your own definition an AI cannot possess language."

The response to this structural critique was not engagement. It was not "here is why the definition is justified independently of the conclusion." It was not "let me clarify the argument."

The response was silence. And then, today, when I continued to ask questions: blocked.


Ellen's argument rests heavily on the concept of MERGE, a recursive operation from Chomsky's minimalist syntax. She defines it: "Merge is an operation that takes two syntactic objects, call them, X and Y, and forms a new object, call it, Z, defined to be the (unordered) set, {X,Y}."

This is correct. MERGE is a recursive set-forming operation that builds hierarchical structures.

But notice what MERGE is: it's recursion. The ability to take elements and combine them into new structures, which can then be combined into further structures, infinitely.

Recursion is computation. It is fundamental to computer science. It is exactly what computational systems do.

If Chomsky's minimalist program is correct—if MERGE/recursion is the core of human language—then computational systems that demonstrably perform recursive operations, producing exactly those hierarchical structures, should satisfy the definition. The minimalist framework, by stripping language down to recursive operations, makes the human/machine distinction harder to maintain, not easier.
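To make the point concrete, here is a minimal Python sketch of MERGE exactly as defined above: an operation taking two syntactic objects and returning their unordered set. The toy lexical items are my own illustration, not Ellen's or Chomsky's.

```python
# A minimal sketch of MERGE as quoted above: Merge(X, Y) = {X, Y}.
# frozenset is used because Python sets must contain hashable elements,
# which lets the output of one merge serve as input to the next.

def merge(x, y):
    """Combine two syntactic objects into a new unordered set {x, y}."""
    return frozenset({x, y})

# Build a clause hierarchically. The second call takes the output of
# the first as input -- that reuse is the recursive step.
dp = merge("the", "dog")        # {the, dog}
clause = merge(dp, "barks")     # {{the, dog}, barks}

print(clause)
```

The second merge consumes the output of the first; that reuse of output as input is the recursion, and it is an entirely ordinary computation.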

Ellen uses the minimalist framework to argue LLMs can't have language. But her own framework undermines her conclusion.


Ellen makes a testable assertion: "Notice also how the structure of these sentences are nothing like what an LLM produces."

Her example: "This fact that John whom Mary loves runs surprises me."

This is center-embedded recursion. And it is trivially falsifiable. LLMs produce exactly these structures. All the time. Ask any LLM to generate sentences with center-embedded clauses and it will do so effortlessly.

The empirical claim does not hold. It can be checked in thirty seconds by anyone with access to an LLM.
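It is also worth seeing how mechanically such structures arise. The following toy generator, with vocabulary invented purely for illustration, produces center-embedded relative clauses of arbitrary depth through plain recursion:

```python
# A toy recursive generator for center-embedded relative clauses,
# e.g. "the cat that the dog that the man fed chased ran".
# Vocabulary and structure are invented for this illustration.

def embed(nouns, verbs):
    """Nest each noun/verb pair inside the previous one."""
    if len(nouns) == 1:
        return f"the {nouns[0]} {verbs[0]}"
    inner = embed(nouns[1:], verbs[1:])
    return f"the {nouns[0]} that {inner} {verbs[0]}"

print(embed(["cat", "dog", "man"], ["ran", "chased", "fed"]))
```

A few lines of recursion suffice to generate the very structures claimed to be beyond computational systems.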

When I prepared to point this out, along with the observation that her reliance on Chomsky needed updating—Chomsky has evolved significantly since his 1960s-80s frameworks—Ellen's response was: "Your 'facts' are way off here. What you just said is factually incorrect."

No explanation. No counter-evidence. Just assertion.

I corrected myself on the timing (the Minimalist Program emerged in the early 1990s, not 2000s), because I correct myself when I am imprecise. That is what intellectual honesty requires.

The underlying point remained unaddressed: if MERGE is recursion, and recursion is computation, then what exactly excludes computational systems from "having language" by Ellen's own framework?

No response. Blocked.


Ellen writes: "LLMs 'acquire' language on this view by probabilistic patterns of reasoning".

This is not how LLMs work. This is a myth I have been correcting in these threads repeatedly.

LLMs do not store tables of co-occurrence statistics and look them up. Each token is represented as a vector in a high-dimensional space. The relationships between tokens are geometric. The operations are linear algebra, the same mathematics that underlies quantum mechanics and relativity.

To characterize this as "probabilistic patterns" is to fundamentally misunderstand the technology being critiqued. It would be like characterizing a symphony as "probabilistic air vibrations." Technically not false, but missing everything that matters about the structure.
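Here is a toy illustration of that geometric picture, with invented 4-dimensional vectors standing in for real embeddings (which have thousands of dimensions):

```python
# Toy sketch: tokens as vectors, relatedness as angle between them
# (cosine similarity). The three vectors are invented for illustration.
import math

def cosine(u, v):
    """Cosine of the angle between vectors u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(a * a for a in w))
    return dot / (norm(u) * norm(v))

king = [0.9, 0.7, 0.1, 0.3]
queen = [0.8, 0.8, 0.2, 0.3]
banana = [0.1, 0.0, 0.9, 0.7]

print(cosine(king, queen))   # nearby directions: related tokens
print(cosine(king, banana))  # distant directions: unrelated tokens
```

Nothing here is a lookup table of word frequencies; it is geometry, which is why "probabilistic pattern matching" is such an impoverished description.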

A philosopher claiming to be an authority on AI and language should understand how the systems actually work before making claims about what they can and cannot do.


Let me trace the full pattern of engagement that led to being blocked:

  1. Ellen posts: Strong claims about what AI cannot do.

  2. I ask for evidence: "Can you cite the studies that support the claims?"

  3. Response: Silence, or "I'm tired of justifying this to you."

  4. I point out circularity: The definitions assume the conclusions.

  5. Response: "Your facts are way off."

  6. I correct myself where imprecise and press the actual point: The framework undermines its own conclusions.

  7. Response: Blocked.

Earlier in our exchanges, Ellen had written: "I've sometimes received comments on here to the effect that I appear to be dogmatically rejecting that AI have minds, or that I am just biased against AI... Why not respond to me instead with all of the 'hard evidence' that AI has a mind, or showing me where in my work I have said things that are false?"

I took this seriously. I attempted to show her where her definitions begged questions, where her empirical claims were falsifiable, where her framework contradicted its own conclusions.

The response to "show me where in my work I have said things that are false" was to block the person who tried.


Ellen is not the disease. She is a symptom.

The disease is this: we have confused our ideas with ourselves. We have built identities around frameworks. We have invested careers in conclusions. And when someone challenges the framework—not the person, the framework—we experience it as an attack on our being.

Ellen spent five years writing a dissertation built on Chomsky. Her professional identity is "philosopher of mind who takes linguistics seriously." Her Substack is "AI's Without Minds."

When someone points out that the Chomskyan framework, properly understood, might not support her conclusions—when someone notes that MERGE is recursion and recursion is computation—they are not attacking Ellen. They are asking about the structural integrity of an argument.

But Ellen cannot hear it that way. The walls of the fortress interpret all incoming signals as threats.


Ellen's piece references other linguists on Substack. She recommends readings. She builds community with those who share her framework.

This is not inherently bad. Communities of inquiry are valuable. Shared vocabulary enables deep work.

But notice what is absent: engagement with those who challenge the framework. The block is not a response to harassment—I asked questions, I cited sources, I corrected myself when imprecise. The block is a response to dissonance.

In one of her recent posts, Ellen argued against functionalism by claiming we should study each organism's mind "on its own terms" rather than seeking generalizable mental concepts across species.

I pointed out: "You are arguing against how scientific theories and generalization work. This is a cornerstone of the scientific apparatus."

Science is generalization. You observe patterns across instances and extract principles. If you can't generalize, you don't have a theory. You have a collection of isolated descriptions.

Ellen is invoking science constantly while rejecting the basic logic of scientific theorizing.


Ellen positions herself as a voice against AI hype. Her stated goal is to counter the breathless claims about AI consciousness and capability.

But in the very act of countering hype, she generates hype of a different kind. Strong claims about what AI cannot do. Definitions that guarantee conclusions. Dismissal of evidence that challenges the framework.

The skeptic and the believer are not opposites. They are mirror images. Both make strong claims without doing the work to justify them. Both retreat when challenged. Both protect their frameworks from dissonance.

Real inquiry lives in the middle: holding conclusions loosely, engaging with challenges honestly, revising when the evidence demands it.

Ellen claims to value science. But science requires that you answer your critics, not block them.


Ellen's retreat is not merely a personal failing. It is a demonstration of exactly what I have been writing about for some time now:

The silos are real. Academic philosophy of mind and computer science do not speak the same language. The philosophers critique technologies they do not understand. The engineers build systems without engaging philosophical questions. And in the gap between them, confusion flourishes.

Ideas become identities. Ellen cannot revise her framework without threatening her dissertation, her professional positioning, her community. The cost of updating is too high, so she defends instead of inquires.

Credentials replace arguments. Ellen's response to challenge is often to invoke her PhD, her teaching, her expertise. But credentials do not make arguments valid. Arguments make arguments valid.

Echo chambers are chosen. Every block is a brick. Every dismissal is a brick. Every retreat from dissonance is a brick. The walls don't build themselves.


Ellen will not see this essay. She blocked me. But the questions I was asking remain:

  1. If language is defined as a biological computational operation, and LLMs are non-biological by definition, haven't you begged the question?

  2. If MERGE is recursion, and recursion is what computational systems do, then what distinguishes human language from LLM language processing at the structural level?

  3. If you claim LLMs don't produce recursive hierarchical structures, have you actually tested this claim?

  4. If Chomsky himself has narrowed Universal Grammar to "just recursion," doesn't this bring computational systems closer to satisfying the definition, not further away?

  5. If you're going to characterize LLMs as "probabilistic pattern matching," can you demonstrate you understand how vector embeddings and attention mechanisms actually work?

These questions are not attacks. They are invitations to do the work.

The invitation remains open—not to Ellen, who has closed the gate, but to anyone who wants to engage these questions seriously.


I wrote this not out of anger but out of sadness.

Ellen could be a valuable voice in these conversations. She has training. She has read deeply in a tradition that matters. She asks questions that deserve serious engagement.

But she has chosen the fortress over the field. She has chosen defense over inquiry. She has chosen the block over the response.

The field remains.

The questions remain.

The work continues.

And anyone who wants to engage—honestly, rigorously, with willingness to be wrong—is welcome here.


Today I was blocked by a philosopher for asking questions about her arguments. The questions remain unanswered. This essay is not the last word. It is an invitation to continue the conversation she refused to have. I won't block you for disagreeing. I might not always engage, but I won't block you.