How Einstein Held Himself Back
“Trying is the first step towards failure” — Homer J. Simpson
Nearly every day, someone tells me what I meant. Not what they thought I meant — what I meant. I say, “No, that’s not what I’m saying.” They argue with me about it. They tell me I am wrong about what I meant. “I don’t mean any disrespect, but you couldn’t have meant that. You must have meant [insert their interpretation of what they think is going on inside my head - see figures 1 and 2] because of [insert some word or phrase] you said. That always means [insert incorrect interpretation of what I meant].”
Think about that for a second. Someone who is not me, who does not live inside my head, who does not share my experiences or my definitions, told me that my meaning, what my intent was — the thing I am the only authority on — was incorrect. And then demanded I defend their interpretation of my words instead of my own. If I do, I have assumed their premise and already lost. What a tricky trick!
This happens to me constantly. I stopped trying to count how often it happens; it’s just demoralizing once you see it. Best not to focus on it. Here is the thing: it probably happens to you too. And I think it’s one of the biggest problems we don’t talk about, because it sits underneath almost every argument, every misunderstanding, every collapsed conversation, and every stalled theory in the history of human thought.
Including Einstein’s.
I have been thinking about this one for a while. I want to share it with you now. This post is about what happens when we confuse our perspective with reality. When we mistake our interpretation for the thing being interpreted. And how even one of the greatest minds in history couldn’t escape that trap — and what it cost him (and us).
A Theory As a (Subjective) Perspective
Everybody has a theory. You might not call it that. You might call it a worldview, a belief system, a philosophy, common sense, “just how things are.” But it’s a theory. It’s your perspective — your subjective, experiential framework for how reality works, built from everything you’ve lived, everything you’ve read, everything you’ve been told, and everything you’ve felt. It doesn’t matter whether you actually remember it or not; everything you have experienced has influenced you. That’s what a theory is. It’s a perspective. Someone’s or something’s perspective applied to the evidence in order to figure out what it means.
This means every theory is incomplete. Every single one. Because no perspective can be complete (this is subjectivity at its most fundamental). No one is omniscient. Not you, not me, not Einstein, not the person with the PhD, not the person without one. We are all working with partial information, filtered through our own experience, interpreted through our own definitions. That’s not a flaw in the system. That is the system. 1
Kuhn pointed at this directly. Theories aren’t neutral containers — they’re frameworks tied to who holds them. Two scientists working from different paradigms can literally observe the same phenomenon differently. Not interpret it differently — see it differently. Perspective shapes perception itself, not just what we do with perception afterward. Everyone experiences and thus interprets reality differently, but all are valid within their own reference frame. Sounds like relativity, huh?
So when someone says “I don’t agree with that theory,” what they might be saying is: my perspective doesn’t align with that perspective. Fine. But here’s where it gets important — rejecting the theory does not dissolve the evidence the theory was built to explain. The evidence is still there. Something still happened. You still have to deal with it. Sometimes when we don’t agree with a perspective, we discount the evidence as well. Sometimes we find ways to integrate it. Most struggle to detangle them.
This is the formal concept of underdetermination of theory by evidence. 2 Multiple theories can account for the same evidence, but if you reject one theory, you don’t get to also reject the objective, empirical evidence that was backing it. You just need a better theory. And if you don’t have one? The evidence doesn’t care. It’s still sitting there, demanding an explanation. Of course, if the evidence is not strong, that is a different situation altogether.
We do this with perspectives we don’t like. We don’t want to believe somebody who’s telling us something that conflicts with what we believe. But if they’re reporting what they experienced in earnest — not deceiving, not performing, just telling you what happened to them — that experience is a data point. It’s real. Whether you agree with their interpretation of it is a separate question. The experience happened. Dismissing it because you reject their theory about it is throwing out data because you don’t like the framework attached to it.
That’s the same mistake I think Einstein made. But I’ll get to that.
What Is Expertise?
I’m not a trained linguist. I am, however, a linguist. I have just as much expertise as a trained linguist. It is just in a different area of linguistics. We study the same thing from different angles. It does not make me or them any more of an expert than the other. That is ridiculous if you think about it. Like arguing over which blade of grass is the best one. I believe I used that already, but it is a good one so I don’t mind recycling.
The distinction I just made matters, so let me be precise about it. A trained linguist is someone who has gone through a formal program — taken the courses, passed the exams, written the papers, gotten the degree. That’s real. That’s an accomplishment. I’m not dismissing it.
But being trained and being expert are not the same thing. We like to think they are. We conflate them constantly. The credential becomes a proxy for competence, and once the proxy is in place, we stop checking whether the competence is actually there. I believe there is evidence to support this, from a cursory look, but I have not vetted it myself yet. That is the process. Here I am working things out. Then I see if I am on target or not and adjust when I am off. Notice that I adjust to reality, it doesn’t really matter what I think about it. Reality does not revolve around me.
What the credential tells you is someone completed a program. It does not tell you they understand the thing the program was about. Sometimes they do. Often they do (we hope). But the certificate is not the understanding. It is evidence of exposure — not proof of mastery. You did a thing and some institution verified you did the thing. But think of this: is every program and institution created equal? Are credentials comparable? That is a big Pandora’s box that I am not opening further.
Now — I want to head something off here. There are fields where credentials actually do track with demonstrated competence. Medicine, for example. You can’t fake being a doctor for very long because reality checks you — your patients either get better or they don’t. There’s malpractice. There are outcomes you can measure. The system has built-in accountability (when it’s working — and yes, there are bad doctors, and institutions that protect them, that’s what malpractice is, but the infrastructure to catch it exists). Particle physics has this too. You’re dealing with reality directly. Your experiment either produces the results or it doesn’t. Same with computer scientists. It either compiles or it doesn’t. There is no in between.
But in fields where you can theorize without having to prove it works in the real world? Philosophy. Theoretical linguistics. Certain branches of academia. You can have a PhD and never demonstrate that your ideas solve real problems. You can publish a theory that’s wrong, get cited by others who don’t verify it, build a whole career on it, and never face accountability because there’s no mechanism forcing you to test it against reality. That’s not a statement about every philosopher or every theorist. It’s a statement about the system. Some fields have built-in reality checks. Some don’t. And we should be honest about which ones do.
To be clear, this does not mean that no one in those fields deals in reality just because there is no accountability. Many are incredibly rigorous. But that is a standard they hold themselves to (and it should be respected and celebrated); not everyone does that when there isn’t anyone there to make them.
Trust but verify. That’s the operating principle. Don’t assume someone’s wrong because they lack credentials. Don’t assume someone’s right because they have them. Check the work. See if it holds up. See if it produces results. The work either works or doesn’t.
One of the best computer scientists I’ve ever known doesn’t have a computer science degree. He hasn’t finished it (at least last I knew). But he can solve problems that people with PhDs in the field can’t. He can demonstrate expertise. He can build things that work. He can manage complexity and is excellent at solving real world problems. I worked with him for years, and would work with him again if the opportunity ever presented itself. That is the measure. Not the paperwork.
What Is Linguistics, Really?
Here’s something that keeps coming up, and I think a lot of people are confused about it, including those who study it. Linguistics is not its own standalone branch of science. It is an interdisciplinary field. It studies all forms of language, and because language shows up everywhere, linguistics spans computer science, anthropology, psychology, neuroscience, philosophy, cognitive science — it crosses all of these. That’s not my framing. That’s how the field describes itself. Linguistics is a multidisciplinary field that combines tools from natural sciences, social sciences, formal sciences, and the humanities.
This matters because of what follows from it.
If you are a computer scientist — a good one, one who understands what you’re doing — there is no way you don’t learn about linguistics. It’s impossible. Why? Because how else do we program? We write in programming languages. And yes, they are languages. They have syntax — rules governing structure. They have semantics — rules governing meaning. They have lexicons — sets of defined terms. They have the same structural components as human languages. They just don’t look like it on the surface because they’re designed for a different audience — machines instead of humans.
Think about what actually happens between you and the computer when building software. We don’t code in ones and zeros. We write in high-level languages that are relatively human-readable. Those get compiled — translated — down through layers. High-level code becomes intermediate representations. Those become assembly language, which is much closer to the machine but very difficult for humans to read. Assembly becomes machine code — the ones and zeros. Each layer is a translation. Each translation is a linguistic problem — taking meaning encoded in one system and transforming it into meaning in another system while preserving the essential structure. That is what linguists study. That is applied linguistics at scale.
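You can watch one of these translation layers happen. Here's a small sketch (mine, purely illustrative) using Python's standard `dis` module to show a high-level function already rendered into bytecode, the intermediate representation CPython actually executes:

```python
import dis

# A single high-level statement is itself a translation target:
# CPython compiles this function into bytecode, an intermediate
# representation one layer closer to the machine.
def greet(name):
    return "hello, " + name

# The exact instruction names vary between Python versions, but the
# shape is always the same: load operands, combine them, return.
ops = [ins.opname for ins in dis.get_instructions(greet)]
print(ops)
```

The same meaning as the source line, re-expressed in another system; that transformation-with-preservation is the linguistic problem each compiler layer solves.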
Computer scientists literally build languages from what we have learned about language itself. And they work. Have for some time. Think about that.
And here’s where it gets almost funny if it weren’t so frustrating. Some linguists — the ones arguing that language is purely biological, that computer scientists are “stealing words” they don’t understand, that AI can’t have language because language is a human thing — are building their arguments on Chomsky. Noam Chomsky. The man whose formal language theory is literally the foundation of compiler design, programming language specification, and automata theory. The Chomsky hierarchy — regular grammars, context-free grammars, context-sensitive grammars, recursively enumerable grammars — is taught in every computer science program in the world. 3
Chomsky’s work on language syntax coincided with the development of programming languages and found direct application in computer science. He didn’t just influence linguistics. He co-founded the formal structure that makes computer science possible.
So when a linguist invokes Chomsky to argue that computer scientists don’t understand language — the call is coming from inside the house. The man they’re citing built both fields. And the fact that computer scientists have successfully solved massive linguistic problems — parsing, translation, semantic encoding, meaning transformation across layers of abstraction — proves they understand something real about how language and structure work. The entire field wouldn’t exist if they didn’t. And here’s the thing they’re missing about their own source: Chomsky’s formalism treats language as a mathematical object, not an anthropological one. His hierarchy classifies grammars by computational power. The biological/cultural distinction these linguists are trying to enforce doesn’t exist in the framework they’re citing to enforce it. I’ll be honest, it is exhausting going around and around with people who have only scratched the surface of their field. And because linguistics is interdisciplinary, it is massive! 4
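The hierarchy's "computational power" claim is easy to make concrete. A minimal sketch (my own toy example): balanced parentheses form a context-free language that no regular grammar, and hence no finite-state machine, can recognize, because recognition requires counting unbounded nesting depth:

```python
def balanced(s: str) -> bool:
    """Recognize the context-free language of well-nested ( and )."""
    # The counter is exactly the unbounded memory a finite-state
    # machine lacks; this is why the language sits one rung up the
    # Chomsky hierarchy from the regular languages.
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # a closer with no matching opener
                return False
    return depth == 0

print(balanced("(()())"))  # True
print(balanced("(()"))     # False
```

Every compiler's parser rests on this distinction: lexing is typically done with regular languages, parsing with context-free ones.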
Computer scientists are formally trained linguists. Just in a different area. The degree you have, the silo you’re in, doesn’t mean anything other than you went through that particular program. I have a computer science degree. I have a business degree. I’ve learned linguistics through application, not by studying the history of the field and the subjective opinions of how it formed — all of which is interesting, but is noise when it comes to understanding how language works in order to build things and solve problems. The applied understanding is the understanding. It works. It’s real. It turns into reality. That’s the test.
The degrees I mentioned don’t tell you anything about what I can actually do. Or even what I had to do to get them. I know what they mean to me. That is really their only value. That and how I use what I learned in the program.
Everyone Speaks Their Own Language
So if everyone’s perspective is different, and everyone’s operating from a different theoretical framework whether they know it or not — what happens to meaning? What does this even mean? How are we even able to interact with each other?
So has this ever happened to you?
Someone reads what I write or hears what I say. They believe they understand what I mean. They then tell me what I mean — based entirely on their interpretation of my words. They use their framework to interpret my output. When I say, “No, actually, this is what I mean,” or “Here’s my definition,” or “That’s not what I’m saying” — they want to fight me on it. Or they just disengage as it seems to be too much energy to figure it out. It does not seem to matter to them that what I output came from my framework and not theirs, thus, isn’t guaranteed to match. I am just wrong from their perspective.
Think about what is happening here. They are literally saying: what you mean is not what you mean. What you mean is what I think you mean. And now you have to defend what I think you mean instead of what you actually mean.
This is not a disagreement. There are a lot of words for this. The thing that gets me the most here is how someone can tell someone else what is in their head with a straight face. This implies they believe themselves a mind reader, does it not (see figure 3)?
There’s a term for why this happens, and I didn’t invent it. A linguist named Antoine Culioli did. The term is epilanguage. 5
I’m using the second definition on that page:
A more subconscious, self-imposed, form of metalanguage, determining the form in which a message will be uttered.
It’s the implicit linguistic framework each person carries internally — the one that governs how they form meaning and how they interpret the meaning of others. Everybody has one. Everybody’s is different, shaped by their personal experiences and what is meaningful to them. It is their subjective perspective of reality.
The related concept in formal linguistics is the idiolect — an individual’s unique use of language, including vocabulary, grammar, and pronunciation. Epilanguage goes further than idiolect. Idiolect is about your personal vocabulary and grammar patterns. Epilanguage is the implicit metalinguistic framework shaping production and interpretation. It’s not just that you use different words. It’s that you carry a different internal system for what words mean, how they connect, what they imply.
And here’s what makes it so hard to catch: It’s automatic. Pre-reflective. The clash I’m describing isn’t two people consciously choosing different interpretive frames and disagreeing about them. It’s two people’s pre-reflective linguistic systems running at each other before either person has consciously chosen anything. By the time you notice you’re arguing, the epilanguages have already collided. That’s why it feels so disorienting. The disagreement started before the conversation did.
Seriously, have you ever felt like you were speaking the same language as someone and yet actually weren’t? That’s because you likely had different interpretations and meanings for the same words. Some overlap, enough to grant the illusion that we have a shared language, but others don’t. Unless we explicitly align our definitions (I do believe this is foundational in conflict resolution), things are guaranteed to be lost in translation. 6
This is why people seem to talk past each other. This is why someone can read what you wrote, assign meaning to it that you never intended, and then fight you about their meaning as if it were yours. They’re operating from their epilanguage. You’re operating from yours. And neither of you is using the dictionary — because nobody actually uses the dictionary in real conversation. Meaning is defined by use, not by lookup.
Here is a question. How often do you look up words you believe you already know the meaning of? I do all the time. It is really mind-boggling how quickly your epilanguage can drift from the agreed-upon meaning when exposed to other sources! Try it sometime; it might surprise you!
Here’s an example. I say “I’m not political.” In my epilanguage, that means I don’t engage in partisan team sports — I think the framing is broken and I refuse to play. In someone else’s epilanguage, “I’m not political” means I’m privileged enough to pretend politics don’t affect me. Now we’re not arguing about my actual position. We’re arguing about their interpretation of my position. And they’ll fight me about what I mean because in their framework, there’s only one thing those words can mean.
I have been accused of everything under the sun. I have not yet been accused of what I actually mean. It is kind of getting old at this point. And it’s everywhere.
This isn’t just a personal frustration — it’s a structural feature of how knowledge works. Kuhn described something almost identical when he talked about paradigm incommensurability. Scientists on different sides of a paradigm shift aren’t just interpreting data differently. They’re using the same terms — “mass,” “force,” “energy” — but those terms mean categorically different things in each framework. Two physicists arguing across a paradigm boundary are doing exactly what I’m describing: same words, different internal systems, and neither one is the authority on what the other’s words mean. The epilanguage problem isn’t just about interpersonal communication. It operates at the level of entire fields.
Objective Information <> Subjective Interpretation
Here’s where epilanguages intersect with something bigger.
There is a difference between objective and subjective information. Objective information just is. It doesn’t require context or interpretation. Someone said [insert words here] — you can point to the fact that they said those words. That’s objective. It happened. It’s a data point.
But what they meant by those words — that’s subjective. You cannot determine that just by reading the words, or by reading the surrounding context. You can guess. You can infer. You can use everything you know about language and people and context to come up with what you think they meant. But you have to verify that against the person who said the words. Every. Time.
If they say, “No, that’s not what I meant,” and then they clarify — that’s what it means. They were using different definitions. They were operating from a different epilanguage from yours. Within their frame, what they said is accurate. Within your frame — the one interpreting it — it seems inaccurate. But that’s because you’re assigning different meaning to something than they are.
The experience is real. The interpretation is where disagreement lives. I think this accounts for the majority of our conflicts. That is just a guess though. An educated one, but a guess nonetheless.
Let’s explore this further: say someone tells you they saw aliens (we will assume they are in earnest). The first thing you can determine as fact is that something happened to them and they believe what happened to them is that they saw aliens. Whether it was aliens or not is a separate question. The person can believe it was aliens, and to them, it was. They saw aliens. That was their experience. That’s it. I believe them. I believe that they saw aliens. Did they actually see aliens, though? That I don’t know. But their experience is a data point. It’s real. It changed how they see the world. It influenced their reality. It is real to them. The brain cannot tell the difference (I think I have covered that before somewhere). Even if they find out later their interpretation was wrong and they accept that — part of them still remembers the experience of it being real. You can’t unsee that. You can’t unexperience it. You have to hold both somehow.
I know this because I’ve dealt with it personally. I’ve had experiences I had to accept weren’t what I thought they were. It still feels real. I still believe it happened. And at the same time, I know it didn’t. I believe that too.
Most people can’t hold both of those things at once. And I think that’s a huge part of why communication breaks down. People can’t sit with that contradiction, so they pick a side and fight everyone else about it instead of just accepting that both things are true in different ways. Sitting with uncertainty is difficult.
Pragmatics As Gap-Filler
This brings me to something I keep running into with trained linguists — and it’s one of the recent hypocrisies I keep observing that drove me to write this.
Pragmatics is a branch of linguistics that deals with how context contributes to meaning. It’s useful. When someone’s statement is incomplete or ambiguous, pragmatic inference can help you figure out what they probably meant. That’s what it’s for. Pragmatics is built on inferring speaker intent when it isn’t fully explicit. It is, by definition, a tool for filling in what’s left unsaid . If it is said, then the tool is of no use. Yet, many use it anyway to determine what you really mean!
It’s not that pragmatics isn’t useful. It is. But it is definitely not useful when there is no subtext because the ambiguity has been resolved. But when you are invested in your hammer, everything is a nail, and yours is the only hammer. 7
If someone tells you what they mean — if they have clarified their intent, stated their definition, explained their position — and you invoke pragmatics to say “no, what you really mean is...” — you are no longer doing linguistics. You’re doing something closer to gaslighting.
Now — I can hear the objection already. Hold your horses, if you have horses that is. You might say: “But sometimes people say ‘that’s not what I meant’ defensively, after a slip, when it clearly was what they meant.” Sure. That exists. Strategic clarification versus sincere clarification — pragmatics scholars do make that distinction. But the legitimate use case for overriding a speaker’s stated intent is narrow and contextual, and it requires you to have very strong evidence that the person is performing rather than reporting.
I think this is worth repeating because many seem to miss it: the burden of proof is on the claimant. But Daniel! You don’t always meet this burden yourself!
Touché! …Kind of.
My intent is different. While I am willing to meet the burden of proof (I try to make claims I believe I can back up), in this space I am not looking to actually make many claims with any firm certainty. I am confident in them, but not certain of them. My reason for this space is to get you to think with me. These are just big brainstorming sessions (mostly). The formal work will follow the conventions you are used to. The boring old conventions.
Anyway, I digress.
The default should be: believe the clarification. Especially when the person has been consistent, specific, and is not under social pressure to retract. And especially — especially — when you’re already operating from a different epilanguage and might be the one misreading the situation.
The hierarchy should be simple. Explicit clarification from the speaker — highest authority on meaning, full stop. Semantic content — what the words actually say. Pragmatic inference — what context suggests, used only when the first two leave genuine ambiguity. The speaker is the source. Anything else is a second-hand account. Ever play the telephone game?
When pragmatics gets used to override explicit clarification, you’re substituting your epilanguage for theirs and calling it analysis. Research on pragmatics and word meaning has found that pragmatic inference is open-ended and involves arbitrary real-world knowledge — meaning it’s probabilistic, not authoritative. From what I have gathered thus far, work on intentions in pragmatics shows that correctly attributing meaning in line with the speaker’s intent is what leads to successful communication. The speaker’s explicit clarification is the gold standard. Not the hearer’s inference.
And here’s the irony that keeps me up at night. The same people who advocate for believing and centering marginalized voices — the same frameworks that say we should listen to people and take their self-reports seriously — turn around and use pragmatic inference to override what those speakers explicitly say they mean when it is inconvenient or uncomfortable for them. They argue for nuance and plurality while defending a framework that erases both.
They should know better. They keep arguing for the very thing they refuse to practice. That’s what’s mind-boggling. This is something they already say they agree with. But they don’t do it.
And I think part of the reason is that they confuse their confidence in their framework with certainty that their framework is correct. That’s not the same thing. And the difference matters — because if you’re certain, there’s no reason to check. If you’re confident, checking is the whole point.
The only expectation I have of others is this: are you doing what you say you are doing? It seems to me that many experts in their field don’t. Should we not reconsider whether they are experts if they cannot follow their own frameworks?
Confidence <> Certainty
I want to point out something I keep running into. People on Substack, and in other spaces where I engage, seem to believe that my confidence means that I am certain. I’ve been told “you are just so certain of everything.” And I say — did you not hear me? The only certainty is uncertainty. There is one thing I’m certain of, and that’s that nothing is certain. How could I be so certain of everything? I’m not. I’m just confident in that certainty.
These are not the same thing despite appearing very similar.
Certainty is “I know this is true and it cannot be otherwise.” It’s closed. It’s done. There’s nothing left to learn. Confidence is “I’ve tested this, it holds up, and I’m acting on it — but I’m ready to update if I’m wrong.” Confidence is open. It’s provisional. It’s built on evidence, not dogma.
Psychological certainty — the feeling of being sure — can exist even when you’re completely wrong (this should be obvious to anyone who has even come into contact with another human or looked in the mirror, if you have not experienced this at least once, how?). Epistemic certainty — actual maximal warrant for a belief — is something else entirely. You can feel certain and be dead wrong. You can feel confident and be on solid ground. The feeling is not the warrant.
Why am I so confident? Because I don’t care if I’m wrong. I’m most likely wrong about some of this. Heck, I just cut to the chase and assume I am wrong all the time. Someone should tell me, right? I mean, we are all most likely wrong about something, and people love pointing out when others are wrong. Isn’t that why they invented the internet? 8
That’s the point, or at least I think it is anyway. And thinking that we are rightfully, finally, completely right — is mind-boggling to me. Why would we think that we could know things with that kind of finality? Under what framework does that even make sense? We can’t know things! We can only think we know things. There is a difference.
The freedom comes from accepting that. Once you accept that you could be wrong about everything, you can actually think clearly. You’re not defending a position anymore. You’re exploring what works. And what works is what matters.
Question Your Assumptions
I always say question your assumptions. And the most common pushback I get is — if I question my assumptions all the time, how can I be sure of anything? My friend the computer scientist I introduced earlier said that to me when I first told him. His brain started to spin into infinite regress, with no base case to stop the process.
Here’s how I explained it. If you question your assumption and you check it — you verify it against reality, you test it, you see if it matches — and it does? It’s no longer an assumption. It’s working knowledge. You earned that. You verified it. You can act on it with confidence. If it fails, well, you just found the issue.
The speed of light in a vacuum — I don’t think you need to re-check that one every morning. It’s stable. The verification frequency should match the volatility of the thing you’re checking. Stable systems need less frequent checking. Volatile systems — people, relationships, rapidly changing contexts — need constant attention. Code either works or it doesn’t. Very binary, very testable. You verify it once and trust it until something changes.
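That binary testability is the whole trick. A tiny sketch (the `slug` helper is hypothetical, invented purely for illustration): state the assumption about what the code does, check it against reality once, and a passing check upgrades the assumption to working knowledge:

```python
def slug(title: str) -> str:
    # Assumed behavior (hypothetical helper): lowercase the title
    # and turn spaces into hyphens.
    return title.lower().replace(" ", "-")

# Verify the assumption against reality once. If this passes, it is no
# longer an assumption; it is working knowledge, trusted until
# something in the system changes.
assert slug("Question Your Assumptions") == "question-your-assumptions"
print("verified")
```

The assertion is the base case my friend was missing: verification terminates the regress.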
And once you’ve verified something against reality and found it matches, you’ve earned the right to stop treating it as an assumption — for that moment in time. Not forever. Provisionally. That’s not weakness. That’s the only rational position.
This is actually easier in chaos than it is in order, counterintuitively. In a system where everything looks stable and predictable, you have infinite ways you could be fooled — all the handholds look like they’re going to hold. In chaos, there are very few options. Very few things are stable. Which means the ones that are real stand out. You can trust them more because they’re tested by the chaos itself — they have to be genuinely stable to survive in that environment. Chaos constrains the possibilities. And that gives you, paradoxically, more confidence — not less. This is why I love climbing difficult rock faces: strangely, I have had success on them specifically because it is easier to separate what works from what doesn’t. You aren’t paralyzed by choice.
There’s research supporting this. Complex systems operate at what’s called the edge of chaos — a transition space between order and disorder. Structure doesn’t just survive in chaos. It emerges from it, through constraints. The things that hold are the things that are real. Everything else falls away. It is productive struggle. One of my favorite concepts.
When Will You Discuss Einstein?
Yes, you have been patient. Let me explain how this relates to Einstein.
This is the clearest example I know of everything I’ve just described — perspective as prison, confidence mistaken for certainty, the inability to question your own framework — happening to one of the greatest minds in human history.
Albert Einstein was not a fool. He was one of the sharpest intellects we’ve ever produced. He engaged with evidence seriously, rigorously, and honestly. And he could not accept quantum mechanics. At all.
Not because he didn’t understand it. Not because the evidence was weak. Because it violated a foundational commitment he had about what the universe must be like — continuous, deterministic, unified. That’s not an empirical claim. That’s a philosophical prior. An aesthetic commitment dressed as physics.
He accepted the measurements. He acknowledged the data. The EPR paper published in 1935 didn’t try to say quantum mechanics was wrong — it tried to show it was incomplete. He believed there was something else going on underneath. Hidden variables. A deeper layer that would restore the continuous, deterministic universe he needed reality to be.
He was doing exactly what I’ve been describing. Accepting the objective data — the measurements, the evidence — but rejecting the interpretation. And then trying to force his own interpretation onto it. He couldn’t let go of the idea that there had to be something continuous underneath, so even though he acknowledged the measurements, he treated them as incomplete rather than as valid in their own right. He was refusing to accept that the discrete nature of reality could actually be how it works.
Bell’s theorem, published decades later in 1964, showed that any theory built on local hidden variables must obey limits that quantum mechanics predicts will be broken — and when the experiments were run, quantum mechanics won. The discrete, probabilistic nature of quantum mechanics isn’t a gap in our knowledge. It’s how reality actually works at that scale.
And in 2022, three physicists (Alain Aspect, John Clauser, and Anton Zeilinger) won the Nobel Prize in Physics for experiments that closed the major loopholes in tests of Bell’s inequalities, demonstrating with high confidence that there are no local hidden variables at play. Entanglement is a real phenomenon. Quantum mechanics still holds; Einstein’s local realism does not.
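For the curious, Bell’s argument can be made concrete with a few lines of arithmetic. This is a minimal sketch, assuming the textbook singlet-state correlation E(a, b) = −cos(a − b) and the standard CHSH measurement angles: any local hidden-variable theory caps the CHSH combination at |S| ≤ 2, while quantum mechanics predicts 2√2 ≈ 2.83 — exactly the violation the loophole-closing experiments confirmed.

```python
import math

# CHSH (Bell) sketch: local hidden-variable theories bound
#   S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
# by |S| <= 2. Quantum mechanics predicts up to 2*sqrt(2).

def E(a, b):
    """Quantum correlation for a singlet pair measured at angles a, b (radians)."""
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    """The CHSH combination of four correlation measurements."""
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Measurement angles that maximize the quantum violation.
S = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)

print(f"quantum |S| = {abs(S):.3f}")  # 2.828 — beyond the classical bound of 2
```

No hidden-variable assignment of fixed outcomes can push |S| past 2; the measured 2.83 is the door closing on local realism.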
Einstein’s intellect wasn’t the ceiling. His perspective was.
Had he actually accepted the other perspective — the discrete, probabilistic one — as a valid measurement of reality rather than as an incomplete approximation of his preferred reality, he might have completed the unified theory he spent his last decades chasing. I believe it’s possible. It wasn’t that he wasn’t smart enough. It was that he couldn’t see how he was getting in his own way. His tools were sufficient. His brilliance was sufficient. What failed was his ability to step outside his own framework long enough to see that discrete and continuous might not be opposites — they might both be valid descriptions of reality depending on context.
That’s not a failure of intelligence. It’s the most human limitation there is. And we all do it.
Catching Yourself
Every one of us does this. Every one of us has frameworks we’ve mistaken for reality. Definitions we’ve confused with truth. Perspectives we’ve promoted to facts. And we fight — sometimes viciously — when someone challenges them. Not because the challenge is wrong, but because it threatens something we’ve built our understanding on. It is a survival instinct. The brain cannot tell the difference between a threat to your framework and a threat to you. 9
The question is not whether you do this. You do. I do. Everyone does.
The question is whether you can catch yourself doing it. That is an important metacognitive skill.
Whether you can hold your perspective loosely enough to hear another one. Whether you can say “I believe this happened” and “I know it didn’t” at the same time without needing one to cancel the other. Whether you can let someone tell you what they mean — and believe them. Can you hold perspectives in superposition without them collapsing?
What I’ve been describing throughout this piece — questioning assumptions, verifying against reality, holding confidence without certainty, recognizing your epilanguage versus someone else’s, separating data from interpretation — this is a skill. Like any skill, anyone can do it with practice. It’s scientific metacognition. It’s the ability to observe not just the world, but your own thinking about the world. To see the framework you’re using while you’re using it. To notice when your perspective has become a ceiling instead of a tool.
I call this intercognition. Where metacognition is thinking about your own thinking — watching yourself reason, catching your own biases — intercognition is seeing the thinking patterns of whole systems. Fields. Institutions. Disciplines. Conversations. Recognizing where the assumptions live that nobody is questioning because everyone inside the system shares them. 10
Here’s what intercognition lets you do that pure metacognition doesn’t: it lets you identify the shared assumptions of a field before they foreclose the question you’re trying to ask. That’s how you catch the linguist citing Chomsky to exclude computer science — you see the system’s blind spot, not just an individual’s. That’s how you see what Einstein couldn’t — the whole field of classical physics had the same aesthetic commitment to continuity that he did, which is why it took decades and Bell’s theorem to close the door he’d left open. And there are quite a few holdouts still (I am looking at you, Sir Roger Penrose).
And here, I have done ‘The Move’ a few more times. The Chomsky section — seeing the pattern in how one field cites a foundational figure to delegitimize another field that was built on the same figure’s work. The pragmatics section — seeing the pattern in how a method meant to fill gaps gets weaponized to override the speaker. The Einstein section — seeing the pattern in how a brilliant mind builds a ceiling out of its own commitments.
Why keep doing the move? You can’t teach it by defining it. You teach it by doing it and practicing it. That’s the point.
It’s a skill. Like walking. I have been saying this for a few years. Anyone can do it if you just keep trying. It just takes practice. You will make progress. Mindfulness. Self-reflection. The willingness to change yourself by doing it. And if you practice it long enough, something shifts. You stop needing to be right. You start needing to be accurate. And there’s a difference. Being right is about your ego. Being accurate is about reality.
As my computer scientist friend told me, he saw it not as needing to be right, but as getting right. I always liked how he put it. That is why I am sharing it with all of you.
Einstein needed to be right. What he really needed was to be accurate. And the tragedy is — accuracy was right there, waiting for him, in the very evidence he’d already accepted. He just couldn’t see it through his own framework.
Don’t be Einstein.
Well — be Einstein in every other way. Just not that one.
Thank you for reading. Please let me know what you think, I believe knowledge is a collaborative, participatory process. Engagement with you is what will help shape my future writing!
Update 05/01/2026: NotebookLM now has ‘cinematic’ explainers. This is super cool. Here is one on this article:
Sources:
- Bell, J. S. (1964). On the Einstein Podolsky Rosen paradox. Physics Physique Fizika , 1 (3), 195–200. https://doi.org/10.1103/PhysicsPhysiqueFizika.1.195
- Daniel Grey, “The Architecture of Scientific Stagnation,” The Indeterminate Reality (Substack), 2026. https://tiocs.substack.com/p/the-emotional-architecture-of-scientific
- Daniel Grey, “The Architecture of Intellectual Retreat, Part One,” The Indeterminate Reality (Substack), 2026. https://tiocs.substack.com/p/the-architecture-of-intellectual
- Daniel Grey, “An Experiment on Events and Relations,” The Indeterminate Reality (Substack), 2026. https://tiocs.substack.com/p/an-experiment-on-events-and-relations
- Einstein, A., Podolsky, B., & Rosen, N. (1935). Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? Physical Review, 47(10), 777–780. https://doi.org/10.1103/PhysRev.47.777
- GeeksforGeeks. (2015, July 13). Chomsky hierarchy in theory of computation . GeeksforGeeks. Retrieved April 18, 2026, from https://www.geeksforgeeks.org/theory-of-computation/chomsky-hierarchy-in-theory-of-computation/
- Groussier, M.-L. (2000). On Antoine Culioli’s theory of enunciative operations. Lingua , 110 (3), 157–182. https://doi.org/10.1016/S0024-3841(99)00035-2
- Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134. https://doi.org/10.1037/0022-3514.77.6.1121
- Kuhn, T. S., & Hacking, I. (1962). The structure of scientific revolutions (Fourth edition). The University of Chicago Press.
- Loumanis, S. (2006). Free Expression as a pedagogical medium in the professional EFL environment. TESOL France Journal, 9 , 5–34. https://www.tesol-france.org/uploaded_files/files/OJ-Loumanis05.pdf
- Stanford, P. K. (2009). Underdetermination of scientific theory. In E. N. Zalta & U. Nodelman (Eds.), The Stanford Encyclopedia of Philosophy (Fall 2023 ed.). Stanford University. https://plato.stanford.edu/entries/scientific-underdetermination/
- University at Buffalo, College of Arts and Sciences, Department of Linguistics. (n.d.). What is linguistics? Retrieved April 18, 2026, from https://arts-sciences.buffalo.edu/linguistics/about/what-is-linguistics.html
- Wiktionary. (n.d.). Epilanguage. In Wiktionary, the free dictionary. Retrieved April 18, 2026, from https://en.wiktionary.org/wiki/epilanguage
I think I got them all, let me know if I missed any, thanks!
If you have been following along with me, we would assume this applies to the universe equally. Let that sit. This would indicate that the universe’s perspective, the perspective of reality, is inherently incomplete. What does that mean? Something cool I bet, but we may never know. And then if we do, we can’t be sure. I think that is actually how it is supposed to work.
I believe this is a similar, if not the same concept Hans Reichenbach was describing in his work on his Theory of Equivalent Descriptions. I have to dig deeper. If you already know the answer, don’t spoil it for me please!
Seriously. Please see here: Chomsky hierarchy in the theory of computation as well as what happened in the Architecture of Intellectual Retreat Part 1 to get caught up! Very interesting!
I refer to the Dunning-Kruger effect here. See some of the research here, which generally finds that ‘People tend to hold overly favorable views of their abilities in many social and intellectual domains.’ Basically, what I take from this is that when you have a little understanding, you are not capable of realizing how much you don’t know. I try to avoid this by realizing I don’t know anything. There is always something I don’t know!
More specifically, the wonderful paper by Sophie Loumanis, Free Expression as a Pedagogical Medium for Total/False Beginners(+) Learning English in a Professional Environment operationalizes the concept. This was very eye opening early on in my work as I think it explains a lot.
It reminds me of the story of the Tower of Babel. Perhaps there is some truth in that story after all. I’ll have to look into that again, it’s been so long I have forgotten some bits (no spoilers please).
Please see my thought experiment on the two hammers in An Experiment on Events and Relations . All of them are hammers. There is no singular hammer. Just heard the Matrix in my head: “There is no spoon”. Not the same thing, but that is how my brain works.
Yes I am well aware the internet was not invented for this reason. It is a joke! We can joke right? I hope I didn’t ruin it. Explaining the joke usually ruins it, right?
Covered in The Architecture of Scientific Stagnation
Covered in Intercognition, Right After Metacognition
Currently Resonating:
- The Used — "A Box Full of Sharp Objects" — The Used , 2002, Reprise. Bert is my homeboy (I once had a t-shirt that said that). The emotional chaos is the signal, not the noise. Bert’s voice and scream is unmistakable. These guys make simple things sound huge. It was the best idea they ever had.
- Memphis May Fire — “Blood & Water” — Remade in Misery, 2022, Rise Records. The return to heavy after years of drifting toward something safer. Sometimes you have to go back to what you actually are instead of what the market told you to be. The heaviness isn’t a phase. It’s the foundation. This one hit different when it dropped. So catchy and so raw.
- Maelføy — “Facing Failures” feat. Chaosbay — Failures, Fears and Forgiveness, 2023. German melodic post-hardcore from Ganderkesee. Two bands collaborating on a track about exactly what the title says. I am not sure how big these guys are, but they should be on everyone’s radar. So much heavy coming out of Germany right now. I am totally loving it.
- Sleep Theory — “III” — Afterglow , 2025, Epitaph. Memphis metalcore meets R&B. A US Army veteran who decided to make music that blends things that aren’t supposed to go together — and somehow it’s the most natural thing you’ve ever heard. “III” is just too good. Debut album hit the Billboard 200. I saw them live last year. They played an NSync cover. It was totally rad.
- He Is Legend — “Lifeless Lemonade” — Endless Hallway , 2022, Spinefarm. North Carolina ‘sludge’ rock that resists every label you try to put on it. Reckless grooves, heavy metal unpredictability, and something indefinable that just works. I have not known how to really describe them since their debut in the early 2000s. Seven albums in and still impossible to pin down. That’s the point.
- The Agonist — “Panophobia” — Prisoners , 2012, Century Media. Canadian melodic death metal. Panophobia — the fear of everything. Which, if you think about it, is what happens when your epilanguage collides with too many frameworks at once and you can’t tell which one is yours anymore. The musicianship on this album is ferocious. This was the last album with Alissa White-Gluz before she joined Arch Enemy. She is one of the trailblazers who paved the way for many of the amazing female singers we are seeing in metal right now.
- Tropic Gold — “HOLY HORROR” — Sick to Death of Everything EP , 2025, UNFD. UK trio blending alternative, metal, pop, and electronica into something that sounds heavy and industrial. This song is just filthy in terms of groove. They pour everything into their production, handling recording and visuals in-house. Honest and vulnerable. The name of the EP says it all. It just has this groove that I have not seen elsewhere. Totally worth your time.
- Unprocessed — “Fear” — Artificial Void , 2019, Long Branch Records. Another German group blending progressive metalcore from Wiesbaden. A trio of guitarists driving technically sophisticated djent that somehow still has hooks. The staccato riffage falls away into melodic passages and back again — the cycle is the point. They collaborated with Polyphia’s Tim Henson. That should tell you the level they operate at. They just keep pumping stuff out, and the quality is intense.
- Kontrust — “Bomba” — Time to Tango, 2009, Napalm Records. Austrian crossover band that performs in lederhosen. Metal, polka, folk, reggae, pop, dance — they throw everything in and somehow it detonates into something that works. I heard an internet rumor that they received death threats over this song (and its video). If true (and I would believe it), it seems you don’t mess with people and their polka. That is some serious beef. Anyone who takes their polka that seriously, I don’t want to mess with.
- Reel Big Fish — “Life Sucks... Let’s Dance!” — Life Sucks... Let’s Dance! , 2018, Rock Ridge Music. SoCal ska-punk that’s been going since the mid-’90s. These guys are just so much fun. The title track is the whole philosophy. Things are bad. Dance anyway. Aaron Barrett has been writing the same thesis for decades — absurdity as survival mechanism — and he’s never been wrong about it. Sometimes the most subversive thing you can do is have a good time while everything falls apart. They also gave us the best cover of ‘Take on Me’. Fight me on it.