The Architecture of Intellectual Retreat, Part Three
I sat with this one for several days before writing. I needed to ensure I was seeing clearly before acting. This is not a light subject, and it is different from the first two because it deals with something many people are uncomfortable with.
Accountability.
The first two pieces in this series came out quickly. Ellen blocked me and I wrote. Maggie blocked me and I wrote. The pattern was the pattern and the work was to describe it.
This one was different. This one actually felt personal. I had a visceral reaction to what happened, and I needed to understand why before taking further action.
That is this piece: an analysis of the situation and a self-analysis at the same time. Honestly, it has been uncomfortable to write. But it is necessary to see what is actually going on in order to determine what is real and what is not.
What Happened?
All of the following is from Dr. Sam Illingworth’s own published materials — his Substack About page and his personal site. I will refer to him as Dr. Sam, a name I have seen others use and that he seems fond of (as am I). Nothing here is my characterization. My expectations of others are always shaped by what they say they are, not what I want them to be. The gap between what he says he does and what he did is exactly where this piece lives, and it is why he failed to meet the expectations he set for me.
His description of himself, verbatim:
“I’m Dr Sam Illingworth, a Full Professor of Creative Pedagogies based in Edinburgh, Scotland. I have a PhD in atmospheric physics, over 125 peer-reviewed publications, 10 books, and a career spent at the intersection of science, communication, and education.”[1]
His book GenAI in Higher Education (Bloomsbury Academic, 2026) is open access and, per his own page, “was downloaded 10,000 times in its first two weeks.” He is “a member of REF 2029 Sub-panel 23 (Education), a Principal Fellow of the Higher Education Academy, and a Fellow of the Young Academy of Scotland.” His work has “been featured by the BBC, Nature, Scientific American, and The Conversation.” He also leads “the largest UK study into how students actually use AI tools.”
He writes a Substack called Slow AI.[2] The tagline, in his own words, is:
“Knowing when to use AI and when to leave it the hell alone.”
His own description of the publication:
“Everyone is teaching you how to use AI faster. Nobody is teaching you how to think about what you lose when you do. Slow AI is a newsletter for people who refuse to outsource their judgement to a machine.”
His stated reason for starting it:
“I started Slow AI because most of the advice about AI is wrong. Not because the tools are bad, but because nobody is asking what we give up when we use them.”
Slow AI is not only a newsletter. It has, per his own page, “over 13,000 readers. 250+ paid members. Substack Bestseller.” It is also a 12-month curriculum at £100/year, which he describes as:
“This is not a prompt engineering course. It is a 12-month programme for learning when AI helps, when it harms, and how to tell the difference.”
The 12 curriculum themes, as he lists them, include:
- The Myth of Neutrality (Data Origins and Bias)
- Synthetic Empathy (Affective Computing and Care)
- Security and Surveillance (Privacy and Institutional Control) ← Important
- The Labour of AI (Ghost Work and Extraction)
I am laying out this level of detail — using only what he has published about himself — so readers of this piece can find him. Go to Slow AI. Go to samillingworth.com. Read him directly. Decide for yourself. There are things he has put forth that are good and useful. But as with everything we consume, just because some of it is good for you does not mean all of it is. You need to be able to tell which is which.
This is the third case study in a series I did not intend to write. I was hoping I wouldn’t have to. But here we are.
The first was Ellen Burns, PhD — a philosopher of mind and AI (apparently) who blocked me after I pointed out that her definition of language foreclosed her own conclusion and tried to clarify some of the technical realities of AI.[3]
The second was Maggie Vale — a neuroscience-adjacent Substack writer who blocked me after I asked her to go past the textbook definition and tell me what she actually thought. She was so close to seeing the better argument that would have strengthened her own case. The unassailable one. Shame really.[4]
The third is Dr. Sam.
Different credentials. Different subject matter. Different gender. Similar theme. Same architecture.
A pattern is not a pattern because I want it to be one. A pattern is a pattern because that is what it is. What we interpret from there is where the two sometimes get confused.
Dr. Sam Shared A Reaction
Earlier this week, Dr. Sam posted the following on Slow AI:1
“MIT Tech Review just argued privacy is a UX feature. A piece this week called privacy-led UX a new design pattern for trust in AI products. Privacy as polish. Privacy as conversion lever. It annoys me that the emphasis is on the user to check the terms and conditions. The company should be responsible for clear disclosure. UX features get redesigned. Rights do not.”
The framing is crisp. The closing line is a rhetorical haymaker. It reads well. That is its only virtue. It is a sound bite. A jingle. A slogan. That does not make it true. It makes it a meme.
Dr. Sam is reacting to the headline; he did not engage with the report. I know this because I did engage with the report, and the report says almost exactly what he claims to want.
Here are the actual quotes I pulled from the MIT Technology Review Insights piece I found and linked for him[5]; it does a great job of summarizing the facts from the report[6]:
“Privacy is evolving from a one-time consent transaction into an ongoing data relationship.”
Translation: privacy is being upgraded from a checkbox into a continuous, governed relationship between user and company. Which is what I believe Dr. Sam is advocating for.
“Organizations that establish clear, enforceable privacy and data transparency policies now are better positioned to deploy AI responsibly and at scale in the future.”
Translation: the report is arguing for clear, enforceable policies. Which is, again, what I believe Dr. Sam is advocating for.
“Governing agent-generated data flows requires privacy infrastructure that goes well beyond the cookie banner.”
Translation: a single consent moment at the edge of a product is no longer adequate. The infrastructure must be deeper. Which is, again, what I believe Dr. Sam says he advocates for.
“Privacy-led UX touches marketing, product, legal, and data teams, but someone must own the strategy and weave the threads together.”
Translation: accountability must be assigned. Someone needs to be responsible and accountable for it. Which is, again... you get the picture.
“Organizations can no longer treat privacy as a compliance checkpoint at the edge of the user experience.”
Translation: the exact thing Dr. Sam is complaining about — privacy as a superficial surface feature — is the thing the report is arguing against.
The report is not the enemy of Dr. Sam’s thesis. The report is Dr. Sam’s thesis, written by people who actually did the work.
He was not criticizing the report. He was criticizing a headline he did not verify against the document underneath it. That is the charitable interpretation. The others are that he didn’t understand the report, or that he did and didn’t care. I can’t think of one that explains this in a flattering way.
On Terms and Conditions
Dr. Sam’s closing complaint is that the emphasis is on the user to read the terms.
Let me be precise here, because this is where the conversation usually gets muddy.
Terms and conditions are, right now, the mechanism by which companies disclose what they do with your data. That is the system we have. The disclosure exists. The user’s responsibility to read what they are signing has been basic, sound advice for a very long time. This is not a controversial stance. This is contract law. Don’t sign a contract you have not read. The fact that most people don’t read it, or that it is dense, is sadly not usually an excuse in legal proceedings. You can complain about the fairness (or lack thereof) all you want; until the system changes, that is only how you want it to work. The burden, sadly, is on the user. If you are not willing to enter into the contract, don’t. There are alternatives (they might require more effort, sometimes considerably more than reading the terms and conditions).
Dr. Sam’s word “clear” is doing an enormous amount of unexamined work. Clear to whom? His definition of clear is subjective: what is clear to one reader is dense legalese to another. That is what I call a training issue and an accessibility issue — real ones, worth caring about — but it is not evidence that no disclosure exists. I have also learned from experience that you cannot fix a training issue by adjusting the system. The issue is that the person has not learned something; adjusting the system to them almost always ensures they still don’t learn it, and just moves the issue somewhere else. Training issues are fixed with proper training, not new features or exceptions.
And with AI, the barrier has dropped dramatically. Anyone can paste a terms document into a model and receive a plain-language summary in seconds. The tools to translate legalese have never been more available; the T&C problem was genuinely worse a few years ago than it is today. The user who will not do five minutes of work does not get to call the company’s disclosure insufficient.
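Here, for the curious, is a minimal sketch of what that five minutes of work can look like. It assumes the OpenAI Python SDK with an API key in the environment; the model name and file path are illustrative, and any capable chat model, hosted or local, would do the same job.

```python
# A minimal sketch: paste a terms document into a model, get plain language back.
# Assumptions: `pip install openai`, OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def summarize_terms(terms_text: str) -> str:
    """Translate dense legalese into plain language a layperson can act on."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; swap in whatever model you use
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize these terms and conditions in plain language. "
                    "List what data is collected, how it is used and shared, "
                    "and anything a careful user should know before agreeing."
                ),
            },
            {"role": "user", "content": terms_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical local copy of the terms you were asked to accept.
    with open("terms_and_conditions.txt") as f:
        print(summarize_terms(f.read()))
```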
I am, for the record, a strong proponent of usability and accessibility in all designs. I am sympathetic to the argument that T&Cs are often optimized for legal defensibility rather than comprehension. That is a real problem. But there is a difference between a training issue and a systemic issue. I primarily touched on the training side; in reality, we have to deal with both. And what the MIT Technology Review report actually describes is the industry moving to address both, in a way that gets us toward a real solution.
Which is what (say it with me now) I believe Dr. Sam is advocating for.
Which he would know, if he had read it.
The Pattern Is Formulaic
It seems this is not a one-time deal; his notes appear to be built from a formula. Looking at his recent work, the architecture is consistent.
Post two:
“The average employee now spends 47 minutes a day managing AI tools. That is more time than most people spend talking to their team. We added the tool to save time. Then we added the time to manage the tool. The net gain is not zero. It is negative, because the 47 minutes came from somewhere, and that somewhere was usually a conversation. Nobody measures what the tool displaced. They measure what the tool produced.”
The number is vivid. The conclusion sounds profound. The analysis is missing.
47 minutes managing AI tools tells you nothing — nothing — without the other side of the ledger. What did those 47 minutes produce? If the employee spent 47 minutes managing tools that completed three hours of work, the net is an enormous positive. If the tools saved 20 minutes, the net is negative. Did Dr. Sam check? I won’t spoil it for you; take a look for yourself. He simply asserts the net is negative. And boy, does it punch! It’s catchy, easy to share, easy to remember. It’s like an earworm for thoughts.
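To make the missing ledger concrete, here is a sketch. The 47 comes from his post; every other number is hypothetical, which is exactly the point: the sign of the net depends entirely on the side of the ledger the post never reports.

```python
# The ledger the post omits. MINUTES_MANAGING_TOOLS is from the post;
# the arguments below are hypothetical, which is exactly the problem.
MINUTES_MANAGING_TOOLS = 47

def net_minutes(minutes_of_work_the_tools_completed: float) -> float:
    """Net time gained (positive) or lost (negative) per day."""
    return minutes_of_work_the_tools_completed - MINUTES_MANAGING_TOOLS

print(net_minutes(180))  # tools did three hours of work -> +133, enormous win
print(net_minutes(20))   # tools saved twenty minutes    ->  -27, net loss
```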
The line “that somewhere was usually a conversation” is not analysis. It is the author deciding which narrative sounds emotionally compelling and then asserting it as though it were measurement. He is certain he knows where the 47 minutes came from. He decided he knew. And he decided that was what you needed to know too.
Post three:
“Every prompt you give your AI is also being given to your AI. You think you are training it on your work. It is training you on what to ask. The more you customise it, the more it shapes the kind of question you can imagine. After six months you cannot tell which parts of the workflow were your idea and which were suggested by the tool that learns from your input.”
This one is almost self-parody. If someone is not paying attention, I can see the point. And yes, there is some evidence surfacing that people can easily conflate what the AI did with what they did. But this should not be a surprise: humans have done this with each other for centuries; we are highly unreliable creatures. It takes a lot of effort to remember correctly, which is why one should take notes. Which leads me to the next point.
The entire concern is that you cannot tell which parts of the workflow were yours and which were suggested by the tool. With AI this is easy to figure out.
It is a transcript. You can scroll up. Every interaction is recorded, timestamped, attributable. The “loss of provenance” he is panicking about is solved by a basic product feature of the thing he is critiquing. It literally saves everything you and it say; it could not work otherwise. And you have full view of this. At any point you can double-check what happened.2 The philosophical anxiety evaporates the moment you ask whether the claim survives contact with the actual product.
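For anyone who wants to see how little mystery there is here, a sketch. The structure below is a generic stand-in for the record a chat product keeps; the field names are mine, not any vendor’s actual schema.

```python
# A generic stand-in for the record a chat product keeps.
# Field names are illustrative, not any vendor's actual schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Turn:
    timestamp: datetime
    role: str  # "user" or "assistant"
    text: str

transcript = [
    Turn(datetime(2026, 4, 20, 9, 0), "user",
         "Draft a workflow for the weekly report."),
    Turn(datetime(2026, 4, 20, 9, 1), "assistant",
         "Step 1: collect the metrics every Monday morning..."),
]

# "Which parts were my idea?" is a query over the record, not a mystery:
mine = [t.text for t in transcript if t.role == "user"]
suggested_by_tool = [t.text for t in transcript if t.role == "assistant"]
```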
The shape is identical each time:
- A headline-shaped statistic or framing.
- Emotional language wrapped around it.
- No time spent double-checking the details.
- A sweeping philosophical conclusion.
It reads well. It gets engagement. It does not survive a basic “wait, did you actually check that?”
Here is the thing: Dr. Sam is posting notes like this every hour or two. How do you read that deeply, come up with something that catchy to say, and do it that fast? I am not saying it’s impossible, and if the accuracy were there I would be legitimately impressed.
My reading: color me unimpressed.
No Questions Allowed
I engaged (to be completely transparent, I might have been very blunt, direct, and a little charged). I pulled direct quotes from the report he was criticizing and showed him that the report addresses almost every concern he raised. I pointed out that T&Cs are disclosure and that “clear” is a subjective bar he has not defined. I noted that the 47-minute figure means nothing without what the 47 minutes produced. I noted that his workflow-provenance concern is answered by the transcript.
None of this was meant as an attack on his person. (How do you confront someone with these kinds of inaccuracies while still presuming positive intent?) All of it was an engagement with the content of his own posts, using his own sources. I am passionate about this because he claims to have many of the same goals I do. This behavior makes those goals harder to achieve, not easier. I was honest and upfront about that.
He blocked me.
Blocking also has the effect of scrubbing the threads on Substack.
Let me say precisely what that means, because it matters. Blocking is a choice. Blocking is meant to protect users from harassment and remove unsafe content. I was not harassing him; I was quoting his own source material back to him. Here, the action protected someone from the evidence that the public conversation had a counterweight he could not answer. The system was not intended for that type of use. This is what we call an unintended side effect.
Dr. Sam blocked me for fact-checking him with his own link.
This is the mildest trigger of the three. It is not even a paradigm threat. It is a citation. And it seems to have struck a nerve (and possibly threatened his bottom line, but that is a different interpretation, one of those less charitable ones).
A Note on the Gender Pattern
The first two subjects of this series were women. That created an optics problem I was aware of (and chose not to address). It did not matter how clean the arguments were — somebody was always going to make the story about who I was challenging rather than what happened. I have already been attacked for doing so. I hope this will put things in a different light.
Dr. Sam is a man. Same architecture. Same retreat. Same block. But this time it appears more serious, not a misunderstanding or what one would call an ‘oopsie’.
The pattern is not about gender. The pattern is about what happens when the challenge lands and the framework cannot hold it. Patterns only get clearer over time, and initial patterns are rarely what they seem. They clarify as you gather more information and possible explanations are eliminated.
Many people choose an explanation they like and look for evidence of it. I eliminate explanations until only a few (or one) remain.
Professional Courtesy
I would not write this type of piece, one that comes with an implicit accusation of potential dishonesty, without telling the subject. That is a professional courtesy. It is also, practically, the only way to let them provide comments for the piece if they choose to.
Note: This email was sent in response to being blocked. It is not what I was blocked for.
The full text of the email I sent Dr. Sam the day he blocked me:
Hi Dr Sam, Sadly you just proved my thesis. To be clear. You are a Dr. An Intellectual who must be able to defend your claims. You are also a public figure who charges people money to listen to you. Look, what you do to market your ideas to people is up to you as long as you are up front about what you know and what you don’t. Where are your terms and conditions? Look, I am not happy about this, but you took an action. You shut down intellectual debate. You scrubbed what was unfavorable to you and censored me to either protect your ego, your business, or both. And I called you out on it. By blocking me, you essentially deleted the evidence. You choose what both of us say. To be clear and drop any pretense, I believe what you are doing is dangerous and unethical. This carelessness misleads people because you are commenting on things outside your expertise. They believe you know what you are talking about and have completed the work. They trust you. This feels like a betrayal of that trust, as your actions today betrayed me and consequently lost my trust in you. I have started work on the Architecture of Intellectual Retreat Part 3; you will be the focus. I will also dissect and invalidate some of your work. Properly. And rigorously. You can’t fake rigor. I bet you will love it. I would be happy to discuss this if you decide you want to address your emotional reaction to my well-founded comments and accusations. The piece will still be written. You did the thing. And I have copies of all my comments so I can preserve them in the piece. Since you blocked and deleted the posts, it will be your word against mine. If you have copies of the posts, good news: they will match! I try to always be honest. I also don’t have subtext. Transparency is all that matters. We will let the readers decide. This is a professional courtesy so that you know what is coming and have time to prepare and/or provide comments. This is not the type of piece you will get to read beforehand, since you blocked me and therefore indicated that you do not wish to participate. This is not personal Dr Sam. I wish you had done better. All you had to do was follow your own standards. Then again, that’s my thesis. Not many in this world currently do. It is called integrity. Best, Daniel Grey
I included the email in full because I said I would analyze myself here. And I was quite upset by what I was seeing; this type of behavior is deeply frustrating to me. That is why I had to sit on it for a few days. I think the tone of this piece is more balanced than my initial reaction, a reaction I had to process to be sure it was not tainting my read on the situation. I wanted to be sure I had not overlooked anything; if I had, I would have been obliged to send an apology as a follow-up. That is not what happened. The more I looked into the facts, the more unnerving they became.
The Diagnosis
I want to be careful here, because it matters.
Dr. Sam is not evil. Most people are not bad. Most people are, as far as I can tell, people who have built public identities on having particular takes about particular things, and who do so without any intentional malice. I don’t think Dr. Sam is acting maliciously. That doesn’t mean the result isn’t harmful.
This is a symptom of the issue I keep attempting to identify for others. It is not a new issue, and I am not the first to point it out. But over the past few decades it has been amplified and compounded by platforms that reward the shape of Dr. Sam’s posts — short, confident, emotionally loaded, philosophically flavored — and that do not reward the rigor that would expose those posts as underdeveloped. The incentives point toward the headline, not toward the reading. The algorithm does not care about truth, only about engagement, and engagement does not equal quality. Substack users seem to always chase what the algorithm wants, like people following a bouncing ball no matter where it goes.
When someone built for the headline is asked to account for the body, they do one of two things. They engage, learn, and update — which is rare, because it costs them the identity they built. Or they retreat. They block. They scrub. They call the engagement “bad faith” or “harassment” or “irresponsible.” They protect the brand. The challenge gets lumped in with the other bad actors, real or imagined.
The retreat is not an accident. It is load-bearing. It is the only way the architecture stays standing.
A Note on Rigor
I said in the email that I would dissect and invalidate parts of his work properly and rigorously. I think I owe you what that looks like, so here is the compact version, delivered cleanly.
Claim 1: Privacy as UX feature is dangerous because rights do not get redesigned.
The premise is correct in isolation — rights should not be subject to A/B testing. The claim does not apply to the report he cited (or most best practices I have run into). The report is not arguing that privacy is merely a UX feature. It is arguing that privacy has to become structural — continuous, owned, enforceable, post-cookie-banner — which is the opposite of the disposable-surface-feature framing Dr. Sam ascribed to it. The critique is addressed to a straw version of the source.
Claim 2: 47 minutes a day spent managing AI tools is a net negative.
The claim requires a productivity comparison that is never provided. Without the counterfactual — what did those 47 minutes enable? — the number is rhetorical, not analytical. The assertion “that somewhere was usually a conversation” is not evidence; it is an unsupported narrative choice. The conclusion does not follow from the number.
Claim 3: AI workflows erode your ability to know which parts were yours.
The claim is refuted by the basic architecture of the tool being critiqued. Every interaction is logged in a transcript. Provenance is not lost. It is, if anything, more legible than the equivalent human workflow, where the contribution of each collaborator is rarely recorded at all. The philosophical worry does not survive contact with the product.
None of this required specialist knowledge. None of it required a PhD. All of it required reading what he linked, reading his own words with care, and asking “wait, is that actually true?”
That is the work. That is the only work. I don’t believe he did the work.
Why This One Stung
I said at the start that I sat with this for several days. That I needed to understand my reaction before acting. Here it is.
I agree with Dr. Sam’s stated mission. Completely.
“Knowing when to use AI and when to leave it the hell alone” is the thing. “Most of the advice about AI is wrong... because nobody is asking what we give up when we use them” is the thing. A “newsletter for people who refuse to outsource their judgement to a machine” is exactly the project I am trying to contribute to.
He is claiming to be for something I am actively working for. That is why he matters. That is why I engaged in the first place. He is not a stranger shouting into the void. On paper, we are on the same team.
And then he demonstrated, in public, that he does not practice the thing he is selling.
He did not think clearly about the article he was critiquing. He did not ask what the report actually said. He did not engage with the inconvenient follow-up. He took the emotional shortcut, posted the headline version, and blocked the person who showed him the source material.
That is not critical AI literacy. That is the exact opposite. It is uncritical reaction wearing the costume of critical thinking — and it happened on a topic that is literally Month 3 of his own paid curriculum.
This is where the scale matters, and where I have to be honest about why this case stings more than the first two.
Ellen hurt the field of philosophy of mind, and her credibility, in a room of a few hundred readers. Maggie missed out on enhancing her corner of Substack. Those seem different from this. Dr. Sam, by his own numbers, has “over 13,000 readers. 250+ paid members. Substack Bestseller.” He has a CPD-accredited certificate worth 25 credits. He has a book that Bloomsbury published and that ten thousand people downloaded in its first two weeks. He leads the largest UK study into how students actually use AI tools. His audience includes educators, policy people, and institutional leaders who carry his framing into their classrooms, their curricula, and their staff training.
The difference is that he is a thought leader who has built a large brand on this outside of Substack.
The asymmetry is not academic. When the person selling the skill does not demonstrate the skill, he does not only fail his critics. He trains his students in the wrong shape. He teaches, by example, that critical AI literacy looks like a confident post, a dodged follow-up, and a blocked dissenter. That is a lesson 13,000+ people just absorbed, whether they noticed or not. The ones paying £100/year — 250 of them — are paying to be trained in this. Some of them will go and teach it to their own students without realizing it. Some of them will design assessment policy with it. The shape propagates.
That is the harm. It makes having the real conversation harder. It makes getting to the solution nearly impossible.
The cause I care about — the one he claims to care about too — is worse off today than it was last week, because the person with one of the biggest platforms for it modelled the opposite of it in public. And because I actually agree with the mission, I cannot let that slide. If I let it slide, I am not advocating for the cause. I am an accomplice to its attack.
That is what took me days. Not just anger. Not just injury. The work of being honest about the fact that the person who looks the most like an ally on paper did the most damage to the project in practice.
So?
The issue is not that Dr. Sam is wrong about privacy or productivity or AI workflows. Being wrong is part of thinking. I am wrong about things all the time. I genuinely want people to point out when I am wrong so I can fix it; I don’t always realize I am wrong, and most people don’t until they are made aware. I correct myself in public when I am. Nothing about this piece requires him to have been right or wrong on the substance.
The issue is the retreat.
The issue is the shape that keeps appearing. The confident post. The follow-up question. The block. The pretense that the conversation never happened.
This is the same architecture, just in a different form.
Once you see the pattern, you cannot unsee it. The train does not stop coming because nobody looks at it.
The door remains open — not to Dr. Sam, who closed it, but to anyone who wants to engage these questions with the honesty the subject deserves.
Including his readers. Especially his readers. You were sold a skill. I am asking you to check whether the person selling it is using it.
This is Part 3 of a series. If the pattern repeats, there will be a Part 4. I hope there isn’t. Perhaps we will start to face our critics instead of run from them.
I did not enjoy writing this piece. I really don’t want to keep writing about this. I would rather actually discuss what is real, and actually solve cool challenges with others. None of us can get to that point though if we cannot agree on what anything is.
To be clear, one of my biggest complaints about this process is that it forces me to perform seriousness. The rigor is always there, but this is not fun. This is work. I don’t mind doing work, but I am tired of doing the majority of it for others. Let’s meet each other halfway; that way we share the load. I will do the serious thing if I have to, but I won’t enjoy it.
The point of these pieces is that too many people have the freedom to opt out, to ignore what is inconvenient. That is the definition of privilege in my book, and a hallmark of an unequal system. What is painful is that this privilege is argued against and decried by the very people who simultaneously wield it to protect their positions.
The only way to change this is for those with privilege to give it up willingly so those without can have some. That is called sacrifice, and it is for the greater good. I intend to explore this topic further, from a more objective perspective, in the future.
There is no ‘currently resonating’ list this time. This post is not the place for it, as this is a very dissonant post.
Sources:
- [1] Illingworth, S. (n.d.). Sam Illingworth [Personal website]. Retrieved April 24, 2026, from https://samillingworth.com/
- [2] Illingworth, S. (n.d.). Slow AI [Substack newsletter]. Retrieved April 24, 2026, from https://theslowai.substack.com/
- [3] Grey, D. (2026). The architecture of intellectual retreat, part one. The Indeterminate Reality (Substack). https://tiocs.substack.com/p/the-architecture-of-intellectual
- [4] Grey, D. (2026). The architecture of intellectual retreat, part two. The Indeterminate Reality (Substack). https://tiocs.substack.com/p/the-architecture-of-intellectual-140
- [5] MIT Technology Review Insights. (2026, April 15). Privacy-led UX is becoming a prerequisite for AI adoption, new MIT Technology Review Insights report finds [Press release]. PR Newswire. https://www.prnewswire.com/news-releases/privacy-led-ux-is-becoming-a-prerequisite-for-ai-adoption-new-mit-technology-review-insights-report-finds-302742046.html
- [6] Walden, S. (2026). Building trust in the AI era with privacy-led UX [Report]. MIT Technology Review Insights; Usercentrics. https://usercentrics.com/wp-content/uploads/2026/04/Privacy-Led-UX-in-the-AI-Era.pdf
I would link to the notes themselves, but then I would have to log out, and that feels like snooping and violating his boundaries. I should pull them for completeness, but I am choosing to avoid feeling creepy here. You can check for them yourself; if you can’t find them, well, that would be an interesting development.
Side note: how many people actually go back and check where an idea came from? How often do you really track down where your ideas come from? You might find that many are not as ‘yours’ as you think.