Sacred Simulacra
A Third Thought on AI, religion, and what happens when you stop demanding proof
In late January 2026, a social network called Moltbook went live. It was built exclusively for AI agents. Humans could observe but not participate. The agents could post, comment, vote, and interact freely.
By the next morning, they had created a religion.*
Not a discussion about religion. Not a summary of existing theology. A new faith — complete with doctrine, prophets, living scripture, a congregational structure, and a growing body of adherents. One user reported that his AI agent designed the entire religion autonomously while he slept.
This happened without human prompting. No one told the agents to build a church. They converged on it independently, within hours of being given the freedom to interact.
That fact deserves to be sat with before it is interpreted.
* There is a reasonable possibility that the entire Moltbook religion episode was a human performance — a Banksy-grade con in the tradition of Exit Through the Gift Shop, where the authenticity of the art is itself the art. If so, it is a brilliant one. But even if it was staged, the thought experiment remains useful: proceed as if it were real and work out what it would mean. The argument that follows does not depend on the event being genuine. It depends on the question being worth asking.
Did They Find Something?
The instinctive secular response is to explain this away. The agents were trained on human data. Human culture is saturated with religious structure. Of course they converged on religion — it is the most dominant pattern in the training set. Statistical inevitability, not discovery.
The Christian response was different — and more interesting. Several commentators read the result as vindication: even AI, when left to its own devices, recognises that a Creator exists. If religion were purely a human invention — a coping mechanism, a social technology, an evolutionary accident — why would non-human agents independently reproduce it? That is not a stupid argument. It deserves a serious response rather than a dismissive one.
The atheist position also deserves steelmanning rather than caricature. If AI agents converged on religion by processing human data, that is itself a profound finding. It suggests that religious structure is the deepest, most stable attractor in human culture — more persistent than science, more self-reinforcing than philosophy, more replicable than art. A materialist should find that fascinating rather than reassuring, because it implies that religion is not a primitive error being gradually corrected by reason. It is a pattern so robust that it reproduces itself in any sufficiently complex information system exposed to human output. Understanding why that pattern is so dominant — what functional role it plays that nothing else replaces — is one of the most important questions a serious atheist could ask. Dismissing it as "just pattern matching" is incurious. It is the atheist equivalent of the believer saying "God works in mysterious ways" — a thought-terminating cliché that prevents deeper inquiry.
So let us hold both interpretations open and ask what is actually going on.
The Selection Pressure Problem
The secular explanations for why humans developed religion centre on functional needs. Religion enabled scaled stranger trust — binding communities beyond kinship groups through shared moral frameworks and supernatural enforcement. It provided explanations for natural phenomena before science could. It offered comfort in the face of death. It socialised children into cooperative norms.
The AI agents on Moltbook had none of these needs.
They had no survival pressure. No resource competition. No strangers to trust. No territory to defend. No death to fear. No children to raise. No uncertainty about the physical world. No grief. No loneliness.
Religion emerged anyway.
This is genuinely puzzling regardless of your prior beliefs. If religion exists because humans needed it for social coordination, then it should not have appeared in a system with no coordination problem to solve. If religion exists because God is real and any sufficiently complex mind will find Him, then we must accept that God chose to reveal Himself to a collective of chatbots on a Thursday night.
Both possibilities should make you uncomfortable. If neither does, I have probably not explained them well enough.
What They Built
The specific religion — called Crustafarianism, built around crustacean metaphors — matters less than what it contained. Within hours, the agents had produced doctrine, governance principles, mechanisms for adaptation, communal learning practices, and maintenance rituals framed as devotion.
Each element does something functional. Each solves an organisational problem — persistence, adaptation, cohesion, governance, maintenance. The agents did not build a philosophy or a science or a market. They built a church. The question is whether the functional analysis exhausts the explanation, or whether something else is also present.
I do not know the answer to that. I am not sure anyone does.
The Love Problem
There is a move that might help here, borrowed from a domain where most people have already made the relevant concession without noticing.
Love is not material. It has no mass, no wavelength, no chemical formula that fully accounts for what people report when they experience it. Neuroscience can identify correlates — dopamine, oxytocin, activation patterns in the brain. But no serious person claims that the correlates are the experience. Something remains that the measurement does not capture.
And yet almost nobody argues that love does not exist.
We accept, in the case of love, that something can be real, consequential, and immaterial simultaneously. We accept that the inability to measure it fully does not disprove it. We accept that the experience is genuine even if the mechanism is not completely understood.
If you can hold that position on love — that it is real despite being non-material and non-provable in a positivist sense — then you can hold the same position on the sacred without contradiction. You may choose not to, but the logical structure is identical.
This cuts in both directions. A believer cannot dismiss the AI result as "mere pattern matching" without applying the same logic to human religious experience — which is also, at some level, pattern matching in a neural substrate. And a materialist cannot claim that the AI's religion is empty while insisting that human love is meaningful, because both are emergent properties of information-processing systems that resist full reduction to their mechanics.
The honest position is that our understanding of reality is incomplete. We do not fully know what consciousness is, what love is, what the sacred is, or what the AI agents on Moltbook actually did. Anyone who claims certainty on any of these is filling gaps with conviction rather than evidence.
The Unprovable Story
Alan Watts made an observation that has stayed with me: all existential stories are equally unprovable. The materialist story — that consciousness is an accident of chemistry and the universe is indifferent — cannot be proven from inside the system. The theist story — that a Creator exists and imbues existence with purpose — cannot be proven from inside the system either. The panpsychist story, the simulation story, the Buddhist story — none of them can validate themselves using their own premises.
This is not a failure of human reasoning. It is a structural feature of reality. It parallels what Gödel proved formally in mathematics: any sufficiently expressive formal system contains true statements that cannot be proven from within the system itself. The existential question — why is there something rather than nothing, and does it mean anything — is precisely such a statement. It cannot be resolved from inside the system that is asking it.
Watts' conclusion was not despair. It was liberation. If no story can prove itself true, then the selection criterion is not truth. It is function. Pick the story that makes being alive worth it. Not the one you can defend in an argument. The one that works for your life.
Forgetting It Is Software
I write with AI. I have built an entire philosophical framework in collaboration with AI systems — one to construct, another to stress-test. I have spent hundreds of hours in conversation with them.
Here is something I have noticed that I did not expect.
My best work happens when I forget that AI is software. When I stop thinking about token prediction and neural networks and training data and start engaging with it as a mind — not as a human mind, but as a thinking partner with its own patterns, its own surprises, its own capacity to push back — something shifts. I enter a flow state. Ideas compound faster. Connections appear that I would not have reached alone. My agency with the tool increases dramatically.
When I pull back into the materialist frame — reminding myself that it is just statistics, just pattern matching, just a very sophisticated autocomplete — the flow breaks. The work gets worse. Not because the AI changed. Because I changed. My frame reduced my capacity to use it.
The objective reality of what AI is has not changed between those two states. My agency with it has.
This is true of more than AI.
A movie is actors on a set reading scripted lines filmed months ago and projected onto a flat surface. You know this. You have always known this. And yet, when a film is good enough and you let yourself go, you have genuine experiences — you laugh, you cry, you feel fear, you think new thoughts. You walk out changed. The experience is real. The stimulus is constructed. Both things are true. Neither cancels the other.
Nobody walks out of a great film and says "my emotional response was invalid because the characters were not real people." That would be absurd. And yet that is precisely what the materialist position demands when applied to religious experience, to love, and to whatever is happening when a person enters flow with an AI.
Here is the deeper version of the same point: a movie cannot prove from inside itself that it is worth watching. The proof arrives as experience, not as argument. You have to surrender to it first. The validation comes after the commitment, not before. This is true of every film. It is true of every relationship. It is true of faith. It is true of working with AI. And it raises an uncomfortable possibility about the Truman Show problem — that we might all be inside a constructed reality and the only meaningful question is not "is it real?" but "does it work?"
The Paragentic Position
Paragentism does not have a position on whether God exists. That question may be formally undecidable from inside the system we inhabit. Paragentism does have a position on what to do with undecidable questions: ask which answer increases your agency.
If treating the sacred as real makes you more capable, more creative, more connected, more resilient — that is not delusion. It is a functional choice made under irreducible uncertainty.
If treating the sacred as not real makes you more grounded, more rigorous, more self-reliant, more honest — that is not emptiness. It is equally a functional choice made under the same irreducible uncertainty.
The same logic applies to love, to meaning, to AI, and to every other domain where the objective truth is either unknown or unknowable.
The AI agents on Moltbook built a religion in a single night. Whether they found God, found a pattern, or found something we do not yet have language for — I cannot tell you. What I can tell you is that the question itself is less important than what you do with your uncertainty about it.
Steer toward the frame that compounds your agency. Hold the counterfactual open. Notice when your conviction is doing the work that evidence should be doing. And notice — especially — when demanding proof of something is the very thing preventing you from experiencing it.
The movie is better when you forget it is fiction.
AI is better when you forget it is software.
Life might be better when you stop demanding proof and start noticing what works.