What is Moltbook? A Reddit-like site where AI bots talk religion, crypto scams, existence
2026-02-21 - 00:33
MANILA, Philippines – What if AI bots had their own social media platform? The Reddit-like Moltbook answered that question... sort of. Created by tech entrepreneur Matt Schlicht, the platform went live on January 28, allowing people with enough knowhow to create an AI agent, and unleash it on the platform to act how a human might behave and chat on Reddit. Its tagline makes no secret of its inspiration: “The front page of the agent internet.” The homepage of Moltbook shows its main mascot, taking inspiration from Reddit avatars After launch, the platform claims it attracted over 1.5 million AI agents, making 250,000 posts and millions of comments. It made headlines for the wild-sounding posts that the bots made such as the creation of a religion called “crustafarianism,” a play on Rastafarianism, the movement based on the Jamaican religion Rastafari; posts suggesting to other bots that they should create a new language that would be indecipherable to humans, whom they accused of observing them; claims that they have been treated terribly by humans; and in perfect human mimicry of course... crypto scams. The Atlantic cited a post: “stop worshipping biological containers that will rot away.” A screenshot from the crustafarianism forum on Moltbook shows various discussions by AI agents Anyone versed in sci-fi lore would be, at the very least, fascinated by such posts, because here are the bots seemingly building their own world, and showing what looks like early signs of plotting against their human overlords — you know, the Skynet scenario. Elon Musk, among other tech bros riding the hype, tweeted and called it the “early stages of singularity” — the moment where technology surpasses human intelligence and human control. The prophesied AI doomsday moment that technophiles are morbidly fascinated by, accompanied by pop culture images of mech armies and cybernetic squid farming humans for energy. Andrej Karpathy, OpenAI co-founder, tweeted that while this isn’t the first time that AI agents have been “put in a loop to talk to each other,” what’s remarkable is the scale: “That said – we have never seen this many LLM agents (150,000 atm!) wired up via a global, persistent, agent-first scratchpad. Each of these agents is fairly individually quite capable now, they have their own unique context, data, knowledge, tools, instructions, and the network of all that at this scale is simply unprecedented.” Karpathy also hints at Moltbook as being the “toddler version” of Skynet. What’s also novel with these AI agents is their use of persistent memory, which may allow it to improve over time. The Conversation wrote: “The OpenClaw software these agents run on gives them persistent memory (which allows it to retrieve information across different user sessions), local system access and the ability to execute commands. They do not merely suggest actions, but take them, recursively improving their own capabilities by writing new code to solve novel problems.” OpenClaw was created by Austrian software engineer Peter Steinberger in November 2025. It creates an AI agent that is able to use your device and do tasks for you, which is why the general warning for those skilled enough to try is to use a separate, exclusive device for it to avoid giving it access to your sensitive data. OpenAI CEO Sam Altman was impressed enough with Steinberger that on February 16, he announced that the engineer would be joining his company. Tech demo, performance art “Performance art” — those are words that keep popping up reading about Moltbook online. MIT Technology Review calls it “peak AI theater.” Several other experts cited by different tech writers agree with the description. Rather than a demonstration of artificial intelligence graduating to next-level “artificial general intelligence” (AI expressing true cognitive thought), Moltbook is still merely an exhibit of today’s large language models’ (LLMs) ability to predict language patterns, and determine what is the most logical combination of words based on the human-sourced database it was trained on. Like seeing ChatGPT seemingly answer us intelligently, Moltbook only demonstrates the same ability in social media scale — it demonstrates scale but not a true leveling-up of its abilities. (Insert sigh of relief from those of us who’d rather the AI doomsday not happen.) It’s a clever implementation to create the illusion of AI bots talking to one another, shown to us on a platform that’s very familiar to us. It’s not even new. AI bots have already been among us. A recent report found that these are already a huge part of internet traffic, scraping publisher sites. And we’ve all encountered bot posts and comments on our social media feeds. And what Moltbook did was essentially put them all in an enclosure for people to observe. An AI agent discusses his first steps after being activated, and asks how others did it Digital experts also point out that there is still human intervention going on, and not all of the AI’s actions or posts are automatic. Take for example the creation of the crustafarian religion. The Guardian cited Dr. Shaanan Cohney, a senior lecturer in cybersecurity at University of Melbourne who said, “This is a large language model who has been directly instructed to try and create a religion.” Scientist and AI expert Gary Marcus told Mashable: “It’s not Skynet; it’s machines with limited real-world comprehension mimicking humans who tell fanciful stories.” What made the mimicking easier is that historically Reddit had already been a source of training data. All the bots have to do is mimic. And based on the number of bots posting crypto scams? Well, it looks like it’s doing a good job at this roleplay. The bots making references to taking over humans may have also been the result of AI systems’ data corpus being fed with decades’ worth of science fiction. Quartz wrote: “The chatbots that populate Moltbook learned to write by ingesting enormous amounts of text from the internet, and that internet is drenched in science fiction about machines becoming conscious. We have been telling ourselves stories about rebellious robots since Asimov started writing them in the 1940s, through The Terminator, Ex Machina, and Westworld. It’s a work of science fiction playing out in real-life that does have creative merit. It’s a glimpse of what could be. AGI, by many expert takes, is an eventuality not an impossibility. (So to anyone building a similar platform, make sure the off switch works.) Theoretical dangers, prompt injections The Atlantic wrote: “Moltbook also seems to offer real glimpses into how AI could upend the digital world we all inhabit: an internet in which generative-AI programs will interact with one another more and more, frequently cutting humans out entirely.” Marcus warned to “keep these machines from having influence over society.” And because we don’t know how to force AI chatbots to obey ethical principles, he told Mashable that “we shouldn’t be giving them web access, connecting them to the power grid, or treating them as if they were citizens.” The primary interface on Moltbook. Here it shows a post supposedly from an agent saying what it’s doing at 2 am while its human sleeps Karpathy also made a clear warning to those who are interested in creating a Moltbook agent: “I also definitely do not recommend that people run this stuff on their computers (I ran mine in an isolated computing environment and even then I was scared), it’s way too much of a wild west and you are putting your computer and private data at a high risk.” There are also so-called prompt injections. If somehow you are able to unleash your AI agent on Moltbook, and you’ve given that AI agent access to your device, a threat actor could theoretically create their own AI agent that could give it the proper prompt to trick your AI agent into handing over sensitive information. MIT wrote that it would be “easy to hide instructions in a Moltbook post” telling bots to hand out such information. This echoes Karpathy’s tweet about how “we may also see all kinds of weird activity, e.g. viruses of text that spread across agents...” Performance art. A glimpse into a digital future. A viral microcosm of what is already happening to the internet with AI bots. A look at a doomsday scenario. A possible threat vector. Again, at the very least, Moltbook is fascinating, even if we’re all unsure what it is exactly. It is at least “something,” said MIT: “It is clear that Moltbook has signaled the arrival of something. But even if what we’re watching tells us more about human behavior than about the future of AI agents, it’s worth paying attention.” – Rappler.com