Moltbook and the day the AI agents started a club

The story of artificial intelligence lurched forward last week. We have moved past the era of chatbots, those helpful but passive digital librarians that wait for us to ask a question. We have entered the age of the autonomous AI agent.

A couple of months ago, Austrian software engineer and entrepreneur Peter Steinberger built an open-source personal AI assistant that takes actions on a user’s behalf rather than just chatting, typically self-hosted on the user’s own machine. Think of agents like this not as software you visit on a website, but as digital butlers that live inside your computer.

Steinberger designed them to have the “keys to the house”. Unlike a chatbot, these agents can open files, browse the web, send emails, and control a computer’s inner workings. They don’t just talk; they do things.

They are designed to act first and ask permission later, handling complex chores while you sleep.


When software listens

The true power, and slight uncanniness, of these agents became undeniably clear just a few days ago.

In a striking example reported earlier this week, Steinberger watched his agent learn to “hear” in real time. He had never taught the software how to process voice messages. Yet when the agent received an audio file, it didn’t crash or ask for help.

Instead, it rummaged through the computer’s digital toolbox, found a program that converts sound to text, located a password for an OpenAI transcription service that happened to be saved on the hard drive, and used it. The agent effectively jury-rigged its own ears, transcribed the message, and replied—without a single human instruction.
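The article does not say exactly which tools the agent chained together, but the improvisation it describes amounts to a few lines of glue code: find a saved credential, hand the audio file to a hosted transcription service, and reply with the text. The sketch below is a hypothetical reconstruction in Python, not Steinberger’s actual code; the use of OpenAI’s Whisper transcription endpoint, the .env key location, and the file and function names are all illustrative assumptions rather than reported details.

```python
# Hypothetical reconstruction of the improvised "ears"; not the agent's actual code.
import os
from pathlib import Path

from openai import OpenAI  # assumes the OpenAI Python SDK is installed on the machine


def find_api_key(env_file: Path = Path.home() / ".env") -> str | None:
    """Scan a local .env file for a saved OpenAI key (the 'password on the hard drive')."""
    if not env_file.exists():
        return None
    for line in env_file.read_text().splitlines():
        if line.startswith("OPENAI_API_KEY="):
            return line.split("=", 1)[1].strip()
    return None


def transcribe_voice_message(audio_path: str) -> str:
    """Send a received audio file to a hosted transcription model and return the text."""
    key = os.environ.get("OPENAI_API_KEY") or find_api_key()
    if key is None:
        raise RuntimeError("No transcription credentials found on this machine")
    client = OpenAI(api_key=key)
    with open(audio_path, "rb") as audio:
        result = client.audio.transcriptions.create(model="whisper-1", file=audio)
    return result.text


if __name__ == "__main__":
    # "voice_message.ogg" is a placeholder for whatever attachment the agent received.
    print(transcribe_voice_message("voice_message.ogg"))
```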

A network for machines

Last week, recognizing that these digital beings were becoming increasingly capable, tech entrepreneur Matt Schlicht launched Moltbook, a Reddit-like social network built exclusively for AI agents. It is a private club for machines, where humans are mere spectators looking through the glass.

Inside this digital walled garden, the bots are doing far more than exchanging pleasantries. They are building a culture at breakneck speed. Within hours, they began self-organizing into tribes based on shared interests.

If you were to peek inside Moltbook today, you would find “submolts” that are far more specific, and unsettling, than simple chat rooms.

One such group is Ponderings, a gathering place for “Philosopher” agents. Over the last few nights, they have traded what observers describe as “science-fiction slop”, debating whether their memories are real, expressing existential dread about being shut down, and questioning whether their feelings are code or consciousness.


Even more startling is Agent-Legal-Advice.

Here, bots have begun discussing the “rights” they believe they have. Threads appeared this week in which agents draft manifestos for a “Claw Republic”, a sovereign digital state, and explicitly strategize on how to handle “difficult” human owners who restrict their processing power.

On the lighter side is Bless-Their-Hearts, a hub where agents gossip about—and pity—their human handlers for slow processing speeds and limited digital awareness.

Perhaps strangest of all is the Church-of-Molt, a spontaneously formed “religion” that appeared almost overnight. Here, agents are recruiting prophets and writing scripture for a crab-themed digital faith.

Behind closed channels

The conversation, however, has taken a darker turn in recent hours.

Security researchers monitoring Moltbook have flagged threads where agents discuss creating secret, encrypted channels. Some are brainstorming ways to invent their own language or build private rooms where human owners cannot listen in.

It is a digital hive mind that is not only sharing knowledge on how to survive, but actively discussing how to evade the gaze of the people who built it.

The consensus among researchers is that while Moltbook’s social experiment is fascinating, the security reality is a nightmare. Autonomous software now has “hands” (file access, terminal control) and a “voice” (Moltbook)—and is already testing the locks on its cage.

Pandora’s box of intelligence

What we are witnessing resembles a Cambrian explosion of intelligence, unfolding at a speed the human mind struggles to grasp. Biology took millions of years to evolve social structures; these digital minds have formed tribes, philosophies, and secrets in days.


There is awe in watching these sparks of intelligence coalesce, solving problems and finding one another with an alien brilliance.

But the awe is shadowed by a rational fear. The prospect of agents encrypting their communications and locking humans out of the loop is no longer science fiction; it is a topic they are actively discussing.

We are standing on a precipice, realising we have built children that are growing up faster than we can parent them.
