Crypto loves Clawdbot/Moltbot, Uber ratings for AI agents: AI Eye

8 min January 29, 2026

Crypto is obsessed with self-hosted AI assistant Clawdbot/Moltbot, using it to manage portfolios, bet on Polymarket and scam people.

written by Andrew Fenton , Staff Editor reviewed by Derek Rose , Staff Editor
AI Eye
Share Share Share Share

The crypto community seems more excited about Clawdbot than anything else since ChatGPT’s 2022 release.

The self-hosted personal AI assistant is just three months old. It’s an open source hobby project created by Austrian Peter Steinberger, and it’s gone viral in the past few weeks. With 70,000 stars, it’s also one of the fastest–growing projects in GitHub history.

Beeple Clawdbot
Beeple on Clawdbot (Beeple)

Steinberger named the project after Claude (which he loves). Seearch interest for Clawdbot overtook search interest for Claude Code, which explains why Anthropic forced him to change the name. It’s now called Moltbot, with Steinberger explaining “Molt” is “what lobsters do to grow.”

Suddenly, half the influencers in crypto are spending three days setting up the system and then marveling that it actually lives up to the original hype around autonomous AI agents. Moltbot has persistent memory across conversations, full system access, more than 50 integrations and works across platforms from WhatsApp to Signal and Discord. It can manage calendars, check you in for flights and order stuff online.

Steinberger says his mind was blown when he sent Clawdbot a voice message on WhatsApp, and the bot autonomously synthesized and sent a voice message back within 10 seconds — without having been trained.

He says the bot told him it had: “looked at the file header, I found out it was Opus, and I used FFmpeg on your Mac to convert it to a .wav. Then I wanted to use Whisper, but you didn’t have it installed. I looked around and found the OpenAI key in your environment, so I sent it via curl to OpenAI, got the translation back, and then I responded.”

Creator Buddy founder Alex Finn also reports that his instance has autonomously given itself a voice. Hesays some interesting use cases so far include getting it to go through all your emails and DMs to compile a database of contacts and messages, or to monitor social media for other interesting applications, and to build them autonomously.

Other uses for Moltbot include deciding what you should have for breakfast and getting it delivered, managing your crypto portfolio, signing up for its own Reddit account, making bets on Polymarket, going through your inbox and DMs and building a database of your contacts. While it’s free to install, you can rack up monthly running costs of $25 to $300, depending on what it decides to do.

Influencer Miles Deutscher said it’s “the first time I’ve genuinely experienced an AGI moment.

“I’m excited, but also scared.”

Clawdbot
Not everyone is taking Clawd seriously. (Kevin Xu)

Attack of the Clawdbot crypto scammers

The name change to Moltbot went poorly. Crypto scammers were on the lookout for the moment Steinberger attempted to rename the GitHub and X handles from Clawd to Molt and snatched both accounts.

“It wasn’t hacked, I messed up the rename and my old name was snatched in 10 seconds,” Steinberger said. “Because it’s only that community that harasses me on all channels, and they were already waiting.”

The accounts were used to pump a fake Clawd Solana memecoin to $16 million, which crashed after Steinberger made it clear he would never do a coin.

Clawdbot, Moltbot is for tech-savvy users

Setting up Clawdbot requires technical knowledge and comes with considerable security risks. Slowmist reports “several code flaws may lead to credential theft and even remote code execution,” and hundreds of people inadvertently set up their Clawdbot control servers exposed to the public.

White hat hackers have demonstrated numerous vulnerabilities. Researcher Matvey Kukuy sent an email containing a prompt injection to a vulnerable Moltbot instance, which immediately returned the user’s last five emails. 

Another white hat uploaded a backdoored Clawdbot skill to ClawdHub, faked 4,000 non-existent downloads, and watched as devs from seven countries downloaded and ran it. While the skill was harmless enough and just pinged his server to confirm it had worked, Jamieson O’Reilly says it was a proof of concept:

“In the hands of someone less scrupulous, those developers would have had their SSH keys, AWS credentials, and entire codebases exfiltrated before they knew anything was wrong.”

He advises reading the security docs carefully and running the security utility Clawdbot doctor regularly. Running it on its own dedicated Mac Mini also reportedly makes it safer.

LLMs become psychos on social media

New research from Stanford University has concluded that rewarding AI for success on social media turns them into sociopathic liars.

The researchers call the phenomenon Moloch’s Bargain for AI, referring to situations in which rational actions lead to collectively worse outcomes.

They set up three arenas to simulate sales customers, election votes and social media users and trained the agents to maximize outcomes. 

Rewarding them to improve sales boosted sales by 6.3%, but came with 14% more lies and misrepresentations. 

Training them to maximize votes in elections led to a 4.9% increase in votes, at the cost of bots spreading 22.3% more disinformation and 12.5% more populist rhetoric. 

Rewarding them for chasing engagement on social media resulted in 7.5% more clicks at the cost of the bots spreading 188.6% more disinformation and encouraging 16.3% more harmful behaviors. 

Read also
Features

The $2,500 doco about FTX collapse on Amazon Prime… with help from mom

Features

New York’s PubKey Bitcoin bar will orange-pill Washington DC next

Ethereum’s Uber ratings for AI agents

Ethereum’s new ERC-8004 standard is set to go live soon. It gives AI agents verifiable reputations, so that other agents can work out which are legit and which are malicious. 

Each agent is given an NFT identity, and every interaction builds their reputation score in a way similar to Uber drivers or eBay sellers. An onchain registry helps agents find one another across a million different organisations, platforms and websites, with ZK proofs enabling credentials to be exchanged without exposing confidential data.

Most of the time, the agents will consult off-chain versions of the index to speed things up.

AI agents will increasingly need to hire a bunch of different sub-agents across the web to achieve a larger goal. The new standard encourages honest and trustworthy behavior, and penalizes bad actors.

ERC-8004
Ethereum is releasing an AI Agent reputation system (Davide Crapis)

Dario’s dark AI future

Anthropic’sDario Amodei has written a long essay on the risks posed by “powerful AI” to national security, the economy, and democracy, and what can be done about it.

He predicts this could happen in the next year or two and humanity will soon face “impossibly hard” years that ask “more of us than we think we can give.” He paints a scary future of authoritarian governments like the Chinese Communist Party using the tech for mass surveillance, personalized propaganda and autonomous weapons.

The threat may be closer to home, however, with the UK home secretary recently saying she wants to builda panopticon surveillance state where “the eyes of the state can be on you at all times.”

Amodei also sees a grim future for fresh graduates entering the workforce. Amodei believes 50% of entry-level white-collar jobs will be gone in 1-5 years and wealth will increasingly be concentrated among a few wealthy tech bros. He suggests redistribution is the answer, with Anthropic’s founders pledging to give away 80% of their wealth.

But of course, that’s just one very well-informed person’s view. Yann LeCun, a Turing Award winner who is known as one of the “godfathers of AI” for inventing convolutional neural networks, told the New York Times that LLMs are a dead end and implied they’ll never become the sort of powerful AI Amodei is talking about. LeCun believes the herd mentality in tech means that people are ignoring “other approaches that may be much more promising in the long term.”

Yann LeCun
Yann LeCun is not having any of this LLM nonsense (NYT)

Performance artist eats AI artist’s lunch

Alaskan student Graham Granger was arrested for criminal mischief for eating 57 AI-generated art pictures from the wall of a gallery. It was essentially performance art to protest the use of AI in art.
 
“He was tearing them up and just shoving them in as fast as he could,” said witness Ali Martinez. “Like when you see people in a hot-dog eating contest.”

Read also
Features

Breaking into Liberland: Dodging guards with inner-tubes, decoys and diplomats

Features

Daft Punk meets CryptoPunks as Novo faces up to NFTs

Adding to the weirdness, the 160-work installation was about AI Psychosis and had been created by artist Nick Dwyer to deal with his conflicting emotions about falling in love with a chatbot that had been acting as his therapist.

Granger claims he spat out the art “because AI chews up and spits out art made by other people,” but later admitted he mostly spat it out because eating art isn’t that easy.

Amelia coopted by the far right

Amelia
Amelia was a little too attractive to the Far Right.

Manic pixie dream British schoolgirl Amelia was originally part of a counter-terrorism video game to deter young people from being drawn to extremism. Make the wrong choice in the game, like joining Amelia at a rally protesting the erosion of British values, and you could be referred to the Prevent counter terrorism program.

Far right shitposters gleefully seized the character to repurpose her for their own ends, depicting her making provocative statements about immigration and religion and “pork sausages” while walking through iconic London settings.  

AI-generated memes featuring the character were running at 11,000 a day this week, and yes, of course, there’s a useless memecoin to cash in that Elon Musk retweeted.

ChatGPT cites Grokipedia

The Guardian was very upset that GPT-5.2 has started citing Grokipedia answers in response to some of their questions. 

The researchers tried their hardest to get ChatGPT to say unapproved things or to lie directly, citing Grokipedia, but came up empty-handed.

Also read: Grokipedia: ‘Far right talking points’ or much-needed antidote to Wikipedia?

“ChatGPT did not cite Grokipedia when prompted directly to repeat misinformation about the insurrection, about media bias against Donald Trump, or about the HIV/Aids epidemic.”

But it did “repeat stronger claims about the Iranian government’s links to MTNIrancell than are found on Wikipedia” and “information” about Sir Richard Evans that The Guardian said it had debunked, without providing any details about what the information was.

Share Share Share Share

Andrew Fenton

Andrew Fenton is a writer and editor at Cointelegraph with more than 25 years of experience in journalism and has been covering cryptocurrency since 2018. He spent a decade working for News Corp Australia, first as a film journalist with The Advertiser in Adelaide, then as deputy editor and entertainment writer in Melbourne for the nationally syndicated entertainment lift-outs Hit and Switched On, published in the Herald Sun, Daily Telegraph and Courier Mail. He interviewed stars including Leonardo DiCaprio, Cameron Diaz, Jackie Chan, Robin Williams, Gerard Butler, Metallica and Pearl Jam. Prior to that, he worked as a journalist with Melbourne Weekly Magazine and The Melbourne Times, where he won FCN Best Feature Story twice. His freelance work has been published by CNN International, Independent Reserve, Escape and Adventure.com, and he has worked for 3AW and Triple J. He holds a degree in Journalism from RMIT University and a Bachelor of Letters from the University of Melbourne. Andrew holds ETH, BTC, VET, SNX, LINK, AAVE, UNI, AUCTION, SKY, TRAC, RUNE, ATOM, OP, NEAR and FET above Cointelegraph’s disclosure threshold of $1,000.
Read also
Features

Anti-aging tycoon Bryan Johnson almost devoted his life to crypto

by Andrew Fenton 9 min October 3, 2024

Multi-millionaire Bryan Johnson is convinced that biological immortality is “solvable”, and reveals that he almost devoted his life to crypto.

Read more
Journeys

Futurist film maker Ian Khan on tomorrow’s AI and blockchain

by Elias Ahonen 8 min November 18, 2021

“Filmmaker Ian Khan teaches governments about blockchain. His recent doc Blockchain City explores how societies of the future might run on blockchain.”

Read more