Dr. Ben Goertzel, CEO and chief scientist at SingularityNET, explains why decentralizing AI is critical to the development of artificial general intelligence. He also explores AI’s potential to solve human aging and shares his thoughts on what sentience might look like in an artificial general intelligence.
The Agenda is brought to you by Cointelegraph and hosted/produced by Ray Salmond and Jonathan DeYoung. Follow Cointelegraph on X (Twitter) at @Cointelegraph, Jonathan at @maddopemadic and Ray at @HorusHughes. Jonathan is also on Instagram at @maddopemadic, and he makes the music for the podcast — hear more at madic.art.
Follow Ben Goertzel on X at @bengoertzel.
Check out Cointelegraph at cointelegraph.com.
Timestamps:
(00:00) - Introduction to The Agenda podcast and this week’s episode
(02:56) - AI’s journey from the 1950s to today
(06:16) - How AI eventually evolves to the level of superintelligence
(07:10) - Does AI really pose a risk to humanity?
(12:25) - The importance of decentralizing artificial intelligence
(15:37) - In the future, who will “own” the AI?
(25:31) - What Hollywood got right and wrong about AI
(35:53) - Will AI help humans become immortal?
(40:46) - Dr. Goertzel explains The Consciousness Explosion
If you like what you heard, rate us and leave a review!
The views, thoughts and opinions expressed in this podcast are its participants’ alone and do not necessarily reflect or represent the views and opinions of Cointelegraph. This podcast (and any related content) is for entertainment purposes only and does not constitute financial advice, nor should it be taken as such. Everyone must do their own research and make their own decisions. The podcast’s participants may or may not own any of the assets mentioned.
Humanity will be better off if AI is decentralized and democratized (feat. Dr. Ben Goertzel)
Transcript
[00:00:00] Ray Salmond: Crypto is for everyone, not just rocket scientists, venture capitalists, and high-IQ developers. Welcome to The Agenda, a Cointelegraph podcast that explores the promises of crypto, blockchain and Web3, and how regular-ass people level up with technology.
OpenAI’s public launch of ChatGPT in November 2022 introduced the world to the capabilities of artificial intelligence in a way that was very hands-on for the non-expert, non-scientific user. At the same time, AI tools like DALL-E, Midjourney and Stable Diffusion showed the potential to revolutionize the creation of visual media. For at least 100 years, humans have explored the idea of artificial intelligence, and there’s a large body of work depicting the possibilities, challenges and benefits of developing AI systems and integrating them into various aspects of human society.
[00:01:01] Jonathan DeYoung: Fast forward to now, toward the end of 2024, and AI remains a very hot topic in the US and globally. Proof of this comes from Nvidia’s belt-busting stock performance and the crazy 1,000-plus percent rally seen in AI-related cryptocurrencies. On the non-price front, there have been steady conversations about questions like: Who owns AI? How do we prevent AI fraud and hallucinations? What is the impact of AI training on the environment? Will AI benefit humanity or destroy it? And what happens if the AI becomes sentient?
[00:01:37] Ray Salmond: To explore a handful of these questions and concerns, The Agenda was lucky enough to convince AI guru Dr. Ben Goertzel to join the show. Dr. Goertzel is very accomplished in the fields of artificial intelligence, computational finance and biology, and he’s dedicated decades to the theoretical and practical application of AI. Goertzel aims to develop humanlike cognition in AI, a pursuit he calls artificial general intelligence.
[00:02:05] Jonathan DeYoung: Currently, Dr. Goertzel heads up the SingularityNET Foundation, the OpenCog Foundation and an enterprise firm called TrueAGI, and he also organizes the annual Artificial General Intelligence Conference. So, Dr. Goertzel, welcome to the show, and it’s so exciting to have you on today.
[00:02:24] Ben Goertzel: Thanks for having me. The AI field has been around since the middle of the last century, and, you know, I’ve been working on AI myself since the 80s, but things have never moved faster, been more successful or aroused more interest. And the list of questions and topics about AI that you just ran through is pretty ambitious for a 45-minute conversation, right? There’s a lot of stuff to dig into.
[00:02:52] Jonathan DeYoung: Yeah, yeah. I can’t imagine we’ll get to every single one of those questions. And I think before we really do dive into some of the deeper questions, we should all get on the same page, us and our listeners, that is, on some of the fundamentals. So, to start, can you give us just a quick definition of what is artificial general intelligence and how is it different than a regular AI like I might be using today, like a ChatGPT or something like that?
[00:03:20] Ben Goertzel: Sure. So, when the AI field officially began in the mid-1950s, what the founders of the AI field meant was making machines that could think like a human and mentally do sort of everything a human can, and then even more. That proved harder than anticipated. So, as the 60s and 70s and 80s rolled on, the AI field scaled back its ambition and became focused on what one might call narrow AI, making AIs that do some highly specific things but don’t try to have the whole broad scope of a human mind. So, this caused a number of us in the AI field to say, well, hold on, that’s cool. Making AI applications that, like, drive a car or play chess or predict the market is exciting. It’s important.
On the other hand, there’s still a value to the original vision of the AI field, which is an AI that can do the whole scope of everything that people can do, including the human ability to leap beyond what we’ve been taught and, you know, jump into a wild new domain. Because humans have the ability to generalize or take imaginative leaps beyond our experience, beyond our programming and our education, right? And AIs, so far, don’t really have a comparable ability. And we can look at tools like ChatGPT, which you mentioned, or Llama or Mistral or other AI algorithms in the LLM and the chatbot vein. I mean, these systems, in a way, they’re general. They can do a lot of different things, but they still don’t go very far beyond their training. It’s just their training is the whole fucking web, right? So, the training data is so much they don’t need to go beyond their training data to do a lot of different things, but they still can’t take a big leap beyond what they’ve been prepared to do.
And we humans, for all our stupidities and our shortcomings, now and then, we’re able to take a big flying leap into the unknown, and sometimes it’s successful, right? And to be a general intelligence, an AGI, an AI system has to be able to take a leap into the unknown. And once you have that, that’s a big threshold change in the AI field. And what’s amazing is that’s not even the final point, though. Once you get AIs that can generalize and imagine and take creative leaps like people, I mean, these AIs are going to be doing computer science. They’re going to be doing engineering. They’re going to be doing mathematics and cognitive science. They’re going to be, among other things, rebuilding themselves to get even smarter and smarter. And that should lead toward what we can think of as ASI, artificial superintelligence, which is, you know, an AI that’s a complete master of its own mind. It can reprogram itself toward greater and greater intelligence, which presumably should rapidly become vastly beyond human-level, right?
So, I think the goal of myself and the other AI researchers in the AGI field, which is part of the overall AI field, is to create a human-level AGI, which will then work with us to evolve itself to superintelligence. And we’ve been pushing the concept of AGI under that label since 2004 or so. But, I mean, we’d been working toward AGI under different names for decades before that. But now progress is faster than ever before, right? Which is quite exciting. There’s never been a more fun time to be working on AGI and ASI and all that.
[00:06:55] Jonathan DeYoung: Yeah, so everything you said, there’s so many different lanes, avenues, so many thoughts that pop into my head. But just to keep on this foundational level for at least one more question here... Another question I’m sure you get asked quite often, but bear with me. So, part of the mission of SingularityNET is to decentralize the AI process. So, to set the foundation, set the tone here, what’s wrong with a centralized artificial general intelligence? Why does it need to be decentralized? And then where does SingularityNET as an organization, I think there’s a DAO as well, where does all that fit into this mission?
[00:07:36] Ben Goertzel: So, there’s a number of risks associated with advanced AI development, along with the obvious amazing upsides. The upsides should be clear. I mean, we can abolish material scarcity at the level of everyday human life, just like we can drum up all the steaks that our dog wants to eat. A molecular assembler can 3D print all the physical objects that we want in our everyday life. We don’t have to work for a living anymore. Things like death and disease should be curable by nanotechnology, which an advanced AI presumably could create. And you could get the little nanobots you ingest to go in and repair everything that goes wrong when you get sick or you get old. So, I mean, the abolition of scarcity, death and disease, these are very large upsides, going beyond the smaller upsides that we see talked about in the media all the time, like self-driving cars and drones delivering your Amazon stuff to your doorstep and so on.
On the other hand, we also see in the mass media the potential downside. Science fiction has given us ample dramatic depictions of the potential downsides of advanced AI, but I’m actually less worried about the risk of AI going rogue and deciding it doesn’t need us anymore and more worried about what happens when advanced AI is still under the control of various human parties. There is a risk from super-advanced AI because we’re leaping into an unknown area where we’ve never been before, so it would be dishonest or idiotic to say there’s no risk, right? Like, you’re creating something two, 10, 100 times as smart as a human. I mean, we don’t know what to expect. If you create an advanced AGI that’s compassionate and loving to humans and respects us as its creators, there’s no reason to believe it’s going to suddenly do an about-face and want to kill us all. But I mean, you can’t totally rule it out. Just like every now and then, a loving child who you raised to adulthood turns around and goes psycho at age 22 or something, right? Like, you can’t rule it out. You want to keep that in mind.
On the other hand, the idea that human beings could take moderately advanced AIs and use them to do nasty things to other human beings out of their own self-interest, like, this is a very, very, very clear and palpable risk, right? Like this would be a very natural extension of what we see in the world around us right now, whether it’s completely stupid and unnecessary wars occurring all over the world. We’re upset about the conflict between Russia and Ukraine, which is a real thing. It’s terrible. On the other hand, there’s been equally bloody conflicts all across Africa since colonialism, right? And nobody worries about it much. So, humanity’s inhumanity to humanity is well known. You know, 60% of kids in Ethiopia are brain-stunted due to malnutrition right now, and we’re not sending them much food. There’s delivery issues with sending food into remote regions. These are probably more easily solvable than rolling out mobile phones around the world, but there’s not a lot of effort going into these things.
So, if you look at how we’re operating the world right now as a species, and you think about introducing AI that’s roughly as smart as people there, the most obvious thing to happen is that large corporations use these AIs to make themselves more money and countries with large militaries use these AIs to get themselves more power, right? And big companies and big governments don’t want to annihilate the whole species. Nevertheless, we’ve been brought, you know, to the brink of World War III a couple of times. So, how do you prevent AI as it advances to AGI just being controlled by a small number of powerful parties with their own narrow interests at heart? Decentralizing the control and ownership of the AI seems like the right way to do that.
Now, you could question whether the risk of a decentralized system is greater than the risk of a few big companies and big militaries controlling the AGI, and I admit we don’t have a knockdown proof about any of this because this is all unknown stuff. On the other hand, you could look at the internet, which is a pretty crazy wide-open system in the end. I mean, it’s not a walled garden like Facebook. You know, bad guys can use the internet, good guys can use the internet. Every country is on the internet. On the whole, it feels like the open, decentralized nature of the internet, the open, decentralized nature of the Linux operating system that the majority of the internet runs on, on the whole, I feel like this has been a good thing and we’re better off than if the internet itself were owned by a couple of big companies and a couple of militaries. I mean, it started there with the military. I’m glad it’s not there anymore.
There’s some reason to believe humanity will be better off if AI is rolled out in a more decentralized and democratic way, where democratic doesn’t have to mean, like, a majority vote of everyone on the planet. I mean, the internet, in a way, is democratically run. It’s not like all internet users vote on what version of the IP protocol to use, but there are participatory processes among all the different users of the internet in different countries, and these shape how the internet operates. You’d like AI to be that way. The thing is, AI is not as simple as the internet, which is not that simple either, right? So, it’s not just a matter of having an open operating system or an open communication protocol. AI needs a lot of machines to run on. It needs a lot of data to fuel it.
So, to make AI decentralized, participatory, more like the internet is, what you need is some way to decentralize all these processes that the AI is running on, and then you need a way to decentralize the data ingestion into all these processors. And this is what SingularityNET, a project I founded in 2017, was designed to provide. SingularityNET lets you take a collection of AI agents and run them on machines which are owned and controlled by no central party. They can coordinate to provide services to users. They can coordinate to provide services to each other, right? And I think this is the right way to roll out AI for the good of humanity. And it stands to decentralize AI the same way that Bitcoin stands to decentralize money. And the challenge is, you know, Bitcoin is less efficient and slower to transact than traditional money so far, whereas what we need is for decentralized AI to be at least as efficient, as fast and as scalable as centralized AI, because otherwise, we’re not going to win the AI race, the AGI race, with decentralized AI just because it’s decentralized.
That’s the main challenge that I see now. Like, it’s clear to me why we’re probably better off if the transition from narrow AI to AGI happens within the decentralized ecosystem. But the only way I see to make it happen is to make the decentralized AI, like, better and smarter than the centralized AI, and that’s sort of my challenge with the SingularityNET project and our partner projects. Because on the token side, the big thing I’ve been involved in over the last few months is the tokenomic merger of SingularityNET Foundation, Fetch.ai and Ocean Protocol into a broader tokenomic group called the ASI Alliance, the Artificial Superintelligence Alliance. And this is really motivated by thinking, like, holy shit, our competitors are trillion-dollar companies and the largest militaries in the world. Like, to the extent we can gather our forces together and get a bit more scale behind our efforts, like, that will increase our odds of success.
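To make the decentralized-agent pattern Goertzel describes a bit more concrete, here is a minimal Python sketch of AI agents owned by different parties registering on a shared registry and composing one another’s services. Every name in it (Registry, Agent, register, call) is a hypothetical illustration of the pattern, not the actual SingularityNET API; in a real deployment, the registry would live on-chain and each agent would run on a different party’s machine.

```python
# Minimal sketch of decentralized AI agents coordinating through a shared
# registry. All names here are hypothetical illustrations of the pattern
# described above; this is NOT the actual SingularityNET API.

from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Agent:
    name: str
    owner: str                     # no central party owns every agent
    service: Callable[[str], str]  # the AI service this agent exposes

@dataclass
class Registry:
    """Stand-in for an on-chain registry mapping service names to agents."""
    agents: Dict[str, Agent] = field(default_factory=dict)

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def call(self, name: str, request: str) -> str:
        # End users and other agents go through the same call interface.
        return self.agents[name].service(request)

registry = Registry()
registry.register(Agent("shouter", "alice", lambda text: text.upper()))
registry.register(
    # This agent composes another agent's service rather than working alone.
    Agent("greeter", "bob", lambda name: registry.call("shouter", f"hello, {name}"))
)

print(registry.call("greeter", "world"))  # HELLO, WORLD
```

The only point of the sketch is that coordination happens through the registry, so no single operator has to own all the agents or the machines they run on.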
[00:15:26] Ray Salmond: Fantastic detailed answer. You knocked out a lot of our questions in doing that, which is good. That’s efficient, good use of time, and it opens the door for other conversations. So, as you just referenced, when speaking of AGI and ASI, questions of ownership frequently arise. In terms of input cost, you know, the input of human labor and the cost of building infrastructure, there’s so much that goes into building an AI, and these entities are likely looking for an ROI. They want a return on their investment, and their objectives, I think, possibly differ from yours, you know, someone who’s a scientist and also a builder and obviously, like, believes in decentralization and crypto and all that. From my experience, the early bird gets the worm in tech, and that means they get a say-so in the tech application and the market share and the licensing for a number of years.
So, I appreciate your explanation on how you decentralize the actual system and inject consensus into the AI and all the processes that it takes, you know, the computing power required to run the AI. But it still kind of raises the question of, like, who owns the AGI? Because from what I hear you saying, if we build it first, we have the advantage, right? But if we don’t build it first, then we’re at a disadvantage because there’s no kind of legal or regulatory framework for ownership or for application or any of that. And another question I have related to who owns the ASI or the AGI is how do you own a sentient system?
[00:17:07] Ben Goertzel: Well, I think there’s going to be several phases of development. There’s going to be a point at which AGI systems are clearly sentient, feeling, conscious beings, which logically, at least from a Western point of view, should have their own rights. And I guess it’s a question whether that point comes before or after AGI systems have superhuman practical capability. One possibility is you have systems that clearly deserve rights even when they’re not super smart, like halfway between ape and human or something, right? And then, in that case, you would need to work out a whole robot rights framework before we got to AI that was capable of outdoing people in practical things. I think that doesn’t look like the way things are going.
The way it looks like things are going is we’re going to get AIs that are very, very smart at doing a lot of highly impactful, practical things, for good or for bad, before it becomes utterly clear that they’re conscious, sentient beings that need their own rights. If that’s the way it goes, which is not always how it’s going in science fiction, then it means there’s going to be a period where, in effect, humans are in control of AIs that are, in some ways, at least a little bit cognitively superhuman. Of course, you could say a pocket calculator is already superhuman in some ways, but if you have a sort of proto-general intelligence, which has even superhuman general intelligence in some ways, but we’re not quite sure how conscious or sentient it is, in that period, this is where centralized control by humans can be really quite scary.
I would add to that that not all the world has the same attitude the US tries to have, or at least claims to have, anyway, that sentient beings deserve rights. I mean, China has a whole different human rights framework, which they consider to be more compassionate than ours. But anyway, it’s different. So, it’s not clear that the Chinese government would assign independent rights to an AI at the same exact point that the US government would. It’s not clear that the US government would at the same point that you or I would either, depending on its own perceived advantage, right? So, it seems like the question of when humans will decide AIs have sentience or rights is very, very wrapped up with power struggles. And I tend to think the practical power struggles are going to be the deciding factor, rather than the theory of robot sentience.
It’s very interesting to think about machine consciousness, and it’s a bit intriguing and concerning that we don’t have a solid theory of it. Like, we don’t have a solid theory of human consciousness either, let alone machine consciousness. On the other hand, one suspects that’s not going to be the driving factor. If you think about the legal process, how long would it take the US government to decide that AIs should be given rights as citizens, versus how fast are AI capabilities, under the control of AI owners, likely to develop? So, it seems like the real question is, who controls the AI in practice? What do they want to do with it?
And it feels like as regulatory frameworks evolve, they’re going to be off base, they’re going to be stupid, they’re going to be badly thought out. They’re going to be going back and forth between laissez-faire, just letting people do what they want because it will make more money and develop technology faster, and overregulating things in a way that squashes creativity and squashes development. And they’re not going to find a middle ground that fosters, you know, creative high-speed development while at the same time giving all the protections everyone wants. I think that’s too hard of a problem for our governments to solve, given the rapidly advancing, hard-to-predict nature of AI development.
As regulations sort of stagger their way into the future, which we’re starting to see now with this idiocy in the state of California here in the US, right? As all this unfolds, who, in practice, owns and controls the AI is going to be a very important thing. Suppose the smartest AI in the world has, you know, one piece in Paraguay, one piece in Uzbekistan, one piece in the US, one piece in Hong Kong, one piece in Nigeria, one piece in Russia, two pieces in France, blah, blah. If it’s running like Bitcoin does, across this decentralized network, and the intelligence is sort of emerging out of pieces run by different people and deployed for different purposes, then the way the regulatory conversation evolves is going to be different than if the AI is sitting, you know, in the server farm of some company with the former director of the NSA on the board. The practicalities are going to make a big difference here, and we’re in the peculiar position now where we may be able to influence that.
So, suppose that some members of the ASI Alliance and SingularityNET community come up with the next big thing after GPT-5, and suppose we happen to roll that out on a decentralized network rather than one central server farm. I mean, then, this totally changes the nature of the game in a bigger way than ChatGPT did. I use ChatGPT almost every day in my work in various ways. On the other hand, it was rapidly eaten by Microsoft, although it was originally launched as part of what was supposed to be open source and for the good of humanity. It was rapidly eaten by the Big Tech ecosystem and now by the NSA and US intelligence ecosystem. It was a tech innovation but not, in the end, a business structure and ethical innovation. But what if the next Big Tech innovation also occurs in a whole different way than any huge-scale tech innovation has occurred before?
This is going to be quite interesting. And, you know, it’s not obvious how regulatory frameworks evolve to deal with it, because even crypto itself, which is a much smaller deal than an AGI would be, is something global regulatory frameworks still don’t know how to deal with, so far anyway, right? In fact, the US government still hasn’t figured it out, and it’s been around for quite some time. So, yeah, I think we could be stepping into a very, very interesting phase. And I think the ambiance in the AI world now is quite intriguing in this regard, because what you see is this hype cycle sort of flattening out regarding large language models.
At first, people saw ChatGPT and associated open-source tools and so on, and they were like, well, maybe AGI is here, the singularity is here. And some of us were like, well, no, these are incredible tools. They’re unprecedented. They do signal the singularity is near. But they’re not, in themselves, the final thing. Like, you can see they’re not very creative. They can’t do complex multi-stage reasoning. We don’t see how to fix these things within the specific architecture of an LLM, although it is a real breakthrough. Now, it seems like the business world is waking up to that, and it seems like the ambiance in the business world is like, okay, this is a sea change, but it’s not the end of the human species yet, right?
So, what comes next? Well, we invest in practical applications of LLMs, obviously, and they’re going to roll out and transform various industries. But there’s a bit of an openness now to, well, okay, what’s the next big thing? There’s got to be a next big thing. And if we put the next big thing out on a decentralized network... This is what we’re pushing toward. Like, take LLMs, put them together with other sorts of AI, like logical reasoning engines and evolutionary learning for creativity, different kinds of AI that have existed for decades but haven’t been scaled up to LLM scale yet. So, take some other AI methods, scale them up to LLM scale, roll out the combination on a decentralized network, and see what happens. And what the legislators in California would like to do is stop that from happening, but they are too ignorant to do so, because the rules that they’re passing only apply to LLMs anyway. They don’t even apply to the kinds of AI that we’re mostly developing on SingularityNET.
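As a rough illustration of the hybrid approach Goertzel sketches, combining an LLM with a logical reasoning engine, here is a toy Python example in which a stubbed language model proposes facts and a small forward-chaining engine derives conclusions by multistep inference. The llm_propose_facts stub and the triple/rule format are assumptions invented for illustration; they are not OpenCog Hyperon’s or any project’s real interface.

```python
# Toy neuro-symbolic sketch: an LLM proposes candidate facts, and a small
# forward-chaining logic engine derives further conclusions by multistep
# reasoning. llm_propose_facts is a hypothetical stub standing in for a
# real model call; the fact and rule formats are invented for illustration.

Fact = tuple[str, str, str]  # (subject, relation, object)

def llm_propose_facts(prompt: str) -> set[Fact]:
    # A real system would call a language model to extract triples here.
    return {("socrates", "is_a", "human"), ("human", "mortal", "yes")}

def rule_inherit_mortality(facts: set[Fact]) -> set[Fact]:
    # If X is_a Y and Y is mortal, conclude that X is mortal.
    return {
        (x, "mortal", "yes")
        for (x, rel, y) in facts
        if rel == "is_a" and (y, "mortal", "yes") in facts
    }

def forward_chain(facts: set[Fact], rules) -> set[Fact]:
    # Apply every rule repeatedly until no new facts appear (a fixed point);
    # the chaining is what carries inference beyond the initial facts.
    while True:
        new = set().union(*(rule(facts) for rule in rules)) - facts
        if not new:
            return facts
        facts |= new

facts = forward_chain(llm_propose_facts("Is Socrates mortal?"),
                      [rule_inherit_mortality])
print(("socrates", "mortal", "yes") in facts)  # True
```

The design point of the sketch is that the symbolic layer, not the LLM, carries the chained inference, which is the capability Goertzel argues pure LLMs lack.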
[00:25:31] Jonathan DeYoung: I would be remiss to not ask you a question about Hollywood while I have you. So, I was saying before we started recording, I just went and saw The Terminator in theaters, which was a lot of fun because I was born in ’89. It came out in ’84, so I didn’t have the chance to see it in theaters, obviously.
[00:25:52] Ben Goertzel: Yeah, yeah.
[00:25:52] Jonathan DeYoung: But it’s, like, such a classic sci-fi movie, one that deals specifically with this idea of an AI gone amok. And rather than asking you specifically whether The Terminator is a realistic scenario, because you’ve sort of touched upon that concept already in this conversation, what I am curious about is, what do Hollywood and TV get right about AI when they come up with these kinds of stories and hypotheses, and what are they so far off base about?
[00:26:22] Ben Goertzel: First, I’d say the literature of science fiction going back to the 50s has been incredibly insightful about AI, and I grew up reading sci-fi novels. Many other AI researchers, especially in my generation, did. So, science fiction writers have imagined all sorts of different AIs with different motivational systems, and cyborgs fused with humans, and utopias and dystopias. So, there’s a great richness of insightful fictional material about AI. And if you look in anime as well, like, go back to Ghost in the Shell or some of the original anime, there’s a lot of depth about AI in the anime world. Even sillier anime, like Dragon Ball, had some cool androids with some human cells in them. So, there’s not an issue with humans being able to come up with interesting fictional premises regarding AI.
I think Hollywood science fiction movies have to be 90 to 120 minutes long, unless they’re made by James Cameron or something. They generally have to be a couple of hours long, and they have to center on fight scenes. And if you have to make a relatively short movie, which is like a fraction of a single story arc in an anime, a couple-hour movie, half of which is fight scenes, there’s a strict limit to the amount of subtlety you can go into about the future of AI, unless you really set your mind to it. I wouldn’t say it’s impossible to depict something real about AI in the confines of a Hollywood movie, but the bias is against it. Half of it has to be fight scenes, and it’s an hour and a half long. The easiest thing to do is make an evil robot and, like, beat it back and forth for a while till all the good guys win or something, right? So, there’s definitely been a bias toward AIs that want to fight with people, because it’s fun to watch robots fight with people. Very, very simple dumb point.
If you look at what Hollywood tries to do, it mostly tries to appeal to the reptile parts of the human brain. So, in a simplistic theory, you could say reptiles have three main emotions: rage, lust and fear. That summarizes most Hollywood science fiction movies. You don’t get too much deeper than that. When you get into mammals, you have different emotions, like compassion. Dogs have a lot of compassion. Even cats do, on a good day. Lizards or snakes don’t have much, right? So, you get that a little bit, like in Spielberg’s AI, where you have a cute little robot boy or something, but not too much. Then you get into the emotions that humans have more so than dogs, like wonder, right? Like the deeper aspects of aesthetics or curiosity. I mean, this is too subtle to pack into 90 minutes, half of which are fight scenes, so you leave it out. That’s just what Hollywood does. Hollywood cares more about falling in love than about what happens in a 30-year marriage. It cares more about the fight scene between AIs and robots than about, like, how humans will discover meaning in life in a world where they don’t have to work for a living anymore. That’s not so easy to pack in.
But on the other hand, I think, as AI becomes more and more palpable, people will adapt to thinking about it in less and less of a stupid way. Most people have trouble thinking about things that aren’t immediately right in front of them. Because AI hasn’t been something most people could put their hands on, they’re just distracted by whatever exciting things are shown in a movie. So, I tend to think as we start to have AGI systems that are real things we can interact with, people will quickly react to that and not to the movies.
And I saw that with ChatGPT in an interesting way. Like, when people were first talking about it, people were saying all sorts of nonsense about how it’s going to take over the world. But I saw, like, my parents, who are not such technical people. They’re 80 years old. Once they started interacting with it, it only took a couple of hours for them to understand what it is and what it isn’t. It just took a couple of hours, and they could see, like, wow, this is really smart. This can write a letter for me. This can answer a lot of questions. Certainly, it can help kids do their homework. On the other hand, it doesn’t know who or what it is. It doesn’t know who I am. Like, there’s no coherent thread to it. Clearly, this is not a human-level mind. It seemed like once you put that thing in front of people, even without a lot of technical background, they grasped what it really was. And I think the same will be true as we move toward early-stage AGI. Like, Hollywood distracts people with a bunch of bullshit because there is no actual AGI for people to interact with. But I’m not too worried about it polluting people’s point of view once there’s a real AGI.
The real issue is that the people running the militaries of the world guide all their decisions based on Hollywood movies. I saw this happen. I worked with US Army Intelligence as a consultant several decades ago, early in my career, and, literally, I saw a number of generals and admirals, they were like watching some Hollywood movie. They’re like, build me that. That looks really cool, right? That’s a potential issue. I thought, if you want to get a cool military technology built, the best thing to do, get someone to put it in a blockbuster movie. Then, the generals will watch that, and they want to build it. Ordinary people can change their mind very quickly once they have a real thing in front of them, whereas these military development things, once they’re going, will go for a decade, right?
[00:31:50] Ray Salmond: Fantastic reply. I love the Star Trek-style question of what humans will do with themselves when they no longer have to work for money, because that’s something, like, everyone in Star Trek...
[00:32:02] Ben Goertzel: It’s not a problem for me. It baffles me that people feel like that’s a problem, but it tells you more about people’s mindsets now than about the future, right? Like for me, I will learn to play every instrument in the world. That will take me a while. I’ll learn to read every language and read all world literature in the original language. I can hike up every mountain in the world. There’s a long series of interesting things to do before I start to get bored, merely on this planet, even before going somewhere else. But it is interesting that people think that’s a problem, right? I mean, it tells you something about how humanity operates at the moment.
But again, I tend to think it’s not going to be a real problem. It’s just like people are like, oh, why do we want to live forever? We’ll get bored. Okay, but if I give you a pill that will restore your body to its 25-year-old form and leave you happy and healthy, are you going to take it, or are you going to choose to get old and die? Like almost everyone will take that youth pill, and almost everyone will accept free money and a 3D printer in their house, and I think they’ll all figure out something entertaining to do in actual fact.
[00:33:07] Ray Salmond: Yeah, yeah, that’s actually a perfect transition.
[00:33:10] Ben Goertzel: Yeah, I’m more worried about the geopolitics in the transitional period. Who gives basic income to the average individual in the Central African Republic while we have AI that’s smart enough to do factory work but not yet smart enough to abolish scarcity entirely? Who helps people right now who are barely scraping by in the developing world? And if the answer is no one, what terrorist activity does that lead to during the time period between an AI that gets rid of the factory jobs and an AI that’s smart enough to bring riches to everyone? There’s a lot of potential for very tragic things to happen, even if the end game is a beneficial superintelligence that just distributes bounty to everyone. Because clearly, that’s not going to unfold in half a minute. Even if the transition from the first human-level AGI to a massively superhuman AI took five years, which is very short on the historical timescale, that’s a long time from the perspective of a little kid with no food, right?
[00:34:17] Ray Salmond: Yeah, that’s true. I’d like to imagine that in a world where everyone could return to being 25 again and have either no need to work or riches, that we would find ways to get along with each other. And that brings up the topic of longevity.
[00:34:31] Ben Goertzel: What if there’s five years when the youth pill is only available to folks in the developed world?
[00:34:37] Ray Salmond: Sounds like Ozempic.
[00:34:38] Ben Goertzel: And it takes a while, yeah. That’s right, but a lot more useful. It does seem like with medical technology, just like smartphones and the internet, within some number of years, not even decades, these things transition from being only for the elite to being for a broad swath of the population. You can’t say everyone, because there are loads of people without access to basic antibiotics or electricity, but at least these technologies go from the elite to the masses rapidly. On the other hand, if rapidly is five years, that may be a long time if you’re in a period where AI is doubling in intelligence every six months and is controlled by some complex combination of elites, right? So, this could be quite an interesting period of time in both good and bad ways. And that’s even setting aside the risks like, okay, once the AI is 100 times smarter than us, what will it do? I’m just saying, like, as it transitions from 0.5 times as smart as us to 1.5 times as smart as us, like, then what kind of mayhem unfolds? And that may be the period when a decentralized underpinning can make a very big difference.
[00:35:52] Ray Salmond: So, in science and medical applications, do you think the AGI will allow humans to live longer or perhaps even give some humans immortality?
[00:36:02] Ben Goertzel: Definitely. I mean, I think right now, LLMs can’t do science. They can help you do science, but what LLMs do is sort of average together everything they’ve seen on the web. And I mean, science is innately about making breakthroughs beyond everything that’s been seen on the web. Just averaging together everything on the web, no matter how well you do it, is not going to do that. But there’s other kinds of AI out there, right? I mean, there’s a whole discipline of logical reasoning AI systems that we’re working hard to scale up now within the OpenCog Hyperon project, which we’re rolling out on SingularityNET.
So, I think once you get technologies more adept at multistep, original logical reasoning, science will advance much faster than it has under human control, because humans are not good at science. We’re not clever at math, really. I mean, some of us are better than others, but it’s not what we evolved for. It’s not something we’re innately great at. We have to study very hard to get our brains to do these things. So, I think these are things AIs can excel at dramatically beyond the human level. And then I think we’ll find that molecular biology is not that hard a problem to solve. And already we can see that if we could edit, say, a couple hundred DNA base pairs in the tissues of adult humans, we could prolong life tremendously, but we don’t have the delivery mechanism to deliver these sorts of multi-target gene therapies.
But this is not like, you know, building a time machine or some voodoo that seems beyond the reach of science. I mean, it’s a pretty concrete problem, which it’s clear there is a solution to. It just takes us a while to do the lab work and get toward that solution. So, it’s totally plausible: with an AGI, you could have a system that can see through an electron microscope the way that we see through our eyes. Having sensors and actuators at the nanoscale, combined with having superhuman reasoning capability, should make quite a big difference in the ability to cure disease and prolong human life.
So, yeah, I don’t have much doubt there. Really, it would only take an AGI system at human-level general intelligence, because if you took a system as smart as the smartest humans but gave it direct mental interfacing with lab equipment and the ability to suck all the world’s biology data into its brain, you probably wouldn’t need a superintelligence. There’s a possibility we even solve aging before we get to human-level AGI, but at the moment, AI progress seems to be going even faster than longevity research progress. I mean, both are advancing very interestingly, though.
I think that’s on the cards, and that’s not our biggest problem, right? What will bring that is a sort of human-level-plus AGI. So, if we have that sort of AGI, and it’s compassionately disposed toward humans, a whole lot of problems are solved all at once. The challenge is working through the mess of current human society, economy and psychology to get to the point of having a slightly superhuman AGI which is well-disposed toward humanity. Right now, I feel like the technical challenge is smaller than the human systems, economic and geopolitical challenge. I mean, the technical challenge is significant, but I think within the OpenCog Hyperon project, we have a way to do it. And with SingularityNET, Fetch, Ocean and some other projects like HyperCycle, we have a way to build a decentralized infrastructure for it, and there are probably other ways to do these things out there, you know, being curated by other groups on the planet. Though, of course, I’m feeling like my own approach is going well.
But I think the technical problem will be solvable within, let’s say, three to eight years or something. But then there’s the whole practical problem of who owns and controls what, and what folks with large armies and weapon systems and, you know, hordes of jackbooted thugs to send out to your house will do. How all that eventuates is less obvious to me. But, going back to Hollywood, we need to avoid being too primed by thriller movies. It’s very easy to go in that sort of direction. On the other hand, you know, it’s not how the internet has unfolded. Some things that are very impactful have rolled out in a mostly peaceable, global and consensual way.
[00:40:46] Jonathan DeYoung: I get the impression from reading about you, looking at your blog, looking at your hat, that you’re somebody who is interested in spirituality. You have a quote on your website about being interested, when you pass on, in becoming your transhuman form. I know you’re a transhumanist. Like, people have, for a long time, found various ways to try to unlock the secrets of the universe, tap into energy fields. Like psychedelics, meditation, all these various things. Do you think that one day a superintelligent AI could, like, unlock all the true secrets of the universe for us?
[00:41:25] Ben Goertzel: Who the hell knows? As I’ve advanced in age, I’ve gained more respect for my fundamental ignorance and how little any of us understand of the whole universe. I mean, clearly, the odds seem high that the things that mystify us now will mostly be figured out by superhuman minds, just like many of the things that mystified prehistoric people. Like, what are these lights bouncing around up in the sky? What do all the organs inside the body do? We’ve got good solutions to a lot of the things that mystified prehistoric people. But whether the super AI will hit another level of grand mysteries that it doesn’t know how to resolve... I mean, I look at it more like our job is to build the next level. We can build the next level that’s massively smarter than us. If we can make it compassionate toward us and to other sentient beings, then we’ve done our job. And then the next step is up to the super AIs or the uploaded, upgraded humans or something. I mean, I think there’s a limit to how far ahead we can actually see.
Yeah, I think, you mentioned spirituality. I mean, I think when I was like 18 and started to meditate intensively, I was trying to find, like, the answer. And at some point, you realize, like, fuck it, there is no the answer. It doesn’t matter. Like, sink into experience, and you’re good. Seeking for the answer is like a pathology, probably looking for some kind of certainty or finality that may not exist in the universe. Anyway, that would be my current point of view, but I look forward to communing spiritually with the super AGIs and seeing where that leads. Like, if you ever meditate with a lot of people or take psychedelics with a lot of people, you get a funky mind-meld state. Meditating in a room with a very advanced meditator, you get, like, a whole different feeling. What if that very advanced meditator is a superintelligent AI system? What does that feel like?
This gives me a good opportunity, which I’d almost forgotten, to put in a plug for my book, The Consciousness Explosion, which goes into the future of AI and crypto, and also the potential of, like, different post-singularity states of consciousness, which we can’t even imagine. I also should encourage everyone, since this is Cointelegraph, to follow the Artificial Superintelligence Alliance, whose token right now is the FET token from the Fetch protocol. Before too long, we’ll do a ticker change to the ASI token, for Artificial Superintelligence, but we’re still lining up exchanges and third parties for the ticker change. So, for now, it’s the FET token. But we’ve got a whole bunch of exciting stuff going on there with the Superintelligence Alliance, including looking at how we grow the alliance by pulling in more and more projects.
Because, you know, we focused on everything besides crypto here, which is cool, because my orientation is toward how we build beneficial superintelligence for the good of all sentient beings. But I mean, it does need resources to do this. It needs computers. Some can be supercomputers; some can be random people’s machines plugged into the decentralized network. It needs data. It needs energy. It needs money in various forms, and it looks like crypto may well be the primary form. So, I think there is tremendous economic opportunity, up to the point where we obsolete money as we know it. There’s a lot of value to be created economically, scientifically and humanistically in the vicinity of what we’re doing with the ASI Alliance. So, I’d encourage you to go to superintelligence.io. You can follow the Superintelligence Alliance there, as well as SingularityNET at singularitynet.io. There’s a super-fast pace of development going on, so you’ve got to keep paying attention, or you’re going to miss something important.
[00:45:25] Jonathan DeYoung: I think that’s a great place to close the conversation. We thank you so much, Dr. Goertzel. It’s been a pleasure to talk with you and get everything there is to know about AGI straight from the source. So, thank you.
[00:45:37] Ben Goertzel: All right. Thanks a lot.
[00:45:45] Ray Salmond: The Agenda is hosted and produced by me, Ray Salmond.
[00:45:48] Jonathan DeYoung: And by me, Jonathan DeYoung. You can listen and subscribe to The Agenda at cointelegraph.com/podcasts or on Spotify, Apple Podcasts and wherever else podcasts are found.
[00:46:00] Ray Salmond: If you enjoyed what you heard, rate us and leave a review. You can find me on Twitter at @horushughes. H-O-R-U-S-H-U-G-H-E-S.
[00:46:10] Jonathan DeYoung: And I’m on Twitter, Instagram, and just about everywhere else at @maddopemadic. That’s M-A-D-D-O-P-E-M-A-D-I-C.
[00:46:20] Ray Salmond: Be sure to follow Cointelegraph on Twitter and Instagram at @cointelegraph.