
By: Stephan Spencer


Dr. Ben Goertzel
“The interplay between spiritual intuitive insight and science and engineering is going to be one of the more interesting things during the next couple of decades as we advance toward singularity.”
Dr. Ben Goertzel

Artificial Intelligence is evolving quickly; some argue faster than humanity can cope with. What can’t be argued is how sweeping the impact of AI will be on the world. As all this continues to unfold, what’s the impact for us now and in the future? And how will it affect the next generation?

In this episode, Dr. Ben Goertzel and I dive deep into the intricacies and possibilities of artificial intelligence and the urgent need for us humans to prepare for this brave new world.

Dr. Ben is a cross-disciplinary scientist, entrepreneur, and author. He leads the SingularityNET Foundation, the OpenCog Foundation, and the AGI Society, which runs the annual Artificial General Intelligence conference.

Dr. Ben also chairs the futurist nonprofit Humanity+ and serves as Chief Scientist of AI firms Singularity Studio, Rejuve, SingularityDAO, and Xccelerando Media, which are all parts of the SingularityNET ecosystem.

If you’d like to understand the benefits, risks and implications of AI from one of the top minds in the field, you’re in the right place. This episode is packed with thought-provoking insights from one of the most sought-after futurists. Dr. Ben made a compelling case for decentralizing AI and why it’s our mission not to let it be controlled by a single organization. We even discussed how superintelligent machines connect with spirituality.

And now, without further ado, on with the show!

In this Episode

  • [00:48] Dr. Ben Goertzel, a cross-disciplinary scientist, entrepreneur, and author, joins Stephan on this episode to discuss everything we want to know about artificial intelligence, including connecting with spirituality.
  • [02:16] Dr. Ben distinguishes between Artificial General Intelligence and Artificial Intelligence.
  • [08:55] Stephan and Dr. Ben talk about the implications of accelerating technological advances at an exponential rate.
  • [14:39] While bringing up China’s advancement in facial recognition, Dr. Ben also provides some background on his project, SingularityNET.
  • [24:33] Dr. Ben goes on to discuss manifesting reality, parapsychology, and his perspective on panpsychist consciousness.
  • [32:50] Stephan describes a concept he lives by, the willing suspension of disbelief, and the oneness blessing he received during a trip to India about a decade ago.
  • [35:09] Dr. Ben mentions times in his life when he had supernatural experiences or spiritual epiphanies that led him to embark on parapsychology.
  • [46:24] In terms of parapsychology and consciousness, what is the prevalent belief in Africa and Asia?
  • [49:13] Stephan explains astral projection, while Dr. Ben expresses his thoughts on Carl Jung and Sigmund Freud.
  • [57:11] Check out Dr. Ben’s website for more information on his social media accounts, books, and podcasts, and to learn more about his AI projects.


Ben, it’s so great to have you on the show. 

Well, thanks. It’s a pleasure to be here. 

Let’s start, for our listeners’ understanding, with Artificial General Intelligence and how that’s different from traditional AI (artificial intelligence). Can you differentiate the two? Maybe throw in ML (machine learning) as well so we can get everybody on the same page?

I look at the history of AI as really involving three stages. There’s narrow AI, then AGI (Artificial General Intelligence), and then finally ASI (Artificial Super Intelligence), which is intelligence far beyond the human level. These distinctions were not clear in the middle of the last century when the AI field was first founded. 

It was an interesting discovery that you could make AI programs that do things like play chess or checkers, do differential calculus, or figure out what to place on a web page. It was a major discovery that AI could be made to do these things. 

The interplay between spiritual intuitive insight and science and engineering is going to be one of the more interesting things during the next couple of decades as we advance toward singularity.

These are called expert systems?

Some of them are. The definition of a narrow AI is an AI that’s smart at doing one particular type of thing, but isn’t smart at doing the broad scope of things that humans are, and can’t transfer its knowledge from that one little domain it’s good at to other domains. It wasn’t really clear at the outset of the AI field that you could do so many impressive and exciting things with very narrowly defined AI systems. 

There are various ways to build narrow AI systems. What’s classically called an expert system is a system that was supplied with a bunch of human-coded rules telling it exactly what to do: if your blood pressure is above this level or below this level, and your age is above this level, you might have hypertension, or something like that. 

You can also build a narrow AI system using what’s called supervised learning, which is an important part of what’s now called machine learning, where instead of coding in a bunch of rules, you curate a data set. If you’re trying to make a system to estimate credit default risk, you make a big data set of people who borrowed money, you have information on who paid it back and who didn’t, and then the machine learning system learns the rules that will tell you which people are likely to pay back their debt or not. 

You don’t have to type in the exact rules like if you’re this age, you live in this part of the country, you have this many kids, you’re likely not to pay back your debt. The machine learning system learns those rules, but it’s still doing something highly specific and focused based on a dataset that a human curated, cleaned up, and provided to it. 
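As a toy sketch of the idea described above (not from the episode), here’s what learning a rule from curated data, rather than typing the rule in, might look like. The income figures and the single-threshold "decision stump" learner are illustrative assumptions:

```python
# Toy supervised learning: instead of hand-coding "if income < X, they default"
# like an expert system, we learn the threshold X from labeled examples.

# Hypothetical curated data set: (income in $k, did they default?)
data = [(20, True), (25, True), (30, True), (45, False), (60, False), (80, False)]

def fit_stump(examples):
    """Pick the income threshold that best separates defaulters from non-defaulters."""
    best_threshold, best_errors = None, len(examples) + 1
    for threshold, _ in examples:
        # Candidate rule: predict "will default" when income < threshold.
        errors = sum((income < threshold) != defaulted for income, defaulted in examples)
        if errors < best_errors:
            best_threshold, best_errors = threshold, errors
    return best_threshold

threshold = fit_stump(data)
print(threshold)       # -> 45: the learned cutoff, never typed in by a human
print(35 < threshold)  # -> True: predict this applicant defaults
```

Real systems learn many such rules over many features at once, but the principle is the same: the rules come from the data a human curated, not from a programmer’s keyboard.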

An AGI system needs to be able to do things that its programmers never thought of and do things it was unprepared for.

Narrow AIs can be super valuable, as we see all throughout the global economy right now, and they can hit every vertical market, from face recognition to financial trading, to writing simple news reports about weather or sports, to diagnosing a disease based on symptoms. 

On the other hand, an AGI system needs to be able to do things that its programmers never thought of, do things that it was not prepared for, and pivot to fundamentally new domains and new types of problems. We don’t yet have AI systems that are good at doing that, and I think that’s really the next wave of the AI revolution: going beyond narrow AI, where each AI serves one particular thing, to AGI, where an AI can deal with the unforeseen and unexpected in a creative, imaginative, and improvisatory way. 

Which problems need AGI and which don’t is something we’re discovering as we go along. Whether full self-driving requires AGI is not yet obvious. I suspect it does not, but some people think it does, and you don’t know until you’re there. 

Take being an AI mathematician, being able to solve and prove more math theorems than any human mathematician, including new ones that are unknown. Does that just need a narrow math AI? Or does it need something that can really think imaginatively and broadly? 

The AI field, myself included, has been pretty bad so far at guessing in advance which problems are going to be solvable by narrow-AI, rule-driven or data-driven hacks, rather than needing the kind of intelligence that people have. The term AGI also shouldn’t be over-interpreted, because humans are not maximally general-purpose either. 

In fact, humans are very stupid about many things a lot of the time as we see in human society generally and as we each see in our own lives, as we look back at choices we previously made. We’re not maximally generally intelligent systems and it’s pretty clear that the constraints of the physical universe—which themselves may not be absolute—should allow for intelligence going far beyond human level and that’s where you get into ASI or artificial super intelligence. 

Psychology and Alchemy by C. G. Jung

One way of looking at that is once we get to an AI that’s as smart as you or me but has the ability to revise its own source code and rebuild its own hardware, it may not be that long before that AGI improves itself, recursively and iteratively, and becomes twice as smart as people. Once it’s twice as smart as people, it has all that intelligence to use to become four times as smart as people, and so forth. 

This gets into the notion of a technological singularity put forth by my friend Ray Kurzweil and many others, and the idea that human-level AGI is a sort of threshold. We’ll work to make narrow AI more and more general. At some point, we’ll get something that has the same level of generality of intelligence as a human, and in a way, humans are sort of the minimal generally intelligent system. 

We’re about as stupid as you can be and still be able to understand yourself, improve yourself, and build new versions of yourself. Once you get to that threshold level of human-level AGI, that may seed recursively self-improving AGI, yielding superintelligence and singularity and all that. Where we are now, we’re sort of at the borderline between the narrow AI revolution and the next wave of the AGI revolution, which is a very interesting place to be. 

Yeah, it is. With the Law of Accelerating Returns, Moore’s Law, Metcalfe’s Law, and so forth, things are just growing at an exponential rate in terms of technological advances. It’s exciting, but for some at least, it’s pretty terrifying too, because if we have superintelligent AIs and we don’t look very virtuous to them, if we look like a scourge on the planet, what will we get?

I find what is going on in Ukraine more terrifying. I find the fact that half of Ethiopian children grow up with brain stunting because of malnutrition more terrifying. There’s the risk that you can build a super AI that doesn’t see the value in humans. There’s also the risk that humans don’t evolve in terms of ethics and judgment at the same rate as we’re evolving in terms of technology. 

We're at the borderline between the narrow AI revolution and the next wave of the AGI revolution, which is a very interesting place to be.

Without AGI to help us run the show, then what sort of mess are humans going to make? We’ve got interesting risks all around and then getting to the subject of your podcast, we also have, I think, significant advances in human self awareness and human individual and collective consciousness and ethics. 

We are advancing as humans in profound ways, even though it’s not always displayed in politics, warfare, and the distribution of technology throughout the world. It’s quite interesting, the balance of good things and not so good things in the human world now, as we set about launching a singularity. 

Yeah. There’s a potential for kind of worldwide destruction, kind of Skynet and so forth, with AI, but there’s also a real risk of global catastrophe with molecular nanotechnology and the gray goo problem, where these self-replicating nanobots could just continue nonstop until all resources have been depleted to zero. Any thoughts on that?

Well, that’s a risk. I would say we also have a risk of global thermonuclear war, which looks a lot more palpable at the moment. I think, in a way, people like to think about these far off risks that are hard to assess as a mode of distraction from the large obvious risks that are here right now. Yeah, nanotech is there, but looking closer to home, you’ve got the potential of a US and Russia nuclear war and maybe China helping Russia. We’ve got the COVID virus, which appears not to have been human engineered. 

On the other hand, when you look closer and closer at gain-of-function research, you have to ask what viruses humans could actually engineer if somebody wanted to. There is the risk of superintelligence, there’s the risk of nanotech running amok, but there’s also the risk of global thermonuclear war, and there’s the risk of bioengineered viruses taking things to the next level beyond COVID, which is relatively mild as viruses go. 

We are advancing as humans profoundly, even though it’s not always displayed in politics, warfare, and technology distribution worldwide.

There’s a broad spectrum of risks, and I would say I have less fear of technology per se than of the human beings and human organizations really in power over and marshaling humanity’s resources, because with AGI, superintelligence, molecular nanotechnology, and biotechnology, it’s quite plain that all of these things could be used in a way that’s utopic or in a way that’s dystopic. These are all very flexible technologies. There’s no evidence that any one of these technologies is inevitably leading down some path of horrible doom and destruction. 

How these technologies are rolled out is going to depend in large part on which human institutions are guiding their rollout. There you come down to the fact that the main uses of AI on the planet now are selling, killing, spying, and corrupt gambling in various forms. That’s a worry, if that’s what’s on the mind of the super AI as it evolves. 

I’m trying to counteract that by helping to foster development of narrow AI and AGI in a democratic decentralized fashion and trying to do beneficial AGI projects for medicine, for scientific research, for education and so forth. 

At the moment, decentralized democratic beneficial AI or AGI is kind of a drop in the bucket compared to the growing corporate/government/espionage/military AI ecosystem. That’s a challenge you can try to confront head on or you can try to confront in a subtler way by planting seeds of decentralized beneficial AGI and planting seeds of positive human consciousness trying to foster exponential growth in these things. 

It’s great what you’re up to, I really like it. On the dystopian uses of AI you pointed out, one comes to mind. The world leader in facial recognition AI, by a long shot from my understanding, is China, and that’s pretty scary. 

I think China may be slightly ahead in face recognition now, but everybody’s there, that’s a commodity. It’s interesting that in 2014, computer vision and face recognition were still a research thing and then by 2017 or so, it’s pretty much a commodity that’s everywhere. 

One company may be at 99.8% accuracy, the other is at 99.6% accuracy, but pretty much, face recognition in good lighting, when someone hasn’t aged 10 years or grown a beard or something, is a solved problem. Even face recognition with masks is largely a solved problem outright. 

What’s interesting there to me is how quickly that went from being a research subject to being a widely deployed commodity. We’ve seen natural language processing go through the same arc between 2018, when Google launched the BERT model, and 2021 and 2022. 

We may see the same two-to-four-year arc of development in AGI, where AGI goes from being a research topic to being a commodity rolled out everywhere. It may take two, three, four years to make that transition just because that’s how long people take to learn each other’s code, read each other’s papers, deploy computer networks, and fine-tune things. 

Looking at China, the US, Russia, and the rest of the world in terms of AI, I probably have as good a view on this as anyone. I lived in Hong Kong for nine years and relocated back to the US two years ago. I spent a bunch of time in Shenzhen, Beijing, and Shanghai, the very centers of AI in the mainland there. 

My project SingularityNET, which is a globally distributed AI-meets-blockchain platform, has offices everywhere on the planet. We have a team in Hong Kong working on humanoid robots, and our largest AI development office is in St. Petersburg in Russia. We had some team members in Kiev whom we’ve recently helped get a bus out, and they’re now staying with some of our other team members in Poland. I think I’ve seen what goes on around the planet. 

The vast teeming mass of disorganized humanity is going to be better than a centralized government or a big tech company in shepherding us towards the singularity.

Certainly there’s no monopoly on AI expertise. There’s brilliant AGI and narrow AI developers, young and old, all over the place. We’ve got a substantial AI development office in Addis Ababa, Ethiopia. We’ve been looking at doing an AGI R&D lab in Nairobi in Kenya. 

There’s no shortage of people with deep knowledge of AGI R&D, computer science, cognitive science, whatever. There’s no monopoly in Silicon Valley, in Beijing, or anywhere else. I think China has been unparalleled in recent years in taking advanced AI and rolling it out commercially at large scale, and they’ve done better than the US or Western Europe there. 

If you look at the recent revolutions in computer vision and natural language processing, these were all led by innovations from the US and Western Europe. Quantum computing, you’d say the same thing, it’s led by innovations from the US and Western Europe. This is not a racist thing. 

Outside the Gates of Science by Damien Broderick

These are largely Chinese and Indian scientists operating in the US and Western Europe alongside white people and black people working in US and Western universities and tech companies. It’s just that the business ecosystem, mainly in the US but also the UK, Germany, and France, has been better than anywhere else at technology transfer: taking stuff out of university labs and out of small startups, deploying it at large scale, and figuring out how to make research innovations practical. 

Once something has been proven practical, China has then been super fast at taking it and scaling it up even more to their billion and a half people. But China has not yet proved effective at tech transfer, taking stuff that’s at the university or individual gonzo-hacker level and turning it into something commercially viable. Russia has not yet been good at doing that either. I’d say the US and Western Europe are still the only places that’ve been good at taking stuff out of the research lab and turning it into something that’s practical and commercially viable. 

I don’t see that changing anytime soon. It seems that the Chinese investment community has very little appetite for technology risk, although a large appetite for market risk. They want to see something that’s already been proven in practice somewhere else in the world before they put investment into rolling it out in China. 

In Russia, even before the current Russian economic setback, which looks to be severe, it was pretty much the same story: you have insanely innovative and insanely insane Russian mad scientists coming up with all sorts of wild new stuff, but it’s very hard for them to get business investment within Russia until it’s been proven in the US, Western Europe, or somewhere else outside Russia. 

It’s quite an interesting ecosystem. Inventing new ideas, is that happening everywhere? It’s happening in universities everywhere. It’s happening with crazy hackers everywhere. Then the transfer from the crazy idea phase to the commercial product phase happens in the US, UK, and Germany, and then China, Russia, and everywhere else take that, redeploy it, and roll it out for their own purposes. This happens quite rapidly; the transition between these different phases takes months to years. It’s quite a complex global ecosystem, really.

Where AI really happens is when that code meets data and large amounts of processors, which takes money.

The core algorithms underlying AI are published. They’re in papers, and they’re in code that’s on GitHub. Chinese researchers and Russian ones are posting the papers and putting the code on GitHub also. 

Where AI really happens is when that code meets data and meets large amounts of processors, and that takes money. That means, so far, the Chinese government or US big tech companies. That’s where I think we need to work on decentralizing things. Discovery and innovation are already decentralized. Deployment and commercial rollout are heavily centralized in a few big tech companies and a few governments. 

That’s where I’ve been thinking blockchain has a role to play. Just like Bitcoin is decentralized money, SingularityNET aims to be decentralized AI processing power. The thinking here is that the vast teeming mass of disorganized humanity is going to be better than a centralized government or a big tech company in shepherding us towards the singularity. 

The risks of it are obvious because the vast teeming mass of humanity is a mess. Actually, this was a question I had for you after looking at a bit of your podcast. How do you see the progress of human consciousness? We have a technological singularity coming about. Technology is advancing exponentially. 

SingularityNET aims to be decentralized AI processing power.

Do you think there’s a hope that human self awareness, human ethics, and human positive compassion development progresses exponentially to match? Or are we looking at a future where technology gets more and more powerful and exponentially advances, but human advancement is linear? 

I would have answered this very differently just a couple of years ago, prior to my spiritual awakening on January 22 of last year, but the way I’d answer it now is with immense faith and certainty around the evolution of humanity collectively to awaken, not destroy the planet, and utilize AGI, molecular nanotechnology, and all that as it comes in a way that is benevolent. But that’s not going to be the reality for everybody. 

Some people are mired in darkness and negativity. Ram Dass would say, we’re all just walking each other home. Those people are going to take a longer time and take some detours to get home. 

They may be the ones with their finger on the nuclear button. 

Technology is advancing exponentially.

Yes, but that will be for their reality, their slice of the multiverse, not ours. There are infinite numbers of universes happening, and each of us is in the driver’s seat of ours as the observer. If you believe in benevolence and an unconditionally loving Creator and all kinds of magic—I know one of your areas of focus and study was parapsychology and that would include things like telepathy—that stuff is real. 

How could it be real if all we saw was what meets the eye? It’s not seeing and then believing. It’s actually believing and then seeing. That’s how you manifest reality in the simulation that we call our Earth, our reality. I’m curious to hear what your take is on all that, because that’s kind of a mouthful, but why don’t you take it from here? 

Parapsychology is its own topic. I do believe the data for that is pretty strong, that precognition, ESP, and other such phenomena exist, even though there’s a lot of fraud in that domain like in many others. It’s interesting. I’ve been fairly involved with the parapsychology research community in the past few years and there’s a couple of different camps there. 

Parapsychology is its own topic.

One is: yeah, this is real. It’s a physics phenomenon. It’s just physics we don’t know yet, but there’s nothing sort of spiritual or spooky there. Through some quantum mechanical mechanism that we haven’t figured out yet, the human brain can send signals, and quantum mechanics already tells us that the flow of time is not as one-directional as classical physics assumed. There’s that one school of thought: these things are real, and we just need to learn that bit of physics. Just as we once lacked the physics of phenomena we now understand, we’ll have the physics of ESP, precognition, and whatnot. 

There’s another school of thought that says, to understand psi phenomena, we need to shift to a whole different point of view from modern scientific materialism and take the point of view that consciousness is the ground, and our physical universe is one among many manifestations within a deeper field of consciousness. Then psi has to do with this deeper field of consciousness sort of intervening in the apparent physical world in ways that physics doesn’t encompass. 

I think there will be new physics that we don’t know yet.

I find myself sort of torn between these two directions, thinking there’s elements of truth in both. I think there is going to be new physics that we don’t know yet. The field of physics has revolutionized itself over and over in the past few centuries. There’s going to be new physics that we just haven’t discovered yet that’s going to help us quantify a lot of these phenomena that now seem spooky and incomprehensible. 

On the other hand, I also do tend to have a sort of Buddhist view that panpsychist consciousness is the ground, and in a way, the physical world we live in is an illusion built up by individual and collective minds. Illusion isn’t the ideal word; it’s a construct. It’s a collective construct, and each of our selves is a construction. 

To some level, aspects of ESP, precognition, and so on are going to be best understood by taking this consciousness-first point of view. Some aspects are best understood from the new-physics point of view, and we haven’t unraveled all the threads yet, but I would say these two perspectives, the empirical and the more consciousness-oriented, certainly both exist when thinking about the Singularity and the future of AGI also. 

I would say, if I put on my pure sort of Bayesian-rationalist head and look at the future of AGI and the Singularity, the main conclusion I come to is that we just don’t know. If you’re going to create something twice as smart as a human or ten times as smart as a human, it’s idiotic to think you can rationally predict what that thing is going to do. 

Narrow AI is an AI that’s smart at doing one particular type of thing but isn’t smart at doing the broad scope of things.

We can’t even predict what the weather is going to be next week, we can’t predict what the stock market is going to do, and we can’t predict what random thought is going to come into the next demagogic leader’s head and direct geopolitics.

The idea that we can really know or predict with any certainty whether a super AGI is going to bring love and benefit to humans and other sentient beings, or is going to decide that we’re sort of an inefficient use of mass-energy: we just can’t know that. Any prediction we make has a very wide confidence interval and a very small degree of certainty. That doesn’t mean we can’t or shouldn’t act in the way we estimate has the greatest possible benefit, because what else can we do? Our own ignorance is profound. 

Any prediction we make has a wide confidence interval and a very small degree of certainty.

On the other hand, if I take a more sort of spiritual and personal intuitive perspective and ask what I feel in my heart and soul, I feel very positive. If I set aside the rational, logical, calculating mind and just rest in a meditative state, a state of that sort of non-rational insight, I feel like the positive post-singularity world is already there. 

I feel like I can mentally or even transmentally contact it. I can feel that there are beneficial post-singularity minds there somewhere, created by humans. Some are uploaded humans and enhanced humans. Some are not human minds. They are post-singularity minds that we’re just not able to contact now because of the limitations of our conventional states of consciousness. One can feel those post-singularity minds there. There’s even a retrocausal aspect, where one can feel post-singularity minds in a way reaching back and helping to foster their own creation. 

Terence McKenna had these ideas. Terence McKenna also thought 2012 was going to be the end of the old order and the launch of the new world, which I was never a big believer in. I find myself in a state of interesting tension between that and my rational, calculating side, which says we have no idea what the hell’s going to happen, but we can try to mediate things as best we can in a positive direction. 

Let’s go with democracy; it seems better than anything else. Let’s go with decentralization; it’s certainly better than self-serving elite groups. Let’s have AI do medicine and science and education, which are good things, and get beneficial stuff into the minds of the democratic, decentralized AGIs we build because, hey, that’s going to bias things in a positive direction more likely than not, and what else do we have to go on?

One of the realizations I’ve come to in my technical AI work is human minds are not consistent, and that’s okay.

I’m torn between that and then a sort of spiritual sense of irrational certainty or transrational certainty that this is going in a positive direction. This is going to be awesome if we just flow with things in an open minded, open hearted way. I don’t think you have to reconcile these two things. One of the realizations I’ve come to in my technical AI work is human minds are not consistent and that’s okay. 

There’s something called paraconsistent logic within the field of mathematical logic. These are logic systems that can hold contradictory things in mind and just work with that contradiction without becoming trivial and without going crazy, but they can leverage that contradiction to generate new interesting things that can be contradictory in some regards, without being contradictory in all regards.
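As a concrete sketch of what Dr. Ben describes (not taken from OpenCog Hyperon itself), a standard textbook paraconsistent system is Belnap’s four-valued logic, where a proposition can be both true and false without every other proposition becoming derivable:

```python
from itertools import product

# Belnap's four truth values, encoded as sets of classical values:
# T = {True}, F = {False}, BOTH = {True, False}, NEITHER = {} (no information)
T, F = frozenset({True}), frozenset({False})
BOTH, NEITHER = frozenset({True, False}), frozenset()

def AND(a, b):
    return frozenset(x and y for x, y in product(a, b))

def OR(a, b):
    return frozenset(x or y for x, y in product(a, b))

def NOT(a):
    return frozenset(not x for x in a)

p = BOTH  # a contradictory proposition: evidence both for and against it
q = F     # an unrelated, simply false proposition

# p AND NOT p is itself BOTH, not an absurdity that wrecks the system.
contradiction = AND(p, NOT(p))
print(contradiction == BOTH)  # True

# Crucially, the contradiction does not "explode": q stays plain false,
# whereas classical logic would let p AND NOT p entail anything at all.
print(True in q)  # False
```

The system keeps working with the contradiction locally, which is the sense in which such logics "hold contradictory things in mind without going crazy."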

Human minds are paraconsistent. The OpenCog Hyperon system, which is the main AGI project being built on the SingularityNET platform, is capable of reasoning in paraconsistent logic. I’m okay with being a paraconsistent reasoning system myself, dealing with a part of my mind that’s very hard-nosed and rational and a part of my mind that’s very intuitive and goes with the flow. We just need to see these contradictions in ourselves clearly for what they are and work with them.

There’s a quote from Emerson: “A foolish consistency is the hobgoblin of little minds.” Trying to force consistency when it’s not fundamentally the way things work just results in minds that shut off parts of themselves from other parts of themselves, which ultimately doesn’t foster maximum goodness, joy, growth, and choice. 

It was great. One concept that I think is related to what you’re describing, not resolving the contradictions right away but just being okay with them existing, is a concept that I kind of live by: the willing suspension of disbelief. That opens you up to so many possibilities, to infinite potentialities. If you’re closed-minded and narrow in your thinking, and not open to or willing to consider things, then you’ve closed yourself off from so much existential bliss.

You mentioned Terence McKenna. I had a psychedelic experience that didn’t involve any drugs. It didn’t even involve any kind of hyperventilating-type breathing. It was just getting touched on the head by a monk of very elevated consciousness. This was in India in 2012. He gave me what’s called a Diksha, a oneness blessing. Everything was in technicolor, like a cartoon.

It was incredible. I felt this deep connection to God, to all that is, to the fabric of creation. Up until that point in my life, at age 42, I was agnostic. I didn’t believe in anything. I was a skeptic, I was a scientist, I was almost a cynic. I was not really that cynical, but I wasn’t very open. That was a huge shift in my life. Then all these miracles started happening.

It can be hard, if you’re very scientific, to reconcile that idea or experience of miracles and synchronicities and stuff that doesn’t seem to happen naturally, or shouldn’t, kind of like a glitch-in-the-matrix sort of thing. Yet it happens over and over and over again. That’s pretty amazing.

Discovery and innovation are already decentralized. However, deployment and commercial rollout are heavily centralized by a few big tech companies and governments.

I’m curious to hear from you what sort of experience or moment happened for you in regards to paranormal or supernatural stuff that got you interested in parapsychology. Did you have some sort of spiritual epiphany, some sort of psychic event? Did you see a ghost? What happened?

In terms of spirituality, I would say I went back and forth many times over the course of my life. In early childhood, my mom was going to graduate school in Chinese history. She had all these books on the history of Buddhism and Taoism and Chinese thinking, so I dug into these when I was maybe seven or eight years old. It was quite interesting. That’s where I encountered the notion of meditation.

At that age, your mind is a little more open, and I could feel myself sinking into a state where there is a broader space in which our own world is a tiny little speck. Then I got into Ouspensky when I was in middle school, with In Search of the Miraculous. Ouspensky was a Russian mystic. He wrote that most people are basically walking zombies, asleep almost all the time, and he wants you to try to be acutely conscious of everything, every waking moment.

I spent most of middle school trying to be awake all the time, including while sleeping, which is interesting. I’m sure I was doing it in a somewhat misguided way, but it led me to some quite interesting states of consciousness. It was particularly funny because I was a very misbehaving middle school student. I would get put in detention after school, where we weren’t allowed to read or do homework. We just had to sit there in the chair.

When you’re in a psychedelic experience with others, you have complete faith that you can see certain things in their minds and vice versa.

I would drive the teacher crazy by just sitting there meditating and blissing out during detention instead of being pissed off about being stuck in the chair for 45 minutes after school. I sort of went back and forth between that and being very hard-nosed and scientific about things. I discovered psychedelics, I guess, when I was 15 or something. I started university at 15, and college is a place for these things. That certainly added a new twist to it, because when you’re in a psychedelic experience together with others, you have complete faith that you can see certain things in their minds and vice versa.

Telepathy and precognition seem almost obvious in that state of mind, but then the drug wears off and you’re like, was any of that real? Was that entirely delusion? Then you do the same thing with, say, a psychedelic like DMT or Ayahuasca. You could take that and you’re like, what?

Either I just spent an infinite amount of time conversing with this benevolent army of mischievous, infinite-dimensional intelligences, who transmitted to me information far beyond anything the human mind can understand, or some nerve in my brain was stimulated in some weird way, giving me that illusion, which means absolutely nothing. Which was it? I have no idea.

It’s the former.

Yeah. I would say it took me a long time to integrate these two ways of thinking in my mind, because I definitely grew up as an atheistic communist. I was completely rationally oriented, in a way at odds with these spiritual explorations, and I did not like religious stuff at all. The fortune teller at the beach trying to read your future in a crystal ball looks insanely full of shit, just like the newspaper horoscope looks amazingly full of shit. Right?

I can totally relate. I was raised by Jehovah’s Witnesses and a Catholic.

A lot of that stuck with me. I still think a lot of it is sort of a very strongly anti-bullshit, pro-data attitude. Actually, I had had various ESP-type experiences. I’d been close to people who had really strong remote viewing experiences, like seeing things occurring in a remote location they could have no data about, and then you go there and, whoa, that thing obviously actually happened.

Yeah, that’s real.

Having these sorts of experiences of reading what’s in someone else’s mind, and finding out afterwards that it was true, didn’t quite convince me. I was sort of, well, either this shit is real or I’m going crazy, and neither one seemed entirely plausible. Then I got to know the science fiction writer Damien Broderick. He wrote the book Outside the Gates of Science, which is a popular exposition on psi phenomena.

We may see a two-to-four-year arc of development in AGI, where AGI goes from being a research topic to being a commodity rolled out everywhere.

I followed up all the references in that book and spent six months of spare time just reading all the research data in parapsychology, and then contacting various researchers in parapsychology to convince myself they weren’t all frauds. Eventually, it was actually scrutinizing huge amounts of data spreadsheets and writing scripts to crunch them that finally brought me to the conclusion: these results do not go away.

This is weird, but one way or the other, this is real stuff. Then once you see that this is real stuff, you see various phenomena, like people in a meditative trance state being better at remote viewing and ESP than people who are not. There are various direct connections between spiritual modifications of the state of consciousness and the effectiveness of parapsychological phenomena. Having opened that door, I looked further into the data on reincarnation and survival, which, again, not everyone who believes in parapsychology believes in.

I totally don’t believe in Christian visions of heaven or hell, nor in Hindu mythology of reincarnation. None of these things can really be the full story by any means; there are all sorts of weird holes in all these traditional mythological ideas. On the other hand, again, there is data there you cannot wish away. The data on kids being born knowing details of other people’s lives is just too much.

Right. I had Jim Tucker on this podcast. He’s the guy at UVA.

Yeah, that’s all there. Again, you can drill down into the actual first-person reports of various people, and it’s not all lies. What all this tells you is that there’s far more to our life and universe than the conventional rationalist materialist worldview says. It doesn’t tell you that any mythological religion is correct in any sense beyond that they make people feel good. It tells you there’s a lot more than what is conventionally acknowledged.

Consciousness is the ground, and our physical universe is one among many manifestations within a deeper field of consciousness.

When you cross-reference that to the technological singularity, and you say we’re going to create minds that are massively more powerful and insightful than the human mind, whether by engineering AGI or by plugging computers into our brains and networking humans together via WiFi telepathy, the odds seem very high that these transhuman minds will be able to understand the aspects of life, the universe, and everything that are still nebulous and confusing to us humans.

Apes might look up at the sky and wonder what those lights are, but we’ve gotten far more understanding of what those shiny lights in the sky are than apes are able to. In the same way, superhuman AGIs, mind uploads, and enhanced humans are going to be able to get a far deeper understanding of survival, reincarnation, parapsychological phenomena, and human consciousness than we have.

Then what will they be able to do with it? What kinds of technologies will they be able to develop? Once you go in that direction, Terence McKenna’s idea that post-singularity AGI could use some quantum phenomena we don’t yet understand to reach back in time and in some way influence what humans do to cause its own creation starts to seem less insane. You’re led to think, in fact, that’s probably not crazy enough.

As Max Born said about some theories of quantum mechanics, it wasn’t quite crazy enough to be true; the reality is probably going to be far crazier than anything we’re able to cook up now. How you interleave this line of thinking with what we started off the hour talking about, in terms of what’s going on in Ukraine and malnutrition and brain stunting among African children, is all quite complex and confusing, because there are many aspects of the universe human beings don’t understand at all.

There are probably much more intelligent, much more benevolent minds existing now in the universe than any of us. On the other hand, in this particular shard of the multiverse that we’re living in, there are pretty horrible things happening. If the AGI is reaching back in time to cause its own creation, why doesn’t it reach back in time and give some food to the kids in Ethiopia so they can grow up with healthy brains?

The default in Africa and Asia is to believe that parapsychology exists, and that consciousness is the universal ground.

There’s clearly a lot we don’t understand about the overall order that we’re acting within, either from a rational or from a sort of spiritual and compassionate point of view. I should add, this is all my own peculiar self-understanding, and my colleagues in SingularityNET, the blockchain project I’m running, and OpenCog Hyperon, the AGI engineering project I’m running, are a reasonable percentage as crazy as I am.

There are also certainly a good number of highly productive, incredibly helpful colleagues who take more of a rationalist materialist view, and who are contributing in a super ethical and productive way toward building decentralized blockchain infrastructure, AGI cognitive algorithms, and so forth. I don’t think you need to embrace all of this broader perspective to contribute very productively toward bringing about a beneficial singularity.

It is interesting to me that I’m able to mouth off about these crazy ideas on a podcast and I’m not going to be completely shunned from the technology and science community because of it. I think there is a greater acceptance of spiritual perspectives, even a slightly greater acceptance of parapsychology, than there was 10 or 20 years ago.

Yeah. 100%, you wouldn’t have even made it out alive.

It depends on where you are. We talked about China in the context of face recognition. I would say right now, a tremendous majority of Asian people believe parapsychology is real; they’re just like, well, that’s old stuff, I’m more interested in my phone. But it’s not like they dismiss it. The whole rationalist materialist worldview is sort of a Western thing.

The default in Africa and Asia is to believe that parapsychology exists and that consciousness is the universal ground. That’s taken for granted in India too. That’s the default belief system for the majority of the world’s population. It’s not what drives those economies, and it’s not what they mostly want to think about. It’s taken for granted in a very different way than what you find in the US or Western Europe.

Now, AGI is part of the marketing slogan for global corporations.

Even regarding AGI, though: I got my PhD in math in the late 80s, started working on AGI, published the first book on AGI in 2004, and started the AGI conference series in 2006. Back then, the notion of AGI was way out there and not really a subject for polite discussion in a university research seminar or a corporate boardroom. Now, AGI is part of the marketing slogans of global corporations.

Vladimir Putin talked about AGI in his talk at the AI Journey conference a few years ago. We’ve seen a transition from AGI being on the margins to being well accepted. I think there’s greater acceptance of consciousness research as well. When I was a research fellow in psychology at the University of Western Australia in the mid-1990s, consciousness research was still not fit for polite discussion in the psychology department. Now, consciousness studies is a real part of the psychology field.

I think we’re seeing an opening up within the scientific community and the mainstream technology world, even the politics world, to a broader understanding of what might be possible, of what humanity and the universe might be, which is interesting. Still, if I look at it empirically, with my hard-nosed scientist hat on, I’m going to say technology is advancing exponentially in a very clear way, and human consciousness is not opening up exponentially in a clear way. I mean, there’s some of that.

And it is happening. It’s behind the veil.

Well, there’s an increase in tribalism. There’s an increase in narrow-mindedness in many ways.

If you get in these conversations with people, you think, okay, this person is a die-hard materialist; that’s how I identify or characterize this person. Then you take the risk of having a spiritual conversation with them, and you find out the person has seen angels, or the person has had an out-of-body experience. Astral projection is what it’s called.

For example, Vishen Lakhiani. He’s the founder of Mindvalley. He has a whole online course about astral projection, where he brought in one of the leading experts to teach it. He shares in an interview that he had an out of body experience.

His first one, I think, was when he was still a teenager, and it changed his life. He was able to travel out of body and hang out in his backyard or something. It was amazing. I think even the die-hard materialist would agree that somebody like Carl Jung was an amazing scientist.

That’s definitely not true, by the way. I mean, I spent a number of years in a university psychology department, and guys like Jung and Freud are thought of on a par with, say, Plato or Spinoza, but they’re not thought of as scientists.

But he came up with the term synchronicity, and he did all this research on synchronicity.

Well, again, if you put the hard-nosed empiricist hat on, science is about creating precise hypotheses and validating or refuting them. Anecdotal evidence isn’t science, and Jung’s book on synchronicity was all anecdotal evidence.

There’s no monopoly on AI expertise. There are brilliant AGI and narrow AI developers, young and old, all over the place.

Okay, I get your point.

I mean, if you look at Freud: Freud came up with the notion of repression, which is real. He also had the Electra complex, the idea that all women are plagued in their childhood by the desire to be boys, alongside the Oedipal desire to have sex with your mom and kill your dad. There’s a lot of weird stuff in there, along with stuff that we now consider valid. They didn’t have a method for winnowing through which of their ideas held up or not, which is what the scientific method gives you.

I find Jung and his whole notion of archetypes and the collective unconscious incredibly inspirational, and I think there’s a deep validity to the notion of synchronicity also, but definitely those guys were not scientists in the modern sense. The interplay between spiritual intuitive insight and science and engineering is going to be one of the more interesting things during the next couple of decades as we advance toward the singularity, I think, because to make something like AGI, robotics, or brain-computer interfacing work takes a very rigorous, hard-nosed scientific approach.

You would probably agree that if you’re going to get your head cut open and get some wires stuck in to connect your brain to a peripheral, you probably want a very rigorous and careful testing methodology carried out on the machinery. On the other hand, there are very big decisions to be made about what direction to take technology, when to roll things out, and who controls what technology, and we don’t have science to make those strategic decisions.

Those strategic decisions are either being made on a sort of spiritually inspired basis, or they’re being made on an ego-, greed-, and tribalism-driven basis. We need to reconcile the scientific, engineering, empiricist point of view, the one that will throw out the brain-computer interface if it doesn’t hold up to testing, with an inspired and spiritual mindset and perspective for guiding the strategic decisions. That’s not how things are mostly working now; decisions on science and technology are being made from a position of ego and a position of tribalism. This is a significant issue.


My friend Gabriel Axel is working on an app and a whole program called Path Form, one of the motives of which is to sort of enlighten the AGI developers and technologists of the world. The idea is to combine various consciousness practices into a program that will be appealing to technical people: AI developers, nanotech developers, and brain-computer interface developers, with a view toward increasing the percentage of people working on the nitty-gritty technology who are coming at it from a perspective of broader blissful consciousness and states of extraordinary well-being.

The more developers come at it from that perspective, the more likely strategic decisions are going to be informed in a sort of enlightened way. That said, if you look at the rollout of face recognition, it was mostly developed from a standpoint of neutral academic curiosity and is now being rolled out mostly for espionage and tribalistic purposes, and secondarily for commercial purposes, figuring out what to sell you. What started out ethically neutral is mostly being deployed in an ethically bad way.

Brain-computer interfacing is an example where we would really rather not have it come out that way. AGI, of course, is a different example. Look at the assistants in our smartphones, Siri and Google Assistant; these things are pretty dumb right now. One thing I want to do with OpenCog Hyperon running on SingularityNET is make smartphone assistants that actually are smart and understand what’s going on.

But again, this can be done from a standpoint of brainwashing you to buy a bunch of garbage, or it can be done from a standpoint of having the smartphone assistant help you through your day, remind you when you’re not being your best self, help you keep on track with spiritual practices, and help connect you with the experiences you need to take the next step in your own growth.

These sorts of smartphone assistant apps can go either way, and it depends on who owns them, who’s developing them, and what state of consciousness they’re in as they’re doing the development. That’s sort of the bifurcation or choice point we’re at now, which is a quite interesting position to be in. I think I’m about out of time here.

This is fabulous, though. What a great conversation we had. Just a fun fact to share with you to round off this interview—Terence McKenna became interested in psychology because at age 14, he read Carl Jung’s book, Psychology and Alchemy.

Oh, wow. Yeah, interesting.

This is fun stuff. 

We could talk about Jungian archetypes for a long time. We can save that for a follow-up next year.

Yeah. Yeah. Fun stuff. Alright. So, how do our listeners or viewers learn more from you, follow you on social media, and read more of your stuff? Where should they go?

My personal website has links to my Twitter, LinkedIn, and a bunch of my books and podcasts, and so forth. For my professional work, you can look at the SingularityNET site, which is the AI project, with links to a bunch of other stuff too. That should keep you busy for a while. I did a 4-hour podcast with Lex Fridman a couple of years ago.

On psi and reincarnation, I did a couple of podcasts on the New Thinking Allowed podcast in those areas as well.

Wow, awesome. Fabulous. Ben, thank you so much. Thank you, listener. Keep an open mind and stay curious. We’ll catch you in the next episode. I’m your host, Stephan Spencer, signing off.

Important Links

Checklist of Actionable Takeaways

✓ Exercise the freedom of information and decentralize AI. This will allow the public to access exponentially advancing technology without the government or a big tech company interfering with its development and usage.

✓ Aim to improve humanity’s future through inventing paradigms, tech, and robotics. Technology has made life easier – from farming to building cities to traveling. It effectively links all countries on earth, helps create globalization, and makes it easier for economies to grow and for companies to do business.

✓ Stay updated on machine learning and AI. Awareness of the latest ML and AI technology will allow me to slowly incorporate it into my business and daily life.

✓ Find ways to future-proof myself and my business. This will allow me to design or change myself and my business to continue to be useful or successful in the future if the situation changes.

✓ Dream of a better world. With all the negative things happening in the world, it needs dreamers and action-takers to envision and make it better for future generations.

✓ Spark ideas and creativity in my day. This will allow me to view and solve problems more openly and with innovation, as it broadens my perspectives and can help me overcome my prejudices.

✓ Join communities that discuss AI. These communities allow people to share their insights and learn from each other about AI. The success of AI technology is in the hands of the people who collaborate on the data and insights gathered from it.

✓ Have faith in humanity. Always believe that the good outweighs the bad and will win in the end. Faith in humanity is a prerequisite to bringing out the best in all of us.

✓ Visit Dr. Ben Goertzel’s website to learn more about his research on his blog, podcast, and books. Also, check out SingularityNET’s website to learn more about AI and AGI.

About Dr. Ben Goertzel

Dr. Ben Goertzel is a cross-disciplinary scientist, entrepreneur and author. He leads the SingularityNET Foundation, the OpenCog Foundation, and the AGI Society which runs the annual Artificial General Intelligence conference.

Dr. Goertzel also chairs the futurist nonprofit Humanity+, and serves as Chief Scientist of AI firms Singularity Studio, Rejuve, SingularityDAO and Xccelerando Media, all parts of the SingularityNET ecosystem. As Chief Scientist of robotics firm Hanson Robotics, he led the software team behind the Sophia robot; as Chief AI Scientist of Awakening Health he leads the team crafting the mind behind Sophia’s little sister Grace.

Dr. Goertzel’s research work encompasses multiple areas including artificial general intelligence, natural language processing, cognitive science, machine learning, computational finance, bioinformatics, virtual worlds, gaming, parapsychology, theoretical physics and more.  He has published 25+ scientific books, ~150 technical papers, and numerous journalistic articles, and given talks at a vast number of events of all sorts around the globe.

Before entering the software industry Dr. Goertzel obtained his PhD in mathematics from Temple University in 1989, and served as a university faculty in several departments of mathematics, computer science and cognitive science, in the US, Australia and New Zealand.

Disclaimer: The medical, fitness, psychological, mindset, lifestyle, and nutritional information provided on this website and through any materials, downloads, videos, webinars, podcasts, or emails is not intended to be a substitute for professional medical/fitness/nutritional advice, diagnoses, or treatment. Always seek the help of your physician, psychologist, psychiatrist, therapist, certified trainer, or dietitian with any questions regarding starting any new programs or treatments, or stopping any current programs or treatments. This website is for information purposes only, and the creators and editors, including Stephan Spencer, accept no liability for any injury or illness arising out of the use of the material contained herein, and make no warranty, express or implied, with respect to the contents of this website and affiliated materials.



Please consider leaving me a review with Apple, Google or Spotify! It'll help folks discover this show and hopefully we can change more lives!
