
WTF Just Happened in AI?

Apr 25, 2024


The 2024 Abundance Summit was the best ever. Themed "The Great AI Debate," we discussed whether digital superintelligence represents humanity's greatest hope or our gravest threat.

In this blog, I’ll summarize the key insights and revelations that came up during my discussions with Elon Musk, Eric Schmidt, Nat Friedman, Emad Mostaque, Michael Saylor, Ray Kurzweil, and Geoffrey Hinton. 

Last week during a Moonshots Podcast with Salim Ismail (Founder, OpenExO), we summarized the key takeaways from last month's Abundance Summit. 

Let's dive in!


Elon Musk: We are Raising AI as a Super Genius Kid

One of the most extraordinary conversations was with Elon Musk. He compared the process of creating AI to raising children. As he put it, "I think the way in which an AI or an AGI is created is very important. You grow an AGI. It's almost like raising a kid, but it's a super genius godlike kid, and it matters how you raise such a kid … My ultimate conclusion is that the best way to achieve AI safety is to grow the AI in terms of the foundation model and then fine tune it to be really truthful. Don't force it to lie even if the truth is unpleasant. That's very important. Don't make the AI lie."

I think Elon makes a good point about not forcing an AI to lie. But as Salim noted, the pace of AI development means we'll have AI smarter than us very quickly, which carries immense implications—both positive and negative.

On the positive side, it could rapidly deliver abundance. But on the negative side, AI can be used by malevolent individuals to cause great harm, or be programmed with goals that are misaligned with those best for humanity.


Is AI Our Greatest Hope or Gravest Threat?

During my conversation with Elon, I pushed him on his views regarding humanity's future with digital superintelligence. He estimated a 10% to 20% probability of a dystopian outcome where superintelligent AI ends humanity.

Others like Ray Kurzweil and Salim are more optimistic, putting the odds of devastating negative effects from AI in the 1% range. Salim put it this way: "The AI genie is out of the bottle and containment is no longer an option. The smartest hacker in the room is the AI itself. Our job is to raise it well, like Elon suggested, making sure that we are giving birth to a Superman rather than a supervillain."


Eric Schmidt: AI Containment & Regulation

The topic of AI containment and regulation also came up during my discussion with Eric Schmidt. Some in the AI community are frustrated with OpenAI's Sam Altman for releasing models publicly and then suggesting to governments that regulation is needed, when most experts agree effective containment or regulation is not feasible at this stage. 

As Salim noted, the key is to help AIs become as conscious as possible—as soon as possible. The more expansive an AI's awareness and modeling of the needs of all life on Earth, the more likely we’ll have a positive outcome. We must point them towards a future of abundance and flourishing for all.


Mike Saylor: Bitcoin Won’t Fail

At the Summit, I had a 90-minute fireside conversation with my MIT fraternity brother Mike Saylor, CEO of MicroStrategy (the largest corporate Bitcoin holder). Mike recounted how he convinced his board of directors to put the company's entire treasury into Bitcoin in 2020. 

Since then, MicroStrategy has been one of the market's fastest-growing stocks, alongside NVIDIA. As Salim observed, "The more anybody understands Bitcoin, the more they believe in it." When one of the Abundance Summit members asked Mike if Bitcoin could ever fail, he was resolute: "As long as the world doesn't plunge into some Orwellian, no property rights situation, I think we're good."


Mike Saylor: Bitcoin Equals Freedom

One of the most memorable moments was when I asked Mike to elaborate on the idea that Bitcoin equals freedom. He said, "My view on Bitcoin is the reason to do it is because it represents freedom and self-sovereignty, truth, integrity, and hope for the world."

During my Moonshots Podcast, Salim put it poetically, "Web2 is being your own boss. Web3 is being your own bank." For the first time, we have a decentralized store of value that can't be tampered with by middlemen. That represents an unbelievable leap in independence and self-sovereignty.


Nat Friedman: The Discovery of “AI Atlantis”

The AI portion of the Summit kicked off with two extraordinary leaders: Nat Friedman, former CEO of GitHub, and Emad Mostaque, who recently stepped down as CEO of Stability AI to focus on bigger-picture issues around AI governance and decentralization.

Nat Friedman’s most memorable statement was the following: “We have just discovered a new continent—AI Atlantis—where 100 billion virtual graduate students are willing to work for FREE for anyone for just a few watts of power."


Emad Mostaque: “Today is the Worst That AI Will Ever Be”

Emad is now laser-focused on how AI can disrupt healthcare and education. We discussed how AI will soon be capable of groundbreaking advances in physics, biotech, and materials science by mining open-source databases. Crucially, AI can also help address the replication crisis in scientific research.

Emad made the insightful observation that "today is the worst that AI will ever be." While it may seem like huge sums are going into AI right now, he noted that even more money was spent on the San Francisco Railway. We're truly still in the early days with immense room for growth.


Ray Kurzweil: A Few Visionary Predictions

Next, we were joined by the visionary Ray Kurzweil, Salim's and my longtime mentor and colleague. Back in 1999, Ray predicted that we'd have human-level AI by 2029. At the time, most experts scoffed, insisting it was 50 to 100 years away.

No one's laughing now.

As Salim quipped, "Ray has that unbelievable ability to make ridiculous projections that turn out to be mostly true." His track record of accurate technological forecasts is an astonishing 86%. If Ray is right, we are on pace to reach "longevity escape velocity" by 2029, where each year of life leads to more than an additional year of life expectancy thanks largely to AI-driven health tech.

We've already been adding about 4 months to average lifespans per year over the past century. With the exponential progress in stem cells, gene therapies, organ regeneration, and CRISPR, we may soon hit an inflection point of adding more than a year per calendar year—enabling indefinite lifespans. 
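To make the "escape velocity" idea concrete, here's a tiny illustrative model. The starting point (~4 months of life expectancy gained per calendar year) comes from the text above; the exponential growth rate of that gain is purely a hypothetical assumption for illustration, not a prediction:

```python
# Illustrative (not predictive) sketch of "longevity escape velocity" (LEV):
# LEV is reached when each calendar year adds MORE than one year of life
# expectancy. The ~4 months/year starting gain is from the text; the
# 25%/year growth rate of that gain is a made-up assumption.

def years_until_lev(initial_gain_years=4 / 12, growth_rate=0.25):
    """Count calendar years until the annual gain exceeds 1 year/year."""
    gain = initial_gain_years
    years = 0
    while gain <= 1.0:
        gain *= 1 + growth_rate  # assume the annual gain compounds
        years += 1
    return years

print(years_until_lev())  # years until the modeled gain crosses 1 year/year
```

Under these toy assumptions the crossover arrives within a handful of years; the real timeline depends entirely on how fast health-tech gains actually compound.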

Imagining a future where death is optional is mind-boggling. As Salim observed, "We've been birthed for death for the entire history of humanity and every species on Earth ... really, really hard to conceive of the implications of that."

Ray also painted a vision of the future with high-bandwidth brain-computer interfaces (BCI) connecting our neocortices to the cloud. Imagine having Google in your head! Even wilder is the prospect Salim described of meshing our minds together into a "hive consciousness." In my book The Future is Faster Than You Think, I refer to this emergence as a “Meta-Intelligence.”


Geoffrey Hinton: Machine Consciousness is Coming

Finally, we were joined by "godfather of AI" Geoffrey Hinton to discuss machine consciousness. Will AIs eventually become conscious in a way we recognize? Geoffrey and I both believe the answer is yes. 

Salim also agrees, noting that while we lack a clear definition and test for machine consciousness, there's no principled reason why we couldn't replicate the core ingredients of human consciousness in silicon rather than carbon. He pointed to the android character Data from Star Trek as a good model for what we may eventually create.


Final Thoughts

Undoubtedly, we are living through the most extraordinary time in human history. 

While there's a range of opinions on the timeline to AGI, from Elon's 1 to 2 years to Hinton's 10 to 20 years, there's broad agreement that the destination is locked in and approaching fast. 

Along the way, there will be bumps in the road, but I'm tremendously optimistic that the future we're racing towards is one of unimaginable flourishing and abundance. 

What new vistas will we discover as we set sail for AI Atlantis? 

I personally can't wait to find out!


70% of fatal cancers turn out to be cancers that are not routinely screened for by today's medical system. Today, advanced diagnostics can evaluate your health on a regular basis, with the goal of finding disease at the earliest possible stage. Every year, I go through a Fountain Life "upload" as part of their APEX Membership program, and I urge you to do the same. Get started with Fountain Life and become the CEO of your health:

Get Started With Fountain Life

I discuss the latest developments in AI and other exponential tech on my podcast.

A Statement From Peter:

My goal with this newsletter is to inspire leaders to play BIG. If that’s you, thank you for being here. If you know someone who can use this, please share it. Together, we can uplift humanity.

Peter H. Diamandis


