The Great AI Debate

Written by Peter H. Diamandis | Mar 28, 2024

The following are my opening remarks from the 2024 Abundance Summit, which took place last week and focused on the topic of “The Great AI Debate.”

2024 ABUNDANCE SUMMIT OPENING REMARKS

We are at a unique point in human history in which we are witnessing the birth of a new intelligence on this planet—our digital progeny.

It’s been the stuff of science fiction for nearly a century, described in the pages of Isaac Asimov, Robert Heinlein, Philip K. Dick, and Arthur C. Clarke.

Back in the 1980s, I had the pleasure of knowing Arthur C. Clarke—author of 2001: A Space Odyssey—and of calling him a friend, a mentor, and the Chancellor of my first university, the International Space University (ISU).

Here’s what Uncle Arthur had to say back in 1964:

“The most intelligent inhabitants of that future world won’t be men or monkeys, they will be machines, the remote descendants of today’s computers,” said Clarke. “Now the present-day electronic brains are complete morons, but this will not be true in another generation. They will start to think, and eventually they will completely outthink their makers.”

“Is this depressing? I don’t see why it should be. We superseded the Cro-Magnon and the Neanderthal man, and we presume we are an improvement. I think we should regard it as a privilege to be steppingstones to higher things. I suspect that organic or biological evolution has about come to its end, and we are now at the beginning of inorganic or mechanical evolution, which will be thousands of times swifter.”

So, what happens when AIs are smarter than humans? More importantly, what happens when AIs are a million or a billion times smarter than humans?

Will these AIs ultimately be our greatest hope, or our gravest existential threat?

And how do we tilt this “AI singularity” in favor of humanity?

This, in part, is what our conversations today and tomorrow should focus on.

As Mo Gawdat wrote in his wonderful book Scary Smart: “My hope is that together with AI, we can create a utopia that serves humanity, rather than a dystopia that undermines it.”

Later this morning we’ll be hearing from Ray Kurzweil, who predicted with extraordinary accuracy back in 1999, 25 years ago, that AIs would achieve human-level intelligence before the end of this decade.

And while we are rapidly approaching that intellectual milestone, the exponential growth of machine intelligence won’t stop at this arbitrary point.

Twenty more doublings will yield a million-fold improvement, and thirty more will result in a billion-fold more intelligence than we possess.
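
For those keeping score, the math is simple: twenty doublings is 2^20 = 1,048,576, roughly a million, and thirty doublings is 2^30 = 1,073,741,824, roughly a billion.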

And as Elon Musk recently noted, “I’ve never seen any technology advance faster than this, the AI compute coming online appears to be increasing by a factor of 10x every 6 months.”
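
To put that rate in perspective: 10x every 6 months compounds to 100x per year, and a million-fold in just three years.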

Then in early March 2024, in response to Ray Kurzweil, Elon tweeted: “AI will probably be smarter than any single human next year. By 2029, AI is probably smarter than all humans combined.”

To reinforce this point, two weeks ago Anthropic released Claude 3, its newest large language model (LLM), which was measured as having an above-average IQ score of 101.

The field is moving at blinding speed.

So how do we think about having AI progeny that may soon be one-billion-fold more intelligent than us?

Which, by the way, is numerically equivalent to the intellectual difference between a hamster and a human.

With that kind of raw power and intelligence, we can anticipate that AIs will discover many new breakthroughs in physics and realize ingenious solutions to problems like famine, poverty, the climate crisis, and human mortality.

But solving such problems doesn’t only rely on intelligence—it also relies on one’s morality and values.

Morality serves as a compass guiding us toward ethical actions, particularly when personal gain and intense emotions tempt us away from doing what is just and fair.

So, where will our AIs get that morality?

The answer is US: humanity.

All of us here today are raising this new species of intelligence on planet Earth, and as Gawdat says, “We are its parents, and the AIs are our children... our artificially intelligent infants.”

Just as we teach our children to be empathetic, ethical, curious, and respectful, we must instill these values in our AIs to ensure they are forces for good in our world.

So, there you have it: an appetizer for our upcoming discussions.

Where do you stand in this conversation?

Do you fear AI? Or do you fear the malevolent human use of AI?

If given the choice to merge with such AIs, hop onto their exponential growth curve, and become a kinder, gentler cyborg, would you?

Does increasing intelligence come along with increasing wisdom? And greater respect for life?

I personally think so... or at least I hope so.

Gawdat believes that the machines will eventually “adopt the ultimate form of intelligence, the intelligence of life itself. And in doing so, they will embrace abundance. They will want to live and let live.”

Some of you will question humanity’s ability to survive in the age of digital superintelligence.

And others, myself included, imagine the mirror thesis: that humanity’s chances of surviving and thriving increase immeasurably with the emergence and oversight of a benevolent digital superintelligence.

So, again, I ask you, what do you think?

Will digital superintelligence be humanity’s greatest hope? Or our gravest existential threat?

These, then, are just a few of the questions we’ll be exploring here at the Abundance Summit during The Great AI Debate.

P.S. The theme for the 2025 Abundance Summit will be “Technological Convergence.” You can find more information here.