
The Great AI Debate

Written by Peter H. Diamandis | Dec 3, 2023

The great AI debate is on... Is AI friend or foe? Is AI the road to abundance? Or the end of humanity? Are you an accelerationist (go fast!) or a doomer (slow down!)? 

Do you believe AI will give us all god-like powers in the next 5 years, or overthrow humanity?

During my upcoming Abundance360 Summit in March, we’ll be discussing this very topic with some of the leading entrepreneurs and voices in AI.

It’s hard to believe that ChatGPT is only 1 year old, or just how fast everything is moving.

Question: Have you prioritized AI as *the most important tech* in your life yet? How do you view AI in your life, your company, and your industry?

Frankly, there is no more important question for you to contemplate if you’re an entrepreneur, CEO, investor, philanthropist, or business owner.

AI is so fundamental that it’s likely to have a binary outcome:

“There will be two kinds of companies at the end of this decade.

Those fully utilizing AI... and those that are out of business.”

Let’s take a quick look at some “Reasons to be concerned” vs. “Reasons to relax” about AI.

 

Reasons to be Concerned...

Firing Sam: Why in the world did the OpenAI Board (specifically Ilya Sutskever, OpenAI’s Chief Scientist) fire Sam Altman? This is the $90 billion question that will no doubt be a pay-per-view movie within the next 3 years. Did early evidence of artificial general intelligence (AGI) spook the Board? Whatever the reason, the fact that the world’s most powerful AI company has experienced such turbulence is itself reason for concern. It’s also a reminder that in the great AI debate, we need to pay attention not only to the technology itself but also to the people who develop and control it.

2024 US Presidential Election (“Patient Zero”): Back in 2016, it took Cambridge Analytica and a sizeable budget to interfere with the US presidential election. If Cambridge Analytica and other actors had that much influence in 2016 (when AI was far less developed than it is today), what can we expect during our next election in 2024, when a high school student using today’s free AI tools could likely cause significant havoc?

Job Losses Due to AI: The elimination of jobs resulting from the prolific use of AI could leave an entire generation of college grads scratching their heads as the market for skilled labor disappears. This, on top of the trauma from the COVID-19 pandemic, could be a formula for civil unrest. How do we retrain or upskill workers in time? Can we realistically make some version of UBI work? These questions will only become more pressing as the pace of AI development accelerates.

Disinformation and the Erosion of Truth: We have every reason to be concerned about the erosion of truth and trust due to disinformation. The dystopian use of AI equips bad actors with the tools to generate misinformation at scale—from deepfakes to fake news. What will this mean for crucial social institutions (e.g., the media, education) which, although imperfect, play a critical role in keeping our society functioning? Can algorithms that help us parse fact from fiction, and truth from disinformation, become available in time?

 

Reasons to Relax...

When it comes to reasons to relax, the logic of Meta’s Chief AI Scientist Yann LeCun will help take the edge off your concerns.

As LeCun clearly argues, “intelligence doesn't necessarily equate to a desire for domination.” This insight is crucial in understanding the trajectory of AI development.

Why AGI Doesn’t Equate to Domination Over Humanity: First, let’s consider the human analogy used by LeCun: “Intelligence in humans doesn’t always correlate with a thirst for control. In fact, it's often the opposite. The most intellectually gifted among us aren't necessarily those seeking power. This phenomenon can be observed in various arenas, from international politics to local communities. It's believed that those who aren't as intellectually endowed might feel a greater need to influence others, perhaps as a compensatory mechanism. In contrast, those with higher intelligence can often navigate life relying on their skills and knowledge.”

The second point to consider is our existing comfort with working alongside people smarter than ourselves. As Yann LeCun reflects on his own experience leading a research lab, “the most rewarding hires were those who brought more intellect to the table than myself. Working with individuals who display superior intellect can be enriching and elevating.” Similarly, our future interactions with AI assistants—envisioned to be more intelligent than us—will likely enhance our capabilities rather than diminish them.

AI: Enhancing Human Intelligence, Not Overthrowing It: These AI systems, advanced as they may be, will exist to serve and augment our intellect. This relationship is analogous to a mentor and apprentice, where the AI plays the role of a supportive and enlightening guide. It's a misconception that higher intelligence naturally leads to a desire for domination. This notion stems from our understanding of social species, like humans, where hierarchical structures are prevalent. However, intelligence and the desire to dominate are not inherently linked.

LeCun suggests that we consider orangutans, a species nearly as intelligent as humans, yet lacking any notable desire for dominance due to their non-social nature. This example illustrates that intelligence doesn't intrinsically lead to a desire for control. In the realm of AI, this principle remains true. We can design intelligent systems with no inherent ambition for domination. Their primary function will be to assist us in achieving our goals, acting more like tools than rulers.

Guiding AI with Human-Defined Goals: The key to harnessing AI's potential lies in goal setting. LeCun argues that, “It is we, the humans, who will define the objectives for AI systems. These intelligent systems will then create subgoals to achieve these primary goals. However, the method to ensure that AI aligns its subgoals with our intended outcomes is a technical challenge yet to be fully resolved.” It represents a frontier in AI research, crucial for ensuring that AI develops in a manner beneficial to humanity.

AI as an Extension of Human Interaction: In the future, we might interact with the digital world predominantly through AI agents. Imagine an AI as a more interactive, knowledgeable version of Wikipedia, a platform that not only stores information but also infers, learns, and assists. This AI would need to be based on an open-source platform, akin to the way the internet operates today. It’s imperative for such a system to be open and accessible to all, avoiding the perils of being controlled by a few private entities.

The Open-Source Imperative and Global Contribution: Yann LeCun does acknowledge that the dangers of a small number of companies controlling super-intelligent AI systems are significant. “They could potentially influence public opinion, culture, and more, leading to an imbalance of power and control.” He concludes, therefore, that the development and operation of these AI systems must be open source, allowing for global contribution and oversight. This approach ensures that AI becomes a repository of our collective human knowledge, shaped by diverse inputs from across the globe.

 

Join the AI Debate at the Abundance360 Summit

Whatever side of this debate you fall on, there is no question that we all need to invest the time to fully understand AI, how to use it, and how to guide it.

And as an entrepreneur, CEO, or investor, NOW is the time to engage.

If you want a front-row seat to this debate and the insights and tools you need to go big, create wealth, and increase your own impact on the world… then consider joining me at the Abundance360 Summit on March 18-21, 2024.

The pace of AI development is accelerating exponentially...

Don’t blink.