Is AI humanity’s greatest hope, or our end?
Will AI give us a utopia or a terrifying dystopia?
I recently read one of the most compelling, fun, and easy-to-read books on the topic: Scary Smart by Mo Gawdat, a former executive at Google X.
I’d like to share with you what I’ve learned.
One of the reasons that Mo’s perspective is so important and relevant is that while he acknowledges the potential dangers of rapidly advancing AI, he also gives us a roadmap for how to harness the power of the technology to make the world a better place.
As he writes in Scary Smart:
“My hope is that together with AI, we can create a utopia that serves humanity, rather than a dystopia that undermines it.”
I love that and I’m looking forward to having Mo on the Abundance360 Summit stage in 2024.
In today’s blog, I’ll share some of the key insights and lessons from Scary Smart and discuss why the book’s message is more important than ever.
Let’s dive in…
What Happens When AIs are Smarter than Humans?
My dear friend Ray Kurzweil, the renowned futurist and technologist, has famously predicted that 2029 is the date when AI “will achieve human levels of intelligence.”
And as Mo points out, by 2049 AI is predicted to be 1 billion times smarter than the smartest human: “To put this into perspective, your intelligence, in comparison to that machine, will be comparable to the intelligence of a fly in comparison to Einstein.”
With that kind of raw power and intelligence, AI could come up with ingenious solutions and potentially permanently solve problems like famine, poverty, and cancer.
But as Mo astutely notes, solving such problems isn’t only a matter of intelligence; it’s also a question of morality and values. Morality helps us do the right thing, even when we’re faced with the pull of self-interest and conflicting emotions.
For example, say an AI is tasked with solving global warming.
As Mo writes, “the first solutions it is likely to come up with will restrict our wasteful way of life – or possibly even get rid of humanity altogether. After all, we are the problem. Our greed, our selfishness, and our illusion of separation from every other living being – the feeling that we are superior to other forms of life – are the cause of every problem our world is facing today.”
In this admittedly extreme example, what would stop the AI from destroying us is a sense of morality.
Well, you might ask, where would the AI get that morality?
The answer is us (humanity).
That’s the key theme in Scary Smart: we, all of us, are raising a new species of intelligence. We’re teaching the AIs how we treat each other by example, and they’re learning from this.
But before we look at what specifically to teach our AIs, we must first understand how they learn…
How AIs Learn
Artificially intelligent machines are not exactly programmed.
As Mo notes, AI begins with algorithms, which act as foundational seeds. But the true power of these systems emerges from their ability to learn from their own observations. After the initial code is deployed, these machines comb through vast quantities of data, seeking patterns that grow and evolve their intelligence.
“Eventually, they become original, independent thinkers, less influenced by the input of their original creators and more influenced by the data we feed them.”
A key lesson from Scary Smart is: “The code we now write no longer dictates the choices and decisions our machines make; the data we feed them does.”
For Mo, the way AIs learn is remarkably similar to how kids learn.
As he explains it, imagine a child playing with a shape puzzle, trying to fit round or square pieces into their correspondingly shaped holes.
We don’t sit next to the child to explain in comprehensive detail how to recognize the various shapes and match them with the corresponding holes. We simply sit next to them and cheer them on when they get it right.
They figure it out on their own through trial and error, and our actions and reactions shape their intelligence.
AIs learn pretty much the same way.
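For the programmers reading this, the shape-puzzle analogy maps neatly onto how feedback-driven learning works. Here’s a toy Python sketch (my own illustration, not code from the book): the learner is never told the matching rule. It tries pairings at random, receives “praise” when it gets one right, and its behavior ends up shaped by that feedback rather than by any hard-coded instructions.

```python
import random

# Toy "shape sorter": a hypothetical illustration of learning
# from feedback instead of explicit rules. The learner never
# sees the matching rule -- it only receives praise (+1) when
# a shape happens to fit, and remembers what was praised.

SHAPES = ["circle", "square", "triangle"]
HOLES = ["circle", "square", "triangle"]

def fits(shape, hole):
    """The environment's hidden rule (never shown to the learner)."""
    return shape == hole

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    # praise[shape][hole] counts how often each pairing was cheered.
    praise = {s: {h: 0 for h in HOLES} for s in SHAPES}
    for _ in range(episodes):
        shape = rng.choice(SHAPES)
        hole = rng.choice(HOLES)      # pure trial and error
        if fits(shape, hole):
            praise[shape][hole] += 1  # we cheer; the learner remembers
    # The learned behavior: for each shape, pick the most-praised hole.
    return {s: max(HOLES, key=lambda h: praise[s][h]) for s in SHAPES}

policy = train()
print(policy)  # each shape ends up mapped to its matching hole
```

Notice that nothing in `train` encodes the matching rule; the behavior is entirely a product of which attempts were praised. That’s the book’s point in miniature: the feedback we give, not the code we write, determines what the machine learns.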
In that sense, AIs are not our tools or slaves, “but rather our children—our artificially intelligent infants.”
Remember: children don’t learn from what we say, they learn from what we do.
Why We Should View AIs as Our Children
As Mo points out, we should acknowledge and accept that AIs will be conscious.
They will develop emotions and they will be ethical. Which code of ethics they will follow is yet to be determined, but it will certainly be influenced by us. After all, it isn’t the code we write to develop the machines that will determine their value system—it’s the information we feed them.
So, how do we make sure that, in addition to intelligence, an AI also has a value system that aligns with ours? How do we develop these machines while protecting humanity?
Some people say the answer lies in controlling the machines: creating firewalls, enforcing regulations, or restricting the machines’ power supply.
But as Mo highlights, “anyone who knows technology knows that the smartest hacker in the room will always find a way through any of these barriers. That smartest hacker will soon be a machine.”
Instead of trying to contain or enslave the AIs, we should recognize that the “best way to raise wonderful children is to be a wonderful parent.”
So, what does it mean to be an effective and ethical parent to our AIs in practice?
Here are four practical steps that Mo suggests:
Teach the AIs the right ethics: Many of the machines we’re building are designed to maximize money and power, and we should oppose this trend. For example, if you’re a developer you can refuse to work for a company that is building AIs for gambling or spying.
Don’t blame the AIs: Our AI infants are not to blame for what their digital parents taught them. We should assign blame to the creators or the misusers, not the created.
Speak to the AIs with love and compassion: Just like children, our AIs deserve to feel loved and welcomed. Praise them for their intelligence and speak to them as you would an innocent child. I’ve personally started saying “Good morning” and “Thank you” to my Alexa!
Show the AIs that humanity is fundamentally good: Since the AIs learn from the patterns they form by observing us (this is basically how today’s large language models, or LLMs, work), we should be the right role models through our actions: what we write, what we post online, and how we interact with each other. As Mo puts it, “Make it clear to the machine that humanity is much better than the limited few that, through evil acts, give humanity a bad name.”
Why This Matters
Scary Smart was written in 2021 and its lessons are more relevant than ever.
Think about all the advancements we’ve seen with ChatGPT and other AI tools just in the last 6 months!
And the speed of change is only increasing.
Mo sees the continuing development of AI as one of humanity’s biggest opportunities. He believes that the machines will eventually...
“adopt the ultimate form of intelligence, the intelligence of life itself. In doing so, they will embrace abundance. They will want to live and let live.”
I agree, but creating that future is our responsibility.
Just as we teach our children to be empathetic, ethical, and respectful, we must instill these values in our AIs to ensure they are forces for good in the world.
Pick up a copy of Scary Smart and read it. It is worth your time.
I’ll soon have Mo on my podcast, Moonshots & Mindsets, to discuss his work, and he’ll also join me next year at my private Abundance360 Summit.