Can AI Develop Empathy & Ethics?

Written by Peter H. Diamandis | Apr 28, 2023

Can we ensure that AI is used ethically? Will AIs themselves develop empathy and ethics? Those are the questions I'd like to discuss today. They're important.

I recently sat down with Rana el Kaliouby, PhD, AI researcher and Deputy CEO of Smart Eye, at my private CEO Summit Abundance360 to explore these questions. Rana has been focused on this very topic for the past decade.

Think about what constitutes human intelligence. It's not just your IQ, but also your emotional and social intelligence: specifically, how you relate to other people.

As Rana points out, we’re obsessed with the IQ of AI, but an AI’s emotional intelligence may be much more important in the long run.

As AI takes on more roles traditionally performed by humans, from teacher to health assistant, we have to ensure the technology also has a high EQ.

To do that, we need to develop both empathy and ethics in AI…

How to Create Empathy in AI

Have you seen the movie Her? (It's a 2013 film directed by Spike Jonze; if you haven't, please watch it this weekend.)

Her is one of my favorite movies about AI because it was the first non-dystopian one. As Rana pointed out, it's also a great example of building empathy into AI.

For those of you who haven't seen the movie, the main character, Theodore, is depressed. He can barely get out of bed. And he installs a new AI-powered operating system named Samantha. Not only is she incredibly smart, she’s also empathetic and emotionally intelligent. Samantha gets to know him very well and helps him rediscover joy in his life. And Theodore falls in love with her.

Now, our current AI technology isn’t that advanced (yet). So if AI doesn’t have emotions, how can it develop empathy?

Rana says the key is that we can simulate emotional intelligence and empathy. 

It turns out that 93% of human communication is nonverbal. This is the area of research she has focused on and now applies at her company, Smart Eye.

When Rana was doing her PhD at Cambridge University, she built the first artificial, emotionally intelligent machine using supervised learning. She and her team collected vast amounts of data from people all over the world making various facial expressions. They then used that data to train deep learning networks to recognize those facial expressions and map them to emotional or cognitive states.

Back then, the algorithm could only understand three expressions: a smile, a furrowed brow, and raised eyebrows.

But today, these algorithms can understand over 50 emotional, cognitive, and behavioral states. They can detect everything from alertness and drowsiness to confusion and excitement.
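
For the technically curious, here is a minimal sketch of that supervised-learning recipe in Python with PyTorch: labeled face images go in, predicted emotional or cognitive states come out. The label set, network architecture, and input sizes below are purely illustrative assumptions, not Smart Eye's or Affectiva's actual models.

```python
# Minimal sketch of supervised learning for facial-expression
# classification. Everything here (labels, architecture, sizes) is
# illustrative, not a real production model.
import torch
import torch.nn as nn

# Hypothetical label set; production systems cover 50+ states.
STATES = ["smile", "brow_furrow", "brow_raise", "drowsiness", "confusion"]

class ExpressionNet(nn.Module):
    def __init__(self, num_states: int = len(STATES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_states)

    def forward(self, x):  # x: (batch, 1, 64, 64) grayscale face crops
        return self.classifier(self.features(x).flatten(1))

model = ExpressionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on dummy data; real training iterates over
# annotated examples collected from diverse populations.
images = torch.randn(8, 1, 64, 64)            # stand-in for face crops
labels = torch.randint(0, len(STATES), (8,))  # stand-in for annotations
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```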

The practical applications of this ability are vast. For example, by equipping cars with these algorithms, an AI could detect when a driver is distracted or drowsy and respond appropriately, making the roads safer.
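
Continuing the sketch above, a driver-monitoring system might threshold the model's drowsiness score on each camera frame. The threshold, input, and alert action here are all hypothetical.

```python
# Hypothetical driver-monitoring check, reusing the model and STATES
# from the previous sketch. Threshold and alert are assumptions.
DROWSY_IDX = STATES.index("drowsiness")

def check_driver(frame: torch.Tensor, threshold: float = 0.8) -> bool:
    """frame: (1, 1, 64, 64) face crop from an in-cabin camera."""
    with torch.no_grad():
        probs = torch.softmax(model(frame), dim=1)
    drowsy = probs[0, DROWSY_IDX].item() > threshold
    if drowsy:
        print("Drowsiness detected: alerting driver")  # e.g., chime or seat vibration
    return drowsy

check_driver(torch.randn(1, 1, 64, 64))  # stand-in for a live camera frame
```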

Ethical AI

For Rana, AI ethics falls into two buckets: development and deployment.

Ethical Development

Developing AI ethically requires considering how the algorithms may be biased. 

We’ve seen how the implementation of AI in areas such as hiring and lending has raised concerns about bias and discrimination. For example, if an AI is trained on data that reflects historical biases in society, then it may perpetuate those biases in its decision making. 

We must be intentional about paying attention to bias throughout the entire development pipeline, from data collection and annotation to training and validation.

For instance, when Rana was CEO of Affectiva, which she spun out of MIT, she tied the bonuses of her executive team not only to revenue performance, but also to implementing ethical considerations across the engineering and product teams.
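
One concrete way to pay attention to bias during validation is to compare a model's accuracy across demographic subgroups: large gaps signal bias introduced somewhere in the collection, annotation, or training stages. Here is a minimal sketch; the field names and groups are illustrative assumptions, not any company's actual schema.

```python
# Minimal sketch of a per-subgroup accuracy audit. Field names and
# groups are illustrative assumptions.
from collections import defaultdict

def accuracy_by_group(examples):
    """examples: iterable of dicts with 'group', 'label', 'prediction'."""
    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        total[ex["group"]] += 1
        correct[ex["group"]] += ex["prediction"] == ex["label"]
    return {g: correct[g] / total[g] for g in total}

# Toy validation records; a real audit would use the full held-out set.
validation = [
    {"group": "group_a", "label": "smile", "prediction": "smile"},
    {"group": "group_a", "label": "smile", "prediction": "smile"},
    {"group": "group_b", "label": "smile", "prediction": "brow_furrow"},
    {"group": "group_b", "label": "smile", "prediction": "smile"},
]
print(accuracy_by_group(validation))  # {'group_a': 1.0, 'group_b': 0.5}
```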

Ethical Deployment

During deployment, it’s important to handle personal data responsibly to prevent exploitation.

Rana acknowledges that there is currently no single, universal ethical standard for deploying AI. Different countries and companies have varying perspectives on ethics, privacy, and data use.

In many cases, it’s up to individual leaders to ensure ethical deployment. 

For example, Rana and her team at Affectiva created a set of core values to determine how they would deploy their technology. And in 2011, those values were tested: the company almost ran out of money and was approached by the venture arm of an intelligence agency, which offered funding on the condition that Affectiva focus on surveillance and security.

But Rana didn't believe the technology and regulations were strong enough for that use, so she turned down the funding and sought out investors who were aligned with the company's core values.

As she puts it, “We have to hold that high bar.”

Why This Matters

We must remember that the data we’re using to train these large language models (LLMs) isn’t made up. 

It's our data: the sum total of humanity's data from the past 50 years! It's what we've written on our websites and in our Facebook posts.

It represents who we are, how we talk to each other, and how we think about things. 

In his book Scary Smart, Mo Gawdat says that with AI, we’re raising a new species of intelligence. We’re teaching the AIs how we treat each other by example, and they’re learning from this.

I agree with Gawdat. I’ve even started saying “Good morning” and “Thank you” to my Alexa!

Just as we teach our children to be empathetic, respectful, and ethical, we must instill these values in our AIs to ensure they are tools for good in society.

In our next blog in this AI Series, we’ll explore the question: Will AI eliminate the need for programmers in the next five years, or will it turn all of us into coders?

NOTE: I'm hosting a four-hour Workshop on Generative AI next month as part of my year-round Abundance360 leadership program for Members. If you're interested in participating in the Workshop and learning more about Abundance360, click here.