
The Birth of Artificial Intelligence

Written by Peter H. Diamandis | Dec 7, 2023

“We will see as much progress in the decade ahead (2023 – 2033) as we have seen in the past century (1923 – 2023).”

- Ray Kurzweil, Google, Singularity University

Over the next decade, waves of exponential tech advancements will stack atop one another, exceeding decades of breakthroughs in both scale and impact.

What will emerge from these waves are what I refer to as “Metatrends,” which will revolutionize entire industries (old and new), redefine tomorrow’s generation of businesses and challenges, and transform our lives from the bottom up. 

As we continue this Age of Abundance blog series, my mission is to explore each of the Metatrends shaping our future and give you an advanced vision of the road ahead. 

First up is the Metatrend: “Embedded Intelligence Will Be Everywhere.” Central to this Metatrend is the widespread use of artificial intelligence (AI).

While discussions about AI have become pervasive, it’s important to get some perspective on where this tech began and how fast it’s been moving.

That’s the subject of today’s blog.

Let’s dive in…

 

The Birth of “Artificial Intelligence”

It was the summer of 1956, in the sleepy town of Hanover, New Hampshire, on the campus of Dartmouth College, that the concept of artificial intelligence was first discussed.

John McCarthy, then a mathematics professor at Dartmouth, organized a conference that would launch and shape the field. For the gathering, known as the Dartmouth Summer Research Project on Artificial Intelligence, McCarthy convened a group of 20 preeminent minds in computer and cognitive science. Their primary purpose? To explore McCarthy's audacious conjecture that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

McCarthy's vision for the conference was shaped by his prior collaborations with Claude Shannon, a mathematical genius from Bell Labs and MIT (BTW, this is why Anthropic named their AI “Claude”). Their joint editorial venture on a collection of essays called Automata Studies had left McCarthy yearning for a deeper dive into the potential of computers to simulate intelligence. To this end, McCarthy and Shannon roped in Marvin Minsky, a comrade from their days as graduate students and a pioneer in neural nets, as well as Nathaniel Rochester of IBM, who had made significant contributions to computing machinery.

The quartet’s proposal to the Rockefeller Foundation to fund the workshop was ambitious: to have machines “use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” The proposal delved into topics like automatic computers, programming computers to use language, neuron nets, and even the role of randomness in creative thinking.

As for the attendees, their resumes spoke of brilliance: Shannon had developed the statistical theory of information; Minsky was known for his work on neural nets; Rochester had designed the widely used IBM Type 701; and of course, McCarthy had done extensive work on the mathematical nature of thought processes.

The fruits of this conference and its founders were profound. McCarthy introduced the term “artificial intelligence” (AI), setting a course for the field's future. Arthur Samuel would coin the term “machine learning” in 1959, and Minsky would go on to receive the Turing Award for his AI contributions in pattern recognition and machine cognition. As such, the Dartmouth conference wasn't just a summer workshop; it was the dawn of a new age in artificial intelligence.

By the 1970s, the field had evolved with the introduction of knowledge-based systems. These were specialized programs, like DENDRAL and MYCIN, designed to emulate the decision-making of human experts. Yet this decade also witnessed one of AI's “winters,” marked by dwindling enthusiasm and funding, largely because of unmet, overly optimistic expectations.

The 1980s brought a renewed spirit, driven by the commercial success of expert systems. Simultaneously, the realm of neural networks, inspired by human brain structures, experienced a renaissance. The reinvigoration was credited to the backpropagation algorithm, which made training these networks feasible. International interest was evident with endeavors like Japan's ambitious Fifth Generation Computer Systems project, which sought to pioneer Prolog-based computers. 
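
To make the backpropagation idea a bit more concrete, here is a minimal sketch in Python (NumPy) of a tiny two-layer network trained by propagating errors backward through the chain rule. The toy XOR dataset, layer sizes, learning rate, and iteration count are illustrative assumptions for this sketch, not details from the history above.

```python
import numpy as np

# Minimal backpropagation sketch: a two-layer sigmoid network fit to a toy
# XOR dataset. All hyperparameters here are illustrative choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: push the output error back through each layer (chain rule)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Predictions typically approach [[0], [1], [1], [0]]; results vary with initialization.
print(np.round(out, 2))
```

Each update nudges the weights against the gradient of the error, and it is this same mechanism, run at vastly larger scale, that trains today's deep networks.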

The transition to the 1990s marked a significant paradigm shift from knowledge-centric to data-driven AI methodologies. This decade celebrated the ascendancy of machine learning techniques such as decision trees, reinforcement learning, and Bayesian networks. Importantly, the 1990s also sowed the seeds for "deep learning," laying the groundwork for deep neural network architectures.

As the new millennium dawned, AI was poised for transformative growth. The 2000s, fueled by an explosion in data availability and computational prowess, saw the rise of techniques like convolutional neural networks. A standout moment came in early 2011, when IBM's Watson showcased the power of AI by outplaying human champions on the "Jeopardy!" game show.

However, it was the subsequent decade, the 2010s, that truly heralded the era of deep learning. Tech behemoths like Google, Apple, and Facebook spearheaded AI research, leading to real-world applications that touched millions, even billions of people. A highlight was Google DeepMind's AlphaGo, a program that achieved the unthinkable in 2016 by defeating a world champion Go player.

But it was the latter part of this decade that gave rise to the revolutionary Transformer architecture, and with it the AI renaissance we are now enjoying. Introduced in the landmark 2017 paper “Attention Is All You Need,” the Transformer enabled models like GPT and BERT, setting new standards in natural language processing tasks.
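
For readers curious what that architecture boils down to, below is a minimal Python (NumPy) sketch of scaled dot-product attention, the core operation described in “Attention Is All You Need.” The token counts, dimensions, and random inputs are illustrative assumptions; real Transformers add learned projections, multiple attention heads, and many stacked layers.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Blend value vectors V according to how well queries Q match keys K."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V                                # weighted mix of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))   # 3 query tokens, dimension 8
K = rng.normal(size=(5, 8))   # 5 key tokens
V = rng.normal(size=(5, 8))   # 5 value vectors
print(scaled_dot_product_attention(Q, K, V).shape)   # -> (3, 8)
```

Each output row is just a weighted average of the value vectors, with the weights set by how well the corresponding query matches each key; scaled up massively, that simple operation underlies models like GPT and BERT.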

 

Why This Matters

We are at a unique moment in history—on the cusp of a revolution that will usher in multiple trillion-dollar companies and industries. 

It’s a moment that reminds me of a number of disruptive, opportunity-rich periods:

  • The invention and deployment of electricity
  • The invention of radio
  • The birth of the internet (World Wide Web)
  • The adoption of smartphones
  • The launch of cloud services

AI is about to change business models across EVERY industry.

But as we learned above, the seeds of AI were planted at Dartmouth 67 years ago. So why has it taken so long for AI to emerge? Why is AI exploding now?

That’s the subject of our next blog.