The Rise of AI: How the Fourth Line of Information Will Transform Human Intelligence
AI is already smarter than the smartest human alive today, but it will make us smarter too
Hi Lab,
Last year while I was in San Francisco, I had the pleasure of catching up with Adam D’Angelo, CEO of Quora and Board Member of OpenAI. Adam described the more powerful large language models on the horizon and the exciting unexplored potential they held. One such application is Poe, the GPT-powered Q&A chatbot that Adam recently released, which will no doubt revolutionize the way we interact with AI. In an astonishingly short time, AI has become an everyday part of my workflow, and of many other people’s. Now that we’re living in a world co-inhabited by AI agents, I can see that everything Adam and I discussed was an understatement. As an engineer, it feels like the early web, with so many yet-to-be-imagined possibilities. As a scientist, these agents open new questions for us.
Are AI smarter than we are?
What’s different about their intelligence compared to our own?
How will they change our own intelligence?
In A Theory of Everyone (pre-order now, etc etc), which was written before GPT - perhaps one of the last books to be written without the help of AI! - I argue that AI are a fourth line of information for our species, one that supplements genetic evolution, cultural evolution, and individual learning. Chapter 12, “Becoming Brighter”, is all the more relevant now that we are living it first-hand. In that chapter (and various other places in the book), I explain that the way our intelligence has been shaped by our collective brain suggests the next step for AI and reveals how AI will shape our own intelligence. To see what I mean, let’s trace some of the history of the field.
Marvin Minsky was a god in the world of AI. No history of AI can be written without mentioning his contributions. Minsky shepherded the symbolic paradigm for building artificial intelligence, which was dominant from the 1950s to the 1990s. Human intelligence was assumed to lie in our ability to reason over a rich body of information. Computers were the ultimate logical machines and could store vast troves of knowledge. We could therefore program logic and apply it to rich representations of knowledge, such as in silico semantic networks, leading to human-like intelligence or better.
The early successes of this approach led to wild optimism about the near-term creation of human-like intelligent machines. In a 1970 interview with Life magazine, Minsky optimistically declared:
“In from three to eight years we will have a machine with the general intelligence of an average human being.”
Minsky was wrong about artificial intelligence. He was also wrong about human intelligence. The secret to human intelligence was not just in our logic and knowledge representations. At the very least, these were difficult to program directly and quite frankly, humans are not great at logic (lots of examples throughout the book).
Geoff Hinton and others had a different approach. Hinton, with a degree in experimental psychology, sought to represent the brain itself in silico—an artificial neural network. Rather than directly program logic and knowledge, the machine could learn as a brain does. Initial success was limited. Indeed, Minsky famously stood up after a talk by a young computer scientist who had just presented a neural network approach to AI and asked, “How can an intelligent young man like you waste your time with something like this? … This is an idea with no future”.
The problem with the neural network, or “connectionist,” approach was that the artificial brains and training datasets were too small. But we only realized that in the early 21st century, when more powerful compute and vast troves of data enabled the training of deeper and larger neural networks, and the connectionist approach proved its value.
Hinton was vindicated.
The neural network approach has continued to deliver surprising successes, from image search that didn’t rely on human descriptions in 2012, to AlphaGo beating Ke Jie in 2017, to machines writing essays and creating art from a description today.
GPT-4, and perhaps GPT-3 and 3.5, is smarter than the smartest human alive today. Sure, some people do better on the LSAT, the GRE, or AP Biology, but no human has that breadth of knowledge - the world’s collective knowledge at their fingertips, ready to recombine and deploy to answer any conceivable question. And that reveals one of the differences between an individual human’s intelligence and current machine intelligences. It also reveals how AI will improve and how we will too.
The success of the neural network approach comes from inspiration drawn from neuroscience and brain architecture. But human intelligence isn’t just a function of our substantial neural hardware; it is also a function of socially acquired software. That is, human intelligence is not simply a result of our brains, but of the sophisticated social learning strategies we use to acquire information, strategies, and ways of thinking from large, cooperative, collective networks of other humans. Let me put it another way.
Humans, unlike machines, don’t need vast troves of data and unimaginable amounts of compute, because our intelligence is the product of “pre-training”: millions of years of genetically evolved hardware, part of which shapes how we acquire, from other humans, thousands of years of culturally evolved software, which is then fine-tuned over a lifetime of experience. What we now know about human evolution and human intelligence suggests that AI should not simply focus on replicating a single human brain, but on how our many brains evolved for cultural learning as a collective.
This is a paradigm my lab is just starting to work on in collaboration with colleagues at DeepMind. Within this paradigm, the focus isn’t just on neural hardware replicating a single human brain, but on the way our many brains evolved for cultural learning as a collective brain, making each individual brain cleverer. We shaped our machines, our culture, and our technology, but they in turn shape us and our children. For centuries, we have used energy to power machines that supplement our muscles. Over the last few decades, we have used energy to power machines that calculate and connect us. The next step is to use energy to power machines that truly supplement our minds. Computers thus far have only scratched the surface.
In A Theory of Everyone, I explain how our three main lines of information—genetic, cultural, and individual learning—are all reinforcement learning systems with different limits and lags. Ultimately, all of them reward what works and punish what doesn’t, over different time scales and with different information sets. Machine intelligence is the missing fourth line of information, parsing the world’s data. It is the combination of cultural and individual learning.
To paraphrase a common joke in the AI community: making small trial-and-error changes to your software is bad coding practice, but do it fast enough and it’s machine learning. Machine learning is individual learning on steroids, parsing our large cultural corpus to discover patterns that cultural evolution, and certainly individual intelligence, may miss. So how will it change us? In so many ways.
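To make the joke concrete, here is a minimal sketch (not from the book) of trial and error done fast enough to count as learning: random hill-climbing that tunes a toy one-parameter model by proposing small random changes and keeping whatever reduces the error. The data and task below are invented purely for illustration.

```python
import random

def error(w, data):
    """Mean squared error of the one-parameter model y = w * x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Made-up data whose true slope is roughly 3.
data = [(x, 3.0 * x + random.gauss(0, 0.1)) for x in range(1, 11)]

w = 0.0
for _ in range(10_000):                      # many fast, tiny trials
    candidate = w + random.gauss(0, 0.05)    # a small random change
    if error(candidate, data) < error(w, data):
        w = candidate                        # keep what works, discard what doesn't

print(f"learned slope: {w:.2f}")             # converges to roughly 3
```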
As personalized AI comes to know us, it can help us make better personalized decisions. At the moment, if you want to live a data-driven life and know what will make you happier, wealthier, or more attractive, you are forced to rely on data and evidence about the average.
Studies show us what makes the average person happier, wealthier, or more attractive. But none of us is the average person. The average person is ironically rare. Machine learning, however, by combining your data with the data of all people like you, can help you see what makes people like you happier, wealthier, and more attractive.
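As a toy illustration of that idea (the data, features, and numbers below are all invented), a nearest-neighbor scheme predicts your outcome from the people most similar to you rather than from the population average:

```python
import numpy as np

# Hypothetical data: one row per person, with made-up features
# (age, hours of sleep, weekly exercise sessions) and a happiness score.
features = np.array([
    [25, 7.0, 3],
    [31, 6.0, 1],
    [28, 8.0, 4],
    [45, 5.5, 0],
    [29, 7.5, 5],
])
happiness = np.array([6.8, 5.9, 7.4, 5.1, 7.9])

def predict_for(person, k=3):
    """Average the outcomes of the k people most similar to `person`."""
    distances = np.linalg.norm(features - person, axis=1)
    nearest = np.argsort(distances)[:k]
    return happiness[nearest].mean()

you = np.array([27, 7.2, 4])
print(f"people like you: {predict_for(you):.1f} vs the average person: {happiness.mean():.1f}")
```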
AI will also make us cleverer.
In fact, it’s already making us cleverer, simply by providing powerful ways to crunch data and see new patterns. It’s a smarter search engine. But AI agents are becoming part of our collective brain and so will help humans build on machine-powered discoveries.
In 1997, the AI Deep Blue beat then-world chess champion Garry Kasparov. Since then, many chess players have used such engines as part of their training. Humans can analyze how the computer plays and learn from these chess machines just as they would socially learn from a chess master.
Soon after the victory of machine over man in chess, the New York Times ran a story with experts weighing in that winning at chess might be achievable by a machine, but that AI was unlikely to beat human players at Go, because the game is subtler and the space of possible moves is orders of magnitude larger.
In 2017, the AI AlphaGo defeated the best Go player in the world, Ke Jie. Similarly, since then human players have improved by learning from how AlphaGo plays, discovering new moves and play styles. Just as the steam engine helped us discover the laws of thermodynamics, AI is teaching us about our thinking and improving our cultural software.
Machines now write stories. They write profound poetry. They draw pictures from descriptions. They write functional computer code. As AI continues to advance, its capabilities will be significantly enhanced by interacting with humans and accessing resources like WolframAlpha.
At the moment, LLMs are learning human values and human ways of socializing. By communicating with humans, AI can learn and refine their understanding of complex concepts, while also improving their ability to mimic human behavior and decision-making. At the same time, LLMs suck at math and aren’t good at traditional computation. But by tapping into vast knowledge repositories and traditional computing power such as WolframAlpha, AI can acquire an even deeper understanding of various domains, further elevating its intelligence and competence. I am particularly excited by machines learning from humans and from other machine intelligences. The combination of GPT and WolframAlpha is a huge advance, made possible by LLMs being able to read Wolfram Language documentation and by WolframAlpha’s natural language interface.
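The pattern behind that combination is simple enough to sketch. The code below is not the actual GPT-WolframAlpha integration, just an illustrative skeleton in which a language model decides when to delegate exact computation to an external engine; call_llm and call_compute_engine are placeholders for whatever LLM API and computational engine you actually use.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (e.g. GPT-4)."""
    raise NotImplementedError

def call_compute_engine(query: str) -> str:
    """Placeholder for an external computational engine (e.g. WolframAlpha)."""
    raise NotImplementedError

def answer(question: str) -> str:
    # Ask the model whether exact computation is needed and, if so,
    # to rewrite the question as a precise query for the compute engine.
    plan = call_llm(
        "If the question below needs exact computation, reply with 'COMPUTE: "
        "<query>'; otherwise reply 'ANSWER'.\n\n" + question
    )
    if plan.startswith("COMPUTE:"):
        result = call_compute_engine(plan.removeprefix("COMPUTE:").strip())
        # Let the model turn the exact result back into fluent prose.
        return call_llm(
            f"Question: {question}\nComputed result: {result}\n"
            "Use the computed result to answer the question."
        )
    # Pure language or judgement: the model answers on its own.
    return call_llm(question)
```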
Large leaps in innovation are often made through recombination and intellectual arbitrage (see Chapters 4 on Innovation in the Collective Brain, 10 on Triggering a Creative Explosion, and 11 on Improving the Internet). I really like how the biologist Carl Bergstrom described DALL-E: as tapping into our collective unconsciousness. DALL-E, Midjourney, and others, like large language models, search a latent space of possibilities and create new art and writing never before seen. It is recombinatorial creativity in a very human sense. The same principle could be used for scientific discovery, which is based on similar principles of recombination and intellectual arbitrage. That means computers can do science and engineering, improving themselves and leading to unprecedentedly quick advances. Or perhaps helping us make those leaps. AI empowers innovation and may help us crack the next level of energy - one of the central themes of A Theory of Everyone.
AI also has the potential to empower cooperation (another area we’re hoping to study this year). AI are like a prestigious and knowledgeable leader, making us more cooperative by helping coordinate behavior. You know how memes and pop culture references help us find common ground and the right metaphors and emotions to coordinate and communicate with? As people interact more with these artificial agents, the agents’ influence on their behavior will grow. In turn, if these agents offer similar advice on appropriate responses to a particular context, or on appropriate behaviors in general, they will effectively become mediators, culturally diffusing their norms. That is, insofar as AI offers consistent advice, it can serve as a modern version of newscaster Walter Cronkite, helping people coordinate and communicate and re-awakening a shared culture.
But of course, AI also has the capacity to further exacerbate the inequalities and fractures created by our current social systems. It is an innovation in efficiency that lets fewer people do more with less, lowering the scale of cooperation and entrenching a few with power over many. A small group of people control OpenAI’s models. And some countries, such as China and Russia, will have an advantage in certain AI domains. AI depends not just on powerful computers, but on large troves of data. Large language models, such as GPT, benefit from large corpuses of digitized English text. In other domains, such as facial recognition and medical diagnosis, China and Russia can advance more quickly thanks to weaker privacy protection laws. Indeed, China has become the largest exporter of AI facial recognition technologies. The advancement of AI-enhanced surveillance drives upgrades to high-resolution CCTV cameras in China and the rise of desk-based “smart cops” replacing street cops.
Everywhere around the world, there will soon be AI workers in our economies working alongside humans. In some cases they will replace humans; in others they will cooperate with and enhance human abilities. The ability to work with these agents, such as through the new skill of prompt engineering, is quickly becoming more valuable. Knowing how to interact with AI is the hot new social skill.
As mentioned, the biggest theme in A Theory of Everyone is the central role of energy in all that we do and what we need to do to get to the next level of energy abundance. It’s the missing piece of the productivity puzzle, but I’ll save that for another post - or you can read about it in the book.
The creation of these artificial agents further exacerbates our need for energy. Unlike your phone, laptop, gaming rig, or even home server, which use relatively little electricity, training AI agents demands vast amounts of electricity for compute. These energy requirements grow every year, both because more agents are trained for different tasks and because these agents train on larger datasets and become more sophisticated, with more parameters requiring more compute. Of course, once trained, these agents can do more work per joule than the equivalent number of humans, making a lot of people redundant. And of course, these agents enhance the power of the few who control them.
In the coming months and the next couple of years, every company in the world will be changed by AI more profoundly, and far more quickly, than it was changed by the Internet.
Oh, and in case you were wondering: yes, GPT-4 did help write this post. If you’re interested in these ideas, I invite you to read:
Best wishes,
Michael