What Brains of the Past Teach Us About the AI of the Future – Next Big Idea Club Magazine
Posted: November 26, 2023 at 2:49 am
Max Bennett is the co-founder and CEO of Alby, a start-up that helps companies integrate large language models into their websites to create guided shopping and search experiences. Bennett holds several patents for AI technologies and has published numerous scientific papers in peer-reviewed journals on the topics of evolutionary neuroscience and the neocortex. He has been featured on the Forbes 30 Under 30 list as well as Built In NYC's 30 Tech Leaders Under 30.
We have been trying to understand the brain for centuries, and yet we still don't have satisfying answers. The problem is that the brain is extraordinarily complicated: it contains over 86 billion neurons and over 100 trillion connections, all wired together in a tangled mess. Within a cubic millimeter of brain tissue, about the width of a single letter on a penny, there are over a billion connections. Even if we mapped all 100 trillion connections, we still wouldn't know how the brain works.
The fact that two neurons connect to each other doesn't tell us much about what they are communicating: neurons pass hundreds of different chemical signals across these connections, each with unique effects. Worse still, evolution doesn't design systems in coherent ways: there are duplicated, redundant, overlapping, and vestigial circuits that obscure how different brain systems fit together.
These problems have proven so difficult that some neuroscientists believe it will be many more centuries before we ever make sense of the brain.
But there is an alternative approach, one that searches for answers not in the human brain, but within fossils, genes, and the brains of the many other animals that populate our planet. In recent years, scientists have made incredible progress reconstructing the brains and intellectual faculties of our ancestors. This emerging research presents a never-before-possible approach to understanding the brain. Instead of trying to reverse-engineer the complicated modern human brain, we can start by rolling back the evolutionary clock to reverse-engineer the much simpler first brain. We can then track the changes forward in time, observing each brain modification that occurred and how it worked. If we keep tracking this story forward from the simple beginnings through each incremental increase in complexity, we might finally be able to make sense of the magical device in our heads.
As the evidence continues to roll in, a story has begun to reveal itself. The first brain evolved over 600 million years ago; one might think that over such an astronomical amount of time, the story of brain evolution would contain so many small changes that it would be impossible to fit into a single book. But instead, amazingly, it turns out that the major reconfigurations of the brain occurred in only five key steps, referred to as the five breakthroughs.
Each breakthrough emerged from a new set of brain modifications and gifted our ancestors with a new suite of intellectual faculties.
Each breakthrough was built on the foundation of those that came before. Just as the ancestors of lizards took fish-like fins and reconfigured them into feet to enable walking, and the ancestors of birds took those same feet and reconfigured them into wings to enable flying, brain evolution too worked by repurposing the available biological building blocks to face new challenges and enable new feats.
If we want to understand the human brain, and what is missing in current AI systems, the framework of these five breakthroughs offers a wonderfully instructive and simplifying approach.
Before brains evolved, animals didn't move around much. They were much like today's sea anemones and corals: they waited for food particles to come to them, at which point they would snatch the food out of the water with their tentacles. But they did not actively pursue prey or avoid predators.
However, around 600 million years ago, our ancestors evolved into a small worm-like creature the size of a grain of rice. These worm-like ancestors were the first animals to survive by moving towards food and moving away from danger. Not so coincidentally, these were the first animals to have brains.
This worm had no eyes or ears; it perceived the world only through a small portfolio of individual sensory neurons, each of which detected something vague about the outside world. Some neurons were activated by the presence of light, others by specific smells. Despite perceiving almost nothing detailed about the external world, these worms could still navigate using a clever technique called steering. This was the first breakthrough.
When a piece of food is placed in water, molecules fall off it and disperse throughout the surroundings. This produces what is called a smell gradient: the concentration of these molecules is high directly around the food source and becomes progressively lower the further away you get. It is this physical fact that evolution exploited to enable the first form of navigation.
The first brains had two primary motor programs: one for moving forward and one for turning. Although these worms couldn't see, they could find the source of a food smell by applying two simple rules: whenever the concentration of the smell increases, keep going forward; whenever it decreases, turn randomly. Because of how smell gradients work, a worm that keeps applying this algorithm will eventually make its way to the source of the smell.
In other words, steering worked by categorizing things in the world into good and bad: worms steer towards good things like food smells and away from bad things like predator smells. This was the function of the first brain, and from it emerged many familiar features of intelligence, from associative learning to emotional states.
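These two rules are simple enough to sketch in code. The toy simulation below is only an illustration (the gradient function, step size, and starting position are invented, not taken from the book): an agent that goes straight while a smell gets stronger and turns randomly when it gets weaker will usually drift toward the smell's source.

```python
import math
import random

def smell_concentration(position, food=(0.0, 0.0)):
    """Toy smell gradient: concentration falls off smoothly with distance from the food."""
    return 1.0 / (1.0 + math.dist(position, food))

def steer(steps=2000):
    """Two motor programs: keep going forward if the smell got stronger, otherwise turn randomly."""
    position = [8.0, 8.0]                        # start some distance from the food
    heading = random.uniform(0, 2 * math.pi)     # initial direction is arbitrary
    last_smell = smell_concentration(position)
    for _ in range(steps):
        position[0] += 0.1 * math.cos(heading)   # move forward one small step
        position[1] += 0.1 * math.sin(heading)
        smell = smell_concentration(position)
        if smell < last_smell:                   # smell is fading: pick a random new direction
            heading = random.uniform(0, 2 * math.pi)
        last_smell = smell
    return position

print(steer())  # on most runs, ends up near the food at (0, 0)
```

The agent never knows where the food is or which way it is facing; following the gradient one small step at a time is enough.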
There are many debates about what the final steps are on the road to human-like artificial intelligence. From the perspective of the five breakthroughs, what is missing is not the first breakthroughs in the evolution of the human brain (steering and reinforcement learning), nor the most recent breakthrough, which was language. Instead, AI systems have skipped the breakthroughs that evolved halfway through our brains' journey; we have missed the breakthroughs that emerged in early mammals and primates.
Early mammals emerged 150 million years ago, as small squirrel-like creatures in a world filled with massive predatory dinosaurs. They survived by burrowing underground and emerging only at night to hunt for insects. From the crucible of this incredible pressure to survive was forged a new brain region called the neocortex. The neocortex enabled these early mammals to imagine the future and remember the past; in other words, to simulate a state of the world that is not the current one.
This was the breakthrough of simulation. It enabled these animals to plan their actions ahead of time. It enabled our squirrel-like ancestors to peek out from their burrow, spot nearby predators, and simulate whether or not they could successfully make a dash across the forest floor without getting caught. Simulation also gifted these mammals fine motor skills, as they could plan their body movements ahead of time, effortlessly figuring out where to place their paws to balance themselves and jump between tree branches. This is why lizards and turtles, lacking a neocortex, move slowly and clumsily on the forest floor, while mammals like squirrels and monkeys crack open nuts and climb in trees.
To accomplish all this, the neocortex creates an internal representation of the external world, what AI researchers call a world model. The world model in the neocortex contains enough details of how the world actually works that animals can imagine themselves doing something and accurately predict the consequences of their actions. In order for a mouse to imagine itself running down a path and correctly predict whether a nearby predator will catch it before it gets to safety, its imagination needs to accurately capture the nuances of physics: speed, space, and time.
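As a deliberately tiny illustration of what predicting the consequences of an action with a world model means here, the fragment below imagines the mouse's dash instead of making it. All the numbers (speeds and distances) are hypothetical; the point is only that the prediction reduces to speed, space, and time.

```python
def mouse_escapes(mouse_speed, predator_speed, mouse_dist_to_burrow, predator_dist_to_burrow):
    """A toy world-model query: predict how the dash ends without actually making it."""
    mouse_time = mouse_dist_to_burrow / mouse_speed              # time for the mouse to reach safety
    predator_time = predator_dist_to_burrow / predator_speed     # time for the predator to cut it off
    return mouse_time < predator_time

# Imagine the dash first; only run if the simulated future ends well.
print(mouse_escapes(mouse_speed=2.0, predator_speed=6.0,
                    mouse_dist_to_burrow=3.0, predator_dist_to_burrow=12.0))  # True: the dash is safe
```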
We already have AI systems that can make plans and simulate potential future actions, the most famous modern example being AlphaZero, the AI system that defeated the world's strongest Go and chess programs. AlphaZero works, in part, by playing out possible future moves before deciding what to do. But AlphaZero and other AI systems still can't engage in reliable planning in real-world settings, outside the constrained and simplified conditions of a board game.
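The core idea of deciding by playing out futures can be shown with something far simpler than AlphaZero, which couples a learned neural network with Monte Carlo tree search. The sketch below is a schematic stand-in, not AlphaZero's actual algorithm: the world is a number line, the model and scoring rule are invented for illustration, and the planner picks the action whose imagined future looks best.

```python
# A toy world: the state is a position on a number line and the goal is position 10.
# The "world model" is just the rule that an action shifts the position by that amount.
ACTIONS = [-1, 0, 1]

def predict(state, action):
    return state + action            # imagined next state

def score(state):
    return -abs(10 - state)          # closer to the goal is better

def lookahead_value(state, depth):
    """Value of a state if we keep choosing the best imagined action for `depth` more steps."""
    if depth == 0:
        return score(state)
    return max(lookahead_value(predict(state, a), depth - 1) for a in ACTIONS)

def plan(state, depth=3):
    """Play out possible futures inside the model and pick the action whose future looks best."""
    return max(ACTIONS, key=lambda a: lookahead_value(predict(state, a), depth - 1))

print(plan(state=0))  # prints 1: step toward the goal
```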
In real-world settings, planning requires dealing with imperfect noisy information, an infinite space of possible next actions, and ever-changing internal needs. A squirrel dashing from one tree to the next has, literally, an infinite number of possible actions to take, from the low-level choices of exactly where to place each individual paw, to the higher-level choices of exactly which path to take. How the neocortex enables mammals to plan in such complex environments is still beyond our understanding; this is why we do not yet have robots that can wash our dishes and do our laundry, the secret to which lives within the minuscule brains of squirrels and rats and all the other mammals in the animal kingdom.
One of the key problems in the field of AI alignment is ensuring that AI systems understand the requests that we make of them. This has also been called the paperclip problem, after Nick Bostrom's allegory of asking an AI system to run a paperclip factory as efficiently as possible, at which point his imagined AI system goes on to convert all of Earth into paperclips. This thought experiment reveals that AI can be dangerous even without being intentionally nefarious: the AI system did exactly what we told it to do, but failed to infer the true intent of our request and our actual preferences. The paperclip problem is one of the biggest outstanding challenges in the field of AI safety.
When humans speak to each other, we automatically infer the intent of each others words. This ability was part of the fourth breakthrough, the breakthrough of mentalizing. It emerges from parts of the neocortex that appeared with early primates. These primate areas endow monkeys and apes with the ability to simulate not only the external world but also their own inner simulation itself, enabling them to think about their own thinking and the thinking of others.
Early primates got caught in a political arms race; their reproductive success was defined by their ability to build alliances, climb political hierarchies, and cozy up to those with high status. We see this in the social groups of modern nonhuman primates like chimpanzees, bonobos, and monkeys. The most powerful tool for surviving the political world of primate life was the evolution of mentalizing, which enables primates to predict the consequences of their social choices, to imagine themselves in others' shoes, to infer how they might feel, what they might do, and what they want.
The new areas of the neocortex in primates contain the algorithmic blueprint for how to build AI systems that do the same. One way or another, in order to create safe AI systems, we will have to endow these systems with a reliable understanding of how the human mind works, without which our AI systems will always risk accidentally weaponizing an innocuous request like optimizing a paperclip factory into a world-ending cataclysm.