KENNESAW, Ga. | Nov 14, 2023
This month I am going to take a more philosophical angle than usual, but I think the topic requires it. As artificial intelligence (AI) has evolved, and now particularly with generative AI making waves, let’s freshly consider how human intelligence compares to AI. Such discussions typically start by assuming that human intelligence is something unique and that AI learns in a fundamentally different way than we do.
I’ll offer here a thought-provoking challenge to those assumptions. I’ll then suggest a scenario that is either disturbing or exciting depending on how you want to take it.
How Our Human Intelligence Grows
Let’s be honest … humans start out pretty useless. As newborns, we can’t do anything beyond basic, preprogrammed functions like swallowing and crying. While a baby can move and make sounds, it doesn’t have much control over its actions. A baby will generate random wiggles and cries, but it isn’t intentionally (or intelligently) controlling anything of substance. As we age, we VERY slowly get smarter and better at things. While a typical two-year-old can have a basic conversation and walk well, it takes many years before we approach our full mental and physical capabilities.
In effect, we start with a mostly empty brain made up of a bunch of cells and connections. Only over many years of learning do we manage to tune our brains to enable the intelligent and intentional things that we do every day. When toddlers learn to walk and talk, they do so slowly over many iterations. Countless falls and unintelligible sounds occur before a child can walk over effortlessly and tell us about their day. It is exposure to our environment, encouragement and feedback from others, and trial-and-error effort that moves us from effectively useless babies to (hopefully) intelligent and productive adults.
How Today’s Artificial Intelligence Grows
Today’s AI architectures are built from artificial neurons that, like brain cells, have connections to other neurons; the largest models contain billions of these connections (parameters). We feed the AI a lot of data, and the model begins to adjust its connections and the criteria it uses to decide when to pass information from one neuron to another. Much like a baby, a fresh AI model is also useless. It has potential, but when freshly configured it has no ability to do anything useful.
As we feed more data into an AI model and provide it feedback, however, it begins to recognize patterns in the data. As we pass the AI more cat and dog photos, for example, it will begin to differentiate them. Much as humans learn, this is largely a trial-and-error process, and the model will initially be very poor at identifying what is in an image. With feedback on its performance and more data, it will slowly improve.
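To make this concrete, here is a minimal, purely illustrative sketch of that trial-and-error loop. It is not any real production model; the tiny network, the synthetic “cat vs. dog” data, and every number in it are stand-ins I have made up for illustration. The point is simply that the model starts out guessing badly and improves only because feedback (its error) nudges its connections.

```python
import numpy as np

# A fresh model starts with random connections and no useful ability (like a newborn).
rng = np.random.default_rng(0)

# Stand-in "photos": 4 numeric features per example; label 1 = cat, 0 = dog.
# The labels follow a hidden rule the network has to discover from feedback.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

W1 = rng.normal(scale=0.5, size=(4, 8))   # connections into 8 hidden neurons
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))   # connections into 1 output neuron
b2 = np.zeros(1)

def sigmoid(z):
    # Each neuron squashes its weighted inputs into a "how strongly do I fire?" value.
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(2001):
    # Forward pass: signals flow through the connections.
    h = sigmoid(X @ W1 + b1)              # hidden-layer activations
    p = sigmoid(h @ W2 + b2).ravel()      # predicted probability of "cat"

    # Feedback: the gap between the guesses and the truth.
    grad_out = (p - y)[:, None] / len(X)  # gradient of cross-entropy loss at the output

    # Trial-and-error adjustment: nudge every connection to shrink the error.
    dW2 = h.T @ grad_out
    db2 = grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    dW1 = X.T @ grad_h
    db1 = grad_h.sum(axis=0)

    W1 -= learning_rate * dW1
    b1 -= learning_rate * db1
    W2 -= learning_rate * dW2
    b2 -= learning_rate * db2

    if step % 500 == 0:
        accuracy = ((p > 0.5) == y).mean()
        print(f"step {step:4d}  accuracy {accuracy:.2f}")  # poor at first, better with feedback
```

Run the sketch and the accuracy printed at the start is typically little better than a coin flip; by the final steps it is far higher, purely through repeated small adjustments driven by feedback.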
Today, some image models are as good as or better than humans at identifying things in pictures. The AI models get there by ingesting more data, getting feedback from us on their efforts to classify it, and then adjusting their approach until they get things right. Just like a baby, an AI model eventually recognizes cats well. AI is also literally learning how to walk today with the advanced robotics being developed around the world. Early in their training, walking robots stumble and fall a lot … just like children. So, today we have AI learning to see, walk, and talk … much like humans. However, is it accurate to assume that AI is using a totally different approach to learning than humans do?
Crazy Thought Or Mind-Blowing Realization?
I’ve often talked about how large language model (LLM) applications like ChatGPT, image recognition models, and other AI tools seem very smart but really aren’t as smart as they seem (see a 2018 example here). After all, their neurons incorporate simple statistical functions that either pass (or don’t pass) a signal to one or more other neurons. They aren’t really “thinking” as our human minds do, even though we would struggle to define exactly what “thinking” is. Rather, AI models are doing a massive number of simple computations that allow them to mimic intelligence and thought. With enough of these very simple computations within enough artificial neurons, a very advanced level of what seems like intelligence appears.
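To see just how simple each of those computations is, here is a sketch of a single artificial neuron in isolation. The incoming signals, weights, and bias below are made-up values used only for illustration.

```python
import math

def neuron(inputs, weights, bias):
    """Weight each incoming signal, sum them, and squash the total with a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # near 1 = pass a strong signal, near 0 = stay quiet

# Signals arriving from three upstream neurons, and this neuron's learned connections.
incoming = [0.9, 0.1, 0.4]
weights = [2.0, -1.5, 0.5]
bias = -0.3

print(neuron(incoming, weights, bias))  # nothing "thinks" here; it is just arithmetic
```

Repeated across enormous numbers of neurons and connections, computations this simple are all that is happening inside the models that seem so smart.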
This led me to wonder whether we humans are as smart as we like to think we are. Is it possible that the way an AI model starts out, and how it learns, is an almost literal (as opposed to figurative) analog to how humans learn? Are the neurons in our brain individually no smarter or more functional than the neurons in today’s AI models? Is it also possible that all of our ingestion of information and experience (i.e., training data!) simply helps us adapt our neurons and connections almost exactly as an AI algorithm does?
The Implications If AI Intelligence Actually Is The Same Construct As Our Own
We have many more neurons in our brains than today’s most sophisticated AI models, and yet the AI models are beginning to act and sound a lot like us. As we add neurons to AI models at a scale approaching that of our brains, AI may eventually become truly as intelligent as we are. If it does, could it be because the architecture of our brains, and how we build our intelligence, is not fundamentally much different from AI?
If that’s true, then human intelligence as something unique is an illusion we have sold ourselves. Maybe we have simply ingested and processed masses of data with our billions of neurons and as a result enabled ourselves to appear uniquely smart. Perhaps a baby’s brain is effectively a new neural net starting from iteration #1. Then, each person’s genetic code tweaks the initial architecture and starting connections of their brain so that different people will diverge in exactly how they develop their personality, intelligence, and physical capabilities.
To summarize, I guess I’ve found myself wondering whether AI is really an artificial approach that merely attempts to mimic humans, or whether we are quite literally just a more advanced model of the same fundamental architecture: the same conceptual structure and learning pattern, but with a larger number of neurons and connections to work with. That would make the human brain simply a more advanced version of AI, and one that will eventually be equaled as we continue to push the scale of the models we are building.
I am not saying that I believe this as of today. However, I will say that ever since I thought of it, I haven’t been able to find a way to rule the idea out either. I also can’t decide whether that inability to rule out that we are no different from an advanced AI model is disturbing or exciting. I guess I need to train my brain to think about it some more… Discussion and debate are welcome!
Bill Franks
Internationally recognized chief analytics officer who is a thought leader, speaker,
consultant, and author focused on analytics, data science, and AI