Artificial Intelligence has a long (fictional) history of being something to fear. The machines will rise up, take over, and our place at the top of the food chain will be over.
But is that the trajectory of AI? For most of us, without much of a mind for math, statistics, or the future, it’s hard to predict. I happen to like all three, so I’m going to make a prediction about the trend of AI.
That trend is based on the history of not just artificial intelligence, but of intelligence itself. We’ll start by defining our terms. We need a better understanding of artificial and intelligence…and we can’t forget BioFeedback.
Artificial refers to something man-made, not nature-made. Intelligence is a little more difficult to define, but we’re going to use a familiar set of terms to aid our definition.
BioFeedback refers to how sensation and motion feed back into and affect each other. We define how intelligent someone is (there are multiple intelligences, as Gardner delineates) by how much they can sense and/or how much they can act. Common sense (common intelligence) refers to how many everyday functions one can do or sense.
Many of us wouldn’t think of a high-level athlete as intelligent; we may describe them as physically gifted. But there is such a thing as physical or motor intelligence, with many different physical subdomains as well. We do think of someone who is a whiz with numbers or language as intelligent. But being numerate or literate are just types of intelligence…not the whole of it.
Broadening our definition of intelligence allows us to simplify it. Intelligence isn’t just about knowing; it’s also about doing. That’s where BioFeedback comes in.
BioFeedback is about how sensation and action affect each other in a biological system. They inform each other. When I do this, it feels this way. When I feel this way, I should do this thing.
But to classify AI, we need to break this model apart into sensation (knowing) and action (doing). Some AI focus on knowing. Some focus on doing…and both are intelligent.
Whenever we see an AI that can do more than us in a particular area, we get scared. This computer can beat our best chess player. Scary.
But we often forget that is what technology is all about. Technology is about building levers to help us do what we already do…better. A screwdriver can turn a screw better than we can with just our hands.
In much the same way, Google can compile and search through data much faster (and, more often than not, more accurately) than we can. It’s another example of technology doing what we do…only better. Is this really what we’re scared of?
We’re scared of being totally dominated by one entity. We’re scared that there is going to be one technological super organism that can do more and know more than us. There is a risk of that.
But we haven’t built AI to be centralized. In his book The Inevitable, Kevin Kelly notes that AI is decentralized. We’re making individual devices smarter.
And we’re making them smarter for a simple reason: to better suit us. Machines are extensions of us, not entirely separate from us. We worry about machines connecting to each other, yet that is not AI’s trajectory.
AI’s trajectory is to better connect to us. And if we can better connect to AI, we can better connect to ourselves. AI isn’t going to take us over, it’s going to help us get over.
With AI, our NI (natural intelligence) will grow, allowing us to do more and know more than ever before. How can I make that prediction? Because that’s what’s always happened.