Is ChatGPT independently evolving its own kind of intelligence?
What would you think if someone told you that animals can make stone tools? "No doubt it's rubbish," you would say. What if the animal was an ape, but not a human? Even more so. But check it out here ("Sophisticated stone tools may predate humans, study suggests"). Who would have thought it from just casually observing these animals? Impossible, right?
The ape in question is Paranthropus, which lived 3 million years ago in an area that is now Kenya, in Africa. How the hell could an ape that never evolved into a human learn how to make stone tools? What's more, how did it evolve the internal thinking systems that allowed it to make this leap?
That sounds rather similar to the debate now ongoing about ChatGPT (just updated to GPT-4). How could it be so smart when it was never built to be that way? Noam Chomsky, the famous (or infamous, depending on your views) linguist, believes the newfangled AIs can never be like a human. And maybe he's right.
In his recent article ("The False Promise of ChatGPT") Chomsky states that the new chatbots differ profoundly from how humans reason and use language. By that he means that AIs based on a large language model can never be as smart as humans; there's a qualitative difference.
You no doubt heard about the Google engineer (Blake Lemoine) fired for saying Google's AI had become sentient. The ongoing issue seems to be whether a machine learning system can independently develop so-called emergent intelligence, without us humans having somehow designed it into its systems. Lemoine, who worked on Google's new AI, certainly believed it. You would think he might well know.
Can ChatGPT or any other advanced machine learning system become intelligent, and even sentient, on its own? If ChatGPT had arms, fingers and legs, could it make stone tools? And if not, why not, since a lowly nonhuman primate managed the feat?
We can be sure that at some stage robots will be developed with all the necessary limbs and dexterity, injected with the latest version of ChatGPT, and asked to make stone tools, or to perform some similar task. So this is hardly academic. If Paranthropus did it, why not ChatGPT?
And of course, we don't just mean stone tools when we are talking about the feat by Paranthropus. It could be digital tools, cosmological tools, quantum gravitational tools, any sort of tools. The principle is the same. Does a system that evinces intelligence and sentience have to look or behave like a human at all? If you look inside its head, do you have to be able to see the internal mechanisms that give rise to this intelligence?
We can't do that even if we look inside a human brain. We've got the connectome there, but we have no idea how it works, or whether it is really the sheer number of neurons that enables it to think big thoughts. We're just as much in the dark inside a human brain as we are inside the one from Paranthropus. And in neither case can we predict what mental feats it is capable of. We can only figure that out if we catch it in the act of actually making a stone tool, a nuclear bomb or a digital doodad.
If by some stroke of luck we managed to find a living Paranthropus somewhere in the world, would we know just by looking at it that it could make stone tools? Almost certainly not. If we managed to get it into an MRI machine of some sort to check out its internal thinking equipment, would that convince us that it could make stone tools? Again, almost certainly not. Come to think of it, we would probably get much the same answers if we managed to check out an indigenous tribesperson somewhere in the Amazon.
ChatGPT is starting to look to me a lot like the chatbots that compete in the Turing test. Some of these have claimed to pass it, although arguably the test remains unbeaten.
ChatGPT looks like it has leapfrogged many if not all of those types of systems. Somehow ChatGPT seems to have evolved some form of emergent intelligence, but none of us, not even its creators, understands how it does it. There seems to be a ghost in the machine that comes out of nowhere, the sort that convinced that Google engineer to risk his natty little Google job on what he would have known was a risky proposition.
Paranthropus is an extinct species. But there are several species alive right now that we know have exceptional thinking systems. Would we insist that making stone tools requires an emergent intelligence that animals such as whales, elephants, dolphins, pigs, cows or even ants could never develop, even though all of them have achieved impressive cognitive feats?
Or might some of these apparent idiot savants, if not all of them, be on the cusp of making the leap? Maybe they are only a million or so years away from that point? Or maybe it could happen tomorrow, through some sudden major mutation that bridges the cognitive gaps which have thus far prevented them from making stone tools?
Can one identify emergent intelligence from the outside? What if you see evidence of the intelligence (as in chatting with a ChatGPT) but no recognizable internal thinking apparatus? Does that mean there is no emergent intelligence?
Bright and eminent as he might be, I am not convinced that Chomsky knows the answers to any of these questions. Despite his knowledge of linguistics, I don't think he could say whether a Paranthropus could make stone tools after thoroughly inspecting it, or whether a ChatGPT could pass itself off as Jesus Christ just by chatting with it.
Paranthropus appears to me to be as good a test for artificial intelligence as one could ever get. The really hard question seems to be how good a test for it are we humans?