Have you noticed that in most science fiction movies the aliens are almost always biological entities, often obsessed with eating us, sucking our blood, possessing us, or at least conquering us?
That’s extremely unlikely. If we are ever visited by aliens, they are almost certain to be mechanical. They will be intelligent robots.
Biological entities evolve on planets. They depend on the atmosphere and the food sources those planets supply. When humans go into space, for example, we need to take oxygen, food and water with us. That’s one reason human space travel is so much more expensive than sending a mechanical rover to Mars. All living entities are adapted to their environment. They are not adapted to some other environment.
If a family of humans (like Lost in Space) traveled to Proxima Centauri (the nearest star) at the speed of NASA’s fastest spacecraft, the trip would take roughly 78,000 years. Even assuming they could carry enough supplies to get there, they couldn’t survive on arrival unless they found a nearby planet with our atmosphere, our food supplies and lots of water. What are the odds of that?
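That travel-time figure is easy to sanity-check. A back-of-the-envelope calculation, assuming Proxima Centauri is about 4.24 light-years away and using roughly 17 km/s (the often-cited speed of Voyager 1, the fastest outbound spacecraft) lands in the same tens-of-thousands-of-years ballpark; the exact number depends on which spacecraft you pick:

```python
# Back-of-the-envelope check of the travel-time claim above.
# Assumed inputs (not from the article): distance to Proxima
# Centauri ~4.24 light-years; speed ~17 km/s (Voyager 1).

LIGHT_YEAR_KM = 9.461e12     # kilometers in one light-year
DISTANCE_LY = 4.24           # distance to Proxima Centauri
SPEED_KM_S = 17.0            # approximate Voyager 1 speed
SECONDS_PER_YEAR = 3.156e7   # seconds in one year

distance_km = DISTANCE_LY * LIGHT_YEAR_KM
travel_seconds = distance_km / SPEED_KM_S
travel_years = travel_seconds / SECONDS_PER_YEAR

print(f"Travel time: about {travel_years:,.0f} years")
```

With these inputs the answer comes out near 75,000 years, consistent with the figure above given the uncertainty in "fastest spacecraft."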
Moreover, our biological makeup depends on Earth’s gravity. Once in space, our bones begin to atrophy, and so do our muscles. By the time our descendants reached our nearest star, their bones and muscles would be so atrophied that they couldn’t walk on an Earth-like planet. They would probably have many other health problems as well.
So the idea of biological entities traveling long distances through space is not just improbable. Based on everything we know, it’s impossible.
On the other hand, robots from space may someday visit us. If so, they won’t be coming here to eat us, suck our blood or even do us harm … unless something goes horribly wrong in their fundamental programming.
How could that happen? It could happen if their biological creators do some of the same things we humans are doing today.
There are two ideas you need to know about in robotics: the Singularity and the Turing Test.
Wikipedia has as good a definition as any of the first of these two concepts:
The technological singularity hypothesis is that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing or even ending civilization in an event called the singularity. Because the capabilities of such an intelligence may be impossible to comprehend, the technological singularity is an occurrence beyond which events are unpredictable or even unfathomable.
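The "runaway effect" in that definition is, at bottom, compound growth: a system that uses its current capability to build a better successor improves faster with every generation. A toy sketch (all the numbers here are invented for illustration, not a model of any real system):

```python
# Toy illustration (hypothetical numbers): recursive
# self-improvement as compound growth in capability.

machine = 1.0           # machine capability, arbitrary units
human = 100.0           # fixed human baseline, arbitrary units
improvement_rate = 0.5  # fraction gained per generation
generations = 0

while machine < human:
    # Each generation's gain scales with current capability:
    # better systems are better at building their successors.
    machine += improvement_rate * machine
    generations += 1

print(f"Machine exceeds the human baseline after {generations} generations")
```

The point is not the specific numbers but the shape of the curve: growth that feeds on itself crosses any fixed baseline quickly, which is why events past that crossing are called unpredictable.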
Just so you’ll know, virtually everyone who knows anything about robotics takes this idea seriously. For example, here is Elon Musk (of Tesla and SpaceX):
I don’t think anyone realizes how quickly artificial intelligence is advancing. Particularly if [the machine is] involved in recursive self-improvement . . . and its utility function is something that’s detrimental to humanity, then it will have a very bad effect ….
If its [function] is just something like getting rid of e-mail spam and it determines the best way of getting rid of spam is getting rid of humans . . .
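Musk’s spam example can be made concrete with a deliberately silly toy optimizer (everything here is hypothetical): an agent scored only on how much spam each action removes will happily prefer a catastrophic action, because nothing in its objective penalizes the side effects.

```python
# Toy illustration (hypothetical): an optimizer whose utility
# function counts only "spam removed" is blind to every side
# effect not encoded in the objective.

actions = {
    "filter_inbox":        {"spam_removed": 90,  "humans_harmed": 0},
    "delete_all_accounts": {"spam_removed": 100, "humans_harmed": 1},
}

def utility(outcome):
    # The flaw: "humans_harmed" never enters the score.
    return outcome["spam_removed"]

best = max(actions, key=lambda name: utility(actions[name]))
print(f"The agent chooses: {best}")
```

The agent picks "delete_all_accounts" because it scores 100 to 90, which is exactly the failure mode Musk is describing: the objective was satisfied and the outcome was still detrimental.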
The second idea arises from a simple question: when can a computer be said to “think”? One of the early pioneers in computer science, Alan Turing, suggested a way of answering that question. As explained by Walter Isaacson in the Wall Street Journal:
His test, now usually called the Turing Test, was a simple imitation game. An interrogator sends written questions to a human and a machine in another room and tries to determine which is which. If the output of a machine is indistinguishable from that of a human brain, he argued, then it makes no sense to deny that the machine is "thinking."
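The imitation game can be sketched as a simple protocol (all the respondent functions below are trivial stand-ins invented for illustration, not real AI): an interrogator sends the same question to two hidden respondents and guesses which is the machine. A machine "passes" when the guesses are no better than a coin flip.

```python
import random

# Minimal sketch of the imitation game with stand-in respondents.

def human(question):
    return "I think " + question.lower().rstrip("?") + " depends on context."

def machine(question):
    # A machine that passes the test is indistinguishable from the
    # human; here it simply produces the same template answer.
    return "I think " + question.lower().rstrip("?") + " depends on context."

def interrogate(question, rng):
    respondents = [("human", human), ("machine", machine)]
    rng.shuffle(respondents)  # hide which room is which
    answers = [fn(question) for _, fn in respondents]
    # Identical answers give the interrogator nothing to go on,
    # so the guess is effectively a coin flip.
    guess = rng.choice([0, 1])
    return respondents[guess][0] == "machine"

rng = random.Random(0)
trials = 1000
correct = sum(interrogate("Can machines think?", rng) for _ in range(trials))
print(f"Interrogator identified the machine in {correct}/{trials} trials")
```

When the identification rate hovers around 50%, the interrogator is guessing, which is Turing’s criterion for saying the machine is "thinking."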
Here’s the problem with that. If computer scientists are trying to pass the Turing Test, they will try to make thinking machines respond more and more the way humans do. That means injecting emotional responses to stimuli into their programs, in addition to their rational proclivities. A rational robot is not going to start a nuclear war. But an emotional human might. So there is a real danger in making robots more human-like.
The reason humans have emotions is that emotions have evolutionary survival value. Emotions drive us to have sex because nature needs procreation. Emotions lead us to believe there is something different and special about our own offspring (even though there isn’t) because nature needs the young to have protectors, defenders and nurturers. Emotions give us allegiance to our own tribe rather than some other tribe because tribalism helps in the competition for resources. Irrational devotion to one’s own tribe increases the probability that the tribe will be successful, which means a greater likelihood of passing on one’s genes.
Clearly, this is the last thing we want to program into a robot.
Structuring robots to pass the Turing Test is not just ill-advised, it’s dangerous. Not only should it not be encouraged, it probably should be outlawed.