Humans and Robots

Most of us are amazed at the chess-playing skill of the IBM computer program Deep Blue, which reached its ultimate achievement in defeating the reigning world champion, Garry Kasparov, in 1997. It is only a minor stretch of the imagination to consider this program embedded in a seated, human-appearing robot which, using eye cameras, can sense a chessboard on the table in front of it, and then use its articulated arms to actually move its chess pieces. We can even consider our robot's construction to be so technologically sophisticated that it contains a computer program controlling its outward, or third person, appearance to mimic that of a human, expressing feigned joy at capturing a piece, or frustration at losing one.

Chess being the classic intellectual game that it is, it now seems that a sophisticated robot could be built to defeat a human in any game, or other endeavor, that involves an algorithmic procedure to achieve some goal. And it is easy to speculate that such a programmed robot could, in the not too distant future, even surpass humans in athletic sports and physical work activities, expanding upon the type of technology employed in the current Segway people mover. In certain specialized applications, mechanical robots already far exceed human capabilities and precision in manufacturing everything from computer chips to automobiles. I received, as a gift, a robotic vacuum cleaner, a disk about a foot in diameter and three inches high. At first, I was very skeptical regarding its practicality, but with just a little help it does a marvelous job, even getting under furniture and beds where no other unit could roam. I believe that it performs at least as well as a blindfolded human would, faced with vacuuming an unfamiliar room.

So, are we humans ultimately doomed to second-class status, as we strive ever harder to create and develop objects that can outdo us? There is little to distinguish between our two chess-playing combatants, except that one, the non-human, is victorious. Perhaps the victor possesses some flaw that will still allow the human to hold his head up high.

And, clearly, such a flaw does exist. Our robot, with the Deep Blue program embedded, excels at only one thing, namely playing chess. But this is only temporary relief, as we could simply expand its computer program and memory to house all known games, and in fact to adapt in an optimal way to new situations. A poker playing algorithm, for example, could take note of how each human opponent tends to play their hands, and adjust its response accordingly. It would defeat any human over time.
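
To make the opponent-modeling idea concrete, here is a minimal sketch in Python; the class, its bluff-rate bookkeeping, and the call-or-fold threshold are all hypothetical simplifications, not how a serious poker program works:

```python
# A minimal sketch of opponent modeling (all names and thresholds hypothetical):
# track how often each opponent's bets turn out to be bluffs, and call more
# often against the habitual bluffers.
from collections import defaultdict

class PokerBot:
    def __init__(self):
        # opponent -> [showdowns seen, bluffs seen]
        self.history = defaultdict(lambda: [0, 0])

    def record_showdown(self, opponent, was_bluff):
        counts = self.history[opponent]
        counts[0] += 1
        counts[1] += int(was_bluff)

    def bluff_rate(self, opponent):
        showdowns, bluffs = self.history[opponent]
        return bluffs / showdowns if showdowns else 0.5  # no data: assume 50/50

    def respond_to_bet(self, opponent, hand_strength):
        # The more this opponent bluffs, the weaker a hand we will call with.
        return "call" if hand_strength > 1 - self.bluff_rate(opponent) else "fold"

bot = PokerBot()
bot.record_showdown("Alice", was_bluff=True)
bot.record_showdown("Alice", was_bluff=True)
print(bot.respond_to_bet("Alice", hand_strength=0.4))  # "call": Alice bluffs a lot
```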

But there are greater challenges for a robot in mimicking a human than just making reference to some pre-stored program that applies to some game, or other activity that can be reduced to an algorithmic procedure. In particular, current, or near future, robots and computers are severely limited in their memory capacity compared to humans; their program and data storage abilities are incapable of holding the vast array of experiences encountered by a normal adult person. But one can speculate that this will not always be the case. Mammoth computers of just fifty years ago had random access to but 32,000 bytes of memory, whereas a current two inch flash memory stick can store 16 megabytes, a five-hundred-fold increase, and that number is ever rising. Perhaps a cellular technology will emerge to narrow the gap to the point where memory storage capacity is much less of an issue. Using its eye cameras and ear microphones (and other senses), a robot would then be capable of storing a history of its past experiences, just as does the human.

But this is not enough yet. These experiences must be placed in some kind of context, rated somehow according to their significance and impact upon the programmed value system given to our robot. Deep Blue's value system consisted only of winning at chess, but clearly an untold number of additional objectives, similar to what we humans regard as important, could also be added. Our robot could then quantitatively link its memorized past experiences to their value system significance. Perhaps it might even discard, or forget, those of little value in an effort to conserve memory. With its added algorithms, value system axioms, and past memory experiences, our robot now seems very much closer than Deep Blue to the personality that we humans possess. (Of course, identification with a particular sex and the ability to reproduce still seem quite a mountain to climb, but we'll ignore that for now, and stick with the present train of thought.)
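
A toy sketch of such a value-rated memory, with forgetting, might look as follows; the capacity, the event scores, and the scoring function are all invented for illustration:

```python
# A toy value-rated memory (hypothetical design): experiences are scored by a
# given value system, and the least significant are forgotten once a fixed
# capacity is exceeded.
import heapq

class ExperienceMemory:
    def __init__(self, capacity, value_system):
        self.capacity = capacity          # maximum number of experiences retained
        self.value_system = value_system  # maps an experience to a significance score
        self.store = []                   # min-heap of (score, sequence, experience)
        self.counter = 0                  # tie-breaker so experiences never compare

    def remember(self, experience):
        score = self.value_system(experience)
        heapq.heappush(self.store, (score, self.counter, experience))
        self.counter += 1
        if len(self.store) > self.capacity:
            heapq.heappop(self.store)     # discard the least significant memory

# Invented value system: victories matter most, small talk barely registers.
scores = {"won_game": 10, "lost_game": 8, "greeted_human": 1}
memory = ExperienceMemory(capacity=2, value_system=lambda e: scores.get(e, 0))
for event in ["greeted_human", "won_game", "lost_game"]:
    memory.remember(event)
print([e for _, _, e in memory.store])    # the greeting has been forgotten
```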

When does a robot reach the stage where it can be regarded as possessing intelligence, a step closer to matching a human? One author, whose name escapes me, regards this threshold as the point where a robot could write an effective commentary or synopsis of a book that it had read or a movie that it had seen. Given the comments made above, the formidable technology to accomplish this does not presently exist, but is conceivable at some distant future date.

A more popular intelligence test was proposed in 1950 by the troubled British genius, and father of the computer age, Alan Turing. His Turing Test places a human and our robot behind separate screens, each linked to an unseen judge who uses a keyboard to ask questions to which keyboard responses are invited. The goal of our robot is to appear human, so it will deliberately make some errors, and avoid solving any problems too quickly. Once all the questions have been asked and the separate responses tallied, the robot is assumed to have passed the Turing Test if our judge is unable to favor either set of responses as coming from the human.
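
The protocol itself is simple enough to sketch in code; the judge, human, and robot below are stand-in callables (purely hypothetical), and the pass criterion follows the description above:

```python
# A bare-bones sketch of the Turing Test protocol (all participants are
# hypothetical stand-in functions): hide the respondents behind labels,
# collect answers, and see whether the judge can single out the human.
import random

def turing_test(judge, human, robot, questions):
    screens = {"A": human, "B": robot}
    if random.random() < 0.5:                      # randomize the seating
        screens = {"A": robot, "B": human}

    transcript = {label: [respond(q) for q in questions]
                  for label, respond in screens.items()}

    verdict = judge(questions, transcript)         # "A", "B", or None (can't tell)
    # The robot passes if the judge fails to single out the human's responses.
    return verdict is None or screens[verdict] is not human

# A judge who cannot favor either set of responses:
undecided = lambda qs, transcript: None
print(turing_test(undecided, human=str.upper, robot=str.lower,
                  questions=["What is 2+2?"]))     # True: the robot passes
```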

The above challenges, more difficult than those confronted by Deep Blue, still appear to fall within the realm of algorithmic processing, now aided by a much more extensive memory of experiences and a human value set, or criteria, upon which to make judgments. But are there human actions that are not the result of a systematic process, and thus likely beyond a robot's ability to mimic? One might propose that an apparently irrational violent act of passion could not be so explained, but even that may be algorithmic in nature, corresponding, in computer lingo, to a line of code that commands that act in response to a certain value satisfaction (e.g. ego salvaging) or stored experience.
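
That "line of code" can even be written out literally; the snippet below is a deliberately crude caricature, not a serious behavioral model:

```python
# A caricature of an "act of passion" as one determined line of code
# (the threshold and action names are invented for illustration):
def react(insult_severity, ego_threshold=0.7):
    if insult_severity > ego_threshold:   # value satisfaction: ego salvaging
        return "lash_out"                 # the apparently irrational act
    return "stay_calm"
```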

Taking yet another step, one can ponder whether human creativity is also an algorithmic process that some computer of the future might succeed in implementing. This topic involves exploring the nature of creativity (which is treated more fully in another section) and must be considered in the context of the one making the judgment. A new idea in a certain discipline may seem just a short step to someone knowledgeable in that area, but extremely creative to someone less of an expert.

Suffice it to say that almost all creative advancements are built upon a strong knowledge and experience base in the subject considered, followed by a trial-and-error "Let's see if this works" exploration involving some new idea. Thomas Edison's painstaking search for a suitable light bulb filament is a perfect example. And even Einstein had quite a head start before proposing his earth-shaking results. Rarely does someone very young, and devoid of past contributions and their understanding, contribute something very significant. Even Mozart, uniquely gifted at an early age in building upon music that he had heard, provided his greatest contributions toward the end of his unfortunately shortened life. And, even at present, computers are being taught, with some limited success, to create music, using procedures that compose notes and harmonies that are humanly appealing.
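
Edison's filament hunt has exactly the shape of a generate-and-test loop; the materials and the success test below are invented placeholders, meant only to show the procedure's form:

```python
# Trial-and-error exploration as generate-and-test (candidates and the success
# test are invented placeholders):
def trial_and_error(candidates, works):
    for candidate in candidates:
        if works(candidate):              # "Let's see if this works"
            return candidate
    return None                           # no candidate survived the test

materials = ["platinum", "paper", "bamboo", "carbonized thread"]
print(trial_and_error(materials, works=lambda m: m == "carbonized thread"))
```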

So it is not entirely clear that creativity involves some magical insight that might not someday be explained by a rational procedure that a computer can fully mimic. Perhaps the occasional out-of-the-blue epiphany that we all feel in coming up with something new to us is really just an illusion. All this is not yet clear. What is clear, however, is that some individuals, in certain disciplines, are better skilled than others in making advancements, whether that be the realization of some subtle algorithmic process hidden from the others, or some unexplained magical insight that a computer could never replicate. Up to this point in our argument, it does seem quite possible that human actions result from the same kind of cause-effect processes that are characteristic of a computer program's instruction set, and many experts in the field of Artificial Intelligence hold to that view.

But are humans really just algorithm processors, something that can be copied in its decision making by some futuristic robot? If so, any algorithmically derived fact, or truth, evident to a human would also become evident to any robot whose decision making procedure is the same. But, as it turns out, a bombshell of a paper by Kurt Gödel in 1931 yielded the conclusion that this is not so: there exist truths, evident to humans, that no algorithmic procedure can ever derive.

Gödel started with the self-evident truths, or axioms, of our normal number system of integers, and then, using self-evident rules of inference, derived theorems, or new truths, much as is done in a high school plane geometry class. Deriving such theorems is clearly an algorithmic procedure. These theorems can then be used to derive yet newer theorems (truths), and it was once thought that all possible true statements in or about mathematics could be derived by a continued repeat of this process. Gödel proved that this was not the case, as he constructed a very complicated mathematical statement that a human would know to be true, yet showed that no proof, i.e. no algorithmic procedure, could possibly exist that would yield this truth. Thus mathematics contains truths that cannot be proven in any systematic, step-by-step manner. This was an amazing discovery. *
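
A toy version of such theorem derivation takes only a few lines; the "axioms" here are hypothetical implications, and the single rule of inference is transitivity, vastly simpler than Gödel's arithmetic, but the algorithmic character is the same:

```python
# Theorem derivation as an algorithm (toy system, hypothetical axioms):
# each statement is a pair (p, q) read as "p implies q"; the one inference
# rule is transitivity, applied until no new theorems appear.
def derive(axioms):
    theorems = set(axioms)
    changed = True
    while changed:
        changed = False
        for (p, q) in list(theorems):
            for (r, s) in list(theorems):
                if q == r and (p, s) not in theorems:
                    theorems.add((p, s))   # from p->q and q->s, infer p->s
                    changed = True
    return theorems

print(derive({("A", "B"), ("B", "C")}))    # also yields ("A", "C")
```

Gödel's point, in these terms, is that for any such mechanical deriver, however rich its axioms, there exist true statements that never appear in its output.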

But the philosophical impact of this conclusion was even greater, with some thinkers even suggesting that its implications proved the existence of a God. Consider again Deep Blue, its involved chess-playing algorithm submitted to a computer for governing its moves. Any computer armed with such a program becomes just a numeric and logic number crunching machine, and thus falls within the framework of Gödel's analysis: there exists a one-to-one identification of its numeric efforts with the rational intent of its program. Taking note of Gödel's Theorem, it follows that no computer, limited to processing algorithms, could ever derive every truth that is evident to humans. Thus our futuristic robot is a second class citizen after all, despite all the victories it can realize over us in other endeavors. Even up to this point in our discussion, it is clear that humans possess some quality that cannot be replicated in the electromechanical world.

Strong devotees of Artificial Intelligence reject the above argument on the grounds that it contains a fallacious assumption. They acknowledge that the argument may be sound when applied to current computer technology, but hold that computer programs of the future will be created that behave much more like humans, making errors, creating code similar to themselves, being indecisive, learning based upon experience, etc.

In fact, they even foresee the day when human intelligence could be superseded by that of a computer running at lightning speed with orders of magnitude more memory. Such a machine would not only learn much faster than we humans, but would also create new machines of even greater intelligence. We humans would then be dominated by the devices that we had originally created, a chaotic point in our future evolution that has been termed The Singularity. Like a mathematical or physical singularity (e.g. a black hole), where the known rules break down, here likewise it is impossible to predict what future social rules and values would emerge. Humans might prosper in a far kinder and more orderly world, or they might find that they had been party to their own extinction.

All that said, it is difficult to conceive of any computer program ever being able to shed its shackles of operating in an algorithmic way, and thus escape its subservience to Gödel. Thus, for now at least, our above conclusion stands, though we remain cognizant of the cognitive science research that is being done.

Having discussed the more mechanistic properties relating humans and robots, we turn now to the seemingly more obvious difference between the two, namely the property of exhibiting emotions and feelings. What took us so long to get here, you might ask? Isn't it obvious that humans experience feelings of joy, anger, lust, pain, fear, understanding, free will, sadness, love, hate, self-awareness, attraction to a particular sex, and many other emotions denied to robots, or computers, or, in fact, to any electromechanical device? This seems so obvious that it may appear useless to belabor the point any further.

But we are being a bit glib here. I experience sensations that I term feelings, sensations such as joy, pain, sadness, lust, understanding, free will, the feeling of Me, sensations of color, anger, etc. But all feelings are strictly first person experiences, seen only by the person involved and forever unknown to any outsider. There can never be any direct measure of such feelings, although any individual will claim that such feelings are linked to his/her outward, or third person, signs that can be observed and measured. Thus that person may lash out, or run away, or grimace, or smile, or display an increased pulse rate/blood pressure, or even an MRI based change in his cranial activity. Relating such outward signs to ourselves, we then assume that the subject's feelings must be similar to what we would experience. But we have absolutely no proof that the nature and intensity of what others feel is anything like what we would feel. A good actor could put on the same outward display devoid of any feeling that it is meant to imply.

And the feelings that animals, or fish, or birds, or worms, or insects, or even plants might experience are even more uncertain. My daughter's Labrador Retriever views me as the guy who will toss a tennis ball up onto a hill and into the underbrush and weeds. I assume that she experiences some kind of joy as she tears up the hill in her search, using her senses of sight and smell. Generally she locates the ball rather quickly, and I wonder whether she senses a feeling of accomplishment at its discovery. At other times, it may take several minutes as she retraces her steps back and forth, but rarely has she ever given up. Does she experience feelings of frustration as she seeks the hidden ball in vain? Is there more joy at finding a ball that took longer to discover? Does she experience a feeling of failure if she eventually must quit the hunt? She provides very little in terms of outward signs that would give me a clue. It is easier to claim ignorance on these matters because dogs differ more from humans than humans differ from other humans.

Anthropomorphic beings that we are, we assume that those of Nature's creatures that appear closest to us must also experience somewhat similar feelings, and the greater the gap between us, the less certain we are about what they feel, if anything. Of one thing we seem sure. Only organic based entities have the potential to experience feelings, certainly not electromechanical devices such as robots. Where does this judgment come from? Why is this so obvious? Isn't it possible that a light bulb about to burn out, and rupturing its filament, may feel some emotion? Feelings being what they are, those associated with other beings (or things?) can never be known. Only our own can ever be evident to us.

Despite our lack of firm evidence, we assume that Deep Blue experiences no feelings whatever as it triumphs over some frustrated and joyless opponent. It feels no joy at making some clever move that it doesn't even realize is clever. Following its prescribed recipe, it has no understanding of why it is doing what it is doing; in fact, it even lacks the self-awareness necessary to ponder this question. And certainly it lacks any free will, as its actions are totally prescribed by its algorithm.

We might ask: what would we humans be like if we lacked any feelings? We would still not be robots, as Gödel showed, but just what would we have lost? Would we be like Spock on Star Trek?

Materialists and Behaviorists have a simple answer. Their claim is that our outward actions would be exactly the same whether we possess feelings or not. To them, feelings play absolutely no role in either our outward behavior or any measurable physical indicators, such as blood pressure or brain patterns. Feelings are just along for the ride, like the hum of an engine or the squeak of a rotating axle. Feelings never dictate our behavior, and though they obviously do exist, any sense that they control anything is an illusion. Such feelings are termed epiphenomena.

Used to dealing with physical objects, experiments, and processes, many scientists naturally favor the view that all things, even feelings, can be explained by material causes, with no need to invoke the mysterious notion of how a feeling, perhaps based upon Free Will, is able to effect a material change. Nothing in their usual physics displays any example of such a mysterious force. Even the befuddling observations of Quantum Physics, with their unpredictable outcomes, do not support the role of unconstrained decision making: here an eventual outcome, once an observation is made, follows a strict rule regarding the likelihood of any particular result.

Some support for this materialistic view is provided by a large body of scientific evidence that quantitatively links deliberately imposed stimuli or brain alterations of test animals to their actions and responses. There appears to be a fairly tight cause-effect relation, with no real decision making on the part of the animal. Further support is provided by humans who have suffered a sudden brain injury, yielding an altered MRI brain pattern to correlate with their new behavior. Here, it would appear, the material state of the brain pretty much dictates a subject's new behavior, with very little room, if any, for that person's control of his/her actions. Actions and responses unimaginable before the injury now have become commonplace. A certain cause seems to imply a certain effect, just as in the more simplified physics that doesn't involve humans. Simply presented, the electrical and chemical state of a person's brain will completely determine their next action. Even the ability of a machine to be conscious is considered possible, as discussed in the short paper Can Machines Be Conscious?

Those who believe that feelings and Free Will do control our actions might even accept the idea that the electrochemical state of our material brain at any particular time does determine our next action, but they insist that our feeling of Will plays a major role in forming that electrochemical state. This is still unsatisfying to the Materialists, since a mysterious force is still called upon to make a material change. But the believers counter with the usual argument in such matters, namely that we are still a fairly ignorant race, and that the way in which our Will can effect a material change in the brain is simply beyond our current comprehension. They correctly point out that other unseen and unexplained forces in Nature have only recently been discovered in mankind's evolution, such as the formerly mysterious effects of electromagnetic forces and waves.

But there is another potential flaw in what the Materialists propose. Feelings are real. They do exist. Just touch a hot stove if you need proof, and nothing in our comfortable physics can explain their source and nature. How do the Materialists respond to this challenge? They simply present the flip side of the believers' argument in the previous paragraph, namely that we humans are currently ignorant of just how a material element can have this kind of non-material effect. They firmly maintain that the human mind is no more than a glorified computer program of extreme complexity, and that such a program, when realized, would also experience the feelings that we humans do. It is still difficult to see how even such a program could be non-algorithmic, and thus escape Gödel, but perhaps that might someday be possible.

It all seems to reduce to this: the material electrochemical state of our brain at any time almost certainly determines our next action. That brain state is at least dependent upon its prior state, including its memory, and the currently processed sensor data via eyes, ears, touch, etc. Materialists admit the presence of feelings, but deny that they influence the brain in any way. Opposing this view, those tending toward a more Dualistic view of the mind see its state as influenced by an additional component, namely the nonmaterial feelings and will of the subject. This divergence of views may never be settled, as a quantitative assessment of feelings and their nature lies outside the scope of human determination.
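
The dispute can be framed schematically in code; the functions below carry no neuroscientific content whatever, and serve only to mark where, if anywhere, feelings enter the update:

```python
# A schematic framing of the disagreement (all functions are placeholders,
# assumed here only to mark where feelings could enter the brain's update).
def material_dynamics(prior_state, memory, sensors):
    return prior_state + memory + sensors          # stands in for physics

def nonmaterial_influence(feelings, will):
    return feelings * will                         # the Dualist's extra term

def materialist_update(state, memory, sensors, feelings, will):
    # Feelings are passed in but never used: epiphenomena, along for the ride.
    return material_dynamics(state, memory, sensors)

def dualist_update(state, memory, sensors, feelings, will):
    # Feelings and will contribute to forming the next brain state.
    return material_dynamics(state, memory, sensors) + nonmaterial_influence(feelings, will)
```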

The demotion of feelings by the Materialists seems to rob humans of any real motivation to exist. We cherish the feelings that we have, and can't conceive of a life without them, a life of both pleasure and pain that seems to provide meaning (another feeling) to our existence. Even if these feelings are just epiphenomena, it seems that their loss would be the equivalent of death. Spock could easily outperform any human in almost any endeavor, but would any of us prefer to be Spock, a Vulcan who feels little or nothing? So feelings play a key role, probably the key role, in our lives. One might even propose that the purely material aspects of our lives are important only to the extent that they influence our feelings, regardless of the latter's source and nature. It is our feelings that give us life.

One final comment seems in order. Our robot in the previous discussion was specifically electromechanical in nature, and thus easier to consider foreign relative to humans. But it does not seem completely out of the question that, at some future date, researchers may be able to fully replicate even very complex cells using basic organic materials, rather than donor cells. Perhaps a female egg and male sperm could be so formed, then joined and nourished for nine months. Wouldn't this laboratory creation, this organic robot, now actually be human?

Or, getting even a bit more crazy, suppose it were someday possible to establish a complete blueprint of your current molecular state, and that state could then be replicated by some futuristic device. This would be like a nondestructive "Beam me up, Scotty", one that leaves its subject in place while transmitting its blueprint information, as in Star Trek, to a remote assembler preloaded with the necessary material substances for a reconstruction. Would there now exist another You at this remote place? Add this to the Wonderments to ponder.


* A parallel example to Gödel's Theorem is presented in Infinity and the Mind by Rudy Rucker:
The proof of Gödel's Incompleteness Theorem is so simple, and so sneaky, that it is almost embarrassing to relate. His basic procedure is as follows:
1. Someone introduces Gödel to a UTM, a machine that is supposed to be a Universal Truth Machine, capable of correctly answering any question at all.
2. Gödel asks for the program and the circuit design of the UTM. The program may be complicated, but it can only be finitely long. Call the program P(UTM) for Program of the Universal Truth Machine.
3. Smiling a little, Gödel writes out the following sentence: "The machine constructed on the basis of the program P(UTM) will never say that this sentence is true." Call this sentence G for Gödel. Note that G is equivalent to: "UTM will never say G is true."
4. Now Gödel laughs his high laugh and asks UTM whether G is true or not.
5. If UTM says G is true, then "UTM will never say G is true" is false. If "UTM will never say G is true" is false, then G is false (since G = "UTM will never say G is true"). So if UTM says G is true, then G is in fact false, and UTM has made a false statement. So UTM will never say that G is true, since UTM makes only true statements.
6. We have established that UTM will never say G is true. So "UTM will never say G is true" is in fact a true statement. So G is true (since G = "UTM will never say G is true").
7. "I know a truth that UTM can never utter," Gödel says. "I know that G is true. UTM is not truly universal."

Think about it - it grows on you ...
With his great mathematical and logical genius, Gödel was able to find a way (for any given P(UTM)) actually to write down a complicated polynomial equation that has a solution if and only if G is true. So G is not at all some vague or non-mathematical sentence. G is a specific mathematical problem that we know the answer to, even though UTM does not! So UTM does not, and cannot, embody a best and final theory of mathematics ...

Although this theorem can be stated and proved in a rigorously mathematical way, what it seems to say is that rational thought can never penetrate to the final ultimate truth ... But, paradoxically, to understand Gödel's proof is to find a sort of liberation. For many logic students, the final breakthrough to full understanding of the Incompleteness Theorem is practically a conversion experience. This is partly a by-product of the potent mystique Gödel's name carries. But, more profoundly, to understand the essentially labyrinthine nature of the castle is, somehow, to be free of it.
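
For the computationally inclined, Rucker's sketch can be rendered as a toy program; what follows is a loose analogue of the diagonal trick, not Gödel's actual construction:

```python
# A loose program-level analogue of Rucker's argument: any total judge
# `says_true` is defeated by a sentence built to assert that the judge
# will never endorse it.
def godel_sentence(says_true):
    def g():
        return not says_true(g)   # G says: "says_true will never call G true"
    return g

def credulous_judge(sentence):    # a candidate truth machine: endorses everything
    return True

g = godel_sentence(credulous_judge)
print(g())   # False: the judge called G true, so G is in fact false
# Had the judge answered False instead, G would be true, a truth it can never utter.
```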