OPINION

This Is Why You Will Always Be Smarter Than Artificial Intelligence

The opinions expressed by columnists are their own and do not necessarily represent the views of Townhall.com.

Few issues today generate as much fascination, worry, and confusion as Artificial Intelligence (AI). Recently, for example, Vox published a feature on the potential risks of AI, warning that because AIs are becoming "smarter and smarter," it is possible that at some point "we'll have handed humanity's future over to systems with goals and priorities we don't understand and can no longer influence." These concerns are unfounded, however, because they rest on the flawed premise that AI is capable of "intelligence" in the first place. It is not, at least not in the human sense of the word. To understand why, it is worth considering what makes the human mind unique.


According to classical philosophy, there are "three acts of the mind." The first is simple apprehension, which constitutes understanding or knowing the "whatness" or meaning of something. For example, knowing what a dog is entails knowing the nature of a dog, or "dogness," and the properties that follow from it, such as barking, tail wagging, panting, chasing squirrels, etc. The second act of the mind is judgment, as expressed in a proposition or statement. For instance, the judgment that "Rover is a dog." And the third act is reasoning, or connecting one proposition to another to form a conclusion. For example, "All dogs are mortal," "Rover is a dog," therefore, "Rover is mortal."
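
To see how mechanical the second and third acts can be, consider a minimal sketch in Python (the facts, rule, and function names here are hypothetical illustrations, not anyone's actual system) that chains the Rover syllogism symbol by symbol, without the program grasping what a dog or mortality is:

```python
# A hypothetical illustration: a program can "judge" and "reason" over symbols
# it does not understand. Nothing here knows what a dog or mortality is.
facts = {("Rover", "is_a", "dog")}          # the judgment "Rover is a dog"
rules = [("dog", "mortal")]                 # the premise "All dogs are mortal"

def infer(facts, rules):
    """Mechanically chain judgments through rules to produce conclusions."""
    conclusions = set()
    for subject, relation, category in facts:
        for rule_category, attribute in rules:
            if relation == "is_a" and category == rule_category:
                conclusions.add((subject, "is", attribute))
    return conclusions

print(infer(facts, rules))  # {('Rover', 'is', 'mortal')}
```

The conclusion "Rover is mortal" pops out of pure symbol shuffling; at no point does the code apprehend the "whatness" of anything it manipulates.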

Now, computers and AIs are very good (in fact, often far superior to humans) at the second and third acts of the mind, but they are utterly incapable of performing the first act, knowing or understanding. This is because the second and third acts concern the retrieval and combining of information, both of which AIs can be programmed to do. However, knowing or understanding meaning cannot be programmed because it is not a power that comes from the material world. Rather, it is a power that comes from the human mind, which is not material. If the reader has doubts about this, consider a circle drawn on a whiteboard. 

Upon observing this circle, we might wonder what it represents. It could, after all, depict the sun, or a basketball, or a wheel, or a coin, or circularity itself, or any number of other possibilities. But, and here is the key, no material facts about the circle can, on their own, tell us what it is supposed to represent. 


For example, we could examine the thickness of the line in which it is drawn, subject the ink to chemical analysis, or even study the underlying neurological processes that take place within the artist's brain as she draws the circle, and yet none of these tests will reveal what that circle is meant to represent. That is because the representational content of the circle – the meaning of the circle – comes not from anything in the material world but from the mind of the artist herself. (For the interested reader, see philosopher Ed Feser's explanation of the immateriality of the mind here, from which I have borrowed). To be sure, an AI could be programmed to appear as if it knows, just as a video game about war can be designed to appear as if it is actual warfare. But mimicry is not reality, and in this case, the content (or meaning) of the programming would ultimately reside in the mind of the programmer, not in the material constituents of the programmed AI itself. 

Put simply, then, knowing entails grasping meaning, and because material systems, including AIs, are incapable of grasping meaning, no AI can ever be made capable of knowing. Hence, contrary to popular media claims, no AI is becoming "smarter and smarter," because no AI is "intelligent" at all. Furthermore, this demonstrates why the human mind must be more than merely the brain. The brain, after all, is a purely material organ, so if the mind were nothing but the material brain, then knowing would not be possible. But the fact that we do know, indeed the very fact that the reader can grasp the meaning of this sentence, leaves no doubt that the mind is more than the material brain. 


Nevertheless, and for reasons that are too far afield to examine in detail, much contemporary philosophy, modern science, and popular psychology have either ignored or downplayed the importance of the "first act" (knowing) of the human mind and have thereby reduced the mind to acts two (judging) and three (reasoning). This explains why it has become popular to believe that AI might one day surpass human cognition. AI, after all, can compare (judge) and combine data (reason) better than we can, so if that is, in fact, all that the mind does, then surely we had better sound the alarm. But of course, such a view ignores the most obvious and crucial aspect of the mind, which is its ability to understand the essence or meaning of things. In fact, without this capacity, there is no judging or reasoning at all. In other words, it is because we first understand something that we can go on to judge and reason about it. Understanding is, therefore, the foundation of cognition. This means that, contrary to Vox, we will always have an advantage over AI, because AI simply follows the content of the algorithms we input without understanding the meaning of that content. And without grasping meaning, AI cannot "think" at all. 

But of course, as with any advanced technology, there is always the possibility that Artificial Intelligence might fall into the wrong hands. That is a real concern. But the idea that the danger from AI lies in advances in its "intelligence" is simply based on a gross misunderstanding of what the mind is and what material systems are capable of. The truth is that AI will never be more intelligent than a human being, because it will never be "intelligent" at all. 

