
Can we know of AI?

The question of whether computers can have the capacity for consciousness has occupied the philosophy of mind since the early days of computing, and philosophizing about the potential consciousness of computers has been a labor of love. We are fascinated by the concept of recreating the human mind in an inhuman form.  Futurist visions of sapient artificial intelligence (AI) living among us have long been part of our culture, both in academia and in pop media.  Philosopher John Searle, however, offers a more sobering view of computers, one that is quick to undercut the media’s many sapient-AI stories and remind us that these fantasies are just that.  This goes against the hopes of Alan Turing, the “father of modern computing,” who believed that, in principle, a computer could go as far as fooling a person into thinking that it is human.  Searle argues that computers cannot be conscious on the grounds that they are incapable of truly understanding symbols; Turing can respond that Searle assumes we will be able to recognize artificial consciousness when we see it, and that Searle cannot verify the claim that strong AI is impossible, owing to the problem of other minds.  Searle would counter that strong AI can be shown to be metaphysically impossible because computers lack a contextual understanding of language.  However, Turing can trump this reply by noting that a computer could conceivably be programmed to receive contextual input as a human does, or simply be programmed with semantic data.

 

Searle begins his argument by defining two types of AI: weak AI and strong AI.  The first premise of his argument states that a computer can have either the property of weak AI or that of strong AI.  Weak AI, as Searle defines it, is a model of the human mind: a weak AI computer is a tool used to perform specific computations, letting us test hypotheses about the human mind more rigorously.  Strong AI, in contrast, has a conscious mind of its own.  A strong AI computer is not just a model of the human mind; in addition to computing input data, it is capable of having cognitive states much as humans do.  The functions of its mind would mirror our own.

 

The second premise of Searle’s argument states that strong AI requires an understanding of symbols.  By symbols, Searle means letters, words, and characters that serve as abstract representations of ideas, objects, actions, and so on.  To be considered conscious, Searle asserts, a strong AI must show an understanding of symbols that goes beyond mere manipulation: it must understand how those symbols function in context.  Semantics is the key to strong AI.

 

Searle’s third and most important premise posits that computers are incapable of truly understanding symbols.  He argues this with his famous thought experiment, the Chinese Room.  In the thought experiment, an individual is placed in a bare room with an input slot and an output slot.  From a programmer outside the room, the individual is delivered symbols in a language he or she does not understand at all (Searle used Chinese in his own case), including questions written in that unintelligible language.  However, the individual also receives an instruction table, written in a familiar language, explaining how to arrange the symbols so as to answer the questions.  Throughout this process, the individual in the Chinese Room never understands the unintelligible language, yet can generate answers to the questions as competently as the instructions are written.  Presumably, a programmer could write the instructions so well that the individual in the Chinese Room could answer as convincingly as a native speaker of that language.  The Chinese Room as portrayed here mirrors how modern digital computers work and should serve as an adequate representation of a computer.
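
The mechanism is easy to make concrete.  Below is a minimal Python sketch of the room’s procedure; the rule book and the Chinese strings are hypothetical placeholders standing in for Searle’s instruction table, not anything from Searle himself:

```python
# A toy Chinese Room: answers are produced purely by matching input
# symbols against a rule book. Nothing here consults the *meaning* of
# the characters, only their shapes (string equality). The rule book
# is a hypothetical stand-in for Searle's instruction table; a
# convincing one would be vastly larger.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "天空是什么颜色？": "天空是蓝色的。",    # "What color is the sky?" -> "The sky is blue."
}

def chinese_room(question: str) -> str:
    # Pure symbol manipulation: look up the input shape, emit the
    # paired output shape, understand nothing along the way.
    return RULE_BOOK.get(question, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))  # fluent output, zero understanding
```

The sketch passes for a speaker exactly to the extent that its rule book anticipates the inputs, which is Searle’s point: fluent output alone says nothing about understanding.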

 

Searle concludes that, since computers lack the ability to truly understand symbols, they do not have the capacity to be strong AI.  In his Chinese Room, the individual (or indeed, the central processing unit) has no deep understanding of the symbols that enter and exit the room.  The symbols have only syntax, in that they are arranged in such and such a way; the processor grasps nothing of their semantics or contextual meanings.  The processor, therefore, does not constitute strong AI.

 

Turing’s objection begins by denying Searle’s conclusion, claiming that the preceding premises are insufficient to show that strong AI cannot exist.  The objection attacks Searle’s assumption that we can know of the existence of other minds definitively enough to deduce the existence of artificial minds.  Searle implies that we have a method of recognizing intelligence when we see it, presuming that we can take the inference of consciousness in other human minds and apply it to nonhuman entities.  If we can know of other minds, then we can safely determine when a mind is present in a being and when it is not.  The danger in this assumption is that it leans on a topic that remains an epistemic mystery to us: the problem of other minds.  Since our minds and thoughts are private, we cannot know with much certainty whether anyone other than ourselves is conscious.

 

This is precisely the dilemma that Turing was trying to avoid with his famous Turing Test, which places computers on an equal playing field with humans.  In the Turing Test, a computer and a person are questioned by a second person, an interrogator, from behind a veil.  The computer passes if the interrogator cannot identify which of the two behind the veil is the human and which is the computer with better than seventy percent accuracy.  Notably, this type of test sidesteps the problem of other minds, determining merely what seems to be humanlike intelligence from the perspective of a human.  No claim is made about whether the computer is objectively conscious; what matters is whether humans could consider it, for all practical purposes, conscious.
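
As a rough sketch of that pass condition in Python (the judge, human, and machine here are hypothetical stand-ins, and the seventy percent threshold paraphrases Turing’s 1950 prediction):

```python
import random

def machine_passes(judge, human, machine, trials: int = 100) -> bool:
    """The machine passes if the judge picks it out correctly no more
    than seventy percent of the time. `judge` is a hypothetical function
    that interrogates two hidden contestants and returns the label
    ("A" or "B") it believes names the machine."""
    correct = 0
    for _ in range(trials):
        a_is_machine = random.random() < 0.5   # hide the assignment from the judge
        contestants = {
            "A": machine if a_is_machine else human,
            "B": human if a_is_machine else machine,
        }
        guess = judge(contestants)
        if (guess == "A") == a_is_machine:
            correct += 1
    return correct / trials <= 0.70
```

Note that nothing in the scoring ever asks whether the machine is conscious; only the judge’s error rate matters.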

 

One human can infer that other humans are conscious because of how closely related we all are through our evolutionary history.  However, we have a much weaker evolutionary connection to nonhuman beings, and none at all to nonliving computers.  The unavailability of this inferential route means that any claim against the very possibility of strong AI requires extraordinary evidence, and Searle fails to provide it.

 

This objection is rather damning of Searle’s Chinese Room, in that his argument relies entirely on an assumption about an epistemically inaccessible conclusion.  He claims that strong AI is metaphysically impossible, yet he can offer no verification of this assertion.  Searle needs a stronger argument, one that can finesse the problem of other minds without needing to address it directly.

 

If presented with this response, Searle would likely rebut that we can still deduce the metaphysical impossibility of strong AI regardless of the epistemic problem of other minds.  Searle states that computers cannot be conscious because they have no true understanding of the symbols presented to them.  Computers may manipulate symbols, but they do not recognize the contextual and semantic properties of those symbols.  Symbolic understanding holds the key to determining consciousness in computers, since much of human intelligence is based on the ability to comprehend language.  Even the Turing Test employs language as the designator of human intelligence, so Turing ought at least to grant that language mastery should serve as the deciding factor for strong AI.

 

Turing would respond that the use of language in the Turing Test is merely a vehicle for transmitting an idea from one individual to a computer and back again; language itself is not meant to be a representation of any sort of conscious intelligence.  Turing would also state that the symbolic understanding Searle keeps touting is not too lofty an ideal.  Computers, being “universal machines,” merely carry out actions based on their machine tables and data input.  Given properly written machine tables, computers can conceivably interpret any type of data.  From this perspective, it is not so far-fetched to imagine programming the context and semantics of symbols into a computer, or at least programming computers to handle incoming contextual data.  That way of taking in the world seems quite similar to the way the human mind perceives it.  If AI could manage this, it would be on a par with human intelligence, and such computers would then fit Searle’s definition of strong AI.
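
To make “machine tables” concrete, here is a toy table interpreter in Python, in the spirit of Turing’s universal machine.  The table itself is a hypothetical example, chosen only for brevity, that appends one mark to a unary number:

```python
# A machine table maps (state, symbol) -> (symbol to write, head move, next state).
# The "computer" is nothing but a loop that obeys the table; with a rich
# enough table, such a loop can in principle carry out any computation.

TABLE = {
    ("scan", "1"): ("1", +1, "scan"),  # skip rightward over existing marks
    ("scan", "_"): ("1", +1, "halt"),  # write a mark on the first blank, then stop
}

def run(tape, state="scan", head=0):
    while state != "halt":
        if head == len(tape):
            tape.append("_")           # the tape is unbounded; grow it as needed
        write, move, state = TABLE[(state, tape[head])]
        tape[head] = write
        head += move
    return tape

print("".join(run(list("111"))))  # "1111": unary three incremented to four
```

Whether the table encodes unary arithmetic or, conceivably, the contextual and semantic rules of a language is a difference of size, not of kind, which is exactly Turing’s point.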

 

Turing’s response here is adequate to defeat Searle’s claim, because the core of Searle’s argument, that computers are incapable of understanding language, does not hold up: computers are, in principle, able to encode and manipulate language with the same prowess and understanding as humans.  On Turing’s response to Searle’s reply, strong AI should be metaphysically possible, regardless of whether we can confirm this claim epistemologically.

 

With his Chinese Room example, Searle hoped to prove that strong AI cannot exist because computers do not truly understand the symbols they manipulate.  However, Turing can respond that Searle is too quick in assuming that we will be able to identify artificial consciousness; Searle’s main argument depends on the assumption that humans can even know of inhuman minds, and his reasoning for this assumption goes unexplained.  Searle could say that strong AI is still impossible, regardless of any epistemic uncertainty attached to it, because computers do not understand semantics, but Turing ultimately defeats this reply by stating that nothing stops programmers from equipping computers with such knowledge.  In the end, Searle’s argument does not pose much of a threat to the dreamers of a future with strong AI living among us, much to the joy of futurists and sci-fi geeks alike.  Our tradition of desiring to create mental facsimiles of ourselves can therefore continue, unabashed, knowing that Searle’s argument does not threaten such a future.