This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Here are two sentences from the article:
Both sentences claim that strong AI has been somehow discredited or abandoned by most "thinkers". I don't see any evidence for this, but it is an opinion which both sentences implicitly push by their phrasing. There is no counterbalance in the article, and the first sentence occurs in a general expository context; I think it should be rephrased to comply with the undue weight policy.
The first sentence should read "nearly all computer scientists", because right now, if a machine actually passed a Turing test, nearly all computer scientists would admit it was thinking. The second sentence, I think, should be qualified by noting that Harnad is talking about philosophers, and that within the scientific fields (cognitive science and computer science) computationalism and strong AI are conflated, and both are still majority positions. Likebox ( talk) 23:01, 17 June 2009 (UTC)
In the entry for schizophrenia it says : "Despite its etymology, schizophrenia is not the same as dissociative identity disorder, previously known as multiple personality disorder or split personality, with which it has been erroneously confused."
So the line here: "e.g. by considering an analogy of schizophrenia in human brain" is probably using the term wrongly. Myrvin ( talk) 14:44, 21 June 2009 (UTC)
I think there has been enough time for someone to gather sources and tie this paragraph into the literature. I'm going to delete it if no one objects. ---- CharlesGillingham ( talk) 20:26, 14 September 2009 (UTC)
You need to understand both the source and the target language to translate; every real translator knows that. Searle's setup of a person who does not know Chinese and yet translates it to English and back is unrealistic. Even a thought experiment (*especially* a thought experiment) must have a realistic basis. This fault in Searle's theory is enough to invalidate it. Marius63 ( talk) 21:47, 18 September 2009 (UTC)marius63
I have a more fundamental question about this whole experiment: what does it mean "to understand"? We can make any claim as to humans that do or don't understand, and the room that does or doesn't. But how do we determine that anything "understands"? In other words, we make a distinction between syntax and semantics, but how do they differ? These two are (to me) the two extremes of a continuous attribute. Humans typically "categorize" everything and create artificial boundaries in order for "logic" to be applicable. Syntax is the "simple" side of thought, where grammar rules are applied, e.g. "The ball kicks the boy": grammatically correct, but semantically wrong (the other side of the spectrum) - rules of the world, in addition to grammar, tell us that whoever utters this does not "understand". In effect, we say that environmental information is also captured as rules, which validate an utterance on top of grammar. To understand is to perceive meaning, which in turn implies that you are able to infer additional information from a predicate by the application of generalized rules of the environment. These rules are just as writable into this experiment's little black book as grammar is. For me, the categorization of rules, and the baptism of "to understand" as "founded in causal properties" (again undefined), creates a false thought milieu in which to stage this experiment. (To me, a better argument in this debate on AI vs. thought is that a single thought processes an infinite amount of data - think chaos theory and analog processing - whereas digital processes cannot. But this is probably more relevant elsewhere.) —Preceding unsigned comment added by 163.200.81.4 ( talk) 05:35, 11 December 2007 (UTC)
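A minimal sketch in Python of the point above, that grammar rules and "rules of the world" have the same form and so both could be written into the experiment's rule book. Every rule, word list, and function name here is a hypothetical toy example, not a real parser:

    # Toy illustration: syntax and semantics as two layers of rule-following.
    LEXICON = {"the": "DET", "ball": "NOUN", "boy": "NOUN", "kicks": "VERB"}
    GRAMMAR = [("DET", "NOUN", "VERB", "DET", "NOUN")]  # one toy sentence pattern
    WORLD = {("boy", "kicks")}  # plausible (agent, action) pairs

    def syntactically_ok(words):
        tags = tuple(LEXICON.get(w) for w in words)
        return tags in GRAMMAR

    def semantically_ok(words):
        agent, verb = words[1], words[2]  # toy assumption: the agent is word 2
        return (agent, verb) in WORLD

    sentence = ["the", "ball", "kicks", "the", "boy"]
    print(syntactically_ok(sentence))  # True: grammar alone accepts it
    print(semantically_ok(sentence))   # False: world rules reject "ball kicks"

Both checks are just table lookups; on this view, which tables the rule book contains is all that separates "syntax" from "semantics".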
The notion before "understanding" is "meaning". What does "mean" mean? I think it means "a mapping", as in the idea of function in mathematics, from a domain to a range. It is an identification of similarities. It is like finding the referent, similar to the question: what kind of tree would you be if you were a tree? In this sense, syntax and semantics are not irrevocably unrelated. A mapping from one system of symbols into another system of symbols is meaning, is semantics. Then "understanding" becomes the ability to deal with mappings. ( Martin | talk • contribs 10:13, 8 July 2010 (UTC))
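A minimal sketch of the "meaning as mapping" idea above, with hypothetical toy dictionaries standing in for the two symbol systems; the point is only that meaning is modeled as a composition of mappings:

    # Toy illustration: meaning as a mapping composed across symbol systems.
    chinese_to_english = {"狗": "dog", "跑": "run"}
    english_to_referent = {"dog": "ANIMAL(canine)", "run": "ACTION(locomotion)"}

    def meaning(symbol):
        """Map a symbol through both systems; None if no referent is found."""
        word = chinese_to_english.get(symbol)
        return english_to_referent.get(word) if word else None

    print(meaning("狗"))  # ANIMAL(canine)

On this picture, "understanding" would be the ability to construct and follow such mappings.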
Likebox, the statement I tagged and which you untagged is not a quote, and I don't see how it could be read into the text. What Searle says is that the example shows there could be two "systems," both of which pass the Turing test, but only one of which understands. This does not imply "that two separate minds can't be present in one head". If you have another source for that, please cite it. Regards, Paradoctor ( talk) 17:56, 26 May 2009 (UTC)
Likebox, I removed your citation because the source you cited is not reliable. It's an unreviewed draft of a paper that has seen no updates since 1997, and is linked by nobody. Thornley himself is not a reliable source; he is just an amateur (in this field) with a better education than most. Also note that the quoted statement is not equivalent to the one you seek to support: "two separate minds can't be present in one head" is not the same as "there is not two minds"; the former is a statement about the limits of human heads (or rather brains), the latter a statement about the special case of the internalized CRA. You can use the former to support the latter, but you'll need a reliable source for that. And you would still have to show that Searle did assume the statement in question. —Preceding unsigned comment added by Paradoctor ( talk • contribs) 18:31, 28 May 2009 (UTC)
I'm having a little trouble figuring out if we have resolved this issue, so just to make it clear what we're talking about, here's the current structure of the "Finding the mind" section (where "->" means "rebutted by"):
CHINESE ROOM -> SIMPLE SYSTEMS REPLY (pencils, etc.) -> MAN MEMORIZES RULES (no pencils) -> (Ignore). (CR) -> SOPHISTICATED SYSTEMS REPLY (virtual mind) -> JUST A SIMULATION -> WHAT'S WRONG WITH SIMULATION? CR FAILS. THIS DOESN'T PROVE STRONG AI.
It is possible to also add rebuttals to the "man memorizes", like this:
CHINESE ROOM -> SIMPLE SYSTEMS REPLY (pencils, etc.) -> MAN MEMORIZES RULES (no pencils) -> MAN HAS TWO MINDS IN ONE HEAD. (CR) -> SOPHISTICATED SYSTEMS REPLY (virtual mind) -> ... etc.
There are sources for "man has two minds in one head", and so this paragraph could be written, if someone wants to. I'm not interested in writing it myself, and I think the article is better without it.
The reason I believe it to be a "waste of the reader's time": Searle's description of the systems reply is a straw man. He thinks the systems reply ascribes intentionality to a set of objects, one of which is a pencil. Searle refutes this reply by throwing out the pencil, and thinks he has thereby disproved the systems reply. Most readers have no idea what Searle is doing or why it matters if there is no pencil. As Likebox says, it's "mysterious". The problem is that no one actually ascribed intentionality to the pencil. We all ascribe intentionality to something else. We might call it "software" or "neural correlates" or "a running program" or an "emulated mind" or a "virtual mind" or a "functional system", etc. But we're definitely not talking about just the pencil and the other physical objects. What's required is a more sophisticated version of the systems reply and, in this article, that's what Minsky's "virtual mind" argument is supposed to supply.
So the article, as it is written, allows Searle to have "the last word" against his straw man, and then moves on to the real argument: a sophisticated systems reply that actually works, followed by Searle's basic objection to computationalism, that a simulated mind is not the same as a real mind. I think these are the most important points on the ontological side of the argument.
So ... he hesitates ... are we all happy with that? ---- CharlesGillingham ( talk) 18:51, 31 May 2009 (UTC)
I think that this area of the question (virtual minds, or, better, second minds) is interesting, and I don't have a settled opinion. When you get to emergent characteristics, they can be of a small scale. Bubbles in boiling water are small, but they are a change in the emergent property: gas and not liquid. And you have to add a lot of energy before the temperature starts to go up again. If there is a second mind, emergent, can it be small: a mosquito mind, but of a mosquito-savant? Additionally, I believe that people do have multiple minds, though probably only the dominant one is usually connected to the sensory and motor cortex. This multiplicity is most clearly seen in dreams, where you, the hero, can be surprised by the words or actions of supporting characters. The search for an unconscious mind hidden in the brain seems to me more likely to succeed than a search for a mind in a metal box, or in any system containing a pencil. ( Martin | talk • contribs 18:49, 8 July 2010 (UTC))
What Searle apparently never gets is that it is not the computer that is thinking (just as it is not the person); it is the program. What he says is exactly the same as saying that the universe does not think, and is thereby just a distraction that is unrelated to the original question.
What he also does not get is that the program would not follow any "programmed rules". That's not how neural networks work.
And my ultimate counter-argument is that if we simulate a brain, down to every function, it will be the same as a human brain, and will think and understand.
If you believe that there must(!!!1!1!one) be some higher "something" to it, because we are oooh-so-special (and this is where Searle's motivation ultimately stems from), then you are a religious nutjob, and not somebody with the ability to argue about this.
This is another proof that if you wisecrack like an academic in front of people who have no idea of the subject, they will think you are right, no matter what crap you tell them.
— 88.77.184.209 ( talk) 21:13, 6 June 2009 (UTC)
I disagree that the experiment is stupid. But it just doesn't work for Searle's ideas. It's a valid attack on computationalism that just happens to fail. ;) nihil ( talk) 22:53, 1 April 2010 (UTC)
I have a question. What if the positions were reversed? The Chinese speaker has a list of commands written in binary, which he doesn't understand. He gives these to the computer. The computer reads the code, and performs an action. The computer clearly associates the symbols with the action. Does it therefore 'understand' what the symbols mean? The human doesn't understand the symbols. Does this mean he cannot think? —Preceding unsigned comment added by 62.6.149.17 ( talk) 08:18, 14 September 2009 (UTC)
Hi, please help us conclude whether there's a basis in WP policy to remove a Cultural References section from this article - and specifically to remove information about a feature film which is named for and concerns this topic. I'd really appreciate first taking a look at the mediation of the topic which lays out the major arguments. (At the bottom you can find the mediator's response to the discussion.) Sorry to bring it back to this Talk Page, but it seems to be the next step. Reading glasses ( talk) 18:18, 27 April 2010 (UTC)
Thanks. I'm hoping to also hear from 3rd party editors who have not already been part of the discussion and mediation. And I'll refrain from reproducing the whole discussion here. Reading glasses ( talk) 02:42, 28 April 2010 (UTC)
Another RFC Comment Trivia sections always seem to be contentious, and I think there must be some fundamental moral divide as to how serious Wikipedia is supposed to be. Last time I checked, Wikipedia policy on trivia sections is pretty vague, and I don't think looking for that type of authority will give either of you the answers you are looking for. My own opinion, apart from policy, is to not include trivia unless it's really interesting (if you're going to add something that detracts from Wikipedia's seriousness, it had better be fun). This film tidbit doesn't strike me as interesting at all.
That being said, I do kind of like the screenshot. It captures the essence of the 'Chinese room' (a non-Chinese person analyzing a Chinese message using a big algorithm) much better than the current picture, and it's pretty to boot. If someone wanted to include the picture, appropriately captioned, I wouldn't object. -- Rsl12 ( talk) 21:42, 4 May 2010 (UTC)
"In popular culture" sections should contain verifiable facts of interest to a broad audience of readers. Exhaustive, indiscriminate lists are discouraged, as are passing references to the article subject.... If a cultural reference is genuinely significant it should be possible to find a reliable secondary source that supports that judgment. Quoting a respected expert attesting to the importance of a subject as a cultural influence is encouraged. Absence of these secondary sources should be seen as a sign of limited significance, not an invitation to draw inference from primary sources.
Animal Farm in Popular Culture: Hookers in Revolt is a retelling of the Animal Farm story, where the prostitutes revolt against the pimps to take over management of their bordello, only to turn more corrupt than the pimps ever were (cite trailer where the director says as much).
When trying to decide if a pop culture reference is appropriate to an article, ask yourself the following:
If you can't answer "yes" to at least one of these, you're just adding trivia. Get all three and you're possibly adding valuable content.
Oh dear, it appears that this RfC hasn't helped much. Please understand, my reference to trivia was not made pejoratively, only to indicate that the matter is secondary to the main subject of the page, and, indeed, potentially popular culture. As for whether or not it is popular enough, I'm afraid that's in the eye of the beholder, and the beholders here appear unlikely to agree or compromise. -- Tryptofish ( talk) 19:23, 5 May 2010 (UTC)
I've come for the RfC and think that material should be included because it's innocuous and relevant. Leadwind ( talk) 03:48, 10 May 2010 (UTC)
This line is a little confusing but I don't know how to fix it. Is it needed?
If this means only that a design is different from its implementation, that is true of everything. If it means something else, what is it? ( Martin | talk • contribs 18:43, 2 July 2010 (UTC))
Can't we change
to
( Martin | talk • contribs 16:52, 2 July 2010 (UTC))
(I put this above the comment above the paragraph below, to be out of the way.) A fuller quote from the Stanford Encyclopedia is below, posted by μηδείς at 23:15, 3 July 2010 (UTC)
Here is a quote from Searle (Chapter 5: Can machines think). There is no obstacle in principle to something being a thinking computer. ... Let's state this carefully. What the Chinese Room argument showed is that computation is not sufficient to guarantee the presence of thinking or consciousness. ... ( Martin | talk • contribs 12:15, 5 July 2010 (UTC))
Martin, that is a most excellent quote. Can you provide a full citation for the quote which you attribute to "Chapter 5: Can machines think"? What book and what page? μηδείς ( talk) 17:09, 5 July 2010 (UTC)
Medeis, here is a link to essentially that statement: http://machineslikeus.com/interviews/machines-us-interviews-john-searle "Could a man-made machine -- in the sense in which our ordinary commercial computers are man-made machines -- could such a man-made machine, having no biological components, think? And here again I think the answer is there is no obstacle whatever in principle to building a thinking machine," My quote above is from an audio. Excerpt: http://yourlisten.com/channel/content/51656/Can%20a%20machine%20think ( Martin | talk • contribs 03:05, 6 July 2010 (UTC))
The lead paragraph is: The Chinese room argument comprises a thought experiment and associated arguments by John Searle (1980), which attempts to show that a symbol-processing machine like a computer can never be properly described as having a "mind" or "understanding", regardless of how intelligently it may behave.
This is not a joke or a quibble. I believe my change was reverted in error. ( Martin | talk • contribs 21:35, 30 June 2010 (UTC))
Yes, "attempts [sic] to show" implies that he was wrong and hence violates NPOV. The current lead:
except for being too strong is quite good. I have changed the wording to "mere symbol processing machine" since Searle does not deny that brains process symbols. He holds that they do more than that: that they deal with semantics as well as manipulating symbols using syntactical rules. μηδείς ( talk) 04:22, 1 July 2010 (UTC)
It seems to me that the introduction of the word "mere", in order to fix the problem with the word "never", results in a less satisfactory statement than what I wrote to begin with:
(The above is taken from a comparison of versions) but which I would now rewrite as
The word "machine" doesn't help at all, and black-box is what he was describing, as in the Turing Test. He does show us the inside, but only as a magician might, to help us see. So I object to the word "mere", as begging the question. Does the elevator come because it realized that we pushed the button? It depends.( Martin | talk • contribs 00:39, 2 July 2010 (UTC))
The word mere to qualify symbol-processing machines was most emphatically not introduced to fix any problem with the word never.
The issue is that Searle nowhere denies that one of the capacities of the brain is the ability to manipulate syntax. He does not deny that the brain processes symbols. What he denies is that symbol processing alone entails comprehending a meaning; he denies that syntax entails semantics. He writes, "The Chinese room argument showed that semantics is not intrinsic to syntax." (tRotM, p 210) The brain does indeed process symbols, but it also does other things which impart semantic meaning to those symbols. Hence the unqualified statement without mere includes the brain and is simply false.
An analogy would be saying that a diving organism can never be described as flying, or that a photosynthetic organism can never be described as carnivorous, when pelicans do dive and euglena do eat. Adding the word mere corrects the overgeneralization.
As currently stated, without any other changes, the sentence needs the word mere or some other qualification to exclude other faculties.
Also, while MartinGugino opposes the use of the word, he does so in the context of supporting a different lead from the current one, and of admitting [1] that he needs to read further on the subject.
I do not oppose rewriting the lead; I find the general criticisms valid. But as it stands, without the word mere, the lead is simply overbroad and false. μηδείς ( talk) 03:51, 2 July 2010 (UTC)
This, "the Chinese Room does have a mind, since it has a man" is an equivocation. We might as well say that if there is a man sitting on a chair in the closet of a fishing shack on an otherwise deserted island that the chair and the closet and the shack and the island all have minds because they have men.
Searle's position is simple. The intelligent behavior of the Chinese room is parasitic upon the consciousness of the person who wrote the manual whose instructions the agent sitting inside the room follows. The person who wrote the manual is the homunculus, and the consciousness of Chinese and how to use it in the context of the world resides in him, not in the room or the manual or the man in the room or all of the latter together. They are just tools following the instructions of the actually conscious programmer, after the fact.
Of course, this is not a forum for discussion of the topic, but of the article.
As far as the lead, I suggest we simply quote Searle as to what he himself says his arguments accomplish. He does this at length in The Rediscovery of the Mind, especially in the summary of chapter nine, p225-226. μηδείς ( talk) 21:33, 2 July 2010 (UTC)
Re: "consciousness does not have a location" - is the mind is in the head? ( Martin | talk • contribs 14:32, 7 July 2010 (UTC)) Re:"The mind involved is that of the manual writer" - what if he is dead? ( Martin | talk • contribs 14:39, 7 July 2010 (UTC))
I think you all have brought up a solid objection to the current lead: it's not clear what a "computer" is. This is why it is important to μηδείς that we identify the computer as a "mere symbol processor", and why Martin thinks that the presence of the man's mind implies the computer "has" a mind. I don't think the current lead solves this problem.
I think that we need to keep the lead simple enough that a person who is just trying to, say, distinguish Chinese box from Chinese room, can read a sentence or two without being inundated with hair-splitting details. So the lead should have just a sentence. How about this:
By emphasizing the program, rather than the machine, I hope I am making it clear that we are talking about symbol processing. (Is the word "create" okay? It's a common-usage word that doesn't have the ontological baggage of "cause" or "cause to exist", both of which are less clear to me.)
But this is not enough to fully clarify the issues that you all have raised. I think we need a one- or two-paragraph section under Searle's targets that describes what a computer is: it should mention Von Neumann architecture, Turing machine, Turing complete, Formal system and physical symbol system. It should explain the Church-Turing thesis and how it applies to the Chinese room. It should mention dualism, and make it clear that Searle is a materialist, and that his argument only applies to computers and programs, not to machines in general. It could also mention the Turing test and black-box functionalism, which will make more sense in this context. ---- CharlesGillingham ( talk) 20:36, 3 July 2010 (UTC)
The Chinese room argument comprises a thought experiment and associated arguments by John Searle (1980) that a computer program can't create a 'mind' or 'understanding' in a computer, no matter how intelligently it may make the computer behave.
Here is the introductory paragraph of the Chinese Room article of the online Stanford Encyclopedia of Philosophy:
I find the first sentence fatally misleading, since it equivocates on what artificial intelligence is. By Strong AI, Searle means the notion that "the mind is just a computer program." (Return, 43). He nowhere states that an artificial and intelligent brain is an impossibility. (Note the importance of the capitalization, or lack thereof, of the term "Artificial Intelligence.") I think the remainder of the introduction is accurate, and note with especial satisfaction the use of the word merely in qualification of the use of syntactic rules to manipulate symbol strings. μηδείς ( talk) 23:15, 3 July 2010 (UTC)
The lead is an improvement over the prior one. Yet you give undue weight to Harnad, mentioning him in two paragraphs and as often as Searle himself. This: "The argument has generated an extremely large number of replies and refutations; so many that Harnad concludes that "the overwhelming majority still think that the Chinese Room Argument is dead wrong"" amounts to synthesis. Does Harnad say that, or are you attributing his statement to the large number of "refutations"? And it amounts to POV: to say they are refutations is to say that Searle has been refuted. The simple solution is to return to my one-sentence condensation of Harnad.
You also say that "The program uses artificial intelligence to perfectly simulate the behavior of a Chinese-speaking human being." The use of "artificial intelligence" and "perfectly" amounts to begging the question and should be deleted. Also, the man simply follows the instructions; he is not simulating a computer per se, but just following a program. It should say something neutral like "in which a man who does not himself speak Chinese responds sensibly to questions put to him in Chinese characters by following the instructions of a program written in a manual."
As for the syntax/semantics statement, I will find a replacement that speaks of the manipulation of symbols and their meaning. μηδείς ( talk) 23:57, 4 July 2010 (UTC)
Read the first couple of pages of the article and see if you agree."And make no mistake about it, if you took a poll -- in the first round of BBS Commentary, in the Continuing Commentary, on comp.ai, or in the secondary literature about the Chinese Room Argument that has been accumulating across both decades to the present day (and culminating in the present book) -- the overwhelming majority still think the Chinese Room Argument is dead wrong, even among those who agree that computers can't understand!"
"my Chinese Room argument ... showed that a system could instantiate a program so as to give a perfect simulation of some human cognitive capacity, such as the capacity to understand Chinese, even though that system has no understanding of Chinese whatever." p. 45, bold italics mine.
I changed a sentence in the lead paragraph. The Chinese room is widely seen as an argument against the claims of AI, as opposed to the practice of AI. Clarification of an ambiguous assertion. ( Martin | talk • contribs 15:28, 7 July 2010 (UTC)) I see that there is an attribution of this formulation to Larry Hauser. Certainly this is not sufficient reason to include an imprecise phrase in a lead paragraph. Is Searle against AI in toto? ( Martin | talk • contribs 15:47, 7 July 2010 (UTC)) I suppose that the phrase, as it was, could have been construed correctly to mean that "the Chinese Room is widely seen as an argument against the claim of the achievement of manufactured intelligence", but it seems to me that many people would have thought that the phrase "against artificial intelligence" meant "opposing AI", and that a clarification is helpful.( Martin | talk • contribs 16:04, 7 July 2010 (UTC))
The lead says: "The argument applies only to machines that execute programs in the same way that modern computers do and does not apply to machines in general." This is not clear:
( Martin | talk • contribs 16:19, 7 July 2010 (UTC))
It is very hard to be clear, I see. ( Martin | talk • contribs 20:06, 7 July 2010 (UTC))
Ah, yes, it isn’t a black-box. I used the term because it is associated with behaviorism, where people care about the behavior of a box and not its contents. The Turing Test is about behavior only. But that term, black-box, is not essential to anything I said. ( Martin | talk • contribs 21:41, 7 July 2010 (UTC))
You say “The argument applies only to machines that execute programs in the same way that modern computers do and does not apply to machines in general.” Let’s say that next to this Chinese Room #1 there was a Mystery Room #2, and it behaved exactly as Searle’s Chinese Room #1 behaves. Would Searle say that his argument applied to Room #2 as well, without looking inside the room? Yes he would: He would say it may, or it may not, be intelligent. So his argument does not apply only to “machines that execute programs in the same way that modern computers do”, but to any object where all you know is the behavior. ( Martin | talk • contribs 21:52, 7 July 2010 (UTC))
Can it "understand" Chinese? | Chinese speaker | Chinese speaker's brain | Chinese room with the right program | Computer with the right program | Black box with the right behavior | Rock |
---|---|---|---|---|---|---|
Dualism says | Yes | No | No | No | No | No |
Searle says | Yes | Yes | No | No | No one can tell | No |
Functionalism and computationalism (i.e. "strong AI") say | Yes | Yes | Yes | Yes | No one can tell | No |
Behaviorism and black box functionalism say | Yes | Yes | Yes | Yes | Yes | No |
AI (or at least, Russell, Norvig, Turing, Minsky, etc) says | Yes | Yes | Don't care | Close enough | Doesn't matter | No |