DO NOT EDIT OR POST REPLIES TO THIS PAGE. THIS PAGE IS AN ARCHIVE.
This archive page covers approximately the dates between DATE and DATE.
Post replies to the main talk page, copying or summarizing the section you are replying to if necessary.
Please add new archivals to Talk:Artificial intelligence/Archive02. (See Wikipedia:How to archive a talk page.) Thank you. moxon 01:20, 21 October 2005 (UTC)
Some of the more technical parts of AI are missing, such as links to rule-based languages, fuzzy logic, the Rete algorithm, forward chaining, backward chaining, expert systems, perceptrons, neural networks, simulated annealing, etc.
My suggestion is to add a subtopic such as "AI Implementation" or "AI Technology".
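To give a flavor of one of the techniques listed above, forward chaining is simple to sketch: repeatedly fire any rule whose premises are all satisfied until no new facts can be derived. This is an illustrative toy only (all rule and fact names are invented), not a description of any particular rule engine:

```python
# Minimal forward-chaining sketch. Rules are (premises, conclusion) pairs;
# we keep firing rules whose premises are all known until the fact set
# stops growing (a fixed point).

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)  # rule fires: derive a new fact
                changed = True
    return facts

# Hypothetical example rules.
rules = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
]
print(forward_chain({"has_fur", "gives_milk", "eats_meat"}, rules))
```

Backward chaining runs the same rules in the other direction, starting from a goal and recursively seeking rules that could establish it.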
I don't know if you'd agree, but I think the original vision of AI has by now been thoroughly discredited by ethical problems. Creatures that satisfy the original definition of "intelligence", e.g. Great Apes, are not accorded the respect of personhood. Meanwhile, stupid programs of bad research continue to propagate themselves due to funding inertia and the influence of top researchers like Minsky, who haven't produced anything worth a damn in years. Welcome to tenure, I guess, but the people building robot insects, or hooking up humans into cyborg colonies (whoops, forgot to mention Steve Mann under collective intelligence), or talking about augmenting apes with speech synthesizers just don't believe any of the nonsense that Minsky believed...
Deception is the key difference between humans and Great Apes. Like four-year-old children, Great Apes do not have a "theory of mind" that enables them to lie *convincingly* by imagining how the other will think things are. Human children acquire this at four and a half or so. Great Apes never do. They lie very badly.
The thing called "intelligence" seems to me to be a combination of perception, planning, empathy, cognition and deception. One decides for oneself which to test for, what to emphasize, and what to extinguish. AI is a nonsense goal, just "infinite symbol manipulation" really, some of which symbols are maybe good enough maps to walk over rough terrain in a robot insect, but none of which are good enough to deceive a suspicious adult human.
Firstly, thanks for the edits. The argument is now much more readable.
I still have some serious problems with the article as it stands, however.
Firstly, some specific nitpicks:
Modern theorists often reject the assertion that this, or playing chess, is in fact what humans mean when they recognize each other as being conscious, wise, or aware. Turing's Test highlights these questions by suggesting that adult humans perhaps assume too much based on mere language - while paradoxically rejecting or ignoring the intelligence of Great Apes, who can master 2000-4000-word vocabularies.
Firstly, all the Church-Turing thesis actually says is that anything that can be effectively computed can be computed on a Turing machine. Many people do draw the implication that anything the brain "computes" can be computed on a Turing machine, so any limitations of a Turing machine are also limitations of the brain. Beyond that, I can't see your contention. Your earlier comment that it defined "all forms of symbolic or linguistic intelligence as being equivalent to the Turing Machine, which in turn was equivalent to 'mathematicians doing proofs in the usual way'" isn't directly reflected in the article, and the second part of that comment isn't obvious to me. Could you spell it out for me really slowly and clearly?
Secondly, who are "modern theorists", and what is the "this" in the phrase "the assertion that this"?
Thirdly, whilst it's hardly a peer-reviewed academic journal, an ABC News article credits chimpanzees with a 240-word vocabulary, an order of magnitude less than the 2000-4000 quoted in the article (a stat that tallies with my own recollection of the topic). Furthermore, from what I remember of my undergraduate psychology studies, signing great apes can't construct actual sentences - the best they can do is possibly construct two-word phrases - "tickle me" and "feed me" being by far the most common :)
One lists 150-1000, and is written by anti-personhood experimenters. It seems to argue for the equivalent of a two-year-old human child's skills - while the advocates argue that they're more like four-year-olds. The famous Koko the gorilla had 1000 words in ASL. One group claims that adult orang-utans can master 2000 words ("he already has a 2000-word vocabulary in sign language") and is apparently shifting its claims to satisfy skeptics; bonobos trained from childhood can master 4000, according to the people doing the keyboard work. I can't validate the 4000 - maybe they withdrew it until they can satisfy all the skeptics, or maybe they projected that number from comparisons of early progress? It does seem to require intensive training to get this far. There seem to be no challengers to Koko's claim to naming and simple direct verb-object skills. There is some question whether she can invent words. But not all of this is interesting to AI, except insofar as differences between species may eventually tell us much about the cognitive skills of the living creatures closest to us... but OK, enough: let's limit claims to the 150-1000 and note the "disputed claims of 2000 or more" arising from sign language.
What difference does it make to AI whether chimps, gorillas, dolphins or parrots are "intelligent" or not? It might matter to ethicists, theologians, and psychologists, but to me it seems of little philosophical import to the practice of building systems to solve problems which computers currently don't do very well but which humans (and to a large extent animals) do well, which to me seems to be the practical goal of AI.
As to your "ethical argument", you're absolutely right - I disagree. Whilst the "personhood" of the great apes is certainly open to debate, it has no relevance to AI research. As to the frankly disappointing results of AI research to date, that is certainly true - AI research hasn't achieved nearly as much as many thought it would. However, I don't see that it follows that AI is fundamentally impossible.
"Why do you drink?" - The Little Prince
"To forget." - The Tippler
"What must you forget?"
"That I drink!"
'mother' bond with the machine and vice versa. It may sound ridiculous to think of it this way, but it is the way humans develop, and if we wish to protect the machine from a wrong direction in progression then we have to provide an example that means something to it. And anyway, it's nothing more than a brand manager's job, only taking the job very seriously, applying a human perspective to the machine."
OK, there's the nitpicks out of the way.
More generally, I find the tone of the article overly negative, and its view of AI somewhat narrow. There are quite a few alternative definitions of what AI is about, and most workers in the area are focussed on modest tasks and have, over the decades, made real progress at them. I intend to add considerably more material on this later, at which point I hope to continue the discussions with you guys :) -- Robert Merkel
Removed from the article:
What evidence is there for this? Cites?
Firstly, the goal of this article, like any other article on Wikipedia, is to present the topic fairly and accurately, and let people draw their own conclusions. When the topic is an academic discipline or school of thought, a fair and accurate presentation includes fair and accurate coverage of the criticisms of such a school of thought, attributed to the people who make them.
Given that, we need to:
I believe an article presenting things in pretty much that order is the way to go.
What do you think?
Yanked from the article:
Taking the second sentence first, that's a rather novel interpretation of the significance of the Turing Test, to say the least, and one that doesn't really fit with the reading of Turing's original paper IMHO. It might be a more reasonable response to some of the counterarguments raised about the Turing Test (notably the Chinese Room).
As to the first sentence, I can't parse it. -- Robert Merkel
Fixed. The "mere language" issue is now illustrated by the apparent "human racism" of the theorists who reject Great Apes' intelligence. It's not that Turing's paper highlighted the issue of what matters in language, but rather that Turing's Test itself did - by failing to convince people it was decisive.
The first sentence is also fixed, and refers more directly to the Church-Turing thesis, which was the specific contribution, and which defined all forms of symbolic or linguistic intelligence as being equivalent to the Turing Machine, which in turn was equivalent to "mathematicians doing proofs in the usual way".
I removed the following paragraph:
AxelBoldt, Wednesday, April 10, 2002
For the benefit of the skeptics among us, wouldn't it be desirable to list some of the outstanding successes of AI so far? (other than beating Kasparov at chess, I'm not sure what those are; and even that I'm not sure counts as genuine AI) -- Seb
There are far fewer such successes than the average person would expect. There are some essays about AI chatbots at http://www.alicebot.org/ (the ALICE chatbot won the "best chatbot" Loebner Prize in 2000 and 2001, so the author knows what he is talking about), in which the author says that there has been almost no progress in this field of AI since ELIZA.
The most important things to notice:
-- Taw
213, you suggest you'd like to refactor this article. Would you like to suggest an outline (have a look at my brief suggestion above) so we can collaborate on such a rewrite (which I've been meaning to do myself but haven't got around to)? BTW, have you considered getting yourself a handle? 213.x.x.x is so impersonal :> -- Robert Merkel 11:11 Oct 10, 2002 (UTC)
I am thinking about (re?)writing a section, the History of AI, reusing some information already present and also introducing more events of interest. It is probably best to break this section out as a separate article, I believe. Any ideas? / Vidstige 11:50, 2 Mar 2004 (UTC)
Regarding the von Neumann quote "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!", I come to think of a similar quote that first appeared in the context of high-altitude balloons and later with space probes: "There is only one thing humans can do that instruments cannot, but why would anyone want to do that there?". // Liftarn
Could somebody cite specific interest from AI researchers in ape intelligence? Neural networks were originally inspired by brain research (though later neural nets don't resemble the biological model much), and there has been some research into "artificial insect" ideas, but ape cognition doesn't seem to be of sufficient interest to single it out. -- Robert Merkel 03:33, 20 Aug 2003 (UTC)
The following comment added by an anonymous user, was removed from the article and moved here -- Lexor 07:26, 13 Jan 2004 (UTC)
I'm afraid some of us have to disagree on that. As shown in Planet of the Apes, apes may soon take over from humans at the end of the humans' reign. However, this corresponds startlingly to the plight of the robots, though they evolve only by the hands of mankind. Someday in the future, apes, robots, and the like may take over the world that has been at the fingertips of our race forever. - Legolas of the Elves of Mirkwood
was titled "A million words of memory"
In the sixties, an eminent AI researcher—possibly John McCarthy—said something to the effect that there were no longer any significant theoretical or practical barriers to the achievement of AI except hardware limitations, and that he could demonstrate AI as soon as someone would fund his acquisition of a machine with a million words of memory. This is just fuzzy middle-aged memory and I don't have a citation for it. Does anyone have one? Seems to me this would be worth a mention in the main article if it could be confirmed. The exact wording and context are important, of course. I'm guessing the reference would be to an IBM 709, 7090, or 7094, in which case a million words would correspond to about four-and-a-half megabytes. :-) Dpbsmith 16:56, 4 Feb 2004 (UTC)
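For what it's worth, the four-and-a-half-megabyte figure checks out, assuming the 36-bit words of the 709/7090/7094 line:

```python
# A million 36-bit words, converted to (decimal) megabytes.
words = 1_000_000
bits_per_word = 36          # IBM 709/7090/7094 word size
total_bits = words * bits_per_word
total_bytes = total_bits // 8
print(total_bytes / 1_000_000)  # 4.5
```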
This article (currently) starts
No. That's like saying evolution is the practice of working out how the different species came to exist. I deliberately chose a controversial (in the US, at least) topic. Evolution is thought by some (most!) to be the way that different species have come to exist; it isn't the practice of describing how. Now please do not argue with me about evolution: I was just trying to find a controversial subject to compare with AI.
Similarly: AI is artificial intelligence, whether you believe that such a thing is possible or not. It is not some process of trying to make something appear intelligent, however unlikely you think AI is. Just because you think AI unlikely doesn't mean you can deny what it is (or might be).
Another example: "UFOs" is not the practice of faking the photographs. AI, similarly, is not the practice of making something (i.e. artificial) appear intelligent. It is the artificial intelligence itself, whether you believe in it or not.
One or two of the contributors to this article seem to have an axe to grind. Wikipedia is supposed to be neutral. I am going to re-write that 1st paragraph.
Psb777 23:54, 24 Jan 2004 (UTC)
Arthur T. Murray, a.k.a. Mentifex, is a notorious net.kook who has been spamming and mass-mailing his pseudoscientific writings for over thirty years. He is now repeatedly adding to Wikipedia pages inappropriate references to his own work, and repeatedly removing from pages information which presents an opposing point of view to his theories or gives evidence which may cause others to see them in a negative light. For example, he has repeatedly inserted his own name in the "List of Prominent AI Theorists" section of the AI article.
As no serious AI researcher considers Murray's work to be anything but crackpottery, please help keep this page and others related to AI free of kookery.
Please see the Arthur T. Murray/Mentifex FAQ for further details on Murray's claims and posting history. This FAQ links to much of Murray's own writing so you can make your own independent assessment of it.
— Psychonaut 16:18, 20 Feb 2004 (UTC)
I've started up an article stub on Mentifex, although it's very hard to maintain NPOV on something like this. Any additional info or NPOV changes would be greatly appreciated. -- FleaPlus 19:10, 6 May 2004 (UTC)
While I can understand the deletion, I still think it's useful to be able to look up information on suspected cranks (i.e. Time Cube) to try to get an unbiased description and analysis of their claims. Even if the article I wrote up wasn't NPOV and encyclopedic enough, I hope that someone else would still be able to write one. I personally believe that Mentifex has been around for such a long time and had such an impact on Internet discussions that he deserves an article. -- NeuronExMachina 10:42, 22 Jul 2004 (UTC)
Ok, I see what you mean. I still hope in the future that there might be a Mentifex article, though it would indeed likely need a number of caretakers dedicated to maintaining its NPOV. -- NeuronExMachina 09:19, 23 Jul 2004 (UTC)
Perhaps there needs to be a kookwiki. 170.35.224.64 15:49, 10 Jan 2005 (UTC)
It seems to me this article is much improved. That all views are represented. There is a lot to be disputed on this, the Talk page, but the article seems NPOV to me. Can we remove the The neutrality of this article is disputed. tag now? Psb777 10:11, 10 Feb 2004 (UTC)
<outing content redacted>
But there is no Mentifex who has "contributed" here. We are having to put up with an Arthur Murray who keeps on trying to foist his drivel on us and Psychonaut is doing an excellent job beating him off. Maybe you have the wrong page? Psb777 15:39, 10 Feb 2004 (UTC)
To remove AI material without any knowledge of its substance is vandalism. Instead of suppressing new ideas, Wikipaedophiles ought to welcome them. -- User:66.248.100.42 (presumably Arthur T. Murray)
I apologise for removing the material reinstated by Psychonaut. Paul Beardsell 14:52, 20 Feb 2004 (UTC)
The second is much harder, raising questions of consciousness and self, mind (including the unconscious mind) and the question of what components are involved in the only type of intelligence it is universally agreed we have available to study: that of human beings. Study of animals and artificial systems that are not just models of what exists already are widely considered very pertinent, too.
Could someone be so kind to translate the above text to plain English? Vidstige 18:15, 8 Mar 2004 (UTC)
What don't you understand? "The second" is the question "What is intelligence?" Does it make sense now? Mr. Jones 20:11, 4 Jul 2004 (UTC)
Is the content of this page really appropriate for its topic? You'd think that the page "Artificial Intelligence" would act as an overview of the field and a gateway to subtopics in AI (which may not exist yet) -- and would be in understandable English. The sections and topics are still badly opinionated, too. The arguments aren't presented in anything resembling order. And what's with the "Electronic wavelet holographic interference" stuff?
Maybe this page's problems stem from its position as a major topic page, or it could just be people wiki-stomping on it all the time. Whatever's gone wrong, however, it needs to be fixed.
I'm halfway tempted to rewrite the page from scratch, if nobody minds. It's a bloody mess.
I do agree that we need some more coverage of the field, rather than only the philosophical controversies, which are well-known in the field but generally don't take up much of its time (if only because many of them are basically intractable—"yes computers can be sentient" or "no computers can't be sentient"). In particular, a good overview of the symbolic vs. subsymbolic controversy is a necessary starting place, and then some overviews of various other approaches within the field. I'll try to start adding some when I get some time. -- Delirium 18:40, Sep 27, 2004 (UTC)
Currently: "Artificial intelligence, also known as machine intelligence, is defined as intelligence exhibited by anything manufactured (i.e. artificial) by humans or other sentient beings or systems (should such things ever exist on Earth or elsewhere)."
I propose: "Artificial intelligence, also known as machine intelligence, is defined as intelligence exhibited by anything manufactured (i.e. artificial)."
Mention of "other sentient beings" and "Earth or elsewhere" is needlessly speculative and sounds almost kookish. Don't want to tread on toes though.
An automated Wikipedia link suggester has some possible wiki link suggestions for the Artificial_intelligence article, and they have been placed on this page for your convenience.
Tip: Some people find it helpful if these suggestions are shown on this talk page, rather than on another page. To do this, just add {{User:LinkBot/suggestions/Artificial_intelligence}} to this page. —
LinkBot 01:01, 18 Dec 2004 (UTC)
I just reverted a change in which what appeared to be a sketch paper was added to the last section. I did this firstly because the change was inappropriate content for an encyclopedia, and secondly because any paper should be published elsewhere before it comes close to being of encyclopedic relevance. Barnaby dawson 08:19, 8 May 2005 (UTC)
The first paragraph needs the addition of more fashionable research areas. Artificial life might not be fashionable any more and Bayesian Networks are certainly not the only fashionable research area in AI. What is attracting funding and conference attendee attention these days?
A special edition of the Journal of Consciousness Studies (a peer-reviewed journal) was dedicated to Machine Consciousness [1]. Machine Consciousness and other variations are described under Artificial Consciousness, which is a wider term (covering, for example, possible systems with biological components). So please study the issue a bit more, and be more careful while deleting. Even psychology is not, strictly speaking, a science, and neither are consciousness studies, etc. This was not a good reason to delete that link; if anyone has other considerations, please say so. Tkorrovi 22:10, 25 May 2005 (UTC)
Many of us are fixated on the production of robots. However, not many of us have stopped and thought about what might happen to civilization if robots go too far..... and begin to develop a mind of their own. Maybe, in the future, technology will grant them the ability to walk, and talk, and maybe do all sorts of things that humans can do today. Mayhaps they will form an army, and proceed to obliterate all true life on the planet. To get a better grasp on this concept, read Eoin Colfer's novel, "The Supernaturalist".
Please feel free to add your opinion to this matter
Legolas of the Elves of Mirkwood
I find the third paragraph to be pretty annoying. It currently reads:
Not to be nasty here, but why is Kevin Warwick here? He appears to be just a minor character (at least his notoriety is recent) in the field, though I grant he could possibly be aggravating. Why aren't we mentioning here, for example, Marvin Minsky or Japan's Fifth Generation project? KarlHallowell 20:12, 22 August 2005 (UTC)
I was checking out the latest entry, Sankar K Pal to the list of AI researchers. His publication list (on Citeseer) is rather small (though several other people on the list have similar records) and he appears (at a glance) to have written at least one book and have a long career as a founder and administrator of AI-related research programs in India. OTOH, it's somewhat pedantic, but his research does seem rather limited.
So what qualities does someone need to warrant inclusion on this list? -- KarlHallowell 14:45, 30 August 2005 (UTC)
I have started an AI portal. The idea is that the information in this article not concerning the definition of AI be moved to appropriate sub-articles. Assistance in this process will be appreciated. -- Mneser 16:56, 9 October 2005 (UTC)
Please do. I hope the preliminary new links I created are sufficient. The idea is not to expand the portal too much. -- Mneser 01:20, 11 October 2005 (UTC)
The "Machines displaying some degree of intelligence" section is silly. There are many programs which display some level of intelligence. Also, most of those listed are actually programs, not machines. I suggest replacing it with "Famous implementations of AI" and including Deeper Blue and other famous AI bots (and excluding links; if it doesn't have an article it doesn't deserve to be listed). Objections? This article does need a lot of work. Broken S 20:21, 17 October 2005 (UTC)
DO NOT EDIT OR POST REPLIES TO THIS PAGE. THIS PAGE IS AN ARCHIVE.
This archive page covers approximately the dates between DATE and DATE.
Post replies to the main talk page, copying or summarizing the section you are replying to if necessary.
Please add new archivals to Talk:Artificial intelligence/Archive02. (See Wikipedia:How to archive a talk page.) Thank you. moxon 01:20, 21 October 2005 (UTC)
si
Some of the more technical parts of AI are missing such as links to the Rule based languages, fuzzy logic, Rete Algorithm, forward chaining, backward chaining, expert systems, perceptron, neural networks, simulated annealing, etc.
My suggestion is to add a subtopic such as "AI Implementation" or "AI Technology"
I don't know if you'd agree, but I think the original vision of AI has by now been thoroughly discredited by ethical problems. Creatures that satisfy the original definition of "intelligence", e.g. Great Apes, are not accorded the respect of personhood. Meanwhile, stupid programs of bad research continue to propagate themselves due to funding inertia and influence of top researchers like Minsky, who haven't produced anything worth a damn in years. Welcome to tenure, I guess, but the people building robot insects or hooking up humans into cyborg colonies (woops forget to mention Steve Mann under collective intelligence) or talking about augmenting apes witih speech synthesizers just don't believe any of the nonsense that Minsky believed...
Deception is the key difference between humans and Great Apes. Like four year old children, Great Apes do not have a "theory of mind" that enables them to lie *convincingly* by imagining how the other will think things are. Human children acquire this at four and a half or so. Great Apes never do. They lie very badly.
The thing called "intelligence" seems to me to be a combination of perception, planning, empathy, cognition and deception. One decides for oneself which to test for, and what to emphasize, and what to extinct. AI is a nonsense goal, just "infinite symbol manipulation" really, some of which symbols are maybe good enough maps to walk over rough terrain in a robot insect, but none of which are good enough to deceive a suspicious adult human.
Firstly, thanks for the edits. The argument is now much more readable.
I still have some serious problems with the article as it stands, however.
Firstly, some specific nitpicks:
Modern theorists often reject the assertion that this, or playing chess, is in fact what humans mean when they recognize each other as being concious, wise, or aware. Turing's Test highlights these questions by suggesting that adult humans perhaps assume too much based on mere language - while paradoxically rejecting or ignoring the intelligence of Great Apes, who can master 2000-4000 word vocabularies.
Firstly, all the Church-Turing thesis actually says is that anything that can be effectively computed, can be computed on a Turing machine. Many people do draw the implication that anything the brain "computes" can be computed on a Turing machine, so any limitations of a Turing machine are also limitations of the brain. Beyond that, I can't see your contention. Your earlier
comment that "all forms of symbolic or linguistic intelligence as being equivalent to the Turing Machine, which in turn was equivalent to "mathematicians doing proofs in the usual way" isn't directly reflected in the article, and the second part of that comment isn't obvious to me. Could you spell it out for me really slowly and clearly?
Secondly, who are "modern theorists", and what is the "this" in the phrase "the assertion that this"?
Thirdly, whilst it's hardly a peer-reviewed academic journal,
ABC news article credits chimpanzees with a 240-word vocabulary, an order of magnitude less than the 2000-4000 quoted in the article (a stat that tallies with my own recollection of the topic). Furthermore, from what I remember of my undergraduate psychology studies, signing great apes can't construct actual sentences - the best they can do is possibly construct two-word phrases - "tickle me" and "feed me" being by far the most common :)
one lists 150-1000, and is written by anti-personhood experimenters. Seems to argue equivalent of two year old human child's skills - while the advocates argue that they're more like four year olds. The famous Koko the gorilla had 1000 words in ASL. One group claims that adult orang-utans could master 2000 words "he already has a 2000-word vocabulary in sign language" and is shifting apparently to satisfy skeptics, bonobos trained from childhood can master 4000, according to the people doing the keyboard work. Can't validate the 4000 - maybe they withdrew it until they can satisfy all the skeptics - or maybe they projected that number based on comparisons of early progress? It does seem to require intensive training to get this far. There seem to be no challengers to Koko's claim to naming and simple direct verb object skills. There is some question whether she can invent words. but not all of this is interesting to AI except insofar as differences between species may eventually tell us much about cognitive skills of the highest order of living creatures closest to us... but ok enough, let's limit claims to the 150-1000 and note the "disputed claims of 2000 or more" arising from sign language.
What difference does it make to AI whether chimps, gorillas, dolphins or parrots are "intelligent" or not? It might matter to ethicists, theologans, and psychologists, but to me it seems of little philosophical import to the practice of building systems to solve problems which computers currently don't do very well but which humans (and to a large extent animals) do well, which to me seems to be the practical goal of AI.
As to your "ethical argument", you're absolutely right - I disagree. Whilst the "personhood" of the great apes is certainly open to debate, it has no relevance to AI research. As to the frankly disappointing research of AI research to date, that is certainly true - AI research haves't achieved nearly as much as many thought it would. However, I don't see that it follows that AI is fundamentally impossible.
"why do you drink?" - The Little Prince "to forget" - The Tippler "what must you forget?" "that I drink!"
'mother' bond with the machine and vice-versa. It may sound ridiculous to think of it this way but it is the way humans develop and if we wish to protect the machine from a wrong direction in progression then we have to provide an example that means something to it. And anyway its nothing more than a brand manager's job only taking the job very seriously, applying a human perspective to the machine."
OK, there's the nitpicks out of the way.
More generally, I find the tone of the article overly negative and somewhat narrow view of AI. There are quite a few alternative definitions of what AI is about, and most workers in the area are focussed on modest tasks and, over
the decades, made real progress at them. I intend to add considerably more material on this later, at which point I hope to continue the discussions with you guys :) -- Robert Merkel
Removed from the article:
What evidence is there for this? Cites?
Firstly, the goal of this article, like any other article on Wikipedia, is to present the topic fairly and accurately, and let people draw their own conclusions. When the topic is an academic discipline or school of thought, a fair and accurate presentation includes fair and accurate coverage of the criticisms of such a school of thought, attributed to the people who make them.
Given that, we need to:
I believe an article presenting things in pretty much that order is the way to go.
What do you think?
Yanked from the article:
Taking the second sentence first, that's a rather novel interpretation of the significance of the Turing Test, to say the least, and one that doesn't really fit with the reading of Turing's original paper IMHO. It might be a more reasonable response to some of the counterarguments raised about the Turing Test (notably the Chinese Room).
As to the first sentence, I can't parse it. -- Robert Merkel
Fixed. The "mere language" issue is now illustrated by the apparent "human racism" of the theorists who reject Great Apes' intelligence. It's not that Turing's paper highlighted the issue of what matters in language but rather than Turing's Test itself did - by failing to convince people it was decisive.
The first sentence is also fixed, and refers more directly to the CHurch-Turing Thesis, which was the specific contribution, and which defined all forms of symbolic or linguistic intelligence as being equivalent to the Turing Machine, which in turn was equivalent to "mathematicians doing proofs in the usual way"
I removed the following paragraph:
AxelBoldt, Wednesday, April 10, 2002
For the benefit of the skeptics among us, wouldn't it be desirable to list some of the outstanding successes of AI so far? (other than beating Kasparov at chess, I'm not sure what those are; and even that I'm not sure counts as genuine AI) -- Seb
There are much less such successes than average person would expect. There are some essayes about AI chatbots on http://www.alicebot.org/ (ALICE chatbot won "best chatbot" Loebner prize in 2000 and 2001, so author known what he is talking about), where author says that there was almost no progress in this field of AI since Eliza.
The most important things to notice:
-- Taw
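For readers unfamiliar with what "no progress since ELIZA" refers to: ELIZA-style chatbots work by shallow pattern matching and canned reflections, with no model of meaning. A minimal illustrative sketch (the rules and wording below are invented for illustration, not Weizenbaum's original DOCTOR script):

```python
import re

# Illustrative ELIZA-style rules: (regex pattern, response template).
# Any captured text is echoed back into the reply.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(utterance):
    """Return a canned reflection for the first matching rule."""
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default non-committal reply

print(respond("I am feeling anxious"))  # How long have you been feeling anxious?
```

The program never understands anything; it only reflects surface strings, which is the core of the "no real progress" complaint above.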
213, you suggest you'd like to refactor this article. Would you like to suggest an outline (have a look at my brief suggestion above) so we can collaborate on such a rewrite (which I've been meaning to do myself but haven't got around to)? BTW, have you considered getting yourself a handle? 213.x.x.x is so impersonal :> -- Robert Merkel 11:11 Oct 10, 2002 (UTC)
I am thinking about (re?)writing a section, the History of AI, reusing some information already present and also introducing more events of interest. It is probably best to break this section out into a separate article, I believe. Any ideas? / Vidstige 11:50, 2 Mar 2004 (UTC)
Regarding the von Neumann quote "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!" I come to think of a similar quote that first appeared in the context of high-altitude balloons and later with space probes: "There is only one thing humans can do that instruments can not, but why would anyone want to do that there?". // Liftarn
Could somebody cite specific interest from AI researchers in ape intelligence? Neural networks were originally inspired by brain research (though later neural nets don't resemble the biological model much), and there has been some research into "artificial insect" ideas, but ape cognition doesn't seem to be of sufficient interest to single it out. -- Robert Merkel 03:33, 20 Aug 2003 (UTC)
The following comment added by an anonymous user, was removed from the article and moved here -- Lexor 07:26, 13 Jan 2004 (UTC)
I'm afraid some of us have to disagree on that. As shown in Planet of the Apes, apes may soon take over from humans at the end of the humans' reign. However, this corresponds startlingly to the plight of the robots, though they evolve only by the hands of mankind. Someday in the future, apes, robots, and the like may take over the world that has been at the fingertips of our race forever. - Legolas of the Elves of Mirkwood
was titled "A million words of memory"
In the sixties, an eminent AI researcher—possibly John McCarthy—said something to the effect that there were no longer any significant theoretical or practical barriers to the achievement of AI except hardware limitations, and that he could demonstrate AI as soon as someone would fund his acquisition of a machine with a million words of memory. This is just fuzzy middle-aged memory and I don't have a citation for it. Does anyone have one? Seems to me this would be worth a mention in the main article if it could be confirmed. The exact wording and context are important, of course. I'm guessing the reference would be to an IBM 709, 7090, or 7094, in which case a million words would correspond to about four-and-a-half megabytes. :-) Dpbsmith 16:56, 4 Feb 2004 (UTC)
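The "four-and-a-half megabytes" figure above checks out if one assumes the 36-bit word size of the IBM 709/7090/7094 family (the machine identification is the commenter's guess, not established fact):

```python
# Back-of-the-envelope check of the memory figure quoted above.
WORD_BITS = 36          # word size on the IBM 709/7090/7094
WORDS = 1_000_000       # "a million words of memory"

# bits -> bytes -> megabytes (decimal megabytes, 10^6 bytes)
megabytes = WORDS * WORD_BITS / 8 / 1_000_000
print(megabytes)        # 4.5
```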
This article (currently) starts
No. That's like saying evolution is the practice of working out how the different species came to exist. I deliberately choose a controversial (in the US at least) topic. Evolution is thought by some (most!) to be the way that different species have come to exist, it isn't the practice of describing how. Now please do not argue with me about evolution: I was just trying to find a controversial subject to compare with AI.
Similarly: AI is artificial intelligence, whether you believe that such a thing is possible or not. It is not some process of trying to make something appear intelligent, however unlikely you think AI is. Just because you think AI unlikely doesn't mean you can deny what it is (or might be).
Another example: UFOs are not the practice of faking the photographs. AI, similarly, is not the practice of making something (i.e. artificial) appear intelligent. It is the artificial intelligence itself, whether you believe in it or not.
One or two of the contributors to this article seem to have an axe to grind. Wikipedia is supposed to be neutral. I am going to re-write that 1st paragraph.
Psb777 23:54, 24 Jan 2004 (UTC)
Arthur T. Murray, a.k.a. Mentifex, is a notorious net.kook who has been spamming and mass-mailing his pseudoscientific writings for over thirty years. He is now repeatedly adding to Wikipedia pages inappropriate references to his own work, and repeatedly removing from pages information which presents an opposing point of view to his theories or gives evidence which may cause others to see them in a negative light. For example, he has repeatedly inserted his own name in the "List of Prominent AI Theorists" section of the AI article.
As no serious AI researcher considers Murray's work to be anything but crackpottery, please help keep this page and others related to AI free of kookery.
Please see the Arthur T. Murray/Mentifex FAQ for further details on Murray's claims and posting history. This FAQ links to much of Murray's own writing so you can make your own independent assessment of it.
— Psychonaut 16:18, 20 Feb 2004 (UTC)
I've started up an article stub on Mentifex, although it's very hard to maintain NPOV on something like this. Any additional info or NPOV changes would be greatly appreciated. -- FleaPlus 19:10, 6 May 2004 (UTC)
While I can understand the deletion, I still think it's useful to be able to look up information on suspected cranks (e.g. Time Cube) to try to get an unbiased description and analysis of their claims. Even if the article I wrote up wasn't NPOV and encyclopedic enough, I hope that someone else would still be able to write one. I personally believe that Mentifex has been around for such a long time and had such an impact on Internet discussions that he deserves an article. -- NeuronExMachina 10:42, 22 Jul 2004 (UTC)
Ok, I see what you mean. I still hope in the future that there might be a Mentifex article, though it would indeed likely need a number of caretakers dedicated to maintaining its NPOV. -- NeuronExMachina 09:19, 23 Jul 2004 (UTC)
Perhaps there needs to be a kookwiki. 170.35.224.64 15:49, 10 Jan 2005 (UTC)
It seems to me this article is much improved, and that all views are represented. There is a lot to be disputed on this, the Talk page, but the article seems NPOV to me. Can we remove the "The neutrality of this article is disputed" tag now? Psb777 10:11, 10 Feb 2004 (UTC)
<outing content redacted>
But there is no Mentifex who has "contributed" here. We are having to put up with an Arthur Murray who keeps on trying to foist his drivel on us and Psychonaut is doing an excellent job beating him off. Maybe you have the wrong page? Psb777 15:39, 10 Feb 2004 (UTC)
To remove AI material without any knowledge of its substance is vandalism. Instead of suppressing new ideas, Wikipaedophiles ought to welcome them. -- User:66.248.100.42 (presumably Arthur T. Murray)
I apologise for removing the material reinstated by Psychonaut. Paul Beardsell 14:52, 20 Feb 2004 (UTC)
The second is much harder, raising questions of consciousness and self, mind (including the unconscious mind) and the question of what components are involved in the only type of intelligence it is universally agreed we have available to study: that of human beings. Study of animals and artificial systems that are not just models of what exists already are widely considered very pertinent, too.
Could someone be so kind to translate the above text to plain English? Vidstige 18:15, 8 Mar 2004 (UTC)
What don't you understand? "The second" is the question "What is intelligence?" Does it make sense now? Mr. Jones 20:11, 4 Jul 2004 (UTC)
Is the content of this page really appropriate for its topic? You'd think that the page "Artificial Intelligence" would act as an overview of the field and a gateway to subtopics in AI (which may not exist yet) -- and would be in understandable English. The sections and topics are still badly opinionated, too. The arguments aren't presented in anything resembling order. And what's with the "Electronic wavelet holographic interference" stuff?
Maybe this page's problems stem from its position as a major topic page, or it could just be people wiki-stomping on it all the time. Whatever's gone wrong, however, it needs to be fixed.
I'm halfway tempted to rewrite the page from scratch, if nobody minds. It's a bloody mess.
I do agree that we need some more coverage of the field, rather than only the philosophical controversies, which are well-known in the field but generally don't take up much of its time (if only because many of them are basically intractable—"yes computers can be sentient" or "no computers can't be sentient"). In particular, a good overview of the symbolic vs. subsymbolic controversy is a necessary starting place, and then some overviews of various other approaches within the field. I'll try to start adding some when I get some time. -- Delirium 18:40, Sep 27, 2004 (UTC)
Currently: "Artificial intelligence, also known as machine intelligence, is defined as intelligence exhibited by anything manufactured (i.e. artificial) by humans or other sentient beings or systems (should such things ever exist on Earth or elsewhere)."
I propose: "Artificial intelligence, also known as machine intelligence, is defined as intelligence exhibited by anything manufactured (i.e. artificial)."
Mention of "other sentient beings" and "Earth or elsewhere" is needlessly speculative and sounds almost kookish. Don't want to tread on toes though.
An automated Wikipedia link suggester has some possible wiki link suggestions for the Artificial_intelligence article, and they have been placed on this page for your convenience.
Tip: Some people find it helpful if these suggestions are shown on this talk page, rather than on another page. To do this, just add {{User:LinkBot/suggestions/Artificial_intelligence}} to this page. —
LinkBot 01:01, 18 Dec 2004 (UTC)
I just reverted a change in which what appeared to be a sketch paper was added to the last section. I did this firstly because the change was inappropriate content for an encyclopedia and secondly because any paper should be published elsewhere before it comes close to being of encyclopedic relevance. Barnaby dawson 08:19, 8 May 2005 (UTC)
The first paragraph needs the addition of more fashionable research areas. Artificial life might not be fashionable any more and Bayesian Networks are certainly not the only fashionable research area in AI. What is attracting funding and conference attendee attention these days?
A special edition of the Journal of Consciousness Studies (a peer-reviewed journal) was dedicated to Machine Consciousness [1]. Machine Consciousness and other variations are described under Artificial Consciousness, which is a wider term (covering, for example, possible systems with biological components). So please study the issue a bit more, and be more careful when deleting. Even psychology is not, strictly speaking, a science, and neither are consciousness studies etc. This was not a good reason to delete that link; if anyone has other considerations, please say so. Tkorrovi 22:10, 25 May 2005 (UTC)
Many of us are fixated on the production of robots. However, not many of us have stopped and thought about what might happen to civilization if robots go too far... and begin to develop a mind of their own. Maybe, in the future, technology will grant them the ability to walk, and talk, and maybe do all sorts of things that humans can do today. Mayhaps they will form an army, and proceed to obliterate all true life on the planet. To get a better grasp of this concept, read Eoin Colfer's novel, "The Supernaturalist".
Please feel free to add your opinion to this matter
Legolas of the Elves of Mirkwood
I find the third paragraph to be pretty annoying. It currently reads:
Not to be nasty here, but why is Kevin Warwick here? He appears to be just a minor character (at least his notoriety is recent) in the field, though I grant he could possibly be aggravating. Why aren't we mentioning here, for example, Marvin Minsky or Japan's Fifth Generation project? KarlHallowell 20:12, 22 August 2005 (UTC)
I was checking out the latest entry, Sankar K Pal to the list of AI researchers. His publication list (on Citeseer) is rather small (though several other people on the list have similar records) and he appears (at a glance) to have written at least one book and have a long career as a founder and administrator of AI-related research programs in India. OTOH, it's somewhat pedantic, but his research does seem rather limited.
So what qualities does someone need to warrant inclusion on this list? -- KarlHallowell 14:45, 30 August 2005 (UTC)
I have started an AI portal. The idea is that the information in this article not concerning the definition of AI be moved to appropriate sub-articles. Assistance in this process will be appreciated. -- Mneser 16:56, 9 October 2005 (UTC)
Please do. I hope the preliminary new links I created are sufficient. The idea is not to expand the portal too much. -- Mneser 01:20, 11 October 2005 (UTC)
The "Machines displaying some degree of intelligence" section is silly. There are many programs which display some level of intelligence. Also, most of those listed are actually programs, not machines. I suggest replacing it with "Famous implementations of AI" and including Deeper Blue and other famous AI bots (and excluding links; if it doesn't have an article it doesn't deserve to be listed). Objections? This article does need a lot of work. Broken S 20:21, 17 October 2005 (UTC)