This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 1 |
The source code, even in QBASIC, is quite obscure — it looks pretty obscure to me, and I used to do the odd bit of programming in the language a few years ago! It's not going to be very helpful for 99% of our readers. I think we should, if not remove it outright, recast this as pseudocode, and, if it would be helpful, provide an external link to source code. — Matt 18:43, 20 Sep 2004 (UTC)
We could certainly recast the program as pseudocode as an aid to comprehension. However one of the reasons for writing it in QBASIC was to give an example chatterbot which would actually run "as is". A pseudocode version might be more useful to experienced programmers but perhaps less so to neophytes or non-programmers since it would be impossible for them to run it. By comparison an interested neophyte or non-programmer can get the current code running by following the simple instructions included in the current article.
An alternative to pseudocode would be to rewrite the program to make it clearer. For all the unusual layout of the program it is actually fairly simply structured, so that would not be difficult to do. -- Derek Ross | Talk 05:15, 2004 Sep 21 (UTC)
Those are fair points but I still don't feel that pseudocode is enough. I contributed the code in answer to the request at the top of this page, so the source code is a response to demand. However if QBASIC is no good perhaps you can suggest a better language. Something like awk or perl, perhaps? -- Derek Ross | Talk 03:23, 2004 Sep 23 (UTC)
Well, thanks, Anonymous User. I'm glad that at least one person has found the code informative. -- Derek Ross | Talk 05:43, 15 October 2005 (UTC)
The following sessions show how to teach WikiChat some simple conversation. Note that sessions are started by running the WikiChat program and ended by pressing the return key without giving a response to the computer. They show WikiChat being taught a basic conversational pattern in English.
(Session 1)
Human: hello there
Computer:
Human: hello
Computer:
Human: how are you
Computer:
Human: fine thanks how are you
Computer: fine thanks how are you
Human: fine thanks
Computer:
Human:
(Session 2)
Human: hello
Computer: how are you
Human: fine thanks how are you
Computer: fine thanks
Human: glad to hear it
Computer:
Human:
(Session 3)
Human: hello
Computer: how are you
Human: fine thanks
Computer: glad to hear it
Human:
The responses will be stored for the next session, so there is no need to reteach them. Note, however, that while this example chatterbot will create "original" sentences after a little teaching, it has a very limited capacity and works best with short sentences.
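Since the QBASIC listing itself is no longer reproduced on this page, the teach-by-example behaviour shown in the sessions above can be sketched in Python. This is only an illustrative reconstruction of the general idea, not the original algorithm; the learning rule and the memory file name are assumptions.

```python
# Illustrative sketch of a teach-by-example chatterbot like the sessions above.
# NOTE: a reconstruction of the general idea only, not the original QBASIC
# algorithm; the learning rule and memory file name are assumptions.

import json
import os

MEMORY_FILE = "wikichat_memory.json"  # hypothetical persistence file


def load_memory(path=MEMORY_FILE):
    """Load learned responses from a previous session, if any."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}


def save_memory(memory, path=MEMORY_FILE):
    """Store learned responses so the next session can reuse them."""
    with open(path, "w") as f:
        json.dump(memory, f)


def respond_and_learn(memory, previous, human_input):
    """Record human_input as a response to whatever was said previously,
    then answer with any response already learned for human_input."""
    if previous:
        memory[previous] = human_input
    return memory.get(human_input, "")


def session(lines, memory):
    """Run one scripted session; an empty line ends it, like pressing
    return without a response. Returns (human, computer) pairs."""
    transcript = []
    previous = ""
    for line in lines:
        if not line:
            break
        reply = respond_and_learn(memory, previous, line)
        transcript.append((line, reply))
        previous = reply if reply else line
    return transcript
```

After one teaching session along the lines of Session 1, the sketch already answers "hello" with "how are you", much as Session 2 shows.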
The sections between the lines above used to form part of the article. They all belong together. The example session makes no sense without the code it refers to. -- Derek Ross | Talk 16:12, 20 November 2005 (UTC)
In an effort to clean up Artificial intelligence, instead of completely removing a paragraph mostly concerning chatterbots, I copied it under "Chatterbots in modern AI". I noticed that it repeats some information already in this article, but there might also be some additions. Unfortunately I cannot spend time on a smooth merger right now. Sorry for the inconvenience. -- moxon 09:20, 20 October 2005 (UTC)
Aids to understanding are often not essential even when they are useful to understanding. The BASIC source code below demonstrates that these programs can be quite short and simple, even to people who don't understand computer programming. I am surprised that anyone should think that it was intended as a tutorial on programming. It might have some tutorial value as an example of a chatterbot (the topic of the article) but hardly as an example of programming. -- Derek Ross | Talk 16:18, 20 November 2005 (UTC)
I don't have the knowledge to contribute to this section, wish I did, but it doesn't seem to have much authority to it, no cites of statistics or links to articles, so it comes across as too anecdotal to be of any use. Especially the part that says "as well as on Gay.com chatrooms". Why is that reference somehow more notable than 'bots that appear on any of a thousand other forums? dawno 05:21, 18 June 2007 (UTC)
Shouldn't this be at Chatbot, since that is the most common name? -- Visviva 11:09, 18 November 2006 (UTC)
I think the paragraph about Malish should be removed. It doesn't apply specifically to chatterbots. The following paragraph (discussing Blockhead and the Chinese Room) belongs in Philosophy of artificial intelligence.-- CharlesGillingham 10:14, 26 June 2007 (UTC)
I've just come into the Wiki business (wockham, for William of Ockham), and apologise if I've got anything wrong in respect of how to use the Wiki.
I changed the section on AI research to try to make it reflect more faithfully how things really stand in the research world, for example:
1. AIML is not a programming language, but a markup language, specifying patterns and responses, not algorithms. And ALICE can't really be considered an AI system, because (as both the other content of this section and the initial section point out), it works purely by very simple pattern-matching, with nothing that can be called "reasoning" and hardly any use even of dynamic memory.
2. Jabberwacky can't properly be described as "closer to strong AI" or even really as "tackling the problem of natural language", because it doesn't actually make any attempt to understand what's being said. It is designed to score well as an imposter - as something that can pass as intelligent - rather than even attempting any genuine grasp or processing of the information conveyed in the conversation. It can give the impression of more "intelligence" than other chatbots, sure, because it does do a rudimentary kind of learning, but again, it seems very misleading to suggest that this really has anything significant to do with natural language research.
3. The previous version suggested that it's the failure of chatbots as language simulators that has "led some software developers to focus more on ... information retrieval". But this seems odd, as though such developers were desperate to find a use for chatbots, rather than (more plausibly) trying to find a way to solve an information retrieval problem. My version maintained the point that chatbots have proved of use in information retrieval (as also in help systems), but deliberately avoided any speculation about how those researchers might have come to have such interests.
4. I made substantial changes to the paragraph that said: "A common rebuttal often used within the AI community against criticism of such approaches asks, 'How do we know that humans don't also just follow some cleverly devised rules?' (in the way that Chatterbots do). Two famous examples of this line of argument against the rationale for the basis of the Turing test are John Searle's Chinese room argument and Ned Block's Blockhead argument." Here are my reasons:
(a) The argument that chatbots are moderately convincing, and therefore perhaps humans converse in the same way, is unlikely to be put forward by anyone "within the AI community". AI researchers are aiming to achieve some sort of genuinely intelligent information processing, and they are well aware of the serious difficulty of doing so. Only chatterbot enthusiasts are likely to come up with this argument, and most of them are engaged on a quite different task (see 2 above).
(b) The argument is anyway very weak, and I don't think it's fair to attack my rebuttal of it as just expressing a personal point of view. Maybe it could be put better, but the point I was making is that even an everyday conversation - for example, about what to wear or about football - requires some logical connection between the various sentences (e.g. what shirt will go with what skirt or trousers, or how the placement of one player in the team will have implications for other positions - e.g. that the same player can't be in more than one position). Now it is just obvious that this sort of thing is typical of human conversation, and equally obvious that chatbots (at any rate in their currently usual form) cannot handle such logical connections. So if it's worth putting the argument in the article, then it's also worth putting this obvious rebuttal of it (though again, it could no doubt be reworded).
(c) The stuff about Block and Searle was inaccurate. It suggested, for example, that John Searle's Chinese Room argument was "an example of this line of argument" which it isn't at all. Searle isn't arguing that human conversation is like chatterbots; on the contrary. But nor is he suggesting that AI systems are as crude as chatterbots: if he were, then nobody would take his argument seriously. What he's saying is that even if a computer system could achieve a logically coherent conversation (i.e. even if ambitious AI researchers could succeed), that still wouldn't give genuine semantic content to what the system says. All this really belongs in the section on Philosophy of AI. The most that could be said here (and it could be added) is that chatbots (arguably) provide some evidence against the usefulness of the Turing Test. If even a pattern-match-response chatbot can fool a human into thinking that it's intelligent, then obviously the ability to fool a human isn't any good as a criterion of intelligence.
Wockham 21:30, 31 August 2007 (UTC)
OK, thanks very much for this. I've got rid of the "despite the hype" and "But the answer is clear" stuff, and also the link to Elizabeth (before reading your note, in fact). I would hope, however, that the references to "help systems" and the potential of chatbots in education would be worthy of consensus (even if references to examples violate protocol).
In a section called "Chatterbots in Modern AI", and starting "Most modern AI research", I should have thought it important to reflect what is actually happening in the research world; that was why I confined my edits to that.
Regarding claims that are "unsourced", I honestly can't see that what I put is any worse than what was there before. What claims do you think need sourcing, that currently aren't?
Wockham 22:31, 31 August 2007 (UTC)
Artificial conversational entity seems to be too short and stubby to warrant its own article, and it is basically the same as a Chatterbot. Chatterbot is the more common name for this category of software.-- Cerejota ( talk) 18:40, 19 January 2009 (UTC)
I would like to propose that the term 'chatbot' should be leading, for just one reason: this term is by far the most often used:
Google Trends shows us when we compare chatbot, chatterbot, embodied conversation agent and Artificial conversational entity: http://www.google.com/trends?q=chatbot%2C+chatterbot%2C+embodied+conversation+agent%2C+Artificial+conversational+entity
That:
1. chatbot (by far on top)
2. chatterbot (30% usage in the US; 70% of US users use chatbot. Chatterbot is often used in Poland)
3. embodied conversation agent (rarely entered in Google)
4. Artificial conversational entity (rarely entered in Google)
So from a user point of view (not for academics or professionals), and Wikipedia is created for users, the term 'chatbot' should be used.
Furthermore, the term is much shorter, it sounds better, and it is much more likely to be adopted by an even larger group of users.
Therefore I believe that the various articles should be merged in a new article named chatbot.
Please discuss here.
It seems there is no objection to merge Ace into Chatterbot. It also seems there are good reasons to reverse the roles of Chatbot and Chatterbot, so Chatbot is the leading term. If no objections are posted here in the next couple of days, I shall merge the contents of Ace with the contents of Chatterbot into the leading title Chatbot, and redirect from both Chatbot and Ace. UdRuhm ( talk) 14:20, 2 November 2009 (UTC)
I would like to suggest that the "Method of operation" section could usefully be revised so as to reflect its title, which perhaps should be plural: "Methods" rather than "Method" (because chatbots vary, and even a single chatbot might use a number of different techniques). In particular, it would be good to give a few examples of the sorts of methods used by the original ELIZA (e.g. replacement of near-synonyms, sets of patterns and corresponding responses, replacement of "me" with "you" etc.), so that readers who don't know much about the subject can actually get a good feel for how basic chatbots work. I'm happy to have a go at this, focusing on examples from the famous ELIZA scripts (which will, I hope, avoid controversial choices among more recent systems). But before doing so, I'd like to know how other more long-term editors feel about it.
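To make the suggestion above concrete, the ELIZA-style techniques mentioned (sets of patterns with corresponding responses, pronoun reflection such as "me" → "you", and a default reply) can be sketched in a few lines of Python. The patterns and replies below are invented for illustration and are not taken from Weizenbaum's published script.

```python
# Illustrative sketch of two classic ELIZA-style techniques: pronoun
# "reflection" and simple pattern->response rules. The rules here are
# invented examples, not Weizenbaum's original script.

import re

# Reflect first/second person so a reply can echo the user's words back.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "i", "your": "my", "are": "am",
}


def reflect(text):
    """Swap pronouns word by word, e.g. 'my coffee' -> 'your coffee'."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())


# (pattern, response template) pairs; {0} is the captured fragment, reflected.
RULES = [
    (re.compile(r"i need (.*)"), "Why do you need {0}?"),
    (re.compile(r"i am (.*)"), "How long have you been {0}?"),
]


def eliza_reply(sentence):
    """Return the first matching rule's response, or a default reply."""
    s = sentence.lower().strip(".!?")
    for pattern, template in RULES:
        m = pattern.match(s)
        if m:
            groups = [reflect(g) for g in m.groups()]
            return template.format(*groups)
    return "Please go on."  # default when nothing matches
```

For example, `eliza_reply("I need my coffee")` yields "Why do you need your coffee?", which is the kind of transformation a "Methods" section could walk readers through.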
I don't know anything about Jabberwacky's methods, and the Wikipedia page on Jabberwacky says very little on this too. It would be good if whoever does know about it could add something on them (i.e. on the methods that it uses, not just the purpose of the methods - which is apparently to model how humans learn language). Shouldn't a "methods" section focus on how things are done? And I presume that the justification for giving this special space to Jabberwacky is that it apparently works differently from other chatbots. It would be nice to be told how!
For the same reason, I suggest the section would be much better to focus just on chatbots' methods of operation, and avoid the philosophical stuff on "understanding", Searle, Block etc. This anyway seems inaccurate (e.g. Searle is American, not British), almost completely unsourced (e.g. the "Much debate" passage), and too sketchy to be of much use. Wouldn't it be better just to refer readers to the section on the Turing test for all that sort of thing? People will be looking at this section to try to find out about chatbot methods of operation, not philosophical discussion about whether chatbots are of philosophical significance. WikkPhil ( talk) 17:13, 10 January 2010 (UTC)
I have removed a fair amount of quite loose description, some repetition and some overlinking. I don't believe I have taken out any hard facts. Charles Matthews ( talk) 14:34, 13 January 2010 (UTC)
You've made a big improvement, in my view. But I still think it would be nice to have a section that's really on the methods used, and I don't see that the stuff on Searle and Block belongs in the "background" of an entry on chatbots. Weizenbaum wrote ELIZA long before their work, and the only philosophical significance of chatbots seems to be to show how easily humans can be fooled by something that cannot - by any stretch of the imagination - be called genuinely intelligent (i.e. they cast doubt on the value of the Turing Test). Weizenbaum's paper said more or less that on p. 42: "This is a striking form of Turing's test. ... ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding". (That is the only mention of Turing or any other philosophy publication in his entire paper, so its overall effect is to downplay any philosophical significance at all, which I reckon is dead right.) Besides, Searle's "Chinese Room" doesn't hypothesise a chatbot, but a full-blown NLP algorithm that can generate a fully appropriate answer for any question put to it. I'm not sure that Searle thinks such a thing is possible, either: he just insists that even if it were possible, it would lack genuine intentionality. That really doesn't have much to do with chatbots at all, does it? WikkPhil ( talk) 22:36, 13 January 2010 (UTC)
Thanks, Charles. I've now redone the "Background" section accordingly, in what I hope will be found a useful way, with relevant historical stuff about ELIZA and Weizenbaum, and leading up to more recent uses. I don't think there's anything controversial here, and I trust people will agree that ELIZA deserves a particular focus (PARRY was also influential, but much more complex and difficult to imitate, and I don't think its "script" was ever published openly in the way that ELIZA's was).
I plan next - when I get the time - to add back a section on "Methods of Operation" as discussed above, again focusing on the techniques that ELIZA pioneered. If desired, I could say something about ALICE and AIML too here, because that seems to be the most used recent system. It would be good to know what other editors think of this idea (because I appreciate that it can be controversial mentioning some systems rather than others). Thanks again, Phil WikkPhil ( talk) 15:29, 16 January 2010 (UTC)
The scopes of these articles appear to be the same. If merging, it seems Chatterbot is the preferred target, as it has many more "what-links-here"s and more than 10x more traffic (according to http://stats.grok.se/). Mikael Häggström ( talk) 07:17, 12 March 2011 (UTC)
Are there beneficial chat robots? I recently replied to what might be a Yahoo Answers suicide-bait question. Then I realized that a chat robot that just sifted YA as well as blogs to find what appeared to be suicide preferences could say things like "wow, that totally makes me think of those wacky suicide prevention things like 1800chillout" or "ha! I detect romance gone awry I just placed a personals ad for you, if you live until monday you can see who responded" or "You seem pretty dedicated to things making sense Have you considered going to irs.gov to file early so your relations can get all that refundy goodness theyd otherwise miss out on?" These might be phrased more kindly as well as effectively to prevent suicide. As lifesaving software goes, you might prevent an actual death for every few hundred or thousand autoposts, and you might prevent many hundreds or thousands of emotively crummy suicide attempts as well. —Preceding unsigned comment added by 163.41.136.51 ( talk) 18:38, 23 May 2011 (UTC)
The result of the move request was: moved. ( non-admin closure) JudgeRM (talk to me) 15:36, 14 January 2017 (UTC)
Chatterbot → Chatbot – chatterbots are overwhelmingly referred to as chatbots nowadays. A simple Google search will reveal this: 2.8m hits for chatbot (meaning this), 0.3m hits for chatterbot. Keizers ( talk) 17:59, 5 January 2017 (UTC)
These are not really different concepts, but just different names for the same thing. Besides, both references for the "IM bot" article are about "chatbots". Amir E. Aharoni ( talk) 17:12, 26 June 2017 (UTC)
Hello fellow Wikipedians,
I have just modified one external link on Chatbot. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
Cheers.— InternetArchiveBot ( Report bug) 01:24, 7 October 2017 (UTC)
A chatbot (also known as a talkbot, chatterbot, Bot, IM bot, interactive agent, or Artificial Conversational Entity)
Can we get some sources for the names or remove some of them? It's already a handful and I'm not sure they're all equally notable. I especially think "Bot" is far too universal to restrict to chatbots. Prinsgezinde ( talk) 10:30, 11 August 2018 (UTC)
A discussion is taking place to address the redirect AIMBot. The discussion will occur at Wikipedia:Redirects for discussion/Log/2021 February 23#AIMBot until a consensus is reached, and readers of this page are welcome to contribute to the discussion. 𝟙𝟤𝟯𝟺𝐪𝑤𝒆𝓇𝟷𝟮𝟥𝟜𝓺𝔴𝕖𝖗𝟰 ( 𝗍𝗮𝘭𝙠) 12:50, 23 February 2021 (UTC)
Draft contains additional information that appears to be applicable. Robert McClenon ( talk) 21:35, 29 April 2021 (UTC)
This article is or was the subject of a Wiki Education Foundation-supported course assignment. Further details are available on the course page. Student editor(s): Ayspiff. Peer reviewers: Ayspiff.
Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT ( talk) 17:16, 16 January 2022 (UTC)
This article was the subject of a Wiki Education Foundation-supported course assignment, between 26 August 2019 and 11 December 2019. Further details are available on the course page. Student editor(s): Blakelevinson.
Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT ( talk) 18:52, 17 January 2022 (UTC)
Chatbots' reliance on a limited number of key words and stock responses, together with the increasing use of chatbots by business to replace live help, poses increasing problems for customers consigned to dealing with chatbots. The ultimate success of chatbot use by business will depend on the ability of business to program customers to communicate with chatbots and limit their inquiries to topics, content and language that chatbots can "understand." 134.56.232.206 ( talk) 17:29, 4 February 2023 (UTC)
No objections have been expressed to the suggestions above, so I hope to go ahead as planned within the next week. Does anyone know of reliable ELIZA implementations in a form that readers can inspect for themselves? Nearly all of the implementations listed on the ELIZA page are very different from Weizenbaum's original, and the one that genuinely follows his algorithm is a Java program which might make it hard to use for many people. Has anyone tried implementing a genuine ELIZA in AIML, for example, or any other simple scripting language? (I'm not sure whether AIML can handle all the processes that Weizenbaum uses, but presume it can.) Thanks, Phil WikkPhil ( talk) 14:14, 30 January 2010 (UTC)
Sorry, "the next week" was very optimistic! I've still not found any reliable ELIZAs in the form of an easily-comprehensible script (e.g. AIML) that will enable examples to be given in a usable and testable format. Any suggestions for how to move forward on this? If no luck, I'll just have to explain things informally. WikkPhil ( talk) 18:05, 20 August 2010 (UTC)
This edit request to Chatbot has been answered. Set the |answered= or |ans= parameter to no to reactivate your request.
Change:
A 2017 study showed that only 4% of companies were using chatbots. [1] However, according to a 2016 study, around 80% of businesses expressed their intention to implement chatbots by the year 2020. [2]
To:
On May 19, 2022, Meta announced the global public availability of the cloud-based platform, enabling businesses of all sizes worldwide to directly connect with WhatsApp through the WhatsApp Cloud API. [3] Chatbots on WhatsApp have emerged as a prominent reference channel for consumers, establishing themselves as an integral part of the omnichannel e-commerce experience. [4]
Cesarcas1994 ( talk) 04:41, 9 June 2023 (UTC)
Suppose we can remove this, probably not an important study and outdated as well: "A study by Forrester (June 2017) predicted that 25% of all jobs would be impacted by AI technologies by 2019." Karlaz1 ( talk) 16:04, 14 June 2023 (UTC)
This edit request to Chatbot has been answered. Set the |answered= or |ans= parameter to no to reactivate your request.
Later in 2022, a chatbot by the name ChatGPT launched. Totallynotmwa ( talk) 20:29, 4 July 2023 (UTC)
This edit request to Chatbot has been answered. Set the |answered= or |ans= parameter to no to reactivate your request.
Suggestion for "Further reading": König, Wolfgang (2023): Why a Chatbot Didactics Model? The Grey-Box-Model of Chatbot-Didactics. http://dx.doi.org/10.13140/RG.2.2.32217.29280 2003:D1:6737:4200:D406:D001:9A9D:A334 ( talk) 06:14, 14 August 2023 (UTC)
Not done: Self-published monograph. See also: Wikipedia:Further reading. Xan747 ✈️ 🧑✈️
This edit request to Chatbot has been answered. Set the |answered= or |ans= parameter to no to reactivate your request.
I have found one dead link in this article, and I want to improve the link by adding well-written content on the same issue. The dead link is present in reference no. 18. Mindscrafter ( talk) 14:15, 22 August 2023 (UTC)
In a section called "Chatterbots in Modern AI", and starting "Most modern AI research", I should have thought it important to reflect what is actually happening in the research world; that was why I confined my edits to that.
Regarding claims that are "unsourced", I honestly can't see that what I put is any worse than what was there before. What claims do you think need sourcing, that currently aren't?
Wockham 22:31, 31 August 2007 (UTC)
Artificial conversational entity seems to be too short and stubby to warrant its own article, and it is basically the same as a Chatterbot. Chatterbot is the more common name for this category of software.-- Cerejota ( talk) 18:40, 19 January 2009 (UTC)
I would like to propose that the term 'chatbot' should be leading. Actually for just one reason: this term is by far the most often used:
Google Trends shows us when we compare chatbot, chatterbot, embodied conversation agent and Artificial conversational entity: http://www.google.com/trends?q=chatbot%2C+chatterbot%2C+embodied+conversation+agent%2C+Artificial+conversational+entity
That:
1. chatbot (by far on top)
2. chatterbot (30% usage in the US; 70% of US users use chatbot. Chatterbot is often used in Poland)
3. embodied conversation agent (rarely entered in Google)
4. Artificial conversational entity (rarely entered in Google)
So from a user point of view (not for academics or professionals), and Wikipedia is created for users, the term 'chatbot' should be used.
Furthermore, the term is much shorter, it sounds better, and it is much more likely to be adopted by an even larger group of users.
Therefore I believe that the various articles should be merged in a new article named chatbot.
Please discuss here.
It seems there is no objection to merge Ace into Chatterbot. It also seems there are good reasons to reverse the roles of Chatbot and Chatterbot, so Chatbot is the leading term. If no objections are posted here in the next couple of days, I shall merge the contents of Ace with the contents of Chatterbot into the leading title Chatbot, and redirect from both Chatbot and Ace. UdRuhm ( talk) 14:20, 2 November 2009 (UTC)
I would like to suggest that the "Method of operation" section could usefully be revised so as to reflect its title, which perhaps should be plural: "Methods" rather than "Method" (because chatbots vary, and even a single chatbot might use a number of different techniques). In particular, it would be good to give a few examples of the sorts of methods used by the original ELIZA (e.g. replacement of near-synonyms, sets of patterns and corresponding responses, replacement of "me" with "you" etc.), so that readers who don't know much about the subject can actually get a good feel for how basic chatbots work. I'm happy to have a go at this, focusing on examples from the famous ELIZA scripts (which will, I hope, avoid controversial choices among more recent systems). But before doing so, I'd like to know how other more long-term editors feel about it.
I don't know anything about Jabberwacky's methods, and the Wikipedia page on Jabberwacky says very little on this too. It would be good if whoever does know about it could add something on them (i.e. on the methods that it uses, not just the purpose of the methods - which is apparently to model how humans learn language). Shouldn't a "methods" section focus on how things are done? And I presume that the justification for giving this special space to Jabberwacky is that it apparently works differently from other chatbots. It would be nice to be told how!
For the same reason, I suggest the section would be much better to focus just on chatbots' methods of operation, and avoid the philosophical stuff on "understanding", Searle, Block etc. This anyway seems inaccurate (e.g. Searle is American, not British), almost completely unsourced (e.g. the "Much debate" passage), and too sketchy to be of much use. Wouldn't it be better just to refer readers to the section on the Turing test for all that sort of thing? People will be looking at this section to try to find out about chatbot methods of operation, not philosophical discussion about whether chatbot are of philosophical significance. WikkPhil ( talk) 17:13, 10 January 2010 (UTC)
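For readers who want a concrete feel for the techniques mentioned above (pattern/response pairs, replacement of "me" with "you", and similar), here is a minimal illustrative sketch in Python. This is not Weizenbaum's actual implementation, and the patterns and responses are invented examples; it only demonstrates the general pattern-match-and-reflect style of a basic ELIZA-like chatbot:

```python
import re

# Reflect first-person words to second-person, as ELIZA-style bots do.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "you": "I", "your": "my"}

# A few pattern -> response templates (hypothetical examples,
# not taken from the original ELIZA script).
PATTERNS = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*\bmother\b.*", re.I), "Tell me more about your family."),
]

def reflect(fragment):
    """Swap pronouns in a captured fragment ('my job' -> 'your job')."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    """Return the response for the first matching pattern."""
    for pattern, template in PATTERNS:
        m = pattern.match(sentence)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."  # default when nothing matches

print(respond("I need my coffee"))  # -> Why do you need your coffee?
```

The key design point, which the discussion above gets at, is that nothing here involves any understanding of the input: the bot only recognises surface patterns and echoes transformed fragments back.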
I have removed a fair amount of quite loose description, some repetition and some overlinking. I don't believe I have taken out any hard facts. Charles Matthews ( talk) 14:34, 13 January 2010 (UTC)
You've made a big improvement, in my view. But I still think it would be nice to have a section that's really on the methods used, and I don't see that the stuff on Searle and Block belongs in the "background" of an entry on chatbots. Weizenbaum wrote ELIZA long before their work, and the only philosophical significance of chatbots seems to be to show how easily humans can be fooled by something that cannot - by any stretch of the imagination - be called genuinely intelligent (i.e. they cast doubt on the value of the Turing Test). Weizenbaum's paper said more or less that on p. 42: "This is a striking form of Turing's test. ... ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding". (That is the only mention of Turing or any other philosophy publication in his entire paper, so its overall effect is to downplay any philosophical significance at all, which I reckon is dead right.) Besides, Searle's "Chinese Room" doesn't hypothesise a chatbot, but a full-blown NLP algorithm that can generate a fully appropriate answer for any question put to it. I'm not sure that Searle thinks such a thing is possible, either: he just insists that even if it were possible, it would lack genuine intentionality. That really doesn't have much to do with chatbots at all, does it? WikkPhil ( talk) 22:36, 13 January 2010 (UTC)
Thanks, Charles. I've now redone the "Background" section accordingly, in what I hope will be found a useful way, with relevant historical stuff about ELIZA and Weizenbaum, and leading up to more recent uses. I don't think there's anything controversial here, and I trust people will agree that ELIZA deserves a particular focus (PARRY was also influential, but much more complex and difficult to imitate, and I don't think its "script" was ever published openly in the way that ELIZA's was).
I plan next - when I get the time - to add back a section on "Methods of Operation" as discussed above, again focusing on the techniques that ELIZA pioneered. If desired, I could say something about ALICE and AIML too here, because that seems to be the most used recent system. It would be good to know what other editors think of this idea (because I appreciate that it can be controversial mentioning some systems rather than others). Thanks again, Phil WikkPhil ( talk) 15:29, 16 January 2010 (UTC)
The scopes of these articles appear to be the same. If merging, it seems Chatterbot is the preferred target, as it has many more "what-links-here"s and more than 10x more traffic (according to http://stats.grok.se/). Mikael Häggström ( talk) 07:17, 12 March 2011 (UTC)
Are there beneficial chat robots? I recently replied to what might be a Yahoo Answers suicide-bait question. Then I realized that a chat robot that just sifted YA as well as blogs to find what appeared to be suicide preferences could say things like "wow, that totally makes me think of those wacky suicide prevention things like 1800chillout" or "ha! I detect romance gone awry. I just placed a personals ad for you; if you live until Monday you can see who responded" or "You seem pretty dedicated to things making sense. Have you considered going to irs.gov to file early so your relations can get all that refundy goodness they'd otherwise miss out on?" These might be phrased more kindly as well as effectively to prevent suicide. As lifesaving software goes, you might prevent an actual death for every few hundred or thousand autoposts. You might prevent many hundreds or thousands of emotively crummy suicide attempts as well. —Preceding unsigned comment added by 163.41.136.51 ( talk) 18:38, 23 May 2011 (UTC)
The result of the move request was: moved. ( non-admin closure) JudgeRM (talk to me) 15:36, 14 January 2017 (UTC)
Chatterbot → Chatbot – chatterbots are overwhelmingly referred to as chatbots nowadays. A simple Google search will reveal this: 2.8m hits for chatbot (meaning this), 0.3m hits for chatterbot. Keizers ( talk) 17:59, 5 January 2017 (UTC)
These are not really different concepts, but just different names for the same thing. Besides, both references for the "IM bot" article are about "chatbots". Amir E. Aharoni ( talk) 17:12, 26 June 2017 (UTC)
Hello fellow Wikipedians,
I have just modified one external link on Chatbot. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018.
After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
Cheers.— InternetArchiveBot ( Report bug) 01:24, 7 October 2017 (UTC)
A chatbot (also known as a talkbots, chatterbot, Bot, IM bot, interactive agent, or Artificial Conversational Entity)
Can we get some sources for the names or remove some of them? It's already a handful and I'm not sure they're all equally notable. I especially think "Bot" is far too universal to restrict to chatbots. Prinsgezinde ( talk) 10:30, 11 August 2018 (UTC)
A discussion is taking place to address the redirect AIMBot. The discussion will occur at Wikipedia:Redirects for discussion/Log/2021 February 23#AIMBot until a consensus is reached, and readers of this page are welcome to contribute to the discussion. 𝟙𝟤𝟯𝟺𝐪𝑤𝒆𝓇𝟷𝟮𝟥𝟜𝓺𝔴𝕖𝖗𝟰 ( 𝗍𝗮𝘭𝙠) 12:50, 23 February 2021 (UTC)
Draft contains additional information that appears to be applicable. Robert McClenon ( talk) 21:35, 29 April 2021 (UTC)
This article is or was the subject of a Wiki Education Foundation-supported course assignment. Further details are available on the course page. Student editor(s): Ayspiff. Peer reviewers: Ayspiff.
Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT ( talk) 17:16, 16 January 2022 (UTC)
This article was the subject of a Wiki Education Foundation-supported course assignment, between 26 August 2019 and 11 December 2019. Further details are available on the course page. Student editor(s): Blakelevinson.
Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT ( talk) 18:52, 17 January 2022 (UTC)
Chatbots' reliance on a limited number of key words and stock responses, together with the increasing use of chatbots by business to replace live help, poses increasing problems for customers consigned to dealing with chatbots. The ultimate success of chatbot use by business will depend on the ability of business to program customers to communicate with chatbots and limit their inquiries to topics, content and language that chatbots can "understand." 134.56.232.206 ( talk) 17:29, 4 February 2023 (UTC)
No objections have been expressed to the suggestions above, so I hope to go ahead as planned within the next week. Does anyone know of reliable ELIZA implementations in a form that readers can inspect for themselves? Nearly all of the implementations listed on the ELIZA page are very different from Weizenbaum's original, and the one that genuinely follows his algorithm is a Java program which might make it hard to use for many people. Has anyone tried implementing a genuine ELIZA in AIML, for example, or any other simple scripting language? (I'm not sure whether AIML can handle all the processes that Weizenbaum uses, but presume it can.) Thanks, Phil WikkPhil ( talk) 14:14, 30 January 2010 (UTC)
Sorry, "the next week" was very optimistic! I've still not found any reliable ELIZAs in the form of an easily-comprehensible script (e.g. AIML) that will enable examples to be given in a usable and testable format. Any suggestions for how to move forward on this? If no luck, I'll just have to explain things informally. WikkPhil ( talk) 18:05, 20 August 2010 (UTC)
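Since the question above concerns whether ELIZA's processes could be expressed in AIML, it may help readers to see what an AIML rule actually looks like. Below is a generic illustrative category (not taken from any published ELIZA-in-AIML script): the `<star/>` tag echoes the wildcard match, and the `<person/>` tag performs the ELIZA-style pronoun reflection ("my" to "your" etc.) discussed in this thread. Whether AIML can reproduce *all* of Weizenbaum's mechanisms (e.g. his keyword ranking and memory queue) is exactly the open question raised above:

```xml
<!-- A minimal AIML category: pattern matching plus pronoun reflection. -->
<category>
  <pattern>I NEED *</pattern>
  <template>Why do you need <person><star/></person>?</template>
</category>
```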
This edit request to Chatbot has been answered. Set the |answered= or |ans= parameter to no to reactivate your request.
Change:
A 2017 study showed that only 4% of companies were using chatbots. [1] However, according to a 2016 study, around 80% of businesses expressed their intention to implement chatbots by the year 2020. [2]
To:
On May 19, 2022, Meta announced the global public availability of the cloud-based platform, enabling businesses of all sizes worldwide to directly connect with WhatsApp through the WhatsApp Cloud API. [3] Chatbots on WhatsApp have emerged as a prominent reference channel for consumers, establishing themselves as an integral part of the omnichannel e-commerce experience. [4]
Cesarcas1994 ( talk) 04:41, 9 June 2023 (UTC)
References
I suppose we can remove this; it is probably not an important study, and it is outdated as well: "A study by Forrester (June 2017) predicted that 25% of all jobs would be impacted by AI technologies by 2019." Karlaz1 ( talk) 16:04, 14 June 2023 (UTC)
This edit request to Chatbot has been answered. Set the |answered= or |ans= parameter to no to reactivate your request.
Later in 2022, a chatbot by the name ChatGPT launched: /info/en/?search=ChatGPT Totallynotmwa ( talk) 20:29, 4 July 2023 (UTC)
This edit request to Chatbot has been answered. Set the |answered= or |ans= parameter to no to reactivate your request.
Suggestion for "Further-Reading" König, Wolfgang (2023): Why a Chatbot Didactics Model? The Grey-Box-Model of Chatbot-Didactics http://dx.doi.org/10.13140/RG.2.2.32217.29280 2003:D1:6737:4200:D406:D001:9A9D:A334 ( talk) 06:14, 14 August 2023 (UTC)
Not done: Self-published monograph. See also: Wikipedia:Further reading. Xan747 ✈️ 🧑✈️
This edit request to Chatbot has been answered. Set the |answered= or |ans= parameter to no to reactivate your request.
I have found one dead link in this article. I want to improve the link by adding well-written content on the same issue. The dead link is present in reference no. 18. Mindscrafter ( talk) 14:15, 22 August 2023 (UTC)