This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 1 | ← | Archive 5 | Archive 6 | Archive 7 | Archive 8 | Archive 9 | Archive 10 |
In accordance with Wikipedia:Bot policy#Appeals and reexamination_of_approvals, this is a formal request for Anybot to be deflagged and indefinitely blocked.
Anybot has now had four major runs. The first run, in February, introduced many major errors, by admission of Martin, the bot operator. [1] The second run, in March and April, fixed some of these errors; but it didn't even come close to making these articles acceptable. From April on, Martin was being asked to address problems introduced by his bot, and did not do so. For example, on 6 March Rkitko pointed out that Anybot had wrongly described thousands of cyanobacteria as algae [2], and raised the matter again on 21 April [3], but as of today, 26 June, Rkitko hasn't received a reply [4] and these articles have still haven't been fixed. [5]
Anybot ran for a third time in May and June, and continued to introduce errors. It also exhibited unacceptable behaviours such as edit warring. [6] [7] Martin has stated that he did not run the bot at this time, and that whoever did run it was not authorised to do so; apparently anyone could run the bot by visiting a certain webpage; he did not bother to secure the page because he figured no-one knew of its existence— security through obscurity. [8] [9]
The extent of the problem did not become clear until the last couple of weeks, when 69.226.103.13, who appears to have expertise in this area, spoke out strongly on the matter at WT:PLANTS. There was a long discussion, during which it became clear that there were so many wrong articles, with so many errors, of some many different types, that the only way they could be fixed is they were individually manually repaired by a phycologist. This would take thousands, perhaps tens of thousands, of hours; it would probably be quicker to delete them and write them all from scratch. Therefore I sent all 4000 articles to AfD; consensus seems to be emerging there that they will need to be deleted. [10]
One result of the AfD discussion was that it finally prompted Martin to respond. Having discovered that the bot had been run without his authorisation, he blocked it. He then began working on a bot that would fix the errors. Once this bot was ready, he announced his intention of running it. A number of people objected to the idea that Anybot could be trusted to fix these errors. [11] [12] [13] But despite these objections, and calls for the bot to be deflagged, [14] [15] [16] [17] Martin unblocked the bot and set it going, apparently without a test run, and without notifying or seeking approval from the BAG.
This fourth run put a great many articles into a novel state, including introducing new errors, such as classifying diatoms as plants. [18] These were all new edits, not reverts; but disturbingly, every edit was marked as minor, and given the misleading edit summary "Restore article to last good version." [19] The bot also edited at least one article that it had never edited before, [20] despite Martin's assurance that it had only edited articles created by Anybot and not since edited by a human. [21] I have now reblocked the bot.
In summary, this bot has been a complete disaster from start to finish. Martin may have the best of intentions but he has presided over a monumental screwup and his bot cannot be trusted at any level. I am seeking to have Anybot deflagged and indefinitely blocked on the grounds that
Hesperian 03:05, 26 June 2009 (UTC)
You know, it would be nice if these articles were ever examined in a way that showed fewer errors instead of more in number and kind. The list with articles that anybot touched but did not create contains a whole new mess of errors, each unique, and each will have to be checked and fixed by a human.
Also, the bot's synonymies are wrong, so probably 100% of its redirects should also be deleted if they can't be 100% checked, although user Hesperian is planning to deal with that.
I hope BAG makes certain that bot created articles in the future are coded properly to NOT overwrite existing articles. [22] [23]
It's hard to understand how the bot was ever allowed to continue the first time it was noticed it was doing this. Obscure algae require expertise programmers may not have, but code that allows a bot to overwrite existing, unrelated text, is a major and inexcusable error.
This group tends to ignore comments made by IPs-you don't respond to my posts. But, if you, as a group, did not ignore IPs, someone might have caught and stopped this mess long before it reached this level. The IP 213.214.136.54 edited over a thousand articles, correcting the most egregious errors, and all of his/her edits and hard work are slated to be deleted.
IPs contribute a lot of excellence to wikipedia. I can't stop you from ignoring my every post, and setting an example to bot operators that this is how to act (as Martin acted), but the wikipedia community has decided over and over to allow anonymous IPs to edit.
If this group does not respect the community consensus, it's no wonder that it allows the creation of messes that put wikipedia in disrepute.
A group that can make this much work for other writers of the encyclopedia should be a part of the community, not a non-responsive law alone.
That's just my opinion on the matter. -- 69.226.103.13 ( talk) 06:29, 27 June 2009 (UTC)
Replies below are chronologically before many of the replies above. This break splits up comments and replies, which is unfortunate. - Jarry1250 [ humourous – discuss ] 20:18, 27 June 2009 (UTC)
On the question of what can you do? How about implement the most basic type of rules that programmers use when paid to write code? A simple start is demanding algorithms, maybe only from new programmers. Some programmers have many bots and few errors.
Any competent programmer can read another's algorithm and see they've missed the most basic things like initializing variables (probably why Martin's bot added bad lines of text and created bad taxoboxes, if the information had been gathered from a prior genus, but the next genus didn't mention spore types or its taxonomy, it just used leftover information), no protection against deleting an entire existing article. This is not genius level programming.
Then get specific approval from interested editors in their arena for all bots creating articles-don't just wait to see if anyone objects here, get positive approval. After a bot's initial run creating articles, post the list of its articles on wikiproject plants or somewhere and ask writers to check off on all of the articles.
-- 69.226.103.13 ( talk) 18:20, 27 June 2009 (UTC)
I think the anybot edits show allowing a bot to create articles has potential to create an even bigger mess than this one. This appears to be because of a lack of basic coding requirements and fundamental safeguards from the BAG prior to and once a bot is given approval.
Some writers are now battling me verbally to fight this realization. It won't change the situation as it is: bots on wikipedia are not controlled.
As someone else pointed out, if a meat editor had created these articles they would have been permanently blocked as a vandal once they were told of the problem and failed to stop. This block, in the case of anybot, would have occurred after the test run, not after 4000 articles.
I don't think bot owners want to hear this, and I've said my piece, and taken enough insults in exchange. This is a familiar wikipedia response to an editor, particularly an IP or newly registered user, pointing out a problem with wikipedia. This is how wikipedia winds up again with egg on its face in the news: wikipedia editors don't want to hear what's wrong. That's why this mess wasn't cleaned up in February: established editors refused to listen, a phycology editor, and the bot owner, and once a bot is approved its owner is allowed to do what he wants. -- 69.226.103.13 ( talk) 20:21, 27 June 2009 (UTC)
I understand that BAG gets a fair number of passers-by with strange ideas, but the regulars on this page should note that Anybot has done a vast amount of damage, and has caused many people, particularly 69.226.103.13 and 213.214.136.54, to waste an enormous amount of time. It's not BAG's job to monitor every bot, or to clean up mistakes. However, it would be decent to acknowledge that a big problem has occurred, and enquire whether something could be learned from the incident. One lesson seems clear: When a bot is approved to mass create articles, and if the subject matter is such that normal editors can't tell if the content is good or bad, BAG should apply a condition that periodic checking by the relevant subject project must occur: a pause of at least one week after every 100 new pages, and positive approval on the project page (not absence of objections), until the subject project page approves an uninterrupted run. That condition should be required regardless of current consensus on the subject project page.
Also, the statement by the bot owner that "somebody has been running the bot without my knowledge" needs serious investigation, or at least an acknowledgement that BAG members would like to investigate but are unable to do so due to lack of time or whatever. The particular bot owner is not important, it's the general claim that needs investigation.
Finally, I would like to add my voice to those who have elsewhere thanked 69.226.103.13. Thanks for the enormous amount of effort you have applied to investigating and resolving this situation. I know that ThaddeusB has issued somewhat of an apology, but I urge ThaddeusB to strike out the two sentences above starting "First of all...". Under normal circumstances, the text above would be very satisfactory, but owing to the particular circumstances of this case, it is not. Johnuniq ( talk) 04:52, 29 June 2009 (UTC)
I don't come here very often, but when I do, I consistently get a defensive, even rudely so, response, as though the BAG thinks it has a mandate to defend all bot operators against the horde of clueless non-coding lusers who don't appreciate their work.
I once came here to register my dissent against the BAG's decision to approve a bot that reverted newbs on certain articles for the sole reason that they were newbs. I had, and still have, serious philosophical objection to such a bot. I carefully laid out those objections. The response was "Bots make mistakes. So do you. Show some good faith." [25] Sadly, this is a fairly typical BAG response.
In the present case, I brought to the BAG a request for information on how I could document the Anybot situation, so that the BAG would take the situation into account when considering future applications. I concede that I started off too aggressively, so I won't take you to task for the defensive response. But for a BAG member to then rush off to the main discussion thread and precipitately declare that it was "all heat and no light" was unhelpful to say the least.
I really think there needs to be a change of focus here. "BAG" does not stand for "bot advocacy group". At the very least, you guys should be maintaining files on each bot you approve, soliciting feedback on performance, and proactively responding to reported problems.
Hesperian 05:40, 29 June 2009 (UTC)
I'm sorry, I tried to read this page, I honestly did. But the only thing said in the mess above is a bunch of pointing fingers. Hey guys, a protip on actually fixing things, is actually coming up with something constructive instead of ripping into each other. Furthermore, I apologize to 69.226.103.13 for not knowing the bot approval process. Q T C 00:41, 30 June 2009 (UTC)
BAG isn't a magical mythical entity that can catch subtle logic errors in all programming. As has been pointed out the bugs found during the trial process were vetted and reported to be fixed.
As also has been pointed out, BAG takes into account issues raised by the community during the approval process. Again as has been pointed out, community response during the approval was minimal.
BAG, like all the other various people on Wikipedia, is made up of volunteers who do this in addition to the things they do elsewhere on Wikipedia. It would be a pointless waste of effort to sit here all day checking every bot edit. BAG like AIV/3RR/UAV depend on people to report the problem.
Unapproved bots are block. End of story. However like above, we cannot go around checking every single edit made on Wikipedia to see if it conforms to all the previously approved tasks, like above it depends on people reporting this.
Thank you for Assuming Good Faith. It's hard to reply to people whose only comments are making baseless accusations against you.
Please do the same.
If you'd like to have an honest discourse on ways to improve the process please feel free to start a new topic. Q T C 00:53, 30 June 2009 (UTC)
The point of this post addressing me personally, and of Anomie's, are to avoid the topic. Again, no assumption necessary, the lack of input BAG coupled with the offensive defensiveness, when threads are not entirely ignored, is the evidence.
The BAG shows no reason it should have authority to give bots the go ahead. It does not monitor the bots. It does not check the code. It takes lack of community input as consensus. It reads the input anyway it wants. It ignores concerns posted about the bot.
Bureaucrats and the wikipedia community should find another venue to address bots for the encyclopedia, a venue where questions are answered, where threads raised by IPs aren't completely ignored when they're about issues that could bring the encyclopedia into disrepute.
Anybot made a mess due to its poor programming, its owner being the one given the power to unblock it, the BAG and the bot owner not responding to concerns (like my unanswered thread above) and personally insulting editors who raise legitimate issues in an apparent attempt to sidetrack the legitimate issue. Again, an opinion formed from the evidence on BAG boards.
-- 69.226.103.13 ( talk) 04:28, 30 June 2009 (UTC)
My 2 cents.. a bot shouldn't be "writing" articles anyway and I hope such a bot is never approved again. - ALLST✰R▼ echo wuz here 08:23, 30 June 2009 (UTC)
I've tried to say something that would lead to discussion on improving the reliability of bots operated on wikipedia. This is part of working as a team. This group is not ready for that discussion, because it involves tough issues like, should the BAG be the group with bot-authorization powers, and it involves working with a larger team: team wikipedia. The larger wikipedia community may want to address this question some time.
Bureacrats should question whether they should flag bots on the say-so of a group that denies any responsibility for how bots are operated. That's my opinion. This group is not interested. I can't change that. -- 69.226.103.13 ( talk) 18:06, 30 June 2009 (UTC)
By "do more", I specifically mean adding this to Wikipedia:Bots/Requests for approval/InputInit:
<!--Source code available: e.g. a link to the source code, "To BAG/Admins by request", "Standard pywikipedia"/"AWB"/etc. Be sure the bot account's password is not given out! --> '''[[Source code]] available:'''
One of the things BAG is supposed to do is ensure that bots are technically sound; having the source code can help us do that. Note that I'm not proposing we require source code, just that we start specifically asking for it. Unless there are objections, I'll add this in a few days. Anomie ⚔ 20:56, 30 June 2009 (UTC)
I'd also suggest asking "Exclusion compliant?" in the request, just as a suggestion for people. – Quadell ( talk) 14:02, 1 July 2009 (UTC)
I added the fields, leaving out the "To BAG/Admins by request". Anomie ⚔ 12:40, 3 July 2009 (UTC)
See Wikipedia:Bots/Requests for approval/BOTijo 2. This may have been approved before case-insensitivity in the search field was implemented and the task may now need to be revoked - it seems to be creating many unnecessary redirects. – xeno talk 21:12, 24 June 2009 (UTC)
bugzilla:19882 - for interested parties... – xeno talk 19:58, 22 July 2009 (UTC)
Can this bot be withdrawn please? The way the bot is performing is not satisfactory. See here, here and here. The last one is a case of the bot ignoring a nobot instruction. What the bot is trying to achieved can be achieved faster, and with less damage, by human editors. Mjroots ( talk) 05:45, 3 July 2009 (UTC)
I think most bots check templatelinks (so template redirects don't really matter); trying to parse wikitext would be insane.... -- MZMcBride ( talk) 14:39, 10 July 2009 (UTC)
There's a proposal at BON to make a minor change to the way ClueBot clears the template sandboxes. I don't think this requires a BRfA, since there is no opposition to the proposal and it's rather minor. Please add your opinion to the thread - Kingpin 13 ( talk) 03:38, 18 August 2009 (UTC)
Please see Wikipedia:Village pump (policy)#Proposal: Any large-scale semi-/automated article creation task require BRFA and comment (there, not here). Thanks! – xeno talk 18:15, 18 August 2009 (UTC)
Would anyone mind if I started this task back up again? It's been awhile since I did it. I know the prod time frame was upped to 7 days from 5, so I'd up my wait time from 7 to 9 days. I'm probably not required to ask this, but I figured it can't hurt.-- Rockfang ( talk) 09:12, 11 September 2009 (UTC)
Someone had a good regex for catching unlabelled dabs to skip them, like "' ' ' *(can refer to|can be one of|is one of)". But I can'tfind it. Anyone? Rich Farmbrough, 17:04, 12 September 2009 (UTC).
I think this may have been proposed before or informally discussed on IRC, but never went anywhere. There are very few bots that we can really apply "precedent" to, mainly because even if tasks are similar, most bots use custom-made code. However, interwiki bots typically all use Pywikipedia's interwiki.py. So I propose a new system for expedited approval of certain interwiki bot requests:
Its unrealistic to hold operators who run bots on half a dozen projects that aren't their home project (including enwiki) to the same standards that we hold operators who only operate bots on their home project. Given the reliability of interwiki.py compared to the average bot script, its also unnecessary. As long as the operator is aware of the known issues (don't run it in the template namespace ... are there others?), there shouldn't be any problems. Mr. Z-man 15:51, 1 August 2009 (UTC)
I just wrote {{ Delayed notice}}. Since lots of you code bots, check requests, do trial runs, etc..., this could prove useful in helping you keep track of stuff. Not sure where the best place on WP:BOTS was to post this, but this seems the highest traffic talk page (and thus has a higher outreach). Move this somewhere else if you think there's a better place for it. Headbomb { ταλκ κοντριβς – WP Physics} 14:13, 15 August 2009 (UTC)
Very handy. Rich Farmbrough, 14:12, 17 September 2009 (UTC).
It was recently agreed on Wikipedia:Village pump that any large-scale automated or semi-automated article creation task should require BRFA. One concern was that it would be impossible to follow up, has this been the case? Take for instance Sasata's recent large creation of fungi articles. Or Fergananim's very short articles on medieval Irish aboots, like Gillabhrenainn Ua hAnradhain. According to the new regulations these should both require approval, but I can't see that this has been done? Lampman ( talk) 14:57, 28 September 2009 (UTC)
Why is it bot policy that a bot will most likely be approved after a community discussion? Isn't it that a decision to approve a trial will be made after discussion?
After a reasonable amount of time has passed for community input, an approvals group member will most likely approve a trial for your bot and move the request to this section.
What? -- 69.225.5.4 ( talk) 18:23, 29 September 2009 (UTC)
Can we give more than 3 minutes for interested users to examine trial runs? [26] There seem to be many excuses for why community consensus is not needed, not given, no time for it. In this particular bot case, the task is straight-forward, responsible and responsive bot owner, dealing with deprecated code, etc., etc. But, sometimes I want to examine the trial runs after they have been run, but before the final approval, to see if they are problems that show up during the trial. A good reason for doing trials in the first place is to examine the results.
3 minutes is not enough time, and I don't see the urgency in approving this bot in 3 minutes. A couple of days for interested users to examine the trial run is not unreasonable, imo, no matter what the task.
One reason for instruction creep, by the way, is that editors seem to other editors to be overlooking common courtesies and common sense. I don't see why the instructions should say wait 2 days or wait more than 3 minutes, except that it is apparently not obvious that waiting more than 3 minutes gives time for community input.
There was no urgency in approving this bot, so allowing more than 3 minutes for the trial run to be examined by interested parties would have been a simple courtesy. -- IP69.226.103.13 ( talk) 20:58, 22 October 2009 (UTC)
So, it boils down to: after community input, a BAG member "will most likely approve a trial for your bot," (without any reference to the community input), then based entirely on technical functionality, the bot will be quickly approved after the trial. The bot owner is solely responsible for the actions of the bot.
So, BAG does nothing, but allow for a community input board then fast forward bots to be flagged by bureaucrats, or whoever flags bots... Interesting.
I will then move forward with this understanding of BAG's role on en.wiki. -- 69.226.111.130 ( talk) 20:49, 23 October 2009 (UTC)
We are not a bureaucracy, we can just take action without putting everything up for discussion first. If the community objects then we can act on it, otherwise we can just get the work done. If something goes wrong with a bot then that is what the revert button is for. Chillum 04:58, 24 October 2009 (UTC)
BAGbot seems to be mostly dead lately and ST47 doesn't seem to be around anymore, so Wikipedia:BAG/Status isn't being updated and users aren't being notified when {{ OperatorAssistanceNeeded}} is used (it did a couple other things, but these 2 were probably the most important). I was going to make a replacement, but can't seem to find the time to finish it (i.e. de-crappify my hastily thrown together code). If someone wants to write a replacement for it, that would be much appreciated. If someone wants my code to start with (written in Python, using my framework), I don't recall if the current version actually works or not. Mr. Z-man 04:56, 27 October 2009 (UTC)
So, suppose someone wanted to run a bot that did admin activities, but was not themselves an admin. (I can't program, so no, this isn't asking about me.) Is this allowed? If it's not been considered, should it be allowed? Also, what about someone who is an admin, runs a useful bot, and is desysoped? (Or, what in the case where a regular user who runs a bot is banned.) I'm just curious how such issues are approached, so I can better know policy. Irbisgreif ( talk) 08:10, 15 October 2009 (UTC)
There was a remark that this bot was approved without community consensus. [27] ("Bots seem to get approved based on a technical evaluation rather than on whether they conform to bot policy by only making edits that have consensus, as happened here.")
The bot was approved in two days with no input from anyone else in the RFBA process, other than bot operator and single BAG member who approved the bot for editing thousands of mainspace articles after examing some trial edits made by the bot before it was approved for trial edits, also, thereby, eliminating the opportunity for community input on the trial run. [28]
This bot only edits pages which have the parameter blank already, but the parameter does not show up in the article if not filled in (| oclc = | dewey = congress = ), and whether it should be filled in by a bot should be discussed with the wider community to gain consensus for the bot task.
I would like to see some time pass before bots that impact article space widely are approved.
Bot policy requires that a bot "performs only tasks for which there is consensus," means that bots should not be approved for main space tasks without community input. One BAG member does not constitute community input, imo.
Link to discussion calling OCLC linkspam-controversy.
[30] This link is not about CobraBot. I include it because a quick search shows that OCLCs are something that generates a lot of discussion on en.wiki. This discussion mentions, for instance, that consensus shows "OCLCs are considered superfluous when ISBNs are present." This discussion shows that, contrary to being approved, the CobraBot task maybe should have been denied as there might not be community consensus for the task at all. Consensus is required by bot policy. None was asked for in this approval. No time for community input was allowed before community approval. A prior bot was stopped from doing this task by the community. Maybe this bot task should not have been approved against community consensus.
-- 69.225.5.183 ( talk) 07:33, 18 October 2009 (UTC)
1RR with bots is a good rule to follow with bots.
Adding thousands of links without community input is a major concern. However, in the case of mainspace edits that contribute to thousands of article additions or changes, I would like to see community input at least given a chance in the future, and anything that makes this explicit to BAG members would be a way of addressing the situation.
At this point, however, I would also like community input about rolling back the bot edits, since they were made without community input, and they link externally. This should not have been done without major community input. And, in the case of future editing mainspace with a bot adding external links, I think the default value should be to not do so if the community has not positively spoken for adding the link. -- 69.225.5.183 ( talk) 02:57, 21 October 2009 (UTC)
I think that "say no to linkspam" says it all, no matter what the age. There was no consensus to actively link to this site, the bot move forward without gaining any community consensus, making en.wiki the "feeder site" to thousands of links to worldcat. The community should decide whether or not the infoboxes provide links to this particular website, not BAG, particularly since BAG's fallback is to generally approve a trial, then approve the bot for flagging based only on technical issues.
BAG itself seems to indicate there is no design for community input: a trial is "most likely" approved, without rgard to community input, then the bot is approved solely on technical issues. Linking thousands of wikipedia pages to worldcat required community consensus, not rapid approval. If this is done here it could be an easy avenue for vandalism. -- 69.226.111.130 ( talk) 21:05, 23 October 2009 (UTC)
"I note that User:Cybercobra commented that the bot was being suspended "pending an WP:ANI thread"[184]. If that was changed to "pending a much wider consensus that this is an appropriate task for a bot than the one person who approved it" I would be willing to close the discussion here, because it would not need administrator action such as blocking. I think that there's a much wider issue at stake here about the fact that one editor can put up a bot for approval, and it can get passed by one other editor because it works, without any consideration as to whether there is any consensus about whether the bot's actions are acceptable. At least if we are going to allow that to happen we should have an understanding that a bot operator should suspend a bot, pending discussion, in response to a good faith request by an established editor. WP:BRD is a well-known adage, but, when a bot is doing lots of bold edits it's impossible for a human to maintain the same pace to revert. Phil Bridger (talk) 23:05, 28 September 2009 (UTC)"
I've added a new section to the debate and have been reverted. Twice. Without any reverted politely including the insight as to where, besides "a new section" subsequent comments should be made.
So, if not in a new section according to the directions, where should the subsequent comments be made? [31]
Please, could BAG be more accurate in the directions? So many comments about users don't want to participate, but when editors do participate according to directions they are rudely reverted without any help.
So, where? And put that location on the BRFA closure template. -- IP69.226.103.13 ( talk) 16:49, 29 October 2009 (UTC)
While reviewing Betacommands Arbcom decisions and etc., I see that the time of approval and BAG's lack of monitoring community consensus have been raised as issues before. I would like the waiting time for post-trial approval to be at least a week. I would also like bots not to be approved when there is no community consensus. As Cobrabot task 1 had no community consensus I would like it blocked and its flag removed. This changes the RFBA for Cobrabot task 2. I would like that revisited, also, in light of the speedy approval. -- IP69.226.103.13 ( talk) 19:15, 29 October 2009 (UTC)
Of course I'm not arguing sincerely. Every time I post a sincere comment for discussion I get derailed, by Betacommand and you his ardent supporter, with ridiculous comments about "waiting for Godot," when it's well know what community consensus is, by hyperbole about waiting forever, and closing down BAG. I can't argue any of that with sincerity, as its purpose was not to raise sincere issues but to attack me personally and avoid discussing the issues at all. --
IP69.226.103.13 (
talk) 20:50, 29 October 2009 (UTC)
In fairness to IP69, the CobraBot 1 BRFA was not appealed by anyone (I think that's what they're trying to pursue here, along with several other things) and is still valid; I only paused running it until the discussions about it were resolved (with no consensus against the task); the reason task 1 isn't running currently is because CobraBot successfully completed its pass over all the articles in Wikipedia using {{ Infobox book}}. If IP69 wants to appeal task 1, they are free to do so. -- Cybercobra (talk) 21:32, 29 October 2009 (UTC)
I hadn't considered that. I personally think modifying the template to make it an inactive link would be preferable, kinda the best of both worlds, since that was one of the big complaints about the OCLC< but others may have ideas about the best course of action, if any. -- IP69.226.103.13 ( talk) 21:43, 29 October 2009 (UTC)
I posted a discussion as suggested. [33] I notified everyone who had commented on that page. I have not notified anyone who commented in the AN/I, although there may be other interested users from that list. I will also post a link to this discussion at the Village Pump. -- 69.226.106.109 ( talk) 06:55, 30 October 2009 (UTC)
I searched to find discussions about the most important part of the OCLC issue, and I found a place where it was being discussed. I invited editors who had expressed an opinion in the past, inviting all editors in a discussion, whatever side of the issue they were on, delivering a neutral invitation and not opening the discussion with my own opinion on the issue. Invited editors came by, expressed their opinions on the matter. It appears there is support for linking the OCLC number externally to the worldcat website. Not just by numbers, but generally there were positive expressions for the functionality of the usage.
I have an additional concern about the linking, that I would like to be addressed, there, but I don't think it stands in the way of this issue.
In my opinion this is one very positive and straight-forward way to gain community input on a bot: 1. identify any issues that are community concerns rather than mere technical concerns. 2. find the community this impacts 3a. neutrally invite members of that community to a neutral and logical location for discussing the issue 3b. create the discussion.
It seems to me this isn't that hard. The animosity of the bot boards toward outsider opinions makes it hard have an opinion here. Wikipedia isn't the only place where I participate in discussions with strangers on the web, but it's the only place I have such a negative reputation. I'm not the only one to mention how hostile the bot boards are to outsiders. If you want community input, you have to learn to identify the community as people who are potentially outsiders. If someone is not concerned in a negative way about an issue, they may not bother to speak up. If they are concerned, and they speak up here, they need to be deal with for their concerns directly, in a civil manner at all times.
Once the underlying community impact concerns are dealt with the bot is just a matter of technical approval. Does it do what it says, is the operative willing and able to respond to community issues that may arise, a trial run, does the trial run raise any glitches. None of this really needs more than monitoring by bots members, if they are monitoring that. It also kinda negates the issue of timelines if the community support is in place and the bot is reduced to technical matters.
So, imo, this is how it can be done in a way that gets community support for bots. Find the community that is impacted, politely and neutrally seek their input, allow them time to speak, then move forward on technical matters. It seems to me, also, from the bots policy, that this actually what was originally intended for how the bot approval process should work. However, bot members have moved from failing to get community input, due, imo, to how hostile this area is to outsiders, to saying that community consensus is not necessary or that if the community doesn't offer any negative input then the bot can go ahead.
-- 69.225.3.198 ( talk) 21:53, 2 November 2009 (UTC)
To satisfy the concerns of the above and punt issues back to the community before mistakes can be made, and following on from the above, I propose the following:
Issues may arise around "appropriate", but I do not think we should formalise this in some policy. Instead, as guidance, the question must be of impact - if the bot edits mainspace content then an attempt must be made by the operator to get some wider approval.
This is not wildly variable to what we do now, except we place the onus on the bot op to obtain consensus in advance of an approval request, allowing us to concentrate on technical aspects. All we have to do, then, is to be happy that the criterion in bot policy have been met. This can also give the flagging crats a discussion that is independent of BAG.
To conclude, a minor change, but one that improves our workflow, cuts down on erroneous approvals (at least from a consensus point of view) and improves the appearance of BAG and what we do. Fritzpoll ( talk) 14:14, 3 November 2009 (UTC)
{{
BOTREQ|advertise}}
more liberally. And IMO, whether the community consensus happens before the request or whether the request gets put on hold while consensus is sought is no big deal, since either way BAG will have to look over the consensus discussion and since the major problem with our workflow is in most BAGgers being relatively inactive rather than in having too many discussions to manage.
Anomie
⚔ 16:56, 3 November 2009 (UTC)
"If your task could be controversial (e.g. most bots making non-maintenance edits to articles and most bots posting messages on user talk pages), seek consensus for the task in the appropriate fora. Common places to start include WP:Village pump (proposals) and the talk pages of the relevant policies, guidelines, templates, and/or WikiProjects. Link to this discussion from your request for approval."
This looks fine to me. The instructions are not as onerous, imo, as many on wikipedia, and they're actually written and designed for the inexperienced user to be able to follow them. A bit strange the place on wikipedia where a template can actually be followed by a new user is the template least likely to be used by an inexperienced user.
And, thanks Anomie, for changing the wording earlier on the approval of a trial. -- 69.225.3.198 ( talk) 00:32, 4 November 2009 (UTC)
Speaking of uncontroversial tasks, is there any way to get the process moving on RSElectionBot? I fulfilled the bot request within about 8 hours of when I was asked, but now it looks like the bot's going to just stagnate on BRFA and the election will be over before anything happens with it. rspεεr ( talk) 09:30, 3 December 2009 (UTC)
There are a number of uncontentious, trivial changes (for example, changing 'Image': links to 'File:') for which no bot solely performing them is likely to be approved, even if they were all grouped together.
AWB applies those fixes as a matter of course when doing other work.
Would it be reasonable to have a "bot" that represents these trivial fixes, and approve each fix; this "bot" could be then coded by the various framework maintainers and it's approved tasks could be applied by any real bot in addition to their own approved tasks? Josh Parris 03:39, 30 December 2009 (UTC)
Is there any automated process for cleaning out Category:Open_Wikipedia_bot_requests_for_approval? Josh Parris 03:39, 3 January 2010 (UTC)
I'd like to make some bulk edits such as removing {{ Gaelic Games in Ireland}} from pages that it shouldn't be listed on or adding {{ GaelicGamesProject}} to pages in Category:Gaelic Athletic Association and it's branches which aren't tagged and as such am not requesting a new bot. Do I need to request approval here or can User:GnevinAWB be changed to allow automatic editing ? Gnevin ( talk) 16:34, 8 January 2010 (UTC)
Hi. Since User:Fritzpoll has left, that leaves his bot inactive. I've offered to take over Wikipedia:Bots/Requests for approval/FritzpollBot 4 for him, and am expecting the source code shortly. So I'd like an okay from a BAG member (other than me) to run this (previously approved) task with User:KingpinBot. And was also wondering if they'd be any other volunteers to take over FritzpollBot's other tasks? Best, - Kingpin 13 ( talk) 13:22, 19 February 2010 (UTC)
AWB has a limit of 25K when creating lists of articles to edit. In order for the list tool to create a longer list, one needs a bot account. Note that this is true even if the number of pages that will actually be changed is far less than 25K. In my case, I am trying to fix issues in articles that transclude {{ Infobox single}}, o which there are more than 25K. The great majority of the articles won't be changed: filters in the settings of AWB will cause most articles to be skipped.
Another project that is directly related to this is an investigation of if/how to merge {{
Infobox song}} into {{Infobox single}}
. To do that, I want to find articles that use certain template parameters and make sure that my proposal for a merged template won't break any articles, or I will find and fix such articles first, etc. Either way, I need to be able to search all the transclusions of {{Infobox single}}
, not just the first 25K.
So... how do I get a bot account for this use? The request process here seems oriented to standalone bots. — John Cardinal ( talk) 21:51, 28 February 2010 (UTC)
During which step on the list should I have my bot readied? Should I have finished it before I even suggest it? I find the instructions here very confusing. Awesomeness talk 16:03, 5 March 2010 (UTC)
I am an experienced admin who maintains and runs the OpenOffice.org Forums as well as a couple of wikis using the Wikimedia engine, but I am also involved in Wikipedia as a normal editor. I am currently doing a project with another editor to add a CFS sufferers category to Bios of CFS sufferers very much the same way that many people who are HIV positive or suffer from other widespread illness are tagged. (See my sandpit CFS people). If we come to the point where we decide to implement these category insertions, then I can see that they are four sensible possible approaches:
{{fullurl:XXXXX|action=edit}}
links, fire up a new page for each, and use a small client-side greasemonkey script to automate this.(4) isn't one of the current standard bots, but the personal overheads of seeking approval for this just don't seem worth it. If this were one of my own wikis then I would just use (3). However, since I would be writing back to (main) name space articles, then I assume that the precautionary principle applies and I would therefore still need full approval. (1) is just tedious beyond words and such repetitive manual tasks are very prone to the risk of keying errors. So by my understanding I am left with (2) and as long as I include a per page visual check and the actual save is manual then this falls within Wikipedia:Bot policy#Assisted editing guidelines and therefore doesn't need the bureaucracy of formal approval. Am I correct in this? -- TerryE ( talk) 17:42, 8 March 2010 (UTC)
Greetings-- I'm hoping to put together a read-only bot that I can use to develop statistical sampling techniques of Wikipedia. Since the purpose of the bot is sampling, it shouldn't be a bandwidth hog in any way. In particular, not unlike some researchers, I'm hoping to compare site traffic with edit traffic in a statistical way. Would this kind of bot be approved? How can I develop the software before it is? Thanks, Owensmartin ( talk) 23:30, 10 March 2010 (UTC)
If you're not editing you don't need a separate bot account or approval (although an account with a bot flag may be useful as it allows for higher api queries). Please make sure you use a descriptive user agent with a contact email address in it or domas may ban your IP address. -- Chris 08:20, 11 March 2010 (UTC)
I'd like to transfer ownership of User:SuggestBot from myself to User:Nettrom, who has both more time and energy to keep it going and make it better. I've updated the bot's page itself, I think. Let me know if there's anything else that needs doing. -- ForteTuba ( talk) 21:10, 8 April 2010 (UTC)
Just wanted a quick go ahead from another BAG member to change KingpinBot's wikiproject tagging from using AWB to using C#. Cheers, - Kingpin 13 ( talk) 13:16, 24 May 2010 (UTC)
So I have followed the process documented for seeking approval for a bot. Now the question is, what do I actually have to do to get it reviewed and hopefully approved? -- Traveler100 ( talk) 08:19, 21 July 2010 (UTC)
If I want to achieve the same result by a slightly different method does this need a new bot and new approval or can it run on the existing bot user? To be more specific can User:People-photo-bot#Specification 2 (proposal) be run under the same bot as User:People-photo-bot#Specification 1 (active)? -- Traveler100 ( talk) 14:41, 15 August 2010 (UTC)
This is about the bot Ganeshbot4. The discussion about whether the bot should be approved for the task of creating gastropod articles is about this bot creating 600 species Conus articles. The bot owner says the bot was approved for creating thousands (or more) of gastropod species articles from WoRMS. I do not see this in the approval discussion.
The bot is creating bad articles. WoRMS is not as careful about its taxonomic experts as it should be. This bot is creating articles about species that are listed from a single 18th century identification followed by an amature's out-of-print book, that has been verified by the WoRMS "taxonomist"--an amature shell collector.
I was surprised by some of the gastropod species listed next to a new article I created. I attempted to verify the species were correct, but could find no sources except for WoRMS, the out-of-print book, and, now Wikipedia.
What gives? Was this bot approved to add all of WoRMS to Wikipedia? I think Wikipedia might be creating species. JaRoad ( talk) 22:40, 15 August 2010 (UTC)
I checked some articles created by the bot. They have incorrectly or non-validated taxonomic names, subspecies listed as the wrong species, bad descriptions, and the taxoboxes are junky. I have not found one article created by this bot that is correct. JaRoad ( talk) 23:30, 15 August 2010 (UTC)
I posted information about this bot at Wikipedia:Administrators' noticeboard/Incidents. I think the bot should be stopped until editors can clean up its articles. I think its articles should be deleted if editors are not willing to individually fact-check every one. This is irresponsible article creation by Wikipedai. A google search turns up these articles at the top of short lists. Please be more considerate to Wikiepia readers when approving the creation of 1000s of species articles from databases. Maybe most of the articles will be okay. But I checked 6, and the 6 I checked were all bad. The bot operator does not have good answers to questions about where the approvals are, who is making sure the articles are correct, or anything. JaRoad ( talk) 23:58, 15 August 2010 (UTC) [35]
I request this bot be blocked, its 15,000 articles individually checked and verified by gastropod editors or deleted if that is not possible, and that the bot's task for creating an unlimited number of species stubs be discussed formally on the requests for bot approval page rather than in a secondary discussion elsewhere.
In response to a post about another bot that created bad species articles ( with disastrous results. I still have nightmares about Wikipedia:Articles for deletion/Anybot's algae articles.) the bot operator said,
"This task is no way close to what AnyBot did. I had clearly mentioned that about 580 stubs will be created (nothing more). The list of articles that will be created are already listed in the data page (first column). The point of creating these articles is so that the Gastro team can come in and expand them. Regards, Ganeshk ( talk) 01:45, 18 March 2010 (UTC)"
I added the bold marks to Ganeshk's text, they are not in the original.
The bot operator aske for this approval of "580 stubs ... (nothing more)" for a single species, Conus, then decided he did not need approval for other families, ":The bot approval was for the task, to create species using the WoRMS as reference. I don't see myself getting approval for each mollusk family. As for the 100 edit restriction, they were lifted at this discussion. — Ganeshk ( talk) 04:13, 16 August 2010 (UTC)"
Again, I added the bold.
Maybe the misunderstanding is strong enough that bots need to be explained better to this editor before he continues to run any bots on Wikiepda. In the meantime, this bot should be blocked and its approval for creating 15000 gastropod stubs should be revoked. JaRoad ( talk) 05:25, 16 August 2010 (UTC) JaRoad ( talk) 05:25, 16 August 2010 (UTC)
Ganeshk, there's nothing at the link you gave that relates to lifting your restriction on edits. Could you find a diff please. Thanks. Elen of the Roads ( talk) 10:02, 17 August 2010 (UTC)
I read the bot approval as a technical approval to run the under the project supervision. On hindsight, I should have alerted the BAG that I am planning to do this and get their advise. The bot has created 15,000 articles so far. The Gastropod project members have been adding additional information to these stubs and have not found any major issues with them. The project members provide me with a introduction sentence and approval to run a family. The bot then creates the species within that family. The full list of bot-created articles is at User:Ganeshbot/Animalia/History. I will wait to hear what the next steps are. — Ganeshk ( talk) 12:27, 17 August 2010 (UTC)
The facts here seem clear:
To move forward, I suggest the following:
Opinions? Anomie ⚔ 16:18, 17 August 2010 (UTC)
The bot has been approved per Wikipedia talk:Bot Approvals Group/Archive 7#Wrong way of the close a BRFA. How the bot work has been for example also in the Wikipedia Signpost: Wikipedia:Wikipedia Signpost/2010-04-26/WikiProject report. The Bot has unified support by Wikiproject Gastropod members. Maybe Bot approval group would like to know in detail, how the bot work: 1) User:Ganesh will get clear instructions what to do by me or directly at Wikiproject Gastropod talkpage; 2) User:Ganesh will run the GaneshBot and it will do the task. That is all. I can personally guarantee that the bot runs OK. There is no need restrictions or restrains from "Bot Approvals Group" because nobody of the "Bot Approvals Group" is able to evaluate if informations generated by a bot are correct or not (see for example another overview what the bot have done User:Snek01/Taxonomy). By the way, it even seems that "Bot Approvals Group" is even unqualified to close BRFA correctly. So feel free to formalize bot's approval, but do not restrain useful work that is being done.
I will give you an example to compare:
-- Snek01 ( talk) 00:03, 19 August 2010 (UTC)
User:BrokenAnchorBot uses AutoWikiBrowser to make it's edits. The code behind it just exports an AWB xml settings file with simple, case sensitive replacements and a list of articles that need to be edited. My bot uses the fact that AWB can automatically show the replacements that were made in the edit summary. Sometimes AWB creates summaries like this because [[this is a long article name#and a section|and a label]] can end up being quite long, and edit summaries have a limit to how long they can be, breaking the link in the summary. I would like to improve these edit summaries by avoiding {or at least mitigating) such broken links. A smarter edit summary could omit the labels to make the text in the summary shorter, or do a few other things to be smarter about this. I'm not a windows guy, and learning C# doesn't appeal to me all that much, so I'm reluctant to take on the task of an AWB plugin myself. Should I submit a BRFA if I do decide to ditch AWB? I'll add any code to the repo before anything goes forward. Winston365 ( talk) 06:29, 26 July 2010 (UTC)
Would tagging articles en masse be wanted since I was considering learning programming to create a bot specifically designed to go through articles and tag them with the relevant issue needing addressing whether it be wikifying or a quick copyedit. If a bot like this would just be too hard or unwanted is my question. I was referred here by Arbitrarily0 a while ago but I never got to doing that (out of laziness). Ғяіᴆaз'§Đøøм | Champagne? 06:54, 13 September 2010 (UTC)
This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 1 | ← | Archive 5 | Archive 6 | Archive 7 | Archive 8 | Archive 9 | Archive 10 |
In accordance with Wikipedia:Bot policy#Appeals and reexamination_of_approvals, this is a formal request for Anybot to be deflagged and indefinitely blocked.
Anybot has now had four major runs. The first run, in February, introduced many major errors, by admission of Martin, the bot operator. [1] The second run, in March and April, fixed some of these errors; but it didn't even come close to making these articles acceptable. From April on, Martin was being asked to address problems introduced by his bot, and did not do so. For example, on 6 March Rkitko pointed out that Anybot had wrongly described thousands of cyanobacteria as algae [2], and raised the matter again on 21 April [3], but as of today, 26 June, Rkitko hasn't received a reply [4] and these articles have still haven't been fixed. [5]
Anybot ran for a third time in May and June, and continued to introduce errors. It also exhibited unacceptable behaviours such as edit warring. [6] [7] Martin has stated that he did not run the bot at this time, and that whoever did run it was not authorised to do so; apparently anyone could run the bot by visiting a certain webpage; he did not bother to secure the page because he figured no-one knew of its existence— security through obscurity. [8] [9]
The extent of the problem did not become clear until the last couple of weeks, when 69.226.103.13, who appears to have expertise in this area, spoke out strongly on the matter at WT:PLANTS. There was a long discussion, during which it became clear that there were so many wrong articles, with so many errors, of some many different types, that the only way they could be fixed is they were individually manually repaired by a phycologist. This would take thousands, perhaps tens of thousands, of hours; it would probably be quicker to delete them and write them all from scratch. Therefore I sent all 4000 articles to AfD; consensus seems to be emerging there that they will need to be deleted. [10]
One result of the AfD discussion was that it finally prompted Martin to respond. Having discovered that the bot had been run without his authorisation, he blocked it. He then began working on a bot that would fix the errors. Once this bot was ready, he announced his intention of running it. A number of people objected to the idea that Anybot could be trusted to fix these errors. [11] [12] [13] But despite these objections, and calls for the bot to be deflagged, [14] [15] [16] [17] Martin unblocked the bot and set it going, apparently without a test run, and without notifying or seeking approval from the BAG.
This fourth run put a great many articles into a novel state, including introducing new errors, such as classifying diatoms as plants. [18] These were all new edits, not reverts; but disturbingly, every edit was marked as minor, and given the misleading edit summary "Restore article to last good version." [19] The bot also edited at least one article that it had never edited before, [20] despite Martin's assurance that it had only edited articles created by Anybot and not since edited by a human. [21] I have now reblocked the bot.
In summary, this bot has been a complete disaster from start to finish. Martin may have the best of intentions but he has presided over a monumental screwup and his bot cannot be trusted at any level. I am seeking to have Anybot deflagged and indefinitely blocked on the grounds that
Hesperian 03:05, 26 June 2009 (UTC)
You know, it would be nice if these articles were ever examined in a way that showed fewer errors instead of more in number and kind. The list with articles that anybot touched but did not create contains a whole new mess of errors, each unique, and each will have to be checked and fixed by a human.
Also, the bot's synonymies are wrong, so probably 100% of its redirects should also be deleted if they can't be 100% checked, although user Hesperian is planning to deal with that.
I hope BAG makes certain that bot created articles in the future are coded properly to NOT overwrite existing articles. [22] [23]
It's hard to understand how the bot was ever allowed to continue the first time it was noticed it was doing this. Obscure algae require expertise programmers may not have, but code that allows a bot to overwrite existing, unrelated text, is a major and inexcusable error.
This group tends to ignore comments made by IPs-you don't respond to my posts. But, if you, as a group, did not ignore IPs, someone might have caught and stopped this mess long before it reached this level. The IP 213.214.136.54 edited over a thousand articles, correcting the most egregious errors, and all of his/her edits and hard work are slated to be deleted.
IPs contribute a lot of excellence to wikipedia. I can't stop you from ignoring my every post, and setting an example to bot operators that this is how to act (as Martin acted), but the wikipedia community has decided over and over to allow anonymous IPs to edit.
If this group does not respect the community consensus, it's no wonder that it allows the creation of messes that put wikipedia in disrepute.
A group that can make this much work for other writers of the encyclopedia should be a part of the community, not a non-responsive law alone.
That's just my opinion on the matter. -- 69.226.103.13 ( talk) 06:29, 27 June 2009 (UTC)
Replies below are chronologically before many of the replies above. This break splits up comments and replies, which is unfortunate. - Jarry1250 [ humourous – discuss ] 20:18, 27 June 2009 (UTC)
On the question of what can you do? How about implement the most basic type of rules that programmers use when paid to write code? A simple start is demanding algorithms, maybe only from new programmers. Some programmers have many bots and few errors.
Any competent programmer can read another's algorithm and see they've missed the most basic things like initializing variables (probably why Martin's bot added bad lines of text and created bad taxoboxes, if the information had been gathered from a prior genus, but the next genus didn't mention spore types or its taxonomy, it just used leftover information), no protection against deleting an entire existing article. This is not genius level programming.
Then get specific approval from interested editors in their arena for all bots creating articles-don't just wait to see if anyone objects here, get positive approval. After a bot's initial run creating articles, post the list of its articles on wikiproject plants or somewhere and ask writers to check off on all of the articles.
-- 69.226.103.13 ( talk) 18:20, 27 June 2009 (UTC)
I think the anybot edits show allowing a bot to create articles has potential to create an even bigger mess than this one. This appears to be because of a lack of basic coding requirements and fundamental safeguards from the BAG prior to and once a bot is given approval.
Some writers are now battling me verbally to fight this realization. It won't change the situation as it is: bots on wikipedia are not controlled.
As someone else pointed out, if a meat editor had created these articles they would have been permanently blocked as a vandal once they were told of the problem and failed to stop. This block, in the case of anybot, would have occurred after the test run, not after 4000 articles.
I don't think bot owners want to hear this, and I've said my piece, and taken enough insults in exchange. This is a familiar wikipedia response to an editor, particularly an IP or newly registered user, pointing out a problem with wikipedia. This is how wikipedia winds up again with egg on its face in the news: wikipedia editors don't want to hear what's wrong. That's why this mess wasn't cleaned up in February: established editors refused to listen, a phycology editor, and the bot owner, and once a bot is approved its owner is allowed to do what he wants. -- 69.226.103.13 ( talk) 20:21, 27 June 2009 (UTC)
I understand that BAG gets a fair number of passers-by with strange ideas, but the regulars on this page should note that Anybot has done a vast amount of damage, and has caused many people, particularly 69.226.103.13 and 213.214.136.54, to waste an enormous amount of time. It's not BAG's job to monitor every bot, or to clean up mistakes. However, it would be decent to acknowledge that a big problem has occurred, and enquire whether something could be learned from the incident. One lesson seems clear: When a bot is approved to mass create articles, and if the subject matter is such that normal editors can't tell if the content is good or bad, BAG should apply a condition that periodic checking by the relevant subject project must occur: a pause of at least one week after every 100 new pages, and positive approval on the project page (not absence of objections), until the subject project page approves an uninterrupted run. That condition should be required regardless of current consensus on the subject project page.
Also, the statement by the bot owner that "somebody has been running the bot without my knowledge" needs serious investigation, or at least an acknowledgement that BAG members would like to investigate but are unable to do so due to lack of time or whatever. The particular bot owner is not important, it's the general claim that needs investigation.
Finally, I would like to add my voice to those who have elsewhere thanked 69.226.103.13. Thanks for the enormous amount of effort you have applied to investigating and resolving this situation. I know that ThaddeusB has issued somewhat of an apology, but I urge ThaddeusB to strike out the two sentences above starting "First of all...". Under normal circumstances, the text above would be very satisfactory, but owing to the particular circumstances of this case, it is not. Johnuniq ( talk) 04:52, 29 June 2009 (UTC)
I don't come here very often, but when I do, I consistently get a defensive, even rudely so, response, as though the BAG thinks it has a mandate to defend all bot operators against the horde of clueless non-coding lusers who don't appreciate their work.
I once came here to register my dissent against the BAG's decision to approve a bot that reverted newbs on certain articles for the sole reason that they were newbs. I had, and still have, serious philosophical objection to such a bot. I carefully laid out those objections. The response was "Bots make mistakes. So do you. Show some good faith." [25] Sadly, this is a fairly typical BAG response.
In the present case, I brought to the BAG a request for information on how I could document the Anybot situation, so that the BAG would take the situation into account when considering future applications. I concede that I started off too aggressively, so I won't take you to task for the defensive response. But for a BAG member to then rush off to the main discussion thread and precipitately declare that it was "all heat and no light" was unhelpful to say the least.
I really think there needs to be a change of focus here. "BAG" does not stand for "bot advocacy group". At the very least, you guys should be maintaining files on each bot you approve, soliciting feedback on performance, and proactively responding to reported problems.
Hesperian 05:40, 29 June 2009 (UTC)
I'm sorry, I tried to read this page, I honestly did. But the only thing said in the mess above is a bunch of pointing fingers. Hey guys, a protip on actually fixing things, is actually coming up with something constructive instead of ripping into each other. Furthermore, I apologize to 69.226.103.13 for not knowing the bot approval process. Q T C 00:41, 30 June 2009 (UTC)
BAG isn't a magical mythical entity that can catch subtle logic errors in all programming. As has been pointed out the bugs found during the trial process were vetted and reported to be fixed.
As also has been pointed out, BAG takes into account issues raised by the community during the approval process. Again as has been pointed out, community response during the approval was minimal.
BAG, like all the other various people on Wikipedia, is made up of volunteers who do this in addition to the things they do elsewhere on Wikipedia. It would be a pointless waste of effort to sit here all day checking every bot edit. BAG like AIV/3RR/UAV depend on people to report the problem.
Unapproved bots are block. End of story. However like above, we cannot go around checking every single edit made on Wikipedia to see if it conforms to all the previously approved tasks, like above it depends on people reporting this.
Thank you for Assuming Good Faith. It's hard to reply to people whose only comments are making baseless accusations against you.
Please do the same.
If you'd like to have an honest discourse on ways to improve the process please feel free to start a new topic. Q T C 00:53, 30 June 2009 (UTC)
The point of this post addressing me personally, and of Anomie's, are to avoid the topic. Again, no assumption necessary, the lack of input BAG coupled with the offensive defensiveness, when threads are not entirely ignored, is the evidence.
The BAG shows no reason it should have authority to give bots the go ahead. It does not monitor the bots. It does not check the code. It takes lack of community input as consensus. It reads the input anyway it wants. It ignores concerns posted about the bot.
Bureaucrats and the wikipedia community should find another venue to address bots for the encyclopedia, a venue where questions are answered, where threads raised by IPs aren't completely ignored when they're about issues that could bring the encyclopedia into disrepute.
Anybot made a mess due to its poor programming, its owner being the one given the power to unblock it, the BAG and the bot owner not responding to concerns (like my unanswered thread above) and personally insulting editors who raise legitimate issues in an apparent attempt to sidetrack the legitimate issue. Again, an opinion formed from the evidence on BAG boards.
-- 69.226.103.13 ( talk) 04:28, 30 June 2009 (UTC)
My 2 cents.. a bot shouldn't be "writing" articles anyway and I hope such a bot is never approved again. - ALLST✰R▼ echo wuz here 08:23, 30 June 2009 (UTC)
I've tried to say something that would lead to discussion on improving the reliability of bots operated on wikipedia. This is part of working as a team. This group is not ready for that discussion, because it involves tough issues like, should the BAG be the group with bot-authorization powers, and it involves working with a larger team: team wikipedia. The larger wikipedia community may want to address this question some time.
Bureacrats should question whether they should flag bots on the say-so of a group that denies any responsibility for how bots are operated. That's my opinion. This group is not interested. I can't change that. -- 69.226.103.13 ( talk) 18:06, 30 June 2009 (UTC)
By "do more", I specifically mean adding this to Wikipedia:Bots/Requests for approval/InputInit:
<!--Source code available: e.g. a link to the source code, "To BAG/Admins by request", "Standard pywikipedia"/"AWB"/etc. Be sure the bot account's password is not given out! --> '''[[Source code]] available:'''
One of the things BAG is supposed to do is ensure that bots are technically sound; having the source code can help us do that. Note that I'm not proposing we require source code, just that we start specifically asking for it. Unless there are objections, I'll add this in a few days. Anomie ⚔ 20:56, 30 June 2009 (UTC)
I'd also suggest asking "Exclusion compliant?" in the request, just as a suggestion for people. – Quadell ( talk) 14:02, 1 July 2009 (UTC)
I added the fields, leaving out the "To BAG/Admins by request". Anomie ⚔ 12:40, 3 July 2009 (UTC)
See Wikipedia:Bots/Requests for approval/BOTijo 2. This may have been approved before case-insensitivity in the search field was implemented and the task may now need to be revoked - it seems to be creating many unnecessary redirects. – xeno talk 21:12, 24 June 2009 (UTC)
bugzilla:19882 - for interested parties... – xeno talk 19:58, 22 July 2009 (UTC)
Can this bot be withdrawn please? The way the bot is performing is not satisfactory. See here, here and here. The last one is a case of the bot ignoring a nobot instruction. What the bot is trying to achieved can be achieved faster, and with less damage, by human editors. Mjroots ( talk) 05:45, 3 July 2009 (UTC)
I think most bots check templatelinks (so template redirects don't really matter); trying to parse wikitext would be insane.... -- MZMcBride ( talk) 14:39, 10 July 2009 (UTC)
There's a proposal at BON to make a minor change to the way ClueBot clears the template sandboxes. I don't think this requires a BRfA, since there is no opposition to the proposal and it's rather minor. Please add your opinion to the thread - Kingpin 13 ( talk) 03:38, 18 August 2009 (UTC)
Please see Wikipedia:Village pump (policy)#Proposal: Any large-scale semi-/automated article creation task require BRFA and comment (there, not here). Thanks! – xeno talk 18:15, 18 August 2009 (UTC)
Would anyone mind if I started this task back up again? It's been awhile since I did it. I know the prod time frame was upped to 7 days from 5, so I'd up my wait time from 7 to 9 days. I'm probably not required to ask this, but I figured it can't hurt.-- Rockfang ( talk) 09:12, 11 September 2009 (UTC)
Someone had a good regex for catching unlabelled dabs to skip them, like "' ' ' *(can refer to|can be one of|is one of)". But I can'tfind it. Anyone? Rich Farmbrough, 17:04, 12 September 2009 (UTC).
I think this may have been proposed before or informally discussed on IRC, but never went anywhere. There are very few bots that we can really apply "precedent" to, mainly because even if tasks are similar, most bots use custom-made code. However, interwiki bots typically all use Pywikipedia's interwiki.py. So I propose a new system for expedited approval of certain interwiki bot requests:
Its unrealistic to hold operators who run bots on half a dozen projects that aren't their home project (including enwiki) to the same standards that we hold operators who only operate bots on their home project. Given the reliability of interwiki.py compared to the average bot script, its also unnecessary. As long as the operator is aware of the known issues (don't run it in the template namespace ... are there others?), there shouldn't be any problems. Mr. Z-man 15:51, 1 August 2009 (UTC)
I just wrote {{ Delayed notice}}. Since lots of you code bots, check requests, do trial runs, etc..., this could prove useful in helping you keep track of stuff. Not sure where the best place on WP:BOTS was to post this, but this seems the highest traffic talk page (and thus has a higher outreach). Move this somewhere else if you think there's a better place for it. Headbomb { ταλκ κοντριβς – WP Physics} 14:13, 15 August 2009 (UTC)
Very handy. Rich Farmbrough, 14:12, 17 September 2009 (UTC).
It was recently agreed on Wikipedia:Village pump that any large-scale automated or semi-automated article creation task should require BRFA. One concern was that it would be impossible to follow up, has this been the case? Take for instance Sasata's recent large creation of fungi articles. Or Fergananim's very short articles on medieval Irish aboots, like Gillabhrenainn Ua hAnradhain. According to the new regulations these should both require approval, but I can't see that this has been done? Lampman ( talk) 14:57, 28 September 2009 (UTC)
Why is it bot policy that a bot will most likely be approved after a community discussion? Isn't it that a decision to approve a trial will be made after discussion?
After a reasonable amount of time has passed for community input, an approvals group member will most likely approve a trial for your bot and move the request to this section.
What? -- 69.225.5.4 ( talk) 18:23, 29 September 2009 (UTC)
Can we give more than 3 minutes for interested users to examine trial runs? [26] There seem to be many excuses for why community consensus is not needed, not given, no time for it. In this particular bot case, the task is straight-forward, responsible and responsive bot owner, dealing with deprecated code, etc., etc. But, sometimes I want to examine the trial runs after they have been run, but before the final approval, to see if they are problems that show up during the trial. A good reason for doing trials in the first place is to examine the results.
3 minutes is not enough time, and I don't see the urgency in approving this bot in 3 minutes. A couple of days for interested users to examine the trial run is not unreasonable, imo, no matter what the task.
One reason for instruction creep, by the way, is that editors seem to other editors to be overlooking common courtesies and common sense. I don't see why the instructions should say wait 2 days or wait more than 3 minutes, except that it is apparently not obvious that waiting more than 3 minutes gives time for community input.
There was no urgency in approving this bot, so allowing more than 3 minutes for the trial run to be examined by interested parties would have been a simple courtesy. -- IP69.226.103.13 ( talk) 20:58, 22 October 2009 (UTC)
So, it boils down to: after community input, a BAG member "will most likely approve a trial for your bot," (without any reference to the community input), then based entirely on technical functionality, the bot will be quickly approved after the trial. The bot owner is solely responsible for the actions of the bot.
So, BAG does nothing, but allow for a community input board then fast forward bots to be flagged by bureaucrats, or whoever flags bots... Interesting.
I will then move forward with this understanding of BAG's role on en.wiki. -- 69.226.111.130 ( talk) 20:49, 23 October 2009 (UTC)
We are not a bureaucracy, we can just take action without putting everything up for discussion first. If the community objects then we can act on it, otherwise we can just get the work done. If something goes wrong with a bot then that is what the revert button is for. Chillum 04:58, 24 October 2009 (UTC)
BAGbot seems to be mostly dead lately and ST47 doesn't seem to be around anymore, so Wikipedia:BAG/Status isn't being updated and users aren't being notified when {{ OperatorAssistanceNeeded}} is used (it did a couple other things, but these 2 were probably the most important). I was going to make a replacement, but can't seem to find the time to finish it (i.e. de-crappify my hastily thrown together code). If someone wants to write a replacement for it, that would be much appreciated. If someone wants my code to start with (written in Python, using my framework), I don't recall if the current version actually works or not. Mr. Z-man 04:56, 27 October 2009 (UTC)
So, suppose someone wanted to run a bot that did admin activities, but was not themselves an admin. (I can't program, so no, this isn't asking about me.) Is this allowed? If it's not been considered, should it be allowed? Also, what about someone who is an admin, runs a useful bot, and is desysoped? (Or, what in the case where a regular user who runs a bot is banned.) I'm just curious how such issues are approached, so I can better know policy. Irbisgreif ( talk) 08:10, 15 October 2009 (UTC)
There was a remark that this bot was approved without community consensus. [27] ("Bots seem to get approved based on a technical evaluation rather than on whether they conform to bot policy by only making edits that have consensus, as happened here.")
The bot was approved in two days with no input from anyone else in the RFBA process, other than bot operator and single BAG member who approved the bot for editing thousands of mainspace articles after examing some trial edits made by the bot before it was approved for trial edits, also, thereby, eliminating the opportunity for community input on the trial run. [28]
This bot only edits pages which have the parameter blank already, but the parameter does not show up in the article if not filled in (| oclc = | dewey = congress = ), and whether it should be filled in by a bot should be discussed with the wider community to gain consensus for the bot task.
I would like to see some time pass before bots that impact article space widely are approved.
Bot policy requires that a bot "performs only tasks for which there is consensus," means that bots should not be approved for main space tasks without community input. One BAG member does not constitute community input, imo.
Link to discussion calling OCLC linkspam-controversy.
[30] This link is not about CobraBot. I include it because a quick search shows that OCLCs are something that generates a lot of discussion on en.wiki. This discussion mentions, for instance, that consensus shows "OCLCs are considered superfluous when ISBNs are present." This discussion shows that, contrary to being approved, the CobraBot task maybe should have been denied as there might not be community consensus for the task at all. Consensus is required by bot policy. None was asked for in this approval. No time for community input was allowed before community approval. A prior bot was stopped from doing this task by the community. Maybe this bot task should not have been approved against community consensus.
-- 69.225.5.183 ( talk) 07:33, 18 October 2009 (UTC)
1RR with bots is a good rule to follow with bots.
Adding thousands of links without community input is a major concern. However, in the case of mainspace edits that contribute to thousands of article additions or changes, I would like to see community input at least given a chance in the future, and anything that makes this explicit to BAG members would be a way of addressing the situation.
At this point, however, I would also like community input about rolling back the bot edits, since they were made without community input, and they link externally. This should not have been done without major community input. And, in the case of future editing mainspace with a bot adding external links, I think the default value should be to not do so if the community has not positively spoken for adding the link. -- 69.225.5.183 ( talk) 02:57, 21 October 2009 (UTC)
I think that "say no to linkspam" says it all, no matter what the age. There was no consensus to actively link to this site, the bot move forward without gaining any community consensus, making en.wiki the "feeder site" to thousands of links to worldcat. The community should decide whether or not the infoboxes provide links to this particular website, not BAG, particularly since BAG's fallback is to generally approve a trial, then approve the bot for flagging based only on technical issues.
BAG itself seems to indicate there is no design for community input: a trial is "most likely" approved, without rgard to community input, then the bot is approved solely on technical issues. Linking thousands of wikipedia pages to worldcat required community consensus, not rapid approval. If this is done here it could be an easy avenue for vandalism. -- 69.226.111.130 ( talk) 21:05, 23 October 2009 (UTC)
"I note that User:Cybercobra commented that the bot was being suspended "pending an WP:ANI thread"[184]. If that was changed to "pending a much wider consensus that this is an appropriate task for a bot than the one person who approved it" I would be willing to close the discussion here, because it would not need administrator action such as blocking. I think that there's a much wider issue at stake here about the fact that one editor can put up a bot for approval, and it can get passed by one other editor because it works, without any consideration as to whether there is any consensus about whether the bot's actions are acceptable. At least if we are going to allow that to happen we should have an understanding that a bot operator should suspend a bot, pending discussion, in response to a good faith request by an established editor. WP:BRD is a well-known adage, but, when a bot is doing lots of bold edits it's impossible for a human to maintain the same pace to revert. Phil Bridger (talk) 23:05, 28 September 2009 (UTC)"
I've added a new section to the debate and have been reverted. Twice. Without any reverted politely including the insight as to where, besides "a new section" subsequent comments should be made.
So, if not in a new section according to the directions, where should the subsequent comments be made? [31]
Please, could BAG be more accurate in the directions? So many comments about users don't want to participate, but when editors do participate according to directions they are rudely reverted without any help.
So, where? And put that location on the BRFA closure template. -- IP69.226.103.13 ( talk) 16:49, 29 October 2009 (UTC)
While reviewing Betacommands Arbcom decisions and etc., I see that the time of approval and BAG's lack of monitoring community consensus have been raised as issues before. I would like the waiting time for post-trial approval to be at least a week. I would also like bots not to be approved when there is no community consensus. As Cobrabot task 1 had no community consensus I would like it blocked and its flag removed. This changes the RFBA for Cobrabot task 2. I would like that revisited, also, in light of the speedy approval. -- IP69.226.103.13 ( talk) 19:15, 29 October 2009 (UTC)
Of course I'm not arguing sincerely. Every time I post a sincere comment for discussion I get derailed, by Betacommand and you his ardent supporter, with ridiculous comments about "waiting for Godot," when it's well know what community consensus is, by hyperbole about waiting forever, and closing down BAG. I can't argue any of that with sincerity, as its purpose was not to raise sincere issues but to attack me personally and avoid discussing the issues at all. --
IP69.226.103.13 (
talk) 20:50, 29 October 2009 (UTC)
In fairness to IP69, the CobraBot 1 BRFA was not appealed by anyone (I think that's what they're trying to pursue here, along with several other things) and is still valid; I only paused running it until the discussions about it were resolved (with no consensus against the task); the reason task 1 isn't running currently is because CobraBot successfully completed its pass over all the articles in Wikipedia using {{ Infobox book}}. If IP69 wants to appeal task 1, they are free to do so. -- Cybercobra (talk) 21:32, 29 October 2009 (UTC)
I hadn't considered that. I personally think modifying the template to make it an inactive link would be preferable, kinda the best of both worlds, since that was one of the big complaints about the OCLC< but others may have ideas about the best course of action, if any. -- IP69.226.103.13 ( talk) 21:43, 29 October 2009 (UTC)
I posted a discussion as suggested. [33] I notified everyone who had commented on that page. I have not notified anyone who commented in the AN/I, although there may be other interested users from that list. I will also post a link to this discussion at the Village Pump. -- 69.226.106.109 ( talk) 06:55, 30 October 2009 (UTC)
I searched to find discussions about the most important part of the OCLC issue, and I found a place where it was being discussed. I invited editors who had expressed an opinion in the past, inviting all editors in a discussion, whatever side of the issue they were on, delivering a neutral invitation and not opening the discussion with my own opinion on the issue. Invited editors came by, expressed their opinions on the matter. It appears there is support for linking the OCLC number externally to the worldcat website. Not just by numbers, but generally there were positive expressions for the functionality of the usage.
I have an additional concern about the linking, that I would like to be addressed, there, but I don't think it stands in the way of this issue.
In my opinion this is one very positive and straight-forward way to gain community input on a bot: 1. identify any issues that are community concerns rather than mere technical concerns. 2. find the community this impacts 3a. neutrally invite members of that community to a neutral and logical location for discussing the issue 3b. create the discussion.
It seems to me this isn't that hard. The animosity of the bot boards toward outsider opinions makes it hard have an opinion here. Wikipedia isn't the only place where I participate in discussions with strangers on the web, but it's the only place I have such a negative reputation. I'm not the only one to mention how hostile the bot boards are to outsiders. If you want community input, you have to learn to identify the community as people who are potentially outsiders. If someone is not concerned in a negative way about an issue, they may not bother to speak up. If they are concerned, and they speak up here, they need to be deal with for their concerns directly, in a civil manner at all times.
Once the underlying community impact concerns are dealt with the bot is just a matter of technical approval. Does it do what it says, is the operative willing and able to respond to community issues that may arise, a trial run, does the trial run raise any glitches. None of this really needs more than monitoring by bots members, if they are monitoring that. It also kinda negates the issue of timelines if the community support is in place and the bot is reduced to technical matters.
So, imo, this is how it can be done in a way that gets community support for bots. Find the community that is impacted, politely and neutrally seek their input, allow them time to speak, then move forward on technical matters. It seems to me, also, from the bots policy, that this actually what was originally intended for how the bot approval process should work. However, bot members have moved from failing to get community input, due, imo, to how hostile this area is to outsiders, to saying that community consensus is not necessary or that if the community doesn't offer any negative input then the bot can go ahead.
-- 69.225.3.198 ( talk) 21:53, 2 November 2009 (UTC)
To satisfy the concerns of the above and punt issues back to the community before mistakes can be made, and following on from the above, I propose the following:
Issues may arise around "appropriate", but I do not think we should formalise this in some policy. Instead, as guidance, the question must be of impact - if the bot edits mainspace content then an attempt must be made by the operator to get some wider approval.
This is not wildly variable to what we do now, except we place the onus on the bot op to obtain consensus in advance of an approval request, allowing us to concentrate on technical aspects. All we have to do, then, is to be happy that the criterion in bot policy have been met. This can also give the flagging crats a discussion that is independent of BAG.
To conclude, a minor change, but one that improves our workflow, cuts down on erroneous approvals (at least from a consensus point of view) and improves the appearance of BAG and what we do. Fritzpoll ( talk) 14:14, 3 November 2009 (UTC)
{{
BOTREQ|advertise}}
more liberally. And IMO, whether the community consensus happens before the request or whether the request gets put on hold while consensus is sought is no big deal, since either way BAG will have to look over the consensus discussion and since the major problem with our workflow is in most BAGgers being relatively inactive rather than in having too many discussions to manage.
Anomie
⚔ 16:56, 3 November 2009 (UTC)
"If your task could be controversial (e.g. most bots making non-maintenance edits to articles and most bots posting messages on user talk pages), seek consensus for the task in the appropriate fora. Common places to start include WP:Village pump (proposals) and the talk pages of the relevant policies, guidelines, templates, and/or WikiProjects. Link to this discussion from your request for approval."
This looks fine to me. The instructions are not as onerous, imo, as many on wikipedia, and they're actually written and designed for the inexperienced user to be able to follow them. A bit strange the place on wikipedia where a template can actually be followed by a new user is the template least likely to be used by an inexperienced user.
And, thanks Anomie, for changing the wording earlier on the approval of a trial. -- 69.225.3.198 ( talk) 00:32, 4 November 2009 (UTC)
Speaking of uncontroversial tasks, is there any way to get the process moving on RSElectionBot? I fulfilled the bot request within about 8 hours of when I was asked, but now it looks like the bot's going to just stagnate on BRFA and the election will be over before anything happens with it. rspεεr ( talk) 09:30, 3 December 2009 (UTC)
There are a number of uncontentious, trivial changes (for example, changing 'Image': links to 'File:') for which no bot solely performing them is likely to be approved, even if they were all grouped together.
AWB applies those fixes as a matter of course when doing other work.
Would it be reasonable to have a "bot" that represents these trivial fixes, and approve each fix; this "bot" could be then coded by the various framework maintainers and it's approved tasks could be applied by any real bot in addition to their own approved tasks? Josh Parris 03:39, 30 December 2009 (UTC)
Is there any automated process for cleaning out Category:Open_Wikipedia_bot_requests_for_approval? Josh Parris 03:39, 3 January 2010 (UTC)
I'd like to make some bulk edits such as removing {{ Gaelic Games in Ireland}} from pages that it shouldn't be listed on or adding {{ GaelicGamesProject}} to pages in Category:Gaelic Athletic Association and it's branches which aren't tagged and as such am not requesting a new bot. Do I need to request approval here or can User:GnevinAWB be changed to allow automatic editing ? Gnevin ( talk) 16:34, 8 January 2010 (UTC)
Hi. Since User:Fritzpoll has left, that leaves his bot inactive. I've offered to take over Wikipedia:Bots/Requests for approval/FritzpollBot 4 for him, and am expecting the source code shortly. So I'd like an okay from a BAG member (other than me) to run this (previously approved) task with User:KingpinBot. And was also wondering if they'd be any other volunteers to take over FritzpollBot's other tasks? Best, - Kingpin 13 ( talk) 13:22, 19 February 2010 (UTC)
AWB has a limit of 25K when creating lists of articles to edit. In order for the list tool to create a longer list, one needs a bot account. Note that this is true even if the number of pages that will actually be changed is far less than 25K. In my case, I am trying to fix issues in articles that transclude {{ Infobox single}}, o which there are more than 25K. The great majority of the articles won't be changed: filters in the settings of AWB will cause most articles to be skipped.
Another project that is directly related to this is an investigation of if/how to merge {{
Infobox song}} into {{Infobox single}}
. To do that, I want to find articles that use certain template parameters and make sure that my proposal for a merged template won't break any articles, or I will find and fix such articles first, etc. Either way, I need to be able to search all the transclusions of {{Infobox single}}
, not just the first 25K.
So... how do I get a bot account for this use? The request process here seems oriented to standalone bots. — John Cardinal ( talk) 21:51, 28 February 2010 (UTC)
During which step on the list should I have my bot readied? Should I have finished it before I even suggest it? I find the instructions here very confusing. Awesomeness talk 16:03, 5 March 2010 (UTC)
I am an experienced admin who maintains and runs the OpenOffice.org Forums as well as a couple of wikis using the Wikimedia engine, but I am also involved in Wikipedia as a normal editor. I am currently doing a project with another editor to add a CFS sufferers category to Bios of CFS sufferers very much the same way that many people who are HIV positive or suffer from other widespread illness are tagged. (See my sandpit CFS people). If we come to the point where we decide to implement these category insertions, then I can see that they are four sensible possible approaches:
{{fullurl:XXXXX|action=edit}}
links, fire up a new page for each, and use a small client-side greasemonkey script to automate this.(4) isn't one of the current standard bots, but the personal overheads of seeking approval for this just don't seem worth it. If this were one of my own wikis then I would just use (3). However, since I would be writing back to (main) name space articles, then I assume that the precautionary principle applies and I would therefore still need full approval. (1) is just tedious beyond words and such repetitive manual tasks are very prone to the risk of keying errors. So by my understanding I am left with (2) and as long as I include a per page visual check and the actual save is manual then this falls within Wikipedia:Bot policy#Assisted editing guidelines and therefore doesn't need the bureaucracy of formal approval. Am I correct in this? -- TerryE ( talk) 17:42, 8 March 2010 (UTC)
Greetings-- I'm hoping to put together a read-only bot that I can use to develop statistical sampling techniques of Wikipedia. Since the purpose of the bot is sampling, it shouldn't be a bandwidth hog in any way. In particular, not unlike some researchers, I'm hoping to compare site traffic with edit traffic in a statistical way. Would this kind of bot be approved? How can I develop the software before it is? Thanks, Owensmartin ( talk) 23:30, 10 March 2010 (UTC)
If you're not editing you don't need a separate bot account or approval (although an account with a bot flag may be useful as it allows for higher api queries). Please make sure you use a descriptive user agent with a contact email address in it or domas may ban your IP address. -- Chris 08:20, 11 March 2010 (UTC)
I'd like to transfer ownership of User:SuggestBot from myself to User:Nettrom, who has both more time and energy to keep it going and make it better. I've updated the bot's page itself, I think. Let me know if there's anything else that needs doing. -- ForteTuba ( talk) 21:10, 8 April 2010 (UTC)
Just wanted a quick go ahead from another BAG member to change KingpinBot's wikiproject tagging from using AWB to using C#. Cheers, - Kingpin 13 ( talk) 13:16, 24 May 2010 (UTC)
So I have followed the process documented for seeking approval for a bot. Now the question is, what do I actually have to do to get it reviewed and hopefully approved? -- Traveler100 ( talk) 08:19, 21 July 2010 (UTC)
If I want to achieve the same result by a slightly different method does this need a new bot and new approval or can it run on the existing bot user? To be more specific can User:People-photo-bot#Specification 2 (proposal) be run under the same bot as User:People-photo-bot#Specification 1 (active)? -- Traveler100 ( talk) 14:41, 15 August 2010 (UTC)
This is about the bot Ganeshbot4. The discussion about whether the bot should be approved for the task of creating gastropod articles is about this bot creating 600 species Conus articles. The bot owner says the bot was approved for creating thousands (or more) of gastropod species articles from WoRMS. I do not see this in the approval discussion.
The bot is creating bad articles. WoRMS is not as careful about its taxonomic experts as it should be. This bot is creating articles about species that are listed from a single 18th century identification followed by an amature's out-of-print book, that has been verified by the WoRMS "taxonomist"--an amature shell collector.
I was surprised by some of the gastropod species listed next to a new article I created. I attempted to verify the species were correct, but could find no sources except for WoRMS, the out-of-print book, and, now Wikipedia.
What gives? Was this bot approved to add all of WoRMS to Wikipedia? I think Wikipedia might be creating species. JaRoad ( talk) 22:40, 15 August 2010 (UTC)
I checked some articles created by the bot. They have incorrectly or non-validated taxonomic names, subspecies listed as the wrong species, bad descriptions, and the taxoboxes are junky. I have not found one article created by this bot that is correct. JaRoad ( talk) 23:30, 15 August 2010 (UTC)
I posted information about this bot at Wikipedia:Administrators' noticeboard/Incidents. I think the bot should be stopped until editors can clean up its articles. I think its articles should be deleted if editors are not willing to individually fact-check every one. This is irresponsible article creation by Wikipedai. A google search turns up these articles at the top of short lists. Please be more considerate to Wikiepia readers when approving the creation of 1000s of species articles from databases. Maybe most of the articles will be okay. But I checked 6, and the 6 I checked were all bad. The bot operator does not have good answers to questions about where the approvals are, who is making sure the articles are correct, or anything. JaRoad ( talk) 23:58, 15 August 2010 (UTC) [35]
I request this bot be blocked, its 15,000 articles individually checked and verified by gastropod editors or deleted if that is not possible, and that the bot's task for creating an unlimited number of species stubs be discussed formally on the requests for bot approval page rather than in a secondary discussion elsewhere.
In response to a post about another bot that created bad species articles ( with disastrous results. I still have nightmares about Wikipedia:Articles for deletion/Anybot's algae articles.) the bot operator said,
"This task is no way close to what AnyBot did. I had clearly mentioned that about 580 stubs will be created (nothing more). The list of articles that will be created are already listed in the data page (first column). The point of creating these articles is so that the Gastro team can come in and expand them. Regards, Ganeshk ( talk) 01:45, 18 March 2010 (UTC)"
I added the bold marks to Ganeshk's text, they are not in the original.
The bot operator aske for this approval of "580 stubs ... (nothing more)" for a single species, Conus, then decided he did not need approval for other families, ":The bot approval was for the task, to create species using the WoRMS as reference. I don't see myself getting approval for each mollusk family. As for the 100 edit restriction, they were lifted at this discussion. — Ganeshk ( talk) 04:13, 16 August 2010 (UTC)"
Again, I added the bold.
Maybe the misunderstanding is strong enough that bots need to be explained better to this editor before he continues to run any bots on Wikiepda. In the meantime, this bot should be blocked and its approval for creating 15000 gastropod stubs should be revoked. JaRoad ( talk) 05:25, 16 August 2010 (UTC) JaRoad ( talk) 05:25, 16 August 2010 (UTC)
Ganeshk, there's nothing at the link you gave that relates to lifting your restriction on edits. Could you find a diff please. Thanks. Elen of the Roads ( talk) 10:02, 17 August 2010 (UTC)
I read the bot approval as a technical approval to run the under the project supervision. On hindsight, I should have alerted the BAG that I am planning to do this and get their advise. The bot has created 15,000 articles so far. The Gastropod project members have been adding additional information to these stubs and have not found any major issues with them. The project members provide me with a introduction sentence and approval to run a family. The bot then creates the species within that family. The full list of bot-created articles is at User:Ganeshbot/Animalia/History. I will wait to hear what the next steps are. — Ganeshk ( talk) 12:27, 17 August 2010 (UTC)
The facts here seem clear:
To move forward, I suggest the following:
Opinions? Anomie ⚔ 16:18, 17 August 2010 (UTC)
The bot has been approved per Wikipedia talk:Bot Approvals Group/Archive 7#Wrong way of the close a BRFA. How the bot work has been for example also in the Wikipedia Signpost: Wikipedia:Wikipedia Signpost/2010-04-26/WikiProject report. The Bot has unified support by Wikiproject Gastropod members. Maybe Bot approval group would like to know in detail, how the bot work: 1) User:Ganesh will get clear instructions what to do by me or directly at Wikiproject Gastropod talkpage; 2) User:Ganesh will run the GaneshBot and it will do the task. That is all. I can personally guarantee that the bot runs OK. There is no need restrictions or restrains from "Bot Approvals Group" because nobody of the "Bot Approvals Group" is able to evaluate if informations generated by a bot are correct or not (see for example another overview what the bot have done User:Snek01/Taxonomy). By the way, it even seems that "Bot Approvals Group" is even unqualified to close BRFA correctly. So feel free to formalize bot's approval, but do not restrain useful work that is being done.
I will give you an example to compare:
-- Snek01 ( talk) 00:03, 19 August 2010 (UTC)
User:BrokenAnchorBot uses AutoWikiBrowser to make it's edits. The code behind it just exports an AWB xml settings file with simple, case sensitive replacements and a list of articles that need to be edited. My bot uses the fact that AWB can automatically show the replacements that were made in the edit summary. Sometimes AWB creates summaries like this because [[this is a long article name#and a section|and a label]] can end up being quite long, and edit summaries have a limit to how long they can be, breaking the link in the summary. I would like to improve these edit summaries by avoiding {or at least mitigating) such broken links. A smarter edit summary could omit the labels to make the text in the summary shorter, or do a few other things to be smarter about this. I'm not a windows guy, and learning C# doesn't appeal to me all that much, so I'm reluctant to take on the task of an AWB plugin myself. Should I submit a BRFA if I do decide to ditch AWB? I'll add any code to the repo before anything goes forward. Winston365 ( talk) 06:29, 26 July 2010 (UTC)
Would tagging articles en masse be wanted since I was considering learning programming to create a bot specifically designed to go through articles and tag them with the relevant issue needing addressing whether it be wikifying or a quick copyedit. If a bot like this would just be too hard or unwanted is my question. I was referred here by Arbitrarily0 a while ago but I never got to doing that (out of laziness). Ғяіᴆaз'§Đøøм | Champagne? 06:54, 13 September 2010 (UTC)