This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 55 | Archive 56 | Archive 57 | Archive 58 | Archive 59 | Archive 60 | → | Archive 65 |
Per consensus
here, can we have a bot roaming around to find all instances of the template and its associated redirects on video game articles to remove the |rating=
or |ratings=
parameter?
TeleComNasSprVen (
talk •
contribs) 22:54, 26 December 2013 (UTC)
This bot would change all templates that need to be substituted. For example, if somebody put {{ AfD}} instead of {{ subst:AfD}}, the bot would fix it. buff bills 7701 16:42, 27 December 2013 (UTC)
{{
substituted|auto=yes}}
on the template (and temporarily add it to
User:AnomieBOT/TemplateSubster force if necessary). For existing templates, be sure there is consensus for the template to be substituted by a bot before doing so.
Anomie
⚔ 16:48, 27 December 2013 (UTC)Hi ya'll. Would it be possible for a bot to run through all of the articles on this list (NA-class, unassessed quality articles for WP:WikiProject Michigan) and set the talk page project assessment to "class=redirect"? There are currently 1005 redirects on the list; it would be quite possible to do with AWB, but it would seem much easier to do with a bot. I am, however, a complete newb when it comes to bots, so my idea of "easier" may in fact be...not so much. Thanks in advance, Dana boomer ( talk) 21:34, 29 December 2013 (UTC)
What if we had a bot that could create redirects upon command? I'm imagining the following features:
We wouldn't need to worry about false positives or other "normal" bot mistakes, since the bot would only edit based on very specific human instructions. I'm thinking of this after discovering a pile of redirects to create: 49 of the 50 lines at 0 this edition of the sandbox need to be created, and while any of them is simple, I don't feel like spending the time (or asking anyone else to spend the time) creating all of them. Perfect bot task, if I'm not misunderstanding something. Nyttend ( talk) 18:39, 20 December 2013 (UTC)
Is there a bot that can still be tasked with removing date links? There are a set of pages Wikipedia:WikiProject Missing encyclopedic articles/DNB Epitome 01 through to Wikipedia:WikiProject Missing encyclopedic articles/DNB Epitome 63 which carry linked years. These were originally created when linking dates was still in fashion. It would help with moving information from these Wikipedia pages to article pages if the links were removed. It would also help if dashes between the years to be changed, were changed to ndashs.
If there is no such regular bot job set up to do it, here are the expressions (that covers most cases -- but only tested on one page) for those bots based on AWB.
find | replace | regex | notes |
---|---|---|---|
– | – | no | |
— | — | no | |
(\]\] *\?.*)-(.*\[\[) | $1–$2 | yes | |
(\]\].*)-.*c\..*\[\[ | $1– c. [[ | yes | |
]]-[[ | ]]–[[ | no | |
([0-9]*\]\])([0-9]) | $1 $2 | yes | case where one number is next to another split them |
[\[([0-9]*)\]\] | $1 | yes |
list of pages
|
---|
|
-- PBS ( talk) 11:39, 21 December 2013 (UTC)
\[\[([12]?\d{1,3})\]\]
so it doesn't change valid non-year wikilinks such as
8086 and
80486.
GoingBatty (
talk) 14:57, 21 December 2013 (UTC)
find | replace | regex | notes |
---|---|---|---|
– | – | no | |
— | — | no | |
(\]\] *\?.*)-\s*(.*\[\[) | $1–$2 | yes | also remove a space after the dash |
(\]\].*)-.*c\..*\[\[ | $1– c. [[ | yes | |
\]\]-\s*\[\[ | ]]–[[ | yes | also remove a space after the dash |
([0-9]*\]\]\??)\s*(\d{4}) | $1–$2 | yes | case where one year is next to another, add a – |
([0-9]*\]\])\s*[79]([-\)]) | $1?$2 | yes | change a 7 or 9 after the year to a ? |
([0-9]*\]\])([0-9]) | $1 $2 | yes | case where one number is next to another split them |
[\[([0-9]*)\]\]\s\? | $1? | yes | remove a space between the year and the ? |
[\!li]\s*\[\[(\d{3})\]\] | 1$1 | yes | change a ! or l in front of a three-digit year to a four digit year |
\[\[([12]?\d{1,3})\]\]? | $1 | yes | only change valid years |
\[\[(\d{1,3})7\]\]? | $1? | yes | change a three-digit year followed by a 7 (which looks like an invalid four-digit year ending in 7) to three-digit year followed by a 7 |
Hello. Following
this discussion at the Footy project, please would it possible for all the instances of urls referencing a specific site to be changed from .htm to .html. The urls to be changed are of the form www.neilbrown.newcastlefans.com/
followed by a variable part and then .htm, e.g. www.neilbrown.newcastlefans.com/player/barriethomas.htm
. Some links to the site have already been fixed, but there are still quite a lot broken. Thanks in advance,
Struway2 (
talk) 22:00, 29 December 2013 (UTC)
(http:\/\/www\.neilbrown\.newcastlefans\.com\/(.*?).htm)(?!l)
$1l
@ GoingBatty: Think we all want to use the template when it's up to the job, but as an immediate fix we do appear to have a consensus for mending the URLs. Thanks for requesting clarification. cheers, Struway2 ( talk) 17:17, 30 December 2013 (UTC)
My idea is a bot that automatically updates sports statistics. Preferably, it would get statistics from the assorted [sport]-reference.com sites (e.g. baseball-reference.com basketball-reference.com etc.). It would pull information from an athletes' page every so often and update that athlete's Wikipedia article.
Thanks! Newyorkadam ( talk) 18:49, 30 December 2013 (UTC)Newyorkadam
The above reminded me I have long considered a request for bot updating of top-10 tennis ranking navboxes for ATP (men) and WTA (women). There are many in Category:ATP Tour navigational boxes and Category:WTA Tour navigational boxes, for example {{ Top ten Argentine male singles tennis players}} and {{ Top Australian female tennis players (doubles)}}. There are also continents like {{ Top ten male singles tennis players of countries in the Asian Tennis Federation}} and {{ Top ten European female doubles tennis players}}. And this one for the World has its own design: {{ Top ten tennis players}}. It has been off-season for ATP and WTA for 1-2 months with few ranking changes caused by small tournaments, but that will change next week. The ATP and WTA seasons start around 1 January and last 10-11 months.
Rankings are usually published each Monday (except the middle of the four two-week Grand Slam tournaments) at http://www.atpworldtour.com/Rankings/Singles.aspx and http://www.wtatennis.com/singles-rankings (they are different organizations with different formats and publishing times). In addition to updating the navboxes, a bot should ideally also add or remove them on the player biographies when a player moves in or out of a navbox. Category:Tennis templates also has some non-navbox ranking templates for permanent display in general tennis articles: {{ Current Men's Singles ATP Rankings}}, {{ Current Men's Doubles Individual ATP Rankings}}, {{ Current Men's Doubles Team ATP Rankings}} (not currently used), {{ Current Women's Singles WTA Rankings}}, {{ Current Women's Doubles Individual WTA Rankings}}. I don't know the copyright rules but two articles dedicated to longer ATP and WTA rankings like top-100 or more would also be nice. World rankings are very important in tennis because they determine the seeds and players in nearly all tournaments (except a few wild cards). Tennis is probably the biggest spectator sport for women.
An ambitious bot operator could also consider offering a bot to other languages, maybe by letting the bot provide raw data and call local templates for text and design. German and Italian have many tennis ranking navboxes. There are also some in other languages, and they might want more if they don't have to update them. I don't know whether there is a practical way to do it at Wikidata. The ranking of individual players for display in their own biography could of course be bot-maintained at Wikidata, along with other player stats. PrimeHunter ( talk) 00:06, 31 December 2013 (UTC)
Hello again,
some months (or even years?) ago, I requested a mass-move and following orthography check allover the Romanian topics: Şş and Ţţ (with cedilla) are wrong, Șș and Țț (with diacritic comma) are correct. I don't remember who did it finally, but it was done.
I now see several "cedilla-s" and "cedilla-t" coming again: Could please somebody (or even the same who did it in the past) check the whole category (including the category itself) Category:Communes of Ştefan Vodă district?
Thank you (and a happy new year)! —[ ˈjøː ˌmaˑ] 11:09, 1 January 2014 (UTC)
Can someone please remove all instances of the phrase "the highest and most prestigious award for gallantry in the face of the enemy that can be awarded to British and Commonwealth forces." This clumsy and entirely unnecessary phase is repeated about 1,310 times on the Wiki; it is part of every biograph of everyone who has ever been awarded the VC. The nature of internal links means readers if they're not familiar with the award are just a click away from it. Those that are aware of what a VC is, and if you're looking at English language military biographies, you ought to really know this, don't need this extra information. Barney the barney barney ( talk) 19:52, 27 December 2013 (UTC)
Yet Another Redirect Cleanup Bot ( BRFA · contribs · actions log · block log · flag log · user rights) has been retired due to inactivity. Is there an active admin willing to operate a bot to perform this task in its place? The source code of the retired bot is available. WJBscribe (talk) 12:31, 6 January 2014 (UTC)
I would request that someone revive my request, and add missing {{ mergeto}} and {{ mergefrom}}tags on articles found in Category:All_articles_to_be_merged. Work on this was being done previously, but real life concerns put it on the back burner. I would also request that a talk page message be left by the bot explaining why the tag has been added, and requesting that both tags be removed (target and subject articles) if there is no consensus to merge.
The goal of this would be to reduce the backlog of old merge requests (4 years) at WP:PMG, by having regular editors tag care of old, unfulfilled merge requests. -- Nick Penguin( contribs) 18:12, 2 January 2014 (UTC)
Hi, I would like to revive my request archived here: /info/en/?search=Wikipedia:Bot_requests/Archive_57
The request: Hi All, I am Dr. Noa Rappaport, scientific leader of the MalaCards database of human diseases. Following a suggestion by Andrew Su ( /info/en/?search=Wikipedia:WikiProject_Molecular_and_Cellular_Biology/Proposals#MalaCards_-_www.malacards.org) we were asked to write a bot that updates the disease box external references within disease entries in Wikipedia: /info/en/?search=User:ProteinBoxBot/Phase_3#Disease. We found it to be a non trivial task. Does anyone know of any such bot that exists or can help us write it ? Mapping data is found here: https://en.wikipedia.org/?title=User:Noa.rappaport/Malacard_mappings Thanks. Noa.rappaport ( talk) 08:36, 9 January 2014 (UTC)
Hi all. Based on an emerging consensus at the DS review, I have to establish that it is possible to automate the logging of discretionary sanctions notices. At the moment, when users are given notice of discretionary sanctions, the notice is logged at the related arbitration decision page. When the new Alert template comes into use, we hope to replace manual logging with bot-assisted automated logging. This is the intended behaviour:
|1=
of the template instance.The idea is basically to take the stigma out of discretionary sanctions alerts (which are currently known as notices). Eliminating "naming and shaming" logs, in favour of a neutral tracking spreadsheet maintained by a stable bot seems the ideal solution. The template is tracked using a Z-number template, though obviously a bot that checked that template a mere few times a day would still miss notices given and reverted between checks. Does this sound like something it would be possible? If so, is anybody willing to have their bots do this? Thanks for your consideration. Regards, AGK [•] 13:43, 10 January 2014 (UTC)
I've discovered that many pages in
Category:All unreferenced BLPs actually have external links, which count as references and thus the BLP is not unsourced. Seven out of the first ten pages in the category excluding those with PROD BLP tags,
1,
2,
3,
4,
5,
6,
7,
8,
9, and
10, have external links of some sort. It seems like a job for a bot to go through and maintain the category by tagging any page with external links by doing something like adding |links=yes
to the BLP unsourced template. A human would then review the article and judge if the links are appropriate, in which case they would remove the tag, maybe replacing it with {{
BLP sources}}, or better yet cleaning up and further sourcing the article. I'd expect it would be too controversial to have the bot remove/replace the tag based on the presence of a link, which is why I recommend tagging it. (I thought a list of the relevant pages could be generated in one step with an
API query, but for whatever reason it only displays some ELs.) Anyway, this seems pretty straightforward, but please let me know if there are any problems or if I need to gain consensus for this. Cheers, ~
Hue
Sat
Lum 23:52, 10 January 2014 (UTC)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
I would like to create a bot that would clean the Sandboxes out and revert vandalism. Do I need to create another account before I do this? Thanks! Yoshi24517 ( talk) 19:31, 9 January 2014 (UTC)
Is there anything that can send me the diffs of the edits on my talkpage to an email? It would be a useful time saver. 930913( Congratulate) 09:12, 13 January 2014 (UTC)
The non-free content criteria require that each fair-use image bear "the name of each article ... in which fair use is claimed for the item, and a separate, specific non-free use rationale for each use of the item". Sometimes, non-free images are placed into articles which are not named in the FUR, which means that the use of the image in that article contravenes WP:NFCCP#10c. There does not at present appear to be any automated means for detecting such misuse.
I would like to request that a bot be tasked (or created if necessary) which ensures that the non-free content criteria are adhered to, so far as is possible with a bot - clearly, a bot cannot judge WP:NFCCP#5. -- Redrose64 ( talk) 23:07, 8 January 2014 (UTC)
Should we have a bot to remove the word "currently" from every article? Any sentence with "currently" should probably rewritten altogether because "currently" is a sign of a situation expected to be temporary, in which case the sentence should be written now in such a way that it will still make sense a year from now:
In the absence of a good rewrite, however, we can at least get rid of the "currently". I know this seems like six of one, half a dozen of the other, but it's a Wikipedia pet peeve of mine because it looks absurd when I come across outdated content that contains that word. Consider the situation a year from now, if a reader comes across the article and sees "Marsh is appearing in a revival of Barefoot in the Park". Suppose Marsh has by then moved on to another production. In that case, the sentence is false whether or not it contains the word "currently". But without it, at least the sentence isn't sitting there insisting in vain on its own currency. —Largo Plazo ( talk) 12:56, 16 January 2014 (UTC)
Alternatively, I suppose we could define an inline template, similar to {{ who}} or {{ citation needed}}, that the bot could insert after the word "currently", that would display some kind of text encapsulating my thoughts from above regarding the desirability of a rewrite. The template could group the articles into a category for use by editors looking to make improvements or keep track of potentially outdated assertions. —Largo Plazo ( talk) 13:04, 16 January 2014 (UTC)
{{
when}}
. --
Redrose64 (
talk) 14:45, 16 January 2014 (UTC)
Since I had a mistake in the 'translation' of texts that are used to generate COIBot-pages, there are now 2 x ## links (fill in the number of bot-generated pages) pointing to mainspace pages:
Which should be
(the former are the links on meta, not on en.wikipedia). The bot from now on will write them correctly (and update them in reports that it updates, see the changed line at the bottom of this diff.
Is someone operating a bot that solves these, who would be willing to perform the change for all pages that link to the first two (most should be in Wikipedia:WikiProject Spam-space, but there may be some lying around elsewhere as well - better to do all of them). Afterwards, the two mainspace CNR's can be deleted. Thanks! -- Dirk Beetstra T C 14:00, 11 January 2014 (UTC)
I would like a bot to find Counties of the United States which are not using a photo in the ex image variable within the Template:Infobox U.S. County and then put into Category:U.S. Counties Missing Ex Image. Please note that some of the Infoboxes don't even have "ex image" in the template as this variable was added after many of the Infoboxes were put in place. This would help in quickly finding counties with need a visual representation of the county. Is this possible? - Ichabod ( talk) 01:30, 14 January 2014 (UTC)
Is there anyone who is currently running a project tagging bot. I need all the new articles listed in categories (but not subcategories) listed at WP:CHIBOTCATS to be tagged with {{ WikiProject Chicago}}.-- TonyTheTiger ( T / C / WP:FOUR / WP:CHICAGO / WP:WAWARD) 17:00, 17 January 2014 (UTC)
Hello again,
some months (or even years?) ago, I requested a mass-move and following orthography check allover the Romanian topics: Şş and Ţţ (with cedilla) are wrong, Șș and Țț (with diacritic comma) are correct. I don't remember who did it finally, but it was done.
I now see several "cedilla-s" and "cedilla-t" coming again: Could please somebody (or even the same who did it in the past) check the whole category (including the category itself) Category:Communes of Ştefan Vodă district?
Thank you (and a happy new year)! —[ ˈjøː ˌmaˑ] 11:09, 1 January 2014 (UTC)
At Draft talk:Template:Redirect documentation a template to be called {{ Redirect documentation}} is being developed. This will require there to be a parameter, either unamed, 1= or current= to have any use. Would it be possible for a bot to leave a message on the talk page of any user who adds this template without a parameter or with an empty parameter?
If possible, someone who does it more than once should get no more than one message per day that lists all the redirects that need to be addressed. It shouldn't ever leave more than one message per redirect.
Still being discussed, but there is possibility that this template will autocategorise a page as a redirect unless a parameter is set. Would it be possible for a bot to check uses of this template and (a) set the parameter if the page it is used on is not a redirect; (b) remove the parameter if the page is a redirect?
At the minute this is just a request to determine feasibility, as the template is not live. If it is feasible then I expect a request to actually implement it wont be too far in the future. Thryduulf ( talk) 11:26, 19 January 2014 (UTC)
Would it be possible for a bot to identify pages are that are redirects to pages that have one or more WikiProject banners, and if so tag the redirect talk page for the same WikiProjects and taskforces as the target page using class=redirect and importance=NA?
e.g. If
Foo redirects to
Bar and
Talk:Bar is tagged with a banner for WikiProject Trains and the Locomotives task force, then add {{
WikiProject Trains}}
to
Talk:Foo.
If there is standard logic for when to use the {{ WikiProject banner shell}} template, then apply that. Otherwise use it if the target does, don't if it doesn't.
If the redirect talk already has Project banners then the bot should just (a) make sure that they use class=redirect and (b) make sure that any living/not living tags match, even if the set of banners is not the same as those on the target. The thinking behind this is that the redirect might be more specific than the target and so different projects might apply - for example, The weather in London redirects to London, the redirect talk page is tagged for the Meterology project but not the Olympics/Paralympics project. I don't consider these differences to be sufficiently often or important that it should stand in the way of a bot just copying all by default.
After an initial run, then there will be a need for either periodic or continuous future runs as redirects are fluid. My gut feeling is that converting class=redirect to class=(something else) on pages that are not redirects is a different task? If I'm wrong on that I have no object to it being included.
This request is floating an idea to see if it is possible, I have not sought consensus for it anywhere (there is no point if it's not practical). If people here think that it is both possible and that consensus for it is needed then can you suggest where best to get that consensus (Village pump?). If it is possible and uncontroversial then please go ahead and work your magic! Thryduulf ( talk) 11:51, 19 January 2014 (UTC)
Just to make sure we are on the same page: Not all projects tag redirects. Moreover, I think if class is set to "Redirect" then importance is auto set to "NA" and it is not need to be added. -- Magioladitis ( talk) 13:02, 19 January 2014 (UTC)
Topic ban isn't hapening and I got a WikiGnome pat of happiness
|
---|
Since I'm in danger of being topic banned from AFC the following Bot Tasks need new Operators:
I'm happy to transfer a database dump that drives tasks 1 and 2 to the new operator and help the new operator get acclimated with the processes |
This bot would detect hoax articles and maybe hoax edits. The bot would do Google searches (including Books,News,and Scholar) for the article's name. After it discounts Wikipedia mirrors and any other sites with identical text or that definitely came after the article's creation, any articles with unusually low ghits will be reported to a queue. This tool might also detect non-notable articles. WorldCat and JSTOR might also be used. Pages created by users with not many edits might be profiled. Alexschmidt711 ( talk) 21:09, 22 January 2014 (UTC)
Hi, there is this website for the newspaper Deccan Chronicle that does not regularly maintain archives. Bcos of that, dead links are frequent. I therefore request that a bot regularly monitor DC references and automatically archive them on Internet archive/ Webcite as soon as they are added. Do u know of any such bot, or can u create any such? Kailash29792 ( talk) 09:08, 25 January 2014 (UTC)
This Category recently started to fill up again, due to the efforts of one of Theo's Little Bot's to add attribution information to self created images.
The issue in practically all of the case's is that they bot couldn't find a description.
I've found that often the description can be obtained by pulling the image caption from an article where the image is used.
Is it possible for someone to provide a bot does a caption pulling sweep in respect of that category on a Weekly basis?
The intention was that the Bot added something like :
'''Captioned:''' {{{caption}}}} in [[{{{article}}}]] where {{{capiton}}} is the caption pulled from an article and {{{article}}} is the relevant article name.
Sfan00 IMG ( talk) 11:16, 25 January 2014 (UTC)
If it's not already done....
www.archiveteam.org/index.php?title=URLTeam#bit.ly_aliases
has a list of URL shortners.
Surely a bot could go through these and expand them out? Sfan00 IMG ( talk) 20:05, 26 January 2014 (UTC)
This is a long story....but to begin, I joined Wikipedia on 18:23, September 29, 2011, to be exact. I joined for the sole purpose of bots; I was younger and naïve. I immediately asked for help on making a bot, but was told I didn't have enough experience. So, I found a home at the Illustration Graphics Lab. After making graphics for a while, I left for personal reasons. I returned with Google Code-In 2013, and found some cool tasks for Pywikibot. This re-sparked my interest in bot-making, and I decided to figure out something to make my bot do, something simple to start out with but really helpful around here. I know this is kind of a reverse-bot request, but does anyone have ideas that could help?
I'm not really that experienced (only 585 edits as of 00:30, 19 January 2014 (UTC)), but the edits I've made have been helpful (or so I hope). If you believe I'm not experienced yet, I'll work on editing more.
Thanks, Sn1pe! (talk) (edits) 00:30, 19 January 2014 (UTC)
So, a few users and myself are working to update Template:Infobox military installation and this will require removing some images parameters from the infoboxes with a bot. This includes "[[File:", "]]", and "|XXXpx]]." Would anyone be able to program a bot to quickly follow us once we update the infobox in the coming days so that we don't have random parameters appearing once everything is done? Thanks! Kevin Rutherford ( talk) 22:18, 28 January 2014 (UTC)
Hello again,
some months (or even years?) ago, I requested a mass-move and following orthography check allover the Romanian topics: Şş and Ţţ (with cedilla) are wrong, Șș and Țț (with diacritic comma) are correct. I don't remember who did it finally, but it was done.
I now see several "cedilla-s" and "cedilla-t" coming again: Could please somebody (or even the same who did it in the past) check the whole category (including the category itself) Category:Communes of Ştefan Vodă district?
Thank you (and a happy new year)! —[ ˈjøː ˌmaˑ] 11:09, 1 January 2014 (UTC)
Many, many thanks for implementation of my idea creation User:ReferenceBot > Take a look at Archive New REFBot request.
DPL bot, BracketBot and ReferenceBot are the best inventions of Wikipedia. It's time for a new Bot. We need the
See
Category:Articles with missing files. Cleaned today at 9:00, at 13:00 there were 43 new entries. 10 per hour.
If there is a Bot like DPL bot & BracketBot exists (sending a message after about 10 minutes), 90% of work to clean up the category would be saved.
Excuse my bad English. -- Frze > talk 13:48, 3 January 2014 (UTC) @ 930913: Many, many thanks @ John of Reading: @ StarryGrandma: @ Nyttend: @ Benzband: @ TheJJJunk: @ Jonesey95:
There are:
User:ImageRemovalBot does not detect such errors. -- Frze > talk 08:43, 4 January 2014 (UTC)
See [1] User contributions For KylieTastic 1/1/2014-4/1/2014 - There are at least >300 edits to clean up this category, 100 edits per day. Not necessary. -- Frze > talk 16:14, 4 January 2014 (UTC)
Hi, from my experience clearing up many of the last 11K problems in this category, and some of the new ones popping up since cleared, I would categorise the main issues as
For many issues if a bot posted to the users page like BracketBot that they had caused a file issue - it would help a lot.
Hope that helps — Cheers KylieTastic ( talk) 17:21, 4 January 2014 (UTC)
Thank youvery,very much, Kylie Tastic! Such a lot of work! Renaming sounds better: MissingFilesBot compared to ImageRemovalBot. Best wishes -- Frze > talk 19:06, 4 January 2014 (UTC)
[[file:http://...]]
. -
tucoxn\
talk 03:28, 10 January 2014 (UTC)@ Frze, I think User:ImageRemovalBot only catches files that were deleted from en.wikipedia and not Commons. Also, it doesn't catch rectify all the files that were deleted from en.wikipedia: 1, 2, 3, and 4. Maybe Carnildo, who runs that bot, would like to comment. - tucoxn\ talk 08:36, 10 January 2014 (UTC)
Coding... I've noticed no opposition to this idea but nobody else has taken the initiative to do it either. I'm collaborating with some other editors to get the coding worked out for en.wp. Considering comments from Bgwhite that " CommonsDelinker was essentially abandoned" and from Siebrand that there is "little to no development capacity for CommonsDelinker", I'm moving forward with a bot to take up the slack. Other projects have noticed the problems with CommonsDelinker and I plan to try to update their successful solution for use here. I'm looking forward to a successful collaborative process. More updates to come.... - tucoxn\ talk 21:37, 21 January 2014 (UTC)
Is there a bot I can use to add articles to a newly created geographical/administrative category. For example, if I want to create Category:West Palatinate and add in all articles in German Wiki's de:Kategorie:Westpfalz, is there a bot I can use to do this quickly rather than laboriously doing every article manually? Clearly one snag is that not every article in the German Wiki category yet has an English Wiki equivalent... -- Bermicourt ( talk) 17:14, 1 February 2014 (UTC)
My idea for a bot is one that fixes one specific part of grammar: an/a before a vowel/not before a vowel. For example:
I know this might be considered a Fully automatic spell-checking bots, which is not allowed according to the Frequently denied bots list, but I think this is much simpler and is less prone to mistakes.
Thoughts? - Newyorkadam ( talk) 02:05, 24 January 2014 (UTC)Newyorkadam
A bot that tried to edit in this way would inevitably result in false positives. Not to get all WP:BEANS, but using your example above, what if your bot encountered text like: "There are three types of animals: 'Type A animals', 'Type B animals', and 'Type C animals'." Your bot would be wrong to "fix" that sentence to read "...'Type An animals'....".
Or what about a sentence like "It is considered bad grammar to write 'a animal'." Your bot certainly shouldn't "fix" that sentence, but per your proposal, it would.
And that's leaving aside things like "a/an historic event", "a/an herb garden", "an honest man", "a unique problem", "an NHL goalie", and on and on.
Once you start really laying out what such a bot would actually do and the many mistakes it could make, you should be able to see why "Bots that attempt to fix spelling or grammar mistakes or apply templates such as {{ weasel words}} in an unattended fashion are denied because it is currently beyond the capability of artificial intelligence technology to create such a bot that would not make mistakes."
You are certainly welcome to create an AWB or AutoEd script that fixes such problems, but you'll need to confirm each edit manually to ensure that it does not (i.e. you do not) create errors where there were none. – Jonesey95 ( talk) 04:40, 24 January 2014 (UTC)
At
WP:CFD 2013 October 4, it was agreed to delete
Category:Archives of American Art related once the talk pages of the articles had been tagged with {{WikiProject Smithsonian Institution|class=|importance=|listas=|SIART=yes|SIART-importance=}}
.
Please can some kind bot owner do this? If you ping me when it is completed I will then arrange for deletion of the category. -- BrownHairedGirl (talk) • ( contribs) 22:08, 2 February 2014 (UTC)
Hello. I suspect this is an old chestnut, but, finding myself regularly reminded of it once again, here goes...
When editing, I find {{cite X}}s within <ref>s formatted in these kinds of ways...
{{cite book|pages=10–12|title=Islam: A Short History|author=Karen Armstrong|isbn=0-8129-6618-X|date=2000,2002}} {{cite book| pages=10–12| title=Islam: A Short History| author=Karen Armstrong| isbn=0-8129-6618-X| date=2000,2002}} {{cite book | pages = 10–12 | title = Islam: A Short History | author = Karen Armstrong | isbn = 0-8129-6618-X | date = 2000,2002 }} (etc)
– i.e. either without spacing before each parameter, or the pipe-character before the next parameter stuck to the end of the previous one, or with spaces either side of the pipe-character (and usually the same around equals-signs) – to be either less easy to read and/or more prone to undesirable linewrapping than this sort of approach:
{{cite book |pages=10–12 |title=Islam: A Short History |author=Karen Armstrong |isbn=0-8129-6618-X |date=2000,2002}}
...i.e. where there is a space preceding each pipe-character before the next parameter and no spaces either side of equals-signs, nor before the closing double curly-brackets. Might a bot (or, probably, bots) be tasked to work through <ref>s and format any/all {{cite X}}s they find in this sort of way, please..?
Sardanaphalus ( talk) 14:48, 31 January 2014 (UTC)
PS I forgot to add the following type of format to the list above:
{{cite book | pages = 10–12 | title = Islam: A Short History | author = Karen Armstrong | isbn = 0-8129-6618-X | date = 2000,2002 }}
...i.e. spaced out and across lines rather than as a string of parameters. Sardanaphalus ( talk) 15:00, 31 January 2014 (UTC)
{{
cite xxx}}
templates), the presence of whitespace before or after the parameter name or value has zero effect on the template's action. Similarly, when whitespace is present, the amount and type of whitespace (true spaces, tabs or newlines) also makes no difference. It's a long-standing agreement that bots should not be given tasks that do not cause any change in the rendered output. --
Redrose64 (
talk) 16:12, 31 January 2014 (UTC)Hi, there is this website for the newspaper Deccan Chronicle that does not regularly maintain archives. Bcos of that, dead links are frequent. I therefore request that a bot regularly monitor DC references and automatically archive them on Internet archive/Webcite as soon as they are added. Do u know of any such bot, or can u create any such? Kailash29792 ( talk) 2:38 pm, 25 January 2014, Saturday (11 days ago) (UTC+5.5)
Category:Wikipedia usernames with possible policy issues is severely backlogged, and one easy way to reduce the backlog is to follow the category's directions by removing users who haven't edited in more than a week. Would it be possible for a bot to produce a list of all pages in the category whose users haven't edited in more than a week? Such a list could be dumped in a page in my userspace for me to act on it; no need for the bot to do anything else. I'm asking for an adminbot because each user's deleted contributions should be checked as well as active contributions. Nyttend ( talk) 22:40, 5 February 2014 (UTC)
Would someone consider running a bot through the scirus.com links in the main ns and adding {{ dead link}} to them. From the 20 that I check, all seem kaput. Might also be worth considering running it through articles for deletion, as there are whack of links there. — billinghurst sDrewth 14:40, 7 February 2014 (UTC)
This bot would remove wikilinks to articles from other pages after the article in question is deleted. The idea is as follows: 1. Check the deletion log. 2. Type in the name of every article that has recently been deleted into the "What Links Here" search box. 3. Edit all those pages so that the links to the recently deleted article are removed. I am not an expert in understanding how bots work or are created, so I would like some input on whether this is feasible. Jinkinson talk to me 22:58, 27 January 2014 (UTC)
In that case, what if this bot only removed wikilinks to pages that had been deleted as a result of an AFD discussion? I imagine that pages deleted after such discussions usually aren't deleted for copyright violations, nor for pages deleted simply because they are so badly written that they have no encyclopedic merit (both of which would probably be speedied). Jinkinson talk to me 23:30, 5 February 2014 (UTC)
Hello bot editors. I'm here from WikiProject Tennis where we encountered a new issue. Davis Cup has recently changed his official website URL access format - fortunately it was a systematic renaming rather then a full revamp. The code for a tennis match was as follows www.daviscup.com/en/draws-results/tie/details.aspx?tieId= and the ID number, which became www.daviscup.com/en/draws-results/tie/details.aspx?tieId=. It affects 115 articles per Google. Can you please help us out? Lajbi Holla @ me • CP 18:07, 5 February 2014 (UTC)
Hello. WP:NYC currently contains over 12,000 articles. However, I just entered Category:New York City into AutoWikiBrowser (recursive search), and it comes up with 197,852 pages within the category and its subcategories. Damn. Some of them are userpages, though (not sure why), so the final tally should be somewhat lower than that. Does someone with a bot want to tag these articles for us? Thanks. – Muboshgu ( talk) 01:49, 10 February 2014 (UTC)
Hi. Can someone help with this request? Plastikspork seems to be busy with RL. In addition to that, could the bot generate a list of most used fields (after bot run) that are not longer supported by {{ Infobox dam}}? (Mainly for manual action, if necessary.) Best regards, Reh man 15:18, 3 February 2014 (UTC)
res_total_capacity
, res_active_capacity
, res_inactive_capacity
, to be, res_capacity_total
, res_capacity_active
, res_capacity_inactive
. The change is purely cosmetic, but since we're running the bot anyways, it would be nice. Thanks a lot!
Reh
man 14:13, 5 February 2014 (UTC)
Hello again! I have an idea for a bot that somebody could make that there should be a bot that gets rid of bad words. Any thoughts? Yoshi24517 Chat Absent 04:04, 10 February 2014 (UTC)
@ Redrose64: I know, but I have changed. I'm sure I won't mess up again this time. Yoshi24517 Chat Absent 00:29, 13 February 2014 (UTC)
Hello,
I suggest to create a bot to replace normal space for a non-brake space ( ) between quantities (in numbers) and their units. For example in: 27,4 mm --> 27,4 mm.
This will guarantee that every unit is in the same line than its number. This is a recommended practice for good technical editors.
Thanks, Petterware ( talk) 14:23, 12 February 2014 (UTC)
Could somebody please remove link tracking from Daily Mirror citations (and external links) (that's a UK newspaper) as in these edits? If the same can be done for other sites, so much the better. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 10:22, 13 February 2014 (UTC)
Not done
hello, the category
Category:Provinces of Saudi Arabia and all of its subcategories need to be renamed (province --> region) can someone do that? :)
Ladsgroup
بحث 22:01, 7 February 2014 (UTC)
:)
Ladsgroup
بحث 22:12, 7 February 2014 (UTC)
This request was made at WP:CFD/S by User:Androoox, where as reviewing admin I opposed it because it did not meet the speedy renaming criteria.
Another editor noted that Androox proceeded to start performing the rename anyway, for which I blocked Androoox and rolled back the changes.
I have no idea which is the correct title, and no interest or involvement in it. However, end runs around the consensus-forming process are disruptive. Please can the editors working on this topic nominate the category for discussion at WP:CFD, and seek a consensus. User:Ladsgroup is now the second editor to try bypassing the process in respect of this category. -- BrownHairedGirl (talk) • ( contribs) 11:24, 14 February 2014 (UTC)
:)
Ladsgroup
بحث 22:11, 14 February 2014 (UTC)
So, a lot of articles out there don't have photos but could (see this image for an example of National Register of Historic Places articles without images). I know that there is the image requested template for these sorts of pages, but there isn't any bot that I know of that tags the images. Would someone be willing to create or program a bot to add the template to the talk pages of existing articles so that users who use apps similar to this one would be able to go out and efficiently photograph images? Thanks! Kevin Rutherford ( talk) 02:19, 11 February 2014 (UTC)
{{
Coord}}
template, go to the page. See if there's a infobox, if so see if there's a image parameter in the infobox that is populated, if so end. If there's no image parameter populated in the template, add the Image requested template (because the infobox really should have the image if there is any and flag down a user to fix it). If there's no infobox (Add the Needs infobox template), see if there's any links to a file namespace image, if there is end. If there really is not a image andd the image requested template.
This bot would scan talk pages looking for arguments. For example, if it picks up the word "stupid" on a talk page, it would look at the context. "Forrest Gump was considered stupid by his peers" would be ignored, "You're stupid for suggesting that" would get a notice about WP:RUDE (or whatever the relevant guideline was). The name comes from the Selsnya Conclave, the peace-loving commune in Ravnica, and that meaning works just fine for here too. Supernerd11 :D Firemind ^_^ Pokedex 08:45, 16 February 2014 (UTC)
Hi there,
Editing a lot of English Wikipedia pages about Poland recently I've noticed a lot of pages have their formatting blown away because of errors in the pronunciation sections at the start of the articles i.e. when there's a code like this after a name (person or place) [ˈsɔlɛt͡s kuˈjafskʲi] - and it renders as [ˈ[unsupported input]'[unsupported input]'[unsupported input]ʂ[unsupported input]'[unsupported input]'[unsupported input]]
The page usually displays as almost blank as the main body text displays after the bottom of any infobox rather than to the left of one.
I thought this might be browser or platform specific, but it certainly seems to fail in Chrome, Safari and Firefox on a Mac and the ˈ[unsupported input]' comes up in tens of thousands of Google search results (54,000, in fact).
Fixing it seems to involve retyping the ' (apostrophe) in one of the fields above and resaving. Oddly, even copying and pasting the source code (as I did above) also fixes it. I'm guessing some kind of rogue character instead of a apostrophe?
Is this an appropriate task for a bot (I've never been involved with one before)?
Examples include this one (as at 18/02/14) /info/en/?search=Solec_Kujawski
I'm not asking in any official capacity - just as a normal user. Wikipedia:WikiProject Poland might have something to add?
Thanks for your time, Scott Escottf ( talk) 16:10, 18 February 2014 (UTC)
This article was flagged at AfD to be merged into Cthulhu Mythos deities. And while that is possible, the formatting of this bibliography article would make the page way way too long. If this information is in a table, it would be easier to add it to the deities article. I would request that a bot be used to transfer the information from the sections into a sortable table in the following format:
The content seems to be listed in a fairly consistent manner, but any items that pose problems can be skipped and I can enter them in manually after. -- Nick Penguin( contribs) 17:07, 18 February 2014 (UTC)
Index code | Author | Work | Publication Date | Other Info |
---|---|---|---|---|
H15 | Luis G. Abbadie | "Huitloxopetl XV: The Transition of Miguel Quocha" | 1997 | |
VA | Christopher Smith Adair | The Voice Of The Animals (worlds Of Cthulhu) | 2006 | Publisher: Pegasus Press ISBN: 9783937826639 |
I propose that we get an existing bot to remove orphan tags from pages that are no longer orphaned. Jinkinson talk to me 17:23, 19 February 2014 (UTC)
ProQuest often gets miscapitalised as Proquest; it's a capitalisation mistake, not an alternate capitalisation. Could someone please run AWB to fix it on the following pages?
I've checked every instance of "Proquest" on every one of these pages, and I've not found a single situation in which "Proquest" should not be changed to "ProQuest", aside from its appearance in URLs — the visible text of these pages should always read "ProQuest". This is essentially the database dump exception to WP:SPELLBOT, since I've already run through everything with human eyes. Nyttend ( talk) 05:27, 20 February 2014 (UTC)
All instances of the "shimming" templates listed at Wikipedia:List of infoboxes#Shimming, in article space, should be 'subst:' - the templates are note intended for permanent use in the English Wikipedia, only for importing data from other Wikipedias. If this could be added to the duties of one of the active cleanup bots, so much the better. Note that there are occasional additions to the list of shimming templates. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:04, 20 February 2014 (UTC)
{{
substituted|auto=yes}}
in each template's documentation.
Anomie
⚔ 11:55, 20 February 2014 (UTC)
Per
Template talk:Cite DNB#CS1 errors when volume not included, could someone create a bot which would look at all the instances of {{
Cite DNB}} and {{
DNB}} (and their redirects) where |wstitle=
is populated and |volume=
if it doesn't already exist? For example,
Ralph Cudworth contains {{DNB Cite|wstitle=Cudworth, Ralph}}
which displays:
The first link takes you to
https://en.wikisource.org/wiki/Cudworth,_Ralph_(DNB00) which contains a DNB00 template with |volume=13
. The request is to change the Wikipedia article to {{DNB Cite|wstitle=Cudworth, Ralph|volume=13}}
which displays a more specific reference:
Similarly, could someone also add |volume=
to {{
Cite DCB}} (and its redirects) if it doesn't already exist? For example,
Mackenzie Bowell contains {{Canadabio|ID=7231}}
which displays:
The first link takes you to
http://www.biographi.ca/en/bio.php?id_nbr=7231 which contains var m_volume_name = 'Volume XIV (1911-1920)';
. The request is to change the Wikipedia article to {{Canadabio|ID=7231|volume=XIV}}
which displays a more specific reference:
Thanks! GoingBatty ( talk) 04:00, 20 February 2014 (UTC)
Two thousand dead links for multiply.com. If someone has a bot that dead links, please set it loose for those pages at Special:LinkSearch/*.multiply.com. Thanks. — billinghurst sDrewth 22:24, 21 February 2014 (UTC)
Two thousand dead links for multiply.com. If someone has a bot that dead links, please set it loose for those pages at Special:LinkSearch/*.multiply.com. Thanks. — billinghurst sDrewth 22:24, 21 February 2014 (UTC)
{{
Infobox road}}
and {{
Infobox road small}}
insert an image at the top of the infobox. If a route type and number combination call for an image does not exist, it trips a
category. I would like a bot to output a list of images needed in the 'Infobox road transclusions without route marker' category.
If the infobox route type parameter (type) is set up correctly, it should be easy to list the missing files. However, if the type is not set up, it should flag as an invalid type. – Fredddie ™ 04:58, 2 March 2014 (UTC)
This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 55 | Archive 56 | Archive 57 | Archive 58 | Archive 59 | Archive 60 | → | Archive 65 |
Per consensus
here, can we have a bot roaming around to find all instances of the template and its associated redirects on video game articles to remove the |rating=
or |ratings=
parameter?
TeleComNasSprVen (
talk •
contribs) 22:54, 26 December 2013 (UTC)
This bot would change all templates that need to be substituted. For example, if somebody put {{ AfD}} instead of {{ subst:AfD}}, the bot would fix it. buff bills 7701 16:42, 27 December 2013 (UTC)
{{
substituted|auto=yes}}
on the template (and temporarily add it to
User:AnomieBOT/TemplateSubster force if necessary). For existing templates, be sure there is consensus for the template to be substituted by a bot before doing so.
Anomie
⚔ 16:48, 27 December 2013 (UTC)Hi ya'll. Would it be possible for a bot to run through all of the articles on this list (NA-class, unassessed quality articles for WP:WikiProject Michigan) and set the talk page project assessment to "class=redirect"? There are currently 1005 redirects on the list; it would be quite possible to do with AWB, but it would seem much easier to do with a bot. I am, however, a complete newb when it comes to bots, so my idea of "easier" may in fact be...not so much. Thanks in advance, Dana boomer ( talk) 21:34, 29 December 2013 (UTC)
What if we had a bot that could create redirects upon command? I'm imagining the following features:
We wouldn't need to worry about false positives or other "normal" bot mistakes, since the bot would only edit based on very specific human instructions. I'm thinking of this after discovering a pile of redirects to create: 49 of the 50 lines at 0 this edition of the sandbox need to be created, and while any of them is simple, I don't feel like spending the time (or asking anyone else to spend the time) creating all of them. Perfect bot task, if I'm not misunderstanding something. Nyttend ( talk) 18:39, 20 December 2013 (UTC)
Is there a bot that can still be tasked with removing date links? There are a set of pages Wikipedia:WikiProject Missing encyclopedic articles/DNB Epitome 01 through to Wikipedia:WikiProject Missing encyclopedic articles/DNB Epitome 63 which carry linked years. These were originally created when linking dates was still in fashion. It would help with moving information from these Wikipedia pages to article pages if the links were removed. It would also help if dashes between the years to be changed, were changed to ndashs.
If there is no such regular bot job set up to do it, here are the expressions (that covers most cases -- but only tested on one page) for those bots based on AWB.
find | replace | regex | notes |
---|---|---|---|
– | – | no | |
— | — | no | |
(\]\] *\?.*)-(.*\[\[) | $1–$2 | yes | |
(\]\].*)-.*c\..*\[\[ | $1– c. [[ | yes | |
]]-[[ | ]]–[[ | no | |
([0-9]*\]\])([0-9]) | $1 $2 | yes | case where one number is next to another split them |
[\[([0-9]*)\]\] | $1 | yes |
list of pages
|
---|
|
-- PBS ( talk) 11:39, 21 December 2013 (UTC)
\[\[([12]?\d{1,3})\]\]
so it doesn't change valid non-year wikilinks such as
8086 and
80486.
GoingBatty (
talk) 14:57, 21 December 2013 (UTC)
find | replace | regex | notes |
---|---|---|---|
– | – | no | |
— | — | no | |
(\]\] *\?.*)-\s*(.*\[\[) | $1–$2 | yes | also remove a space after the dash |
(\]\].*)-.*c\..*\[\[ | $1– c. [[ | yes | |
\]\]-\s*\[\[ | ]]–[[ | yes | also remove a space after the dash |
([0-9]*\]\]\??)\s*(\d{4}) | $1–$2 | yes | case where one year is next to another, add a – |
([0-9]*\]\])\s*[79]([-\)]) | $1?$2 | yes | change a 7 or 9 after the year to a ? |
([0-9]*\]\])([0-9]) | $1 $2 | yes | case where one number is next to another split them |
[\[([0-9]*)\]\]\s\? | $1? | yes | remove a space between the year and the ? |
[\!li]\s*\[\[(\d{3})\]\] | 1$1 | yes | change a ! or l in front of a three-digit year to a four digit year |
\[\[([12]?\d{1,3})\]\]? | $1 | yes | only change valid years |
\[\[(\d{1,3})7\]\]? | $1? | yes | change a three-digit year followed by a 7 (which looks like an invalid four-digit year ending in 7) to three-digit year followed by a 7 |
Hello. Following
this discussion at the Footy project, please would it possible for all the instances of urls referencing a specific site to be changed from .htm to .html. The urls to be changed are of the form www.neilbrown.newcastlefans.com/
followed by a variable part and then .htm, e.g. www.neilbrown.newcastlefans.com/player/barriethomas.htm
. Some links to the site have already been fixed, but there are still quite a lot broken. Thanks in advance,
Struway2 (
talk) 22:00, 29 December 2013 (UTC)
(http:\/\/www\.neilbrown\.newcastlefans\.com\/(.*?).htm)(?!l)
$1l
@ GoingBatty: Think we all want to use the template when it's up to the job, but as an immediate fix we do appear to have a consensus for mending the URLs. Thanks for requesting clarification. cheers, Struway2 ( talk) 17:17, 30 December 2013 (UTC)
My idea is a bot that automatically updates sports statistics. Preferably, it would get statistics from the assorted [sport]-reference.com sites (e.g. baseball-reference.com basketball-reference.com etc.). It would pull information from an athletes' page every so often and update that athlete's Wikipedia article.
Thanks! Newyorkadam ( talk) 18:49, 30 December 2013 (UTC)Newyorkadam
The above reminded me I have long considered a request for bot updating of top-10 tennis ranking navboxes for ATP (men) and WTA (women). There are many in Category:ATP Tour navigational boxes and Category:WTA Tour navigational boxes, for example {{ Top ten Argentine male singles tennis players}} and {{ Top Australian female tennis players (doubles)}}. There are also continents like {{ Top ten male singles tennis players of countries in the Asian Tennis Federation}} and {{ Top ten European female doubles tennis players}}. And this one for the World has its own design: {{ Top ten tennis players}}. It has been off-season for ATP and WTA for 1-2 months with few ranking changes caused by small tournaments, but that will change next week. The ATP and WTA seasons start around 1 January and last 10-11 months.
Rankings are usually published each Monday (except the middle of the four two-week Grand Slam tournaments) at http://www.atpworldtour.com/Rankings/Singles.aspx and http://www.wtatennis.com/singles-rankings (they are different organizations with different formats and publishing times). In addition to updating the navboxes, a bot should ideally also add or remove them on the player biographies when a player moves in or out of a navbox. Category:Tennis templates also has some non-navbox ranking templates for permanent display in general tennis articles: {{ Current Men's Singles ATP Rankings}}, {{ Current Men's Doubles Individual ATP Rankings}}, {{ Current Men's Doubles Team ATP Rankings}} (not currently used), {{ Current Women's Singles WTA Rankings}}, {{ Current Women's Doubles Individual WTA Rankings}}. I don't know the copyright rules but two articles dedicated to longer ATP and WTA rankings like top-100 or more would also be nice. World rankings are very important in tennis because they determine the seeds and players in nearly all tournaments (except a few wild cards). Tennis is probably the biggest spectator sport for women.
An ambitious bot operator could also consider offering a bot to other languages, maybe by letting the bot provide raw data and call local templates for text and design. German and Italian have many tennis ranking navboxes. There are also some in other languages, and they might want more if they don't have to update them. I don't know whether there is a practical way to do it at Wikidata. The ranking of individual players for display in their own biography could of course be bot-maintained at Wikidata, along with other player stats. PrimeHunter ( talk) 00:06, 31 December 2013 (UTC)
Hello again,
some months (or even years?) ago, I requested a mass-move and following orthography check allover the Romanian topics: Şş and Ţţ (with cedilla) are wrong, Șș and Țț (with diacritic comma) are correct. I don't remember who did it finally, but it was done.
I now see several "cedilla-s" and "cedilla-t" coming again: Could please somebody (or even the same who did it in the past) check the whole category (including the category itself) Category:Communes of Ştefan Vodă district?
Thank you (and a happy new year)! —[ ˈjøː ˌmaˑ] 11:09, 1 January 2014 (UTC)
Can someone please remove all instances of the phrase "the highest and most prestigious award for gallantry in the face of the enemy that can be awarded to British and Commonwealth forces." This clumsy and entirely unnecessary phase is repeated about 1,310 times on the Wiki; it is part of every biograph of everyone who has ever been awarded the VC. The nature of internal links means readers if they're not familiar with the award are just a click away from it. Those that are aware of what a VC is, and if you're looking at English language military biographies, you ought to really know this, don't need this extra information. Barney the barney barney ( talk) 19:52, 27 December 2013 (UTC)
Yet Another Redirect Cleanup Bot ( BRFA · contribs · actions log · block log · flag log · user rights) has been retired due to inactivity. Is there an active admin willing to operate a bot to perform this task in its place? The source code of the retired bot is available. WJBscribe (talk) 12:31, 6 January 2014 (UTC)
I would request that someone revive my request, and add missing {{ mergeto}} and {{ mergefrom}}tags on articles found in Category:All_articles_to_be_merged. Work on this was being done previously, but real life concerns put it on the back burner. I would also request that a talk page message be left by the bot explaining why the tag has been added, and requesting that both tags be removed (target and subject articles) if there is no consensus to merge.
The goal of this would be to reduce the backlog of old merge requests (4 years) at WP:PMG, by having regular editors tag care of old, unfulfilled merge requests. -- Nick Penguin( contribs) 18:12, 2 January 2014 (UTC)
Hi, I would like to revive my request archived here: /info/en/?search=Wikipedia:Bot_requests/Archive_57
The request: Hi All, I am Dr. Noa Rappaport, scientific leader of the MalaCards database of human diseases. Following a suggestion by Andrew Su ( /info/en/?search=Wikipedia:WikiProject_Molecular_and_Cellular_Biology/Proposals#MalaCards_-_www.malacards.org) we were asked to write a bot that updates the disease box external references within disease entries in Wikipedia: /info/en/?search=User:ProteinBoxBot/Phase_3#Disease. We found it to be a non trivial task. Does anyone know of any such bot that exists or can help us write it ? Mapping data is found here: https://en.wikipedia.org/?title=User:Noa.rappaport/Malacard_mappings Thanks. Noa.rappaport ( talk) 08:36, 9 January 2014 (UTC)
Hi all. Based on an emerging consensus at the DS review, I have to establish that it is possible to automate the logging of discretionary sanctions notices. At the moment, when users are given notice of discretionary sanctions, the notice is logged at the related arbitration decision page. When the new Alert template comes into use, we hope to replace manual logging with bot-assisted automated logging. This is the intended behaviour:
|1=
of the template instance.The idea is basically to take the stigma out of discretionary sanctions alerts (which are currently known as notices). Eliminating "naming and shaming" logs, in favour of a neutral tracking spreadsheet maintained by a stable bot seems the ideal solution. The template is tracked using a Z-number template, though obviously a bot that checked that template a mere few times a day would still miss notices given and reverted between checks. Does this sound like something it would be possible? If so, is anybody willing to have their bots do this? Thanks for your consideration. Regards, AGK [•] 13:43, 10 January 2014 (UTC)
I've discovered that many pages in
Category:All unreferenced BLPs actually have external links, which count as references and thus the BLP is not unsourced. Seven out of the first ten pages in the category excluding those with PROD BLP tags,
1,
2,
3,
4,
5,
6,
7,
8,
9, and
10, have external links of some sort. It seems like a job for a bot to go through and maintain the category by tagging any page with external links by doing something like adding |links=yes
to the BLP unsourced template. A human would then review the article and judge if the links are appropriate, in which case they would remove the tag, maybe replacing it with {{
BLP sources}}, or better yet cleaning up and further sourcing the article. I'd expect it would be too controversial to have the bot remove/replace the tag based on the presence of a link, which is why I recommend tagging it. (I thought a list of the relevant pages could be generated in one step with an
API query, but for whatever reason it only displays some ELs.) Anyway, this seems pretty straightforward, but please let me know if there are any problems or if I need to gain consensus for this. Cheers, ~
Hue
Sat
Lum 23:52, 10 January 2014 (UTC)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
I would like to create a bot that would clean the Sandboxes out and revert vandalism. Do I need to create another account before I do this? Thanks! Yoshi24517 ( talk) 19:31, 9 January 2014 (UTC)
Is there anything that can send me the diffs of the edits on my talkpage to an email? It would be a useful time saver. 930913( Congratulate) 09:12, 13 January 2014 (UTC)
The non-free content criteria require that each fair-use image bear "the name of each article ... in which fair use is claimed for the item, and a separate, specific non-free use rationale for each use of the item". Sometimes, non-free images are placed into articles which are not named in the FUR, which means that the use of the image in that article contravenes WP:NFCCP#10c. There does not at present appear to be any automated means for detecting such misuse.
I would like to request that a bot be tasked (or created if necessary) which ensures that the non-free content criteria are adhered to, so far as is possible with a bot - clearly, a bot cannot judge WP:NFCCP#5. -- Redrose64 ( talk) 23:07, 8 January 2014 (UTC)
Should we have a bot to remove the word "currently" from every article? Any sentence with "currently" should probably rewritten altogether because "currently" is a sign of a situation expected to be temporary, in which case the sentence should be written now in such a way that it will still make sense a year from now:
In the absence of a good rewrite, however, we can at least get rid of the "currently". I know this seems like six of one, half a dozen of the other, but it's a Wikipedia pet peeve of mine because it looks absurd when I come across outdated content that contains that word. Consider the situation a year from now, if a reader comes across the article and sees "Marsh is appearing in a revival of Barefoot in the Park". Suppose Marsh has by then moved on to another production. In that case, the sentence is false whether or not it contains the word "currently". But without it, at least the sentence isn't sitting there insisting in vain on its own currency. —Largo Plazo ( talk) 12:56, 16 January 2014 (UTC)
Alternatively, I suppose we could define an inline template, similar to {{ who}} or {{ citation needed}}, that the bot could insert after the word "currently", that would display some kind of text encapsulating my thoughts from above regarding the desirability of a rewrite. The template could group the articles into a category for use by editors looking to make improvements or keep track of potentially outdated assertions. —Largo Plazo ( talk) 13:04, 16 January 2014 (UTC)
{{
when}}
. --
Redrose64 (
talk) 14:45, 16 January 2014 (UTC)
Since I had a mistake in the 'translation' of texts that are used to generate COIBot-pages, there are now 2 x ## links (fill in the number of bot-generated pages) pointing to mainspace pages:
Which should be
(the former are the links on meta, not on en.wikipedia). The bot from now on will write them correctly (and update them in reports that it updates, see the changed line at the bottom of this diff.
Is someone operating a bot that solves these, who would be willing to perform the change for all pages that link to the first two (most should be in Wikipedia:WikiProject Spam-space, but there may be some lying around elsewhere as well - better to do all of them). Afterwards, the two mainspace CNR's can be deleted. Thanks! -- Dirk Beetstra T C 14:00, 11 January 2014 (UTC)
I would like a bot to find Counties of the United States which are not using a photo in the ex image variable within the Template:Infobox U.S. County and then put into Category:U.S. Counties Missing Ex Image. Please note that some of the Infoboxes don't even have "ex image" in the template as this variable was added after many of the Infoboxes were put in place. This would help in quickly finding counties with need a visual representation of the county. Is this possible? - Ichabod ( talk) 01:30, 14 January 2014 (UTC)
Is there anyone who is currently running a project tagging bot. I need all the new articles listed in categories (but not subcategories) listed at WP:CHIBOTCATS to be tagged with {{ WikiProject Chicago}}.-- TonyTheTiger ( T / C / WP:FOUR / WP:CHICAGO / WP:WAWARD) 17:00, 17 January 2014 (UTC)
Hello again,
some months (or even years?) ago, I requested a mass-move and following orthography check allover the Romanian topics: Şş and Ţţ (with cedilla) are wrong, Șș and Țț (with diacritic comma) are correct. I don't remember who did it finally, but it was done.
I now see several "cedilla-s" and "cedilla-t" coming again: Could please somebody (or even the same who did it in the past) check the whole category (including the category itself) Category:Communes of Ştefan Vodă district?
Thank you (and a happy new year)! —[ ˈjøː ˌmaˑ] 11:09, 1 January 2014 (UTC)
At Draft talk:Template:Redirect documentation a template to be called {{ Redirect documentation}} is being developed. This will require there to be a parameter, either unamed, 1= or current= to have any use. Would it be possible for a bot to leave a message on the talk page of any user who adds this template without a parameter or with an empty parameter?
If possible, someone who does it more than once should get no more than one message per day that lists all the redirects that need to be addressed. It shouldn't ever leave more than one message per redirect.
Still being discussed, but there is possibility that this template will autocategorise a page as a redirect unless a parameter is set. Would it be possible for a bot to check uses of this template and (a) set the parameter if the page it is used on is not a redirect; (b) remove the parameter if the page is a redirect?
At the minute this is just a request to determine feasibility, as the template is not live. If it is feasible then I expect a request to actually implement it wont be too far in the future. Thryduulf ( talk) 11:26, 19 January 2014 (UTC)
Would it be possible for a bot to identify pages are that are redirects to pages that have one or more WikiProject banners, and if so tag the redirect talk page for the same WikiProjects and taskforces as the target page using class=redirect and importance=NA?
e.g. If
Foo redirects to
Bar and
Talk:Bar is tagged with a banner for WikiProject Trains and the Locomotives task force, then add {{
WikiProject Trains}}
to
Talk:Foo.
If there is standard logic for when to use the {{ WikiProject banner shell}} template, then apply that. Otherwise use it if the target does, don't if it doesn't.
If the redirect talk already has Project banners then the bot should just (a) make sure that they use class=redirect and (b) make sure that any living/not living tags match, even if the set of banners is not the same as those on the target. The thinking behind this is that the redirect might be more specific than the target and so different projects might apply - for example, The weather in London redirects to London, the redirect talk page is tagged for the Meterology project but not the Olympics/Paralympics project. I don't consider these differences to be sufficiently often or important that it should stand in the way of a bot just copying all by default.
After an initial run, then there will be a need for either periodic or continuous future runs as redirects are fluid. My gut feeling is that converting class=redirect to class=(something else) on pages that are not redirects is a different task? If I'm wrong on that I have no object to it being included.
This request is floating an idea to see if it is possible, I have not sought consensus for it anywhere (there is no point if it's not practical). If people here think that it is both possible and that consensus for it is needed then can you suggest where best to get that consensus (Village pump?). If it is possible and uncontroversial then please go ahead and work your magic! Thryduulf ( talk) 11:51, 19 January 2014 (UTC)
Just to make sure we are on the same page: Not all projects tag redirects. Moreover, I think if class is set to "Redirect" then importance is auto set to "NA" and it is not need to be added. -- Magioladitis ( talk) 13:02, 19 January 2014 (UTC)
Topic ban isn't hapening and I got a WikiGnome pat of happiness
|
---|
Since I'm in danger of being topic banned from AFC the following Bot Tasks need new Operators:
I'm happy to transfer a database dump that drives tasks 1 and 2 to the new operator and help the new operator get acclimated with the processes |
This bot would detect hoax articles and maybe hoax edits. The bot would do Google searches (including Books,News,and Scholar) for the article's name. After it discounts Wikipedia mirrors and any other sites with identical text or that definitely came after the article's creation, any articles with unusually low ghits will be reported to a queue. This tool might also detect non-notable articles. WorldCat and JSTOR might also be used. Pages created by users with not many edits might be profiled. Alexschmidt711 ( talk) 21:09, 22 January 2014 (UTC)
Hi, there is this website for the newspaper Deccan Chronicle that does not regularly maintain archives. Bcos of that, dead links are frequent. I therefore request that a bot regularly monitor DC references and automatically archive them on Internet archive/ Webcite as soon as they are added. Do u know of any such bot, or can u create any such? Kailash29792 ( talk) 09:08, 25 January 2014 (UTC)
This Category recently started to fill up again, due to the efforts of one of Theo's Little Bot's to add attribution information to self created images.
The issue in practically all of the case's is that they bot couldn't find a description.
I've found that often the description can be obtained by pulling the image caption from an article where the image is used.
Is it possible for someone to provide a bot does a caption pulling sweep in respect of that category on a Weekly basis?
The intention was that the Bot added something like :
'''Captioned:''' {{{caption}}}} in [[{{{article}}}]] where {{{capiton}}} is the caption pulled from an article and {{{article}}} is the relevant article name.
Sfan00 IMG ( talk) 11:16, 25 January 2014 (UTC)
If it's not already done....
www.archiveteam.org/index.php?title=URLTeam#bit.ly_aliases
has a list of URL shortners.
Surely a bot could go through these and expand them out? Sfan00 IMG ( talk) 20:05, 26 January 2014 (UTC)
This is a long story....but to begin, I joined Wikipedia on 18:23, September 29, 2011, to be exact. I joined for the sole purpose of bots; I was younger and naïve. I immediately asked for help on making a bot, but was told I didn't have enough experience. So, I found a home at the Illustration Graphics Lab. After making graphics for a while, I left for personal reasons. I returned with Google Code-In 2013, and found some cool tasks for Pywikibot. This re-sparked my interest in bot-making, and I decided to figure out something to make my bot do, something simple to start out with but really helpful around here. I know this is kind of a reverse-bot request, but does anyone have ideas that could help?
I'm not really that experienced (only 585 edits as of 00:30, 19 January 2014 (UTC)), but the edits I've made have been helpful (or so I hope). If you believe I'm not experienced yet, I'll work on editing more.
Thanks, Sn1pe! (talk) (edits) 00:30, 19 January 2014 (UTC)
So, a few users and myself are working to update Template:Infobox military installation and this will require removing some images parameters from the infoboxes with a bot. This includes "[[File:", "]]", and "|XXXpx]]." Would anyone be able to program a bot to quickly follow us once we update the infobox in the coming days so that we don't have random parameters appearing once everything is done? Thanks! Kevin Rutherford ( talk) 22:18, 28 January 2014 (UTC)
Hello again,
some months (or even years?) ago, I requested a mass-move and following orthography check allover the Romanian topics: Şş and Ţţ (with cedilla) are wrong, Șș and Țț (with diacritic comma) are correct. I don't remember who did it finally, but it was done.
I now see several "cedilla-s" and "cedilla-t" coming again: Could please somebody (or even the same who did it in the past) check the whole category (including the category itself) Category:Communes of Ştefan Vodă district?
Thank you (and a happy new year)! —[ ˈjøː ˌmaˑ] 11:09, 1 January 2014 (UTC)
Many, many thanks for implementation of my idea creation User:ReferenceBot > Take a look at Archive New REFBot request.
DPL bot, BracketBot and ReferenceBot are the best inventions of Wikipedia. It's time for a new Bot. We need the
See
Category:Articles with missing files. Cleaned today at 9:00, at 13:00 there were 43 new entries. 10 per hour.
If there is a Bot like DPL bot & BracketBot exists (sending a message after about 10 minutes), 90% of work to clean up the category would be saved.
Excuse my bad English. -- Frze > talk 13:48, 3 January 2014 (UTC) @ 930913: Many, many thanks @ John of Reading: @ StarryGrandma: @ Nyttend: @ Benzband: @ TheJJJunk: @ Jonesey95:
There are:
User:ImageRemovalBot does not detect such errors. -- Frze > talk 08:43, 4 January 2014 (UTC)
See [1] User contributions For KylieTastic 1/1/2014-4/1/2014 - There are at least >300 edits to clean up this category, 100 edits per day. Not necessary. -- Frze > talk 16:14, 4 January 2014 (UTC)
Hi, from my experience clearing up many of the last 11K problems in this category, and some of the new ones popping up since cleared, I would categorise the main issues as
For many issues if a bot posted to the users page like BracketBot that they had caused a file issue - it would help a lot.
Hope that helps — Cheers KylieTastic ( talk) 17:21, 4 January 2014 (UTC)
Thank youvery,very much, Kylie Tastic! Such a lot of work! Renaming sounds better: MissingFilesBot compared to ImageRemovalBot. Best wishes -- Frze > talk 19:06, 4 January 2014 (UTC)
[[file:http://...]]
. -
tucoxn\
talk 03:28, 10 January 2014 (UTC)@ Frze, I think User:ImageRemovalBot only catches files that were deleted from en.wikipedia and not Commons. Also, it doesn't catch rectify all the files that were deleted from en.wikipedia: 1, 2, 3, and 4. Maybe Carnildo, who runs that bot, would like to comment. - tucoxn\ talk 08:36, 10 January 2014 (UTC)
Coding... I've noticed no opposition to this idea but nobody else has taken the initiative to do it either. I'm collaborating with some other editors to get the coding worked out for en.wp. Considering comments from Bgwhite that " CommonsDelinker was essentially abandoned" and from Siebrand that there is "little to no development capacity for CommonsDelinker", I'm moving forward with a bot to take up the slack. Other projects have noticed the problems with CommonsDelinker and I plan to try to update their successful solution for use here. I'm looking forward to a successful collaborative process. More updates to come.... - tucoxn\ talk 21:37, 21 January 2014 (UTC)
Is there a bot I can use to add articles to a newly created geographical/administrative category. For example, if I want to create Category:West Palatinate and add in all articles in German Wiki's de:Kategorie:Westpfalz, is there a bot I can use to do this quickly rather than laboriously doing every article manually? Clearly one snag is that not every article in the German Wiki category yet has an English Wiki equivalent... -- Bermicourt ( talk) 17:14, 1 February 2014 (UTC)
My idea for a bot is one that fixes one specific part of grammar: an/a before a vowel/not before a vowel. For example:
I know this might be considered a Fully automatic spell-checking bots, which is not allowed according to the Frequently denied bots list, but I think this is much simpler and is less prone to mistakes.
Thoughts? - Newyorkadam ( talk) 02:05, 24 January 2014 (UTC)Newyorkadam
A bot that tried to edit in this way would inevitably result in false positives. Not to get all WP:BEANS, but using your example above, what if your bot encountered text like: "There are three types of animals: 'Type A animals', 'Type B animals', and 'Type C animals'." Your bot would be wrong to "fix" that sentence to read "...'Type An animals'....".
Or what about a sentence like "It is considered bad grammar to write 'a animal'." Your bot certainly shouldn't "fix" that sentence, but per your proposal, it would.
And that's leaving aside things like "a/an historic event", "a/an herb garden", "an honest man", "a unique problem", "an NHL goalie", and on and on.
Once you start really laying out what such a bot would actually do and the many mistakes it could make, you should be able to see why "Bots that attempt to fix spelling or grammar mistakes or apply templates such as {{ weasel words}} in an unattended fashion are denied because it is currently beyond the capability of artificial intelligence technology to create such a bot that would not make mistakes."
You are certainly welcome to create an AWB or AutoEd script that fixes such problems, but you'll need to confirm each edit manually to ensure that it does not (i.e. you do not) create errors where there were none. – Jonesey95 ( talk) 04:40, 24 January 2014 (UTC)
At
WP:CFD 2013 October 4, it was agreed to delete
Category:Archives of American Art related once the talk pages of the articles had been tagged with {{WikiProject Smithsonian Institution|class=|importance=|listas=|SIART=yes|SIART-importance=}}
.
Please can some kind bot owner do this? If you ping me when it is completed I will then arrange for deletion of the category. -- BrownHairedGirl (talk) • ( contribs) 22:08, 2 February 2014 (UTC)
Hello. I suspect this is an old chestnut, but, finding myself regularly reminded of it once again, here goes...
When editing, I find {{cite X}}s within <ref>s formatted in these kinds of ways...
{{cite book|pages=10–12|title=Islam: A Short History|author=Karen Armstrong|isbn=0-8129-6618-X|date=2000,2002}} {{cite book| pages=10–12| title=Islam: A Short History| author=Karen Armstrong| isbn=0-8129-6618-X| date=2000,2002}} {{cite book | pages = 10–12 | title = Islam: A Short History | author = Karen Armstrong | isbn = 0-8129-6618-X | date = 2000,2002 }} (etc)
– i.e. either without spacing before each parameter, or the pipe-character before the next parameter stuck to the end of the previous one, or with spaces either side of the pipe-character (and usually the same around equals-signs) – to be either less easy to read and/or more prone to undesirable linewrapping than this sort of approach:
{{cite book |pages=10–12 |title=Islam: A Short History |author=Karen Armstrong |isbn=0-8129-6618-X |date=2000,2002}}
...i.e. where there is a space preceding each pipe-character before the next parameter and no spaces either side of equals-signs, nor before the closing double curly-brackets. Might a bot (or, probably, bots) be tasked to work through <ref>s and format any/all {{cite X}}s they find in this sort of way, please..?
Sardanaphalus ( talk) 14:48, 31 January 2014 (UTC)
PS I forgot to add the following type of format to the list above:
{{cite book | pages = 10–12 | title = Islam: A Short History | author = Karen Armstrong | isbn = 0-8129-6618-X | date = 2000,2002 }}
...i.e. spaced out and across lines rather than as a string of parameters. Sardanaphalus ( talk) 15:00, 31 January 2014 (UTC)
{{
cite xxx}}
templates), the presence of whitespace before or after the parameter name or value has zero effect on the template's action. Similarly, when whitespace is present, the amount and type of whitespace (true spaces, tabs or newlines) also makes no difference. It's a long-standing agreement that bots should not be given tasks that do not cause any change in the rendered output. --
Redrose64 (
talk) 16:12, 31 January 2014 (UTC)Hi, there is this website for the newspaper Deccan Chronicle that does not regularly maintain archives. Bcos of that, dead links are frequent. I therefore request that a bot regularly monitor DC references and automatically archive them on Internet archive/Webcite as soon as they are added. Do u know of any such bot, or can u create any such? Kailash29792 ( talk) 2:38 pm, 25 January 2014, Saturday (11 days ago) (UTC+5.5)
Category:Wikipedia usernames with possible policy issues is severely backlogged, and one easy way to reduce the backlog is to follow the category's directions by removing users who haven't edited in more than a week. Would it be possible for a bot to produce a list of all pages in the category whose users haven't edited in more than a week? Such a list could be dumped in a page in my userspace for me to act on it; no need for the bot to do anything else. I'm asking for an adminbot because each user's deleted contributions should be checked as well as active contributions. Nyttend ( talk) 22:40, 5 February 2014 (UTC)
Would someone consider running a bot through the scirus.com links in the main ns and adding {{ dead link}} to them. From the 20 that I check, all seem kaput. Might also be worth considering running it through articles for deletion, as there are whack of links there. — billinghurst sDrewth 14:40, 7 February 2014 (UTC)
This bot would remove wikilinks to articles from other pages after the article in question is deleted. The idea is as follows: 1. Check the deletion log. 2. Type in the name of every article that has recently been deleted into the "What Links Here" search box. 3. Edit all those pages so that the links to the recently deleted article are removed. I am not an expert in understanding how bots work or are created, so I would like some input on whether this is feasible. Jinkinson talk to me 22:58, 27 January 2014 (UTC)
In that case, what if this bot only removed wikilinks to pages that had been deleted as a result of an AFD discussion? I imagine that pages deleted after such discussions usually aren't deleted for copyright violations, nor for pages deleted simply because they are so badly written that they have no encyclopedic merit (both of which would probably be speedied). Jinkinson talk to me 23:30, 5 February 2014 (UTC)
Hello bot editors. I'm here from WikiProject Tennis where we encountered a new issue. Davis Cup has recently changed his official website URL access format - fortunately it was a systematic renaming rather then a full revamp. The code for a tennis match was as follows www.daviscup.com/en/draws-results/tie/details.aspx?tieId= and the ID number, which became www.daviscup.com/en/draws-results/tie/details.aspx?tieId=. It affects 115 articles per Google. Can you please help us out? Lajbi Holla @ me • CP 18:07, 5 February 2014 (UTC)
Hello. WP:NYC currently contains over 12,000 articles. However, I just entered Category:New York City into AutoWikiBrowser (recursive search), and it comes up with 197,852 pages within the category and its subcategories. Damn. Some of them are userpages, though (not sure why), so the final tally should be somewhat lower than that. Does someone with a bot want to tag these articles for us? Thanks. – Muboshgu ( talk) 01:49, 10 February 2014 (UTC)
Hi. Can someone help with this request? Plastikspork seems to be busy with RL. In addition to that, could the bot generate a list of most used fields (after bot run) that are not longer supported by {{ Infobox dam}}? (Mainly for manual action, if necessary.) Best regards, Reh man 15:18, 3 February 2014 (UTC)
res_total_capacity
, res_active_capacity
, res_inactive_capacity
, to be, res_capacity_total
, res_capacity_active
, res_capacity_inactive
. The change is purely cosmetic, but since we're running the bot anyways, it would be nice. Thanks a lot!
Reh
man 14:13, 5 February 2014 (UTC)
Hello again! I have an idea for a bot that somebody could make that there should be a bot that gets rid of bad words. Any thoughts? Yoshi24517 Chat Absent 04:04, 10 February 2014 (UTC)
@ Redrose64: I know, but I have changed. I'm sure I won't mess up again this time. Yoshi24517 Chat Absent 00:29, 13 February 2014 (UTC)
Hello,
I suggest to create a bot to replace normal space for a non-brake space ( ) between quantities (in numbers) and their units. For example in: 27,4 mm --> 27,4 mm.
This will guarantee that every unit is in the same line than its number. This is a recommended practice for good technical editors.
Thanks, Petterware ( talk) 14:23, 12 February 2014 (UTC)
Could somebody please remove link tracking from Daily Mirror citations (and external links) (that's a UK newspaper) as in these edits? If the same can be done for other sites, so much the better. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 10:22, 13 February 2014 (UTC)
Not done
hello, the category
Category:Provinces of Saudi Arabia and all of its subcategories need to be renamed (province --> region) can someone do that? :)
Ladsgroup
بحث 22:01, 7 February 2014 (UTC)
:)
Ladsgroup
بحث 22:12, 7 February 2014 (UTC)
This request was made at WP:CFD/S by User:Androoox, where as reviewing admin I opposed it because it did not meet the speedy renaming criteria.
Another editor noted that Androox proceeded to start performing the rename anyway, for which I blocked Androoox and rolled back the changes.
I have no idea which is the correct title, and no interest or involvement in it. However, end runs around the consensus-forming process are disruptive. Please can the editors working on this topic nominate the category for discussion at WP:CFD, and seek a consensus. User:Ladsgroup is now the second editor to try bypassing the process in respect of this category. -- BrownHairedGirl (talk) • ( contribs) 11:24, 14 February 2014 (UTC)
:)
Ladsgroup
بحث 22:11, 14 February 2014 (UTC)
So, a lot of articles out there don't have photos but could (see this image for an example of National Register of Historic Places articles without images). I know that there is the image requested template for these sorts of pages, but there isn't any bot that I know of that tags the images. Would someone be willing to create or program a bot to add the template to the talk pages of existing articles so that users who use apps similar to this one would be able to go out and efficiently photograph images? Thanks! Kevin Rutherford ( talk) 02:19, 11 February 2014 (UTC)
{{
Coord}}
template, go to the page. See if there's a infobox, if so see if there's a image parameter in the infobox that is populated, if so end. If there's no image parameter populated in the template, add the Image requested template (because the infobox really should have the image if there is any and flag down a user to fix it). If there's no infobox (Add the Needs infobox template), see if there's any links to a file namespace image, if there is end. If there really is not a image andd the image requested template.
This bot would scan talk pages looking for arguments. For example, if it picks up the word "stupid" on a talk page, it would look at the context. "Forrest Gump was considered stupid by his peers" would be ignored, "You're stupid for suggesting that" would get a notice about WP:RUDE (or whatever the relevant guideline was). The name comes from the Selsnya Conclave, the peace-loving commune in Ravnica, and that meaning works just fine for here too. Supernerd11 :D Firemind ^_^ Pokedex 08:45, 16 February 2014 (UTC)
Hi there,
Editing a lot of English Wikipedia pages about Poland recently I've noticed a lot of pages have their formatting blown away because of errors in the pronunciation sections at the start of the articles i.e. when there's a code like this after a name (person or place) [ˈsɔlɛt͡s kuˈjafskʲi] - and it renders as [ˈ[unsupported input]'[unsupported input]'[unsupported input]ʂ[unsupported input]'[unsupported input]'[unsupported input]]
The page usually displays as almost blank as the main body text displays after the bottom of any infobox rather than to the left of one.
I thought this might be browser or platform specific, but it certainly seems to fail in Chrome, Safari and Firefox on a Mac and the ˈ[unsupported input]' comes up in tens of thousands of Google search results (54,000, in fact).
Fixing it seems to involve retyping the ' (apostrophe) in one of the fields above and resaving. Oddly, even copying and pasting the source code (as I did above) also fixes it. I'm guessing some kind of rogue character instead of a apostrophe?
Is this an appropriate task for a bot (I've never been involved with one before)?
Examples include this one (as at 18/02/14) /info/en/?search=Solec_Kujawski
I'm not asking in any official capacity - just as a normal user. Wikipedia:WikiProject Poland might have something to add?
Thanks for your time, Scott Escottf ( talk) 16:10, 18 February 2014 (UTC)
This article was flagged at AfD to be merged into Cthulhu Mythos deities. And while that is possible, the formatting of this bibliography article would make the page way way too long. If this information is in a table, it would be easier to add it to the deities article. I would request that a bot be used to transfer the information from the sections into a sortable table in the following format:
The content seems to be listed in a fairly consistent manner, but any items that pose problems can be skipped and I can enter them in manually after. -- Nick Penguin( contribs) 17:07, 18 February 2014 (UTC)
Index code | Author | Work | Publication Date | Other Info |
---|---|---|---|---|
H15 | Luis G. Abbadie | "Huitloxopetl XV: The Transition of Miguel Quocha" | 1997 | |
VA | Christopher Smith Adair | The Voice Of The Animals (worlds Of Cthulhu) | 2006 | Publisher: Pegasus Press ISBN: 9783937826639 |
I propose that we get an existing bot to remove orphan tags from pages that are no longer orphaned. Jinkinson talk to me 17:23, 19 February 2014 (UTC)
ProQuest often gets miscapitalised as Proquest; it's a capitalisation mistake, not an alternate capitalisation. Could someone please run AWB to fix it on the following pages?
I've checked every instance of "Proquest" on every one of these pages, and I've not found a single situation in which "Proquest" should not be changed to "ProQuest", aside from its appearance in URLs — the visible text of these pages should always read "ProQuest". This is essentially the database dump exception to WP:SPELLBOT, since I've already run through everything with human eyes. Nyttend ( talk) 05:27, 20 February 2014 (UTC)
All instances of the "shimming" templates listed at Wikipedia:List of infoboxes#Shimming, in article space, should be 'subst:' - the templates are note intended for permanent use in the English Wikipedia, only for importing data from other Wikipedias. If this could be added to the duties of one of the active cleanup bots, so much the better. Note that there are occasional additions to the list of shimming templates. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:04, 20 February 2014 (UTC)
{{
substituted|auto=yes}}
in each template's documentation.
Anomie
⚔ 11:55, 20 February 2014 (UTC)
Per
Template talk:Cite DNB#CS1 errors when volume not included, could someone create a bot which would look at all the instances of {{
Cite DNB}} and {{
DNB}} (and their redirects) where |wstitle=
is populated and |volume=
if it doesn't already exist? For example,
Ralph Cudworth contains {{DNB Cite|wstitle=Cudworth, Ralph}}
which displays:
The first link takes you to
https://en.wikisource.org/wiki/Cudworth,_Ralph_(DNB00) which contains a DNB00 template with |volume=13
. The request is to change the Wikipedia article to {{DNB Cite|wstitle=Cudworth, Ralph|volume=13}}
which displays a more specific reference:
Similarly, could someone also add |volume=
to {{
Cite DCB}} (and its redirects) if it doesn't already exist? For example,
Mackenzie Bowell contains {{Canadabio|ID=7231}}
which displays:
The first link takes you to
http://www.biographi.ca/en/bio.php?id_nbr=7231 which contains var m_volume_name = 'Volume XIV (1911-1920)';
. The request is to change the Wikipedia article to {{Canadabio|ID=7231|volume=XIV}}
which displays a more specific reference:
Thanks! GoingBatty ( talk) 04:00, 20 February 2014 (UTC)
Two thousand dead links for multiply.com. If someone has a bot that dead links, please set it loose for those pages at Special:LinkSearch/*.multiply.com. Thanks. — billinghurst sDrewth 22:24, 21 February 2014 (UTC)
Two thousand dead links for multiply.com. If someone has a bot that dead links, please set it loose for those pages at Special:LinkSearch/*.multiply.com. Thanks. — billinghurst sDrewth 22:24, 21 February 2014 (UTC)
{{
Infobox road}}
and {{
Infobox road small}}
insert an image at the top of the infobox. If a route type and number combination call for an image does not exist, it trips a
category. I would like a bot to output a list of images needed in the 'Infobox road transclusions without route marker' category.
If the infobox route type parameter (type) is set up correctly, it should be easy to list the missing files. However, if the type is not set up, it should flag as an invalid type. – Fredddie ™ 04:58, 2 March 2014 (UTC)