This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 55 | ← | Archive 60 | Archive 61 | Archive 62 | Archive 63 | Archive 64 | Archive 65 |
De Hollandsche Molen have changed their database. I'd like a bot to change urls in the following lists:-
The string http://www.molens.nl/molens.php?molenid= needs to be replaced with http://www.molens.nl/site/dbase/molen.php?mid= I've done the Drenthe and Friesland lists already. Strings in articles will have to be a manual job as they link to subpages which have been altered and in some cases combined. Mjroots ( talk) 10:48, 5 October 2014 (UTC)
Maybe consider creating a template to avoid this in the future? -- Magioladitis ( talk) 11:28, 8 October 2014 (UTC)
{{
Merseyrail info lnk}}
with the basic link and popped that into articles
like this, so that if they change the URL format again, we would only need to amend that one template and not 60+ individual articles. It seems that I need to amend the template, because that URL is non-working again, and should become
http://www.merseyrail.org/plan-your-journey/stations/ainsdale.aspx --
Redrose64 (
talk) 19:08, 10 October 2014 (UTC)
Done The only page that had more than one offending link was List of windmills in Groningen. I fixed them with a text editor. BMacZero ( talk) 17:02, 18 October 2014 (UTC)
Can a bot be made that finds all alternative scientific names listed in the synonym field of a taxobox or speciesbox and makes them into redirects to the article? This is something we normally do manually. It is useful in preventing duplicate articles from being created.
User:Peter coxhead points out that sometimes these boxes contain information like "Fooia bara Smith, non Fooia bara Jones". Normally, they contain items formatted like these examples:
''Slimus Slimus'' <small>Betty Biologist, 1901</small><br />
*''Slimus Slimus'' <small>Betty Biologist, 1901</small>
*''Slimus Slimus'' Betty Biologist, 1901
The common feature is the italics.
{{Specieslist |Slimus slimus|Betty Biologist, 1901 |Slimus maximus |(Smith, 1898)}}
. Also there's increasing use of {{
small}} rather than <small>..</small>.
Peter coxhead (
talk) 08:17, 13 October 2014 (UTC)Links:
Anna Frodesiak ( talk) 01:11, 13 October 2014 (UTC)
Rjwilmsi has been performing similar tasks in the past. -- Magioladitis ( talk) 10:10, 13 October 2014 (UTC)
See Talk:Main Page#Main page image vandalism. In short, because commons:user:KrinkleBot was inactive for a few days, images approaching their turn on our main page weren't getting protected at Commons, and someone exploited that vulnerability to change the TFA image to pornography. I have raised the need for a back-up image protection system in the past ( Wikipedia:Bot requests/Archive 57#Bot to upload main page images in November 2013), and Legoktm got as far as Wikipedia:Bots/Requests for approval/TFA Protector Bot 2 which did not complete. Legoktm, do you fancy reviving this? Or does anyone else fancy adding this to their bot rota? Thanks, Bencherlite Talk 05:36, 14 October 2014 (UTC)
Images are tagged with "Image not protected as intended." Todays featured image seems to allow uploads at commons. [1]. -- DHeyward ( talk) 17:11, 16 October 2014 (UTC)
Question: how hard would it be to extend this cascaded protection to every image used on a Wikipedia page? Maybe a "revision accepted" privilege for updates at commons? It wouldn't prevent new files and it wouldn't prevent articles from pointing to new files but it would prevent a bad revision from going live on a "protected" page and also attract vandalism patrollers when an image reference is changed to a new name. It wouldn't lock any images that aren't used by Wikipedia. -- DHeyward ( talk) 05:18, 18 October 2014 (UTC)
What's the tool / gadget / script / bot -- that automatically or semi-automatically checks for archived versions of URLs and adds them into the citations with archiveurl parameter?
Thank you for your time,
— Cirt ( talk) 00:06, 15 October 2014 (UTC)
|auto-archived=October 2014
) in the cite/citation templates, and have wikipedians working through the category, removing that parameter and any bad archive urls. -
Evad37 [
talk 17:27, 16 October 2014 (UTC)
WP:NRHP maintains lists of historic sites throughout the United States, with one or more separate lists for each of the country's 3000+ counties. These lists employ {{ NRHP row}}, which (among its many parameters) includes parameters to display latitude and longitude through {{ coord}}. For most of the project's history, the lists used an older format with manually written coords (e.g. a page would include the code {{coord|40|30|0|N|95|30|0|W|name=House}}, when today they just have |lat=40.5 |lon=95.5), and when a bot was run to add the templates to the lists, it somehow didn't address the coordinates in some lists. With this in mind, I'd like if someone could instruct a bot to discover all WP:NRHP lists that are currently using both {{ NRHP row}} and {{ coord}}. I tried to use Special:WhatLinksHere/Template:Coord, but it didn't produce good results: since {{ NRHP row}} transcludes {{ coord}} when it's correctly implemented, all of these lists have links to {{ coord}}. As a result, I was imagining that the bot would perform the following procedure:
"Results" could be a spot in the bot's userspace. Since the bot won't be doing anything except editing the results page, you won't need to worry about opening a BRFA. Nyttend ( talk) 01:49, 15 October 2014 (UTC)
I don't know how to use bots and the taskforce is too large to go one by one adding articles to WP:Tejano. Best, .jona talk 18:37, 15 October 2014 (UTC)
Tejano music, Banda, Duranguense, Jarocho, Ranchera, Mariachi, Norteño (music). Erick ( talk) 21:17, 15 October 2014 (UTC)
Done. - DePiep ( talk) 15:29, 19 October 2014 (UTC)
In December 2013, Template:Convert ( | talk | history | links | watch | logs) was converted to Lua code (680k transclusions). The old wikicode template used subpages (subtemplates) of Template:Convert, like Template:Convert/flip2. There are some 3900 subtemplates in this pattern. To manage cleanup (e.g., improve the module:convert or its /data page), we'd like to know which subtemplates still are used in mainspace.
Request: produce a list with all pages that have pagename prefix (pattern) Template:Convert
and that have transclusions in mainspace. Pages with zero transclusions in mainsp can omitted (do not list).
Example:
Note 1: True subpages are listed by requiring Template:Convert/
(with slash). However, to cast the net a bit wider that slash is omitted from the filter (we want to catch page "Template:Convertx" too).
Note 2: Format suggestion: add the number, link the template pagename, newline per page+bullet.
My bet would be: you'll find between 25 and 100 pages. - DePiep ( talk) 19:10, 18 October 2014 (UTC)
Template | Mainspace pages transcluding |
---|---|
{{ Convert}} | 658760 |
{{ Convert/CwtQtrLb_to_kg}} | 35 |
{{ Convert/E}} | 2 |
{{ Convert/TonCwt_to_t}} | 286 |
{{ Convert/numdisp}} | 1 |
{{ Convert/per}} | 1 |
{{ Convert/words}} | 12 |
{{ ConvertAbbrev}} | 40086 |
{{ ConvertAbbrev/ISO_3166-1/alpha-2}} | 40086 |
{{ ConvertAbbrev/ISO_3166-1/alpha-3}} | 629 |
{{ ConvertAbbrev/ISO_3166-2/US}} | 39457 |
{{ ConvertAbbrev/ISO_639-1}} | 629 |
{{ ConvertAbbrev/ISO_639-2}} | 1 |
{{ ConvertIPA-hu}} | 372 |
The consensus of the discussion at Wikipedia:Village pump (proposals)/Archive 110#Bot blank and template really, really, really old IP talk pages. was never followed through on, so I am following through now. We would like a bot to blank and add the {{OW}} template to all IP user talk pages for which no edits have been made by the IP within the last seven years; and the IP is not been blocked within the last five years. These time frames may be tightened further in future discussions. Cheers! bd2412 T 20:47, 22 October 2014 (UTC)
I used to really like DASHBot ( talk · contribs) operated by Tim1357.
It would scan an article, find archive links, and automatically add them to the page.
Is there a bot, or script that I could even use semi-automatically, that could perform this function?
See for example DIFF.
Any help would be appreciated,
— Cirt ( talk) 02:05, 19 October 2014 (UTC)
Today there are no Bot who only focuses on combining references /duplicate references. I think such a Bot would be really useful for article creators and older already existing articles that are added with new references as well. That is why I now request that such a Bot should be created. And that the Bot in some way goes after a List of articles with non-combined references or similar. An option could be to add this task to an already existing Bot. -- BabbaQ ( talk) 15:26, 19 October 2014 (UTC)
I think a bot that detects and adds articles that has reached over the 5,000 views threshold for DYK articles are needed (to be added to the DYK stats page). Today not even half of the articles that appears on DYK and reaches that threshold are then added to the DYK stats page. And though that page is meant for some light-hearted fun stats it still makes the lists kind of irrelevant if the articles are not added. So if there is a way to create a bot that detects this and adds the new article to DYK stats it would be a good thing.-- BabbaQ ( talk) 11:51, 26 October 2014 (UTC)
Some 1500 stub articles have recently been added to
Category:Megachile. Can any of your bots please add sortkeys to these pages so that they are sorted according to the species name like [[Category:Megachile|Mucida]]
? The operation would be quite simple: If the page name begins with "Megachile" and contains two words, take the second word and use it as sort key beginning with an upper case letter.
De728631 (
talk) 18:58, 21 October 2014 (UTC)
We previously had a bot operating from User:Theopolisme, but it has since been depreciated except for one function. There are 4 main project requirements:
Previous versions of the operating code is available to be read at https://github.com/theopolisme?tab=repositories . Thanks, -- Nick Penguin( contribs) 04:08, 27 October 2014 (UTC)
Hi, we need to replace "cdsweb" with "cdsarc" in all the pages found in this category, leaving the rest untouched. "cdsweb" is an old address for the VizieR astronomical database. Changing it with "cdsarc" will make all the references working properly again. Thank you. -- Roberto Segnali all'Indiano 15:24, 29 October 2014 (UTC)
Template:TonCwt to t now redirects to
Template:long ton. Could we get a bot to replace {{TonCwt to t|
with {{long ton|
?
Jimp 09:25, 28 October 2014 (UTC)
Some years ago I created several thousand stubs for fungal taxa (classes, orders, families, and genera), many of which used "Outline of Ascomycota - 2007" as a source. Since then the main link for the page (at http://www.fieldmuseum.org/myconet/outline.asp) has gone dead, although the source is still available at http://http://archive.fieldmuseum.org/myconet/outline.asp. I'd appreciate it if a bot could be made to replace those deadlinks with the working archive link. Sasata ( talk) 00:02, 26 October 2014 (UTC)
I also think adding the task of putting ITN tags to the talk pages of articles that appears in the ITN section to one of the bots is needed. Today articles that do appear on ITN are sometimes added with the ITN tag on the article talk page and sometimes not. Atleast since a few months back. Just for consistency such a task would benefit the Wikipedia project. And as we have a DYKupdateBot I can not see why a ITNupdateBot could not be created.-- BabbaQ ( talk) 11:51, 26 October 2014 (UTC)
Following a discussion about incorporating links to MalaCards in the Disease Box ( User:ProteinBoxBot/Phase 3#Disease) by Marilyn Safran, Alex Bateman, and Andrew Su ( Wikipedia:WikiProject Molecular and Cellular Biology/Proposals#MalaCards - www.malacards.org) in June 2013, a member of the community volunteered to write the bot ( Wikipedia:Bot requests/Archive 57), and I posted a dump as per his request (at User:Noa.rappaport/Malacard mappings). Since over a year has passed and the robot hasn’t materialized, we have decided to develop and contribute the bot ourselves, and would appreciate help with the following questions:
Thanks, Noa — Preceding unsigned comment added by Noa.rappaport ( talk • contribs) 13:28, 26 October 2014 (UTC)
can we get a bot to remove transclusions of User:HBC Archive Indexerbot/OptIn on pages with another archival bot system in place? the bot hasn't operated in years, and most of the talk pages transcluding the template already have another bot archiving (e.g., transcluding User:MiszaBot/config as well). I would say just remove all the transclusions, but it may be useful to replace it (by hand) in the cases that there is no other bot archiving. note that the only reason I noticed was while cleaning up this page where the double mask parameter was creating an entry in Category:Pages using duplicate arguments in template calls, so removing these will probably clean up some of those as well. Frietjes ( talk) 20:46, 27 October 2014 (UTC)
Good editors often make bad edits to disambiguation pages, because they don't fully appreciate the difference between dab pages and articles. Akin to BracketBot, this bot will scan changes to disambiguation pages, identify new entries that violate WP:MOSDAB as described below, and leave a polite message for the editor so they can self-correct the problematic entry/entries. I envision this bot detecting:
The talk message should be something like:
Thoughts? — Swpb talk 19:33, 30 October 2014 (UTC)
Could we get a bot to fill in cites using {{ JSTOR}}? It would help immensely. Oiyarbepsy ( talk) 00:41, 4 November 2014 (UTC)
Can a bot please fix all links in article and article-talk-page-space, to avoid redirects and point directly to article: Think of the children?
Thank you,
— Cirt ( talk) 18:14, 4 November 2014 (UTC)
Redirects are not supposed to contain anything but the redirect itself and a template explaining what sort of redirect it is (e.g. a redirect from an alternative spelling, from a plural, from a pseudonym, etc.). However, it often seems that redirects get made with all kinds of other text on the page. This, of course, is of no help to readers, who never actually see the content of redirect pages. Nevertheless, these pages show up on the various lists of errors needing repair if they contain broken templates or disambiguation links or the like. Ideally, we should have a bot come around and clear everything off the redirect page that is not supposed to be on one.
There are occasions where even a well-meaning but inexperienced editor will put a redirect on top of a page that should not be a redirect, so perhaps a bot could be directed only at older pages with such content (at least a couple months). Generally, however, these should be cleaned up. Cheers! bd2412 T 14:44, 3 November 2014 (UTC)
"The following new content (which was nominated for speedy deletion as CSD:A10) is left here temporarily so it can be merged with the target". It appears from this edit that the Sana Sheikh example was a case of an editor using a section edit, rather than a whole apge edit, as the firsts section was blanked. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:53, 4 November 2014 (UTC)
I think we need a report of redirects with unexpectd large size and not a bot to fix them. We can spot them and fix them manually. -- Magioladitis ( talk) 13:51, 4 November 2014 (UTC)
We previously had a bot operating from User:Theopolisme, but it has since been depreciated except for one function. There are 4 main project requirements:
Previous versions of the operating code is available to be read at https://github.com/theopolisme?tab=repositories . Thanks, -- Nick Penguin( contribs) 04:08, 27 October 2014 (UTC)
Moved to Wikipedia talk:Media Viewer#Media viewer URLs, as this is clearly something a bot can't fix. Oiyarbepsy ( talk) 23:37, 7 November 2014 (UTC)
A new Bot is needed to fix disambiguation links that are not properly formatted. Currently no one is active from what I can tell, atleast its not effective.-- BabbaQ ( talk) 21:35, 8 November 2014 (UTC)
User:Frietjes recently brought to my attention an issue with moving {{ navbox}}-based templates where the name parameter is not updated. See for example the recently moved {{ Asian Games Field hockey}} where the "E" edit link leads to the redirect at the old page name instead. I've moved many templates and not thought about this consequence. Frietjes specifically raises the issue of less-technically able users getting confused by the bad links and forking content to the redirect page in error. Presumably I'm not the only one who has moved a template and not thought about this consequence.
For me, going back to check all templates I've moved to confirm the name has also been updated will be an arduous and boring task to say the least. It seems like a perfect and simple bot task – check that the name parameter matches the given template title for all templates where {{ navbox}} is used. This could feasibly be a regular, scheduled task. Separately, but on a related note, maybe even an "issue" category could be automatically generated by the navbox template when these two are mis-aligned(?). SFB 18:52, 2 November 2014 (UTC)
{{
navbar}}
. This includes WikiProject banners - these get the name for the navbar in either of two ways: it might be given in full as |BANNER_NAME=Template:WikiProject Foobar
or if that be blank, it's obtained from |PROJECT=Foobar
and prepended with Template:WikiProject
Then there are the
WP:RDTs - there are several forms for those; and the name for navbar use is normally passed through one of the parameters of one of the header templates - for instance, the second positional parameter of {{
BS-header}}
, or the |navbar=
parameter of {{
BS-map}}
. Stub templates also potentially fall into this group, but at present there are none in error - the {{
asbox}}
template has the means to validate the content of its |name=
parameter, and puts the page into
Category:Stub message templates needing attention (sorted under E) where there is a discrepancy, so that it's easy for people like me to
do this. Sometimes the presence of a page in that section of the cat is an indication of a completely different problem that is not bot-fixable - like page moved in error, or vandalism. --
Redrose64 (
talk) 19:58, 2 November 2014 (UTC)
SELECT COUNT(*) FROM templatelinks WHERE tl_from_namespace = 10 AND tl_namespace = 10 AND tl_title = 'Navbox'
I will see if I can create a new
Database report to detect this issue. --
Bamyers99 (
talk) 16:09, 4 November 2014 (UTC)
The Invalid Navbar links report has been run again with more base template checks. For both WikiProject banners and stub templates, the navbar is hidden by default, so I don't see a need to check those. -- Bamyers99 ( talk) 16:26, 11 November 2014 (UTC)
Would it be possible to write a bot that automatically checks the articles on site that have citations to ensure that the articles do not cite Wikipedia for there information? Recently I've come across a few articles that have cited other Wikipedia articles in reference templates and other source specific areas, but as Wikipedia requires citations in articles to be third party sources and from Wikipedia specifically I think it may be a good idea to write a bot to check for and remove these links from articles automatically if they are found to be present in an article. TomStar81 ( Talk) 05:15, 10 November 2014 (UTC)
Bot to automatically open, edit and save media wiki page without human intervention — Preceding unsigned comment added by 202.46.23.54 ( talk) 11:09, 11 November 2014 (UTC)
Could a bot be created to convert List of public art in St Marylebone and List of public art in the City of Westminster to using {{ Public art header with long notes}} and {{ Public art row with long notes}}? (The former article uses it for the first three sections but stops there.) I started the job of converting to the templates myself but it's proving much too tedious. If it were possible this could perhaps be rolled out on a much wider scale too, but it would be necessary to decide whether to use these templates or the ones on which they're based, {{ Public art header}} and {{ Public art row}}. Ham ( talk) 14:26, 11 November 2014 (UTC)
Would it be possible for a bot to find instances of 1. 2. 3. and change them to # characters, as I did here? It's obviously an ongoing thing (people unfamiliar with # won't stop adding numbered lists this way), so if it's possible, I expect that the best idea would be to add it as a task for an existing bot. Nyttend ( talk) 13:20, 12 November 2014 (UTC)
Please can someone replace {{ HighBeam}} (which I created) in citations that use Citation Style 1 templates, as in this edit, so that the former template, now redundant, can be deleted? Approx 113 articles are involved, but there may be more than one instance in a single article. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:34, 6 November 2014 (UTC)
See wikidata:Wikidata:Bot requests#Import Persondata from English Wikipedia. Thanks!
We don't normally "fix" redirects, but when they occur in navboxes, and the user views the article in question, the link is not de-linked and emboldened as it should be.
Can a bot find and fix them, as I did in this edit? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 22:20, 12 November 2014 (UTC)
Is it possible for a bot to change the link National Highway 1 (India) form all the articles to National Highway 1 (India)(old numbering)? If yes I would provide other links too. To find the context of such a change please see Talk:National Highway 1D (India)(old numbering). Marlisco ( talk) 02:18, 17 November 2014 (UTC)
There is a large series of articles involved here that need links migrated from National Highway XXX (India) to National Highway XXX (India)(old numbering) so that new articles can be placed at the National Highway XXX (India) titles without making existing links incorrect. Still hoping for help on this one! Dekimasu よ! 20:08, 18 November 2014 (UTC)
This is a bit complicated, but it is a major headache. Frequently, articles will include a links to templates for which the templates themselves call other templates. In some cases, a parameter in the template will refer to a list of terms on a third page which will then appear linked in the article, but do not appear as a searchable linked title on the page from which they are called.
A stable example would be Template:S-rail/lines. If you look at the page, you only see a snippet of template coding. However, if you look at the page source, you can see dozens and dozens of terms that are called from the page when a particular code is used in the Template:S-rail template. An item on this page, like [[Alaska Railroad]], will therefore be called when Template:S-rail is used on a page and "AKRR" is used as a parameter in that template. The problem arises from the fact that this template does not show up on the "What links here" results for templates linking to "Alaska Railroad". Sometimes entries on pages like this are renamed or made into disambiguation pages, and because the tools for fixing disambiguation pages tend rely on searching the "What links here" results, templates from which a term can be called without appearing as a linked term can be very frustrating.
I would like a bot to add to every page that has such terms a "noinclude" section (or in the existing "noinclude" section) a list of links to all terms found on any page from which those terms can be called. That way, if one of those links changes, it will easier to find and fix the template containing the term. Cheers! bd2412 T 17:10, 18 November 2014 (UTC)
Could there be a bot that follows these steps:
Would that be too much trouble? Retartist ( talk) 03:17, 19 November 2014 (UTC)
Please can someone redirect all the documentation sub-templates of the templates listed at Template:Election box/doc#List of templates (except for the target template, {{ Election box/doc}}), as I did in this edit? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:38, 15 November 2014 (UTC)
I can't believe such a bot doesn't exist, but I can't find it: I want to change all urls of the form http://www.chess.co.uk/twic/twicX.html to the form http://www.theweekinchess.com/html/twicX.html , where X is a number (to be precise, a positive integer in the range of 1 to 1100, inclusive). (Because the chess news web site The Week in Chess has moved). e.g. as a random example, http://en.wikipedia.org/wiki/Yuri_Shabanov#cite_note-2 needs to change to point to http://www.theweekinchess.com/html/twic473.html Adpete ( talk) 08:42, 17 November 2014 (UTC)
#something
that is occasionally found on a URL is called the fragment, and it normally indicates a point on the page that is somewhere other than the very top. In the case of
http://www.theweekinchess.com/html/twic473.html#6 for example, it's the subheading "6) 13th World Championship for Seniors 2003". In such uses, the purpose is identical to the section linking used by Wikipedia. I would imagine that if the part from the second /twic
to the end is left alone, and the only change is the replacement of http://www.chess.co.uk/twic
with http://www.theweekinchess.com/html
then the URLs which have fragments should then work as intended. --
Redrose64 (
talk) 00:31, 20 November 2014 (UTC)
http://www.chess.co.uk/twic/
and http://www.chesscenter.com/twic/
with http://www.theweekinchess.com/html/
. In a few cases it will produce a dead link, but that's ok, because the existing links are all dead links anyway. If a log of all changes can be sent to me somehow (dumped on my Talk page is fine) then I can go through and fix the dead links manually.
Adpete (
talk) 04:23, 20 November 2014 (UTC)Instance of {{ Infobox Korean name}} which are underneath a biographical infobox (for example {{ Infobox person}}) need, where possible to be made a module of that infobox, as in this edit. This could perhaps be done on the basis of proximity: say, "if there is no subheading between them, make the edit"? Or "If nothing but white space separates them"?
A list (drawn up last year; feel free to edit it) of articles which use both of the infoboxes is at User:Pigsonthewing/Korean names. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 19:15, 21 November 2014 (UTC)
Hello,
I've done a little helping out at WP:MILLION, which aims to boost morale by identifying good articles or featured articles that receive a certain amount of views each year and awarding a little "million award" to the main contributors of those articles. You can read more about it on the project page, which does a much better job of explaining itself than I ever could.
In practice, identifying and awarding these articles is a big pain in the butt. I do it all manually, and I feel it has three main steps:
1. Identifying GAs and FAs that receive at least 250,000 views a year. My approach to this is anything but sophisticated. I just check Wikipedia:Good articles/recent, click on articles that look promising, then click over to their talk page to check their view counts. I take their total views for the last 90 days and multiply by four.
2. Identifying the contributors of a particular qualifying article that contributed most to its passing GA/FA. This generally involves me searching through the contributions page and often isn't as straightforward as you might think.
3. Awarding these contributors. This involves copying and pasting one of the templates from WP:MILLION onto their talk page and editing it to make it personal to them. I also have to add the article and contributors to the list at the bottom of the project page.
I feel it's important to keep morale up, and contributors do seem to appreciate receiving the award, but I hate how tedious the above process is. Would a bot be able to assist at any or all of the three stages? Are people already using them for that? I don't know, but any help would be much appreciated.
Bobnorwal ( talk) 14:11, 18 November 2014 (UTC)
I need a CSV file (or something similar) please, showing our links to ChemSider IDs. This will enable the import of Wikdiata IDs into ChemSpider.
For each instance of {{
Chembox}} with a |ChemSpiderID=
, I need the value of that parameter in one column, and the corresponding Wikidata ID in the next column. The article title should go in the next column.
e.g for Boron nitride, that would return a row with:
59612,Q410193,Boron nitride
I would like a separate report for {{ Chembox Identifiers}}, a module of the former template, where the prams are:
|ChemSpiderID=
|ChemSpiderID1=
|ChemSpiderID5=
|ChemSpiderIDOther=
with one row per parameter (and this duplicate Wikdiata IDs).
I'll request a similar report from wikdiata, but there may be values recorded here and not there, or vice versa. I'll also try to resolve those cases.
Note: ChemSpider is a product of the Royal Society of Chemistry, where I employed as Wikimedian in Residence. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 17:28, 19 November 2014 (UTC)
A SpaceRemover bot could do:
- Yellow Dingo ( talk) 11:21, 23 November 2014 (UTC)
<pre>...</pre>
<source>...</source>
and similar). Outside of preformatted sections, changing multiple spaces to single spaces is very much
WP:COSMETICBOT and if done by AWB, it would fall foul of
WP:AWB#Rules of use item 4. We have a respected admin who habitually uses double spaces not just between sentences, but between random pairs of words, something like 50% of the time. I asked him about this at
Wikimania London 2014 and I was more than satisfied with his answer (which I won't repeat here); he's aware that it happens, but it's an unconscious act. I suspect that if people were to "clean up" the posts of this editor, it would not go down well. --
Redrose64 (
talk) 16:09, 24 November 2014 (UTC)
Does someone have, or could someone gen up, a bot that would substitute all occurrences of something like WP:X to W:Y? The number of occurrences is small, like less than five hundred? I don't have a consensus for the change and just want to get a quick answer on feasibility before I go start discussing it with other folks. NE Ent 23:11, 27 November 2014 (UTC)
Hi!
I'm working on fr.wikisource and unfortunately, I have no programming knowledge nor skills. I was wondering if it was possible to create a tool that allows this kind of report:
— Within a specified category (given as parameter)
— Within the namespace Livre:name of book
— creates a report that states for each Livre with this info:
This tool/bot would be a great addition to any of the 60 languages Wikisrouce projects as it would allow us to concentrate on specific books and complete books that are almost finished.
-- Ernest-Mtl ( talk) 19:40, 28 November 2014 (UTC)
Could we have a bot to do all of the stuff described here: Template:Editnotices/Page/Wikipedia:Goings-on? Oiyarbepsy ( talk) 04:50, 5 December 2014 (UTC)
I want a robot where you can ask that robot and he'll archive some talks in the talk page for you. Ask for archiving with 60-day-or-older talks and he'll do this. Qwertyxp2000 ( talk) 07:43, 6 December 2014 (UTC)
Some articles can really be messy and I want some of the talk pages to have archives. Qwertyxp2000 ( talk) 07:50, 6 December 2014 (UTC)
Which robot does such thing? Qwertyxp2000 ( talk) 08:12, 6 December 2014 (UTC)
Help me. I don't know how. Please give me a brief way to grant want I wanted. Qwertyxp2000 ( talk) 08:22, 6 December 2014 (UTC)
{{User:MiszaBot/config | algo=old(90d) | archive={{SUBST:FULLPAGENAME}}/Archive %(counter)d | counter=1 | maxarchivesize=400K | archiveheader={{Automatic archive navigator}} | minthreadsleft=4 | minthreadstoarchive=1 }}
...and paste it at the top of the relevant talk page. Some time in the next 48 hours you should find that the older threads have been archived for you. -- John of Reading ( talk) 08:29, 6 December 2014 (UTC)
So I copy that piece of text and paste it to the page that needs archiving? Qwertyxp2000 ( talk) 08:34, 6 December 2014 (UTC)
It would be very useful to have User:SDPatrolBot revived or replaced. This one used to pick up where a creator of a new article removed a speedy deletion tag, which they're not supposed to do. Editors are doing this all the time and these are the savvier ones, some of whose pages probably need deleting more than most. The bot faded out in August 2013, can't see where this was ever commented on : Noyster (talk), 10:41, 2 December 2014 (UTC)
Could someone make a bot that strips the accountID from Proquest URLs? Here is a search to illustrate the problem that needs to be fixed. A Proquest URL by default affixes an accountID to the end which is specific to only one institution. For example: http://search.proquest.com/docview/229617956?accountid=14771 should be changed to http://search.proquest.com/docview/229617956
It is already a problem that these Proquest links are behind a paywall and only university users with a library subscription to Proquest can view them. But it's even worse when an institution-specific account ID on the URL prevents a user from another institution from accessing that resource, even if all they have to do is edit the URL to delete the accountid.
Does this seem like the kind of task a bot could do? Search for any URL that begins with http://search.proquest.com/docview/ and then remove any characters that come after the docview number? Lugevas ( talk) 18:19, 10 December 2014 (UTC)
T cedilla (Ţ) was wrongly atributed to Romanian language, which is using T comma (Ț) instead. There are 458 articles containing T cedilla in the name. They must be renamed, but also the S cedilla must be changed into S comma too (for example Căpăţâneşti has to become Căpățânești. I made the list with the articles to be renamed at User:Ark25/Robot#T Cedilla and I created a list with the current names and the correct names at User:Ark25/Robot#T Cedilla - Paired with the desired result. Most of the destination titles already exist as redirects to the respective page - they can be re-written without any problem. Thanks. — Ark25 ( talk) 22:43, 4 December 2014 (UTC)
I renamed manually about 1,500 articles containing s/S/t/T-cedilla.
In order to replace the diacritics inside articles, I made four lists with articles containing s/S/t/T-comma in title - only the Romanian language uses those letters, so all those articles refer to Romanian-language related things. In those articles, it's safe to replace cedilla diacritics with comma diacritics. Except a very few articles where they might contain Turkish words using S/T-cedilla. Therefore it requires a manually assisted robot. Turkish names are quite obvious so it's not necessary to have a bot master with knowledge of Romanian language. But I can volunteer to do that in case I my user will get the approval to become a bot - Special:Contributions/ArkBot - ro:Special:Contributions/ArkBot.
The same kind of semi-automatic replacements can be made in the articles from Category:Romanian-language surnames and other similar categories on Romanian-language topics.
@ Oiyarbepsy: I think your idea is a very good one. The template should be created and then advertised on Wikipedia:WikiProject Romania. — Ark25 ( talk) 12:13, 13 December 2014 (UTC)
@ Anomie: I noticed you created a lot of redirects like for example HC Steaua București. Can you remember where you got the list of redirects to create from? — Ark25 ( talk) 12:51, 13 December 2014 (UTC)
I found it: User:Strainu/ro - Wikipedia:Bot_requests/Archive_36#Make_redirects_from_titles_with_correct_Romanian_diacritics_to_the_currently_used_diacritics and also a sandbox — Ark25 ( talk) 13:31, 13 December 2014 (UTC)
I made a list of categories at User:Ark25/Robot#T Cedilla – Categories. They must be renamed, replacing T-cedilla (Ţ) with T-comma (Ț). Is it possible to do that it automatically (including the re-categorization of the articles)? — Ark25 ( talk) 16:13, 13 December 2014 (UTC)
At the Film Project three new task forces have been created. Please could a bot tag the {{ WikiProject Film}} banner of articles in the following categories with the appropriate tags:
|Mexican-task-force=yes
(or "Mexican=yes")|Documentary-film-task-force=yes
(or "Documentary=yes")|Silent-film-task-force=yes
(or "Silent=yes")If an article currently doesn't have a talkpage or a film project tag on an existing talkpage, to add that too. Thanks. Lugnuts Dick Laurent is dead 13:53, 3 December 2014 (UTC)
I am on it. -- Magioladitis ( talk) 23:58, 14 December 2014 (UTC)
Hi! There are currently 6,297 articles with video clips; see list here (put in 7000 as the limit to see them all). They should all be in Category:Articles containing video clips, which currently has 622 articles. Possible? Thanks, -- phoebe / ( talk to me) 06:17, 17 December 2014 (UTC)
Can a bot be developed to automatically update data in Wikipedia reference tables from external website sources? — Preceding unsigned comment added by 99.240.252.181 ( talk • contribs) 23:25, 19 December 2014
Presumably this could be an additional task for an existing bot. User:Nyttend/Pennsylvania is a long list of municipalities with no photo or poor-quality photos; it's a collaborative project to get all of them illustrated, and when we add (or discover that someone else added) a workable photo to one of the municipalities, we remove it from the userspace list and (ideally) add the image to List of municipalities in Pennsylvania. Given the length of the list (nearly 1000 items currently), it's quite likely that one will get a new photo every so often, and we won't notice.
I'm wondering if a bot could be instructed to visit User:Nyttend/Pennsylvania every so often (perhaps once per week or once every other week), examine all of the linked pages, and look to see if any image has been added since the last time. Presumably the bot could examine the edit history, ignore all pages that hadn't been edited since the last run, and examine each edit made since the last run to see if any of them had added an image. When it finds such an edit, it logs it and goes to the next article, and when it's run through all the articles, it leaves a simple note on my talk page saying something like "Images have been added to ___, ___, and ___ in the last [amount of time]". Almost all of these locations are small towns and rural areas that get very few edits (for example, before I added a photo this month, Franklin Township, Beaver County, Pennsylvania was edited just twice in the past year), so the bot won't need to check many edits. Some of the image-adding edits will likely be vandalism reversion, addition of non-photographic images (e.g. maps), and other things that I'm not looking for, but there's no need for the bot to filter for anything; after all, it's just giving me a list of pages to check, and there won't be many. Probably most runs won't find any new images; if this is the case, the bot should still leave me a message, saying something like "There weren't any new images in the last [amount of time]". Nyttend ( talk) 18:50, 20 December 2014 (UTC)
I was thinking that a bot could generate a list of articles with overlinking, but it seems a daunting task to comb through the existing articles looking for instances of overlinking. But perhaps it would not be too difficult to have a bot look at newly-created pages for multiple links to the same article? I'm thinking that the bot could look for all wikilinked strings, and find any with three or more instances. Then it could either generate a list somewhere or perhaps tag the article if, say, three or more different articles are multiply linked. The bot could ignore wikilinks in infoboxes and other templates, and perhaps also tables; then the number of overlinks that trigger the bot could be reduced to 2+. Abductive ( reasoning) 19:52, 18 December 2014 (UTC)
An automated process is requested to change the word "Category" into "Kategori" in the Indonesian wikipedia pages. Any bots available for use?
Thanks, JohnThorne ( talk) 00:23, 20 December 2014 (UTC)
See [3]. Can that be done? Seattle ( talk) 23:31, 25 December 2014 (UTC)
I would like a bot to sync transclusions of {{ Official website}} with WP:Wikidata. This is an example of what I would like to see done. If the external link matches the Wikidata entry it can be safely removed. Then we should make a list of what is left. -- Magioladitis ( talk) 07:16, 24 December 2014 (UTC)
Back in October, a request was made for a bot to fix "bgcolor" markup, so that background colours would display properly on mobile devices (see the discussion for full details). Mdann52 kindly volunteered to take up the task but encountered difficulties. Does anyone else want to have a go? FYI, I've recently noticed that Dispenser's Dab solver tool seems to incorporate the desired functionality (see this edit as an example), in case that is of any help. DH85868993 ( talk) 23:17, 29 December 2014 (UTC)
AN3 would be easier to browse and quickly determine if edit warring is occurring if entries like this
were formatted as diffs that looked like this:
Among other things, the dramatically consistent page size (first 99,800 bytes then 99,779 bytes) comes out.
I'd like to write or see written a bot that would perform this task automatically. Jsharpminor ( talk) 06:08, 1 January 2015 (UTC)
{{Make diff}}
, that takes a URL like https://en.wikipedia.org/?title=Robin_Williams&diff=next&oldid=639931566
and outputs a formatted diff? That would be useful in many other circumstances, too, and should be doable in Lua.
Andy Mabbett (Pigsonthewing);
Talk to Andy;
Andy's edits 11:37, 1 January 2015 (UTC)I just wanted some input regarding a bot idea which I am planning on implementing. The bot would monitor the requests for confirmed page to check whether:
I've already started to write this (but obviously won't run it without approval, etc.) Note that the bot won't need admin privileges because it will not be taking any actions which involve accepting requests. Hopefully this will take some of the load off the administrators there. CarnivorousBunny talk • contribs 18:11, 22 December 2014 (UTC)
You are already autoconfirmed
(or something similar). Thanks for the input.
CarnivorousBunny
talk •
contribs 17:19, 26 December 2014 (UTC)Not done We need a robot that can automatically add talk page tags that are almost always used. For example a page at this link will most likely not have a talkheader tag. EMachine03 ( talk) 12:29, 24 December 2014 (UTC)
ugh ok nvm :( EMachine03 ( talk) 20:41, 25 December 2014 (UTC)
More like requesting for AWB operation to replace the transclusion of {{ HK-MTRL color}} and {{ HK-MTRL lines}} with {{ HK-MTR color}} and {{ HK-MTR lines}} (without L) respectively because they now use the same syntax to invoke module:MTR. If you would, please also nominate the former 2 templates for speedy after the replacement. Thank you. -- Sameboat - 同舟 ( talk · contri.) 12:53, 30 December 2014 (UTC)
@ Sameboat: Done -- Magioladitis ( talk) 21:02, 30 December 2014 (UTC)
GoingBatty I still see some transclusions of the 2 templates but I am not sure why. -- Magioladitis ( talk) 21:17, 30 December 2014 (UTC)
{{
s-line|system=HK-MTRL}}
. Per testing in
my sandbox, the fix isn't as easy as removing the "L" from the template.
GoingBatty (
talk) 21:32, 30 December 2014 (UTC)
@ Sameboat: then I think it's better if you follow the right procedure and first send the templates for TfD before any other action. -- Magioladitis ( talk) 08:02, 31 December 2014 (UTC)
{{s-line|system=HK-MTRL|
with {{s-line|system=HK-MTR|
from all
MTR Light Rail stops articles which loads bunch of "HK-MTRL" templates but now moved by me under the "HK-MTR" prefix. {{s-rail|title=HK-MTRL}}
should be left intact because it's needed to call for the "
MTR
Light Rail" title, otherwise it becomes just "
MTR". --
Sameboat - 同舟 (
talk ·
contri.) 15:05, 3 January 2015 (UTC)
Currently, WP:NYC encompasses 13,478 articles. According to a recursive search of Category:History of New York City on AWB, the category and its subcategories include 22,363 articles. Category:Geography of New York City contains 57,564 articles. I started using AWB to tag the talk pages, but it's too long and cumbersome to do this through AWB, especially since a bot can automatically give the article its rating from existing project templates. Does someone have a bot they can lend to this task? – Muboshgu ( talk) 17:58, 4 January 2015 (UTC)
(sorry for my english)
This bot has stopped working for one year and more than
200 portals are no longer updated. The source code is
here. --
SleaY(
t) 05:23, 6 January 2015 (UTC)
Function: Add {{
user sandbox}} to pages of the form User:((.)*)/sandbox where the page does not already contain
Namespace: User
Run frequency: Daily
Expected run size per batch = 25-50 pages.
Remarks: I've recently been doing some New Page Patrol on user pages, (mostly subpages.), and have noted that marking sandboxs seems to be a substantial part of the process. Given it's essentially mechanical, I feel it is amenable to automation.
Userspace drafts, will still have to be identified manually as at present.
Sfan00 IMG (
talk) 13:32, 10 January 2015 (UTC)
Sfan00 IMG ( talk) 13:48, 10 January 2015 (UTC)
I hate having to work for WP:TAFIACCOMP and it is very tedious to convert the table to some other template. Could we have a new functioning robot that could finish every week's TAFI accomplishments, achievements, finish off converting the table, and do those?
Thanks, Qwertyxp2000 ( talk) 20:33, 13 January 2015 (UTC)
{{Wikipedia:Today's articles for improvement/Accomplishments/row |YYYY = |WW = |oldid = |olddate = |oldclass = |newid = |newdate = |newclass = |edits = |editors = |IPs = |bots = |reverts = |prose_before = |prose_after = |size_before = |size_after = }}
If this is what you're looking for, [13] is an example of what needs to be done. -- Ypnypn ( talk) 16:41, 15 January 2015 (UTC)
Good day!
Currently, the overall 2009 data included the word articles, however, the 2014 data to the public. Wikipedia, the updated data is famous, yet still 2009 data there are some places. The hungarian settlements categories: Category:Populated places in Hungary by county
The data are available in XLS format of the Central Statistical Office website: [14]
However, if you've updated population data, the mayors would be useful to inject into the page's data boxes. The mayors elected in 2014 valasztas.hu available on the website: [15]
Have a nice day and good luck with the expansion of Wikipedia.-- นายกเทศมนตรี ( talk) 16:06, 15 January 2015 (UTC)
Can your bots search for factors that help in stock investing? — Preceding unsigned comment added by Jdhurlbut ( talk • contribs) 03:35, 18 January 2015 (UTC)
May a bot be scheduled to move all those pages whose names include (case-insensitive) "Labelled Map" or "Labeled Map" from the categories Category:Graphic templates and
Category:Graphics templates to
Category:Labelled map templates, please?
Sardanaphalus (
talk) 12:11, 15 January 2015 (UTC)
When two Regional Indicator Symbols are combined to form a country code, some mobile devices interpret the letters by displaying the flag of the country in question. We've already created several of these combinations as redirects to the flag article, e.g. 🇷🇺 redirects Flag of Russia. Robin0van0der0vliet has proposed that all country codes be created for this purpose ( full list), and while I've created 🇳🇱 and would like to do the rest, all those page creations would take quite a while for a human. Could someone write a bot to do it? Titles that include Regional Indicator Symbols are blacklisted, so you'll need an adminbot to do this project. I've already looked through the full list and can't see any entries that I wouldn't be willing to create. Nyttend ( talk) 02:39, 10 January 2015 (UTC)
FindArticles.com used to be (I believe) a large aggregator of magazine articles from a variety of publications. We have a substantial number of articles linking to the original site - either to individual articles which were hosted at the site, or to generic "find articles by this person" searches (eg this removal) - I count around 17000 links in the main namespace, almost all of which are specific article links.
Unfortunately, it looks like the entire site now redirects to search.com, an obscure but not very useful search engine, and this material is completely gone; the links just provide traffic to a commercial search engine. Every one of these will need converted to archive.org links/plain-text magazine citations, or (for the few search links) simply removed outright. Does a bot exist that can do this? Andrew Gray ( talk) 18:41, 19 January 2015 (UTC)
There was an RFC last year on the use of date abbreviations, that closed in April 2014, the result was not to allow Sept as an abbreviation for September and also not to use full stops following the abbreviated month names. See RFC detail.
Could a BOT implement this as there seems to still be articles that fail to meet this change to the MOS. Probably could continue to run on a periodic basis to catch new non-compliances.
Detail would be to change dates that use Sept to Sep and to remove the full stop following a shortened month, assuming it is not at the end of a sentence. Though if in running text then Sept should be expanded to September and other months expanded as appropriate, as short months are not allowed in running text. Obviously the BOT should avoid quotes and reference titles when doing this.
Keith D ( talk) 22:41, 22 January 2015 (UTC)
Hello! I have a splendid idea for a good bot. I need code (probably in Python) so that there will be a bot that updates the BTC Price, on the article Bitcoin. I just put that part in the article, and I was wondering if that would be a good idea for a bot. I am ready to create another account so I can implement this bot in place when it is approved at BRFA. Let me know what you think. Yoshi24517 Chat Absent 17:15, 23 January 2015 (UTC)
This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 55 | ← | Archive 60 | Archive 61 | Archive 62 | Archive 63 | Archive 64 | Archive 65 |
De Hollandsche Molen have changed their database. I'd like a bot to change urls in the following lists:-
The string http://www.molens.nl/molens.php?molenid= needs to be replaced with http://www.molens.nl/site/dbase/molen.php?mid= I've done the Drenthe and Friesland lists already. Strings in articles will have to be a manual job as they link to subpages which have been altered and in some cases combined. Mjroots ( talk) 10:48, 5 October 2014 (UTC)
Maybe consider creating a template to avoid this in the future? -- Magioladitis ( talk) 11:28, 8 October 2014 (UTC)
{{
Merseyrail info lnk}}
with the basic link and popped that into articles
like this, so that if they change the URL format again, we would only need to amend that one template and not 60+ individual articles. It seems that I need to amend the template, because that URL is non-working again, and should become
http://www.merseyrail.org/plan-your-journey/stations/ainsdale.aspx --
Redrose64 (
talk) 19:08, 10 October 2014 (UTC)
Done The only page that had more than one offending link was List of windmills in Groningen. I fixed them with a text editor. BMacZero ( talk) 17:02, 18 October 2014 (UTC)
Can a bot be made that finds all alternative scientific names listed in the synonym field of a taxobox or speciesbox and makes them into redirects to the article? This is something we normally do manually. It is useful in preventing duplicate articles from being created.
User:Peter coxhead points out that sometimes these boxes contain information like "Fooia bara Smith, non Fooia bara Jones". Normally, they contain items formatted like these examples:
''Slimus Slimus'' <small>Betty Biologist, 1901</small><br />
*''Slimus Slimus'' <small>Betty Biologist, 1901</small>
*''Slimus Slimus'' Betty Biologist, 1901
The common feature is the italics.
{{Specieslist |Slimus slimus|Betty Biologist, 1901 |Slimus maximus |(Smith, 1898)}}
. Also there's increasing use of {{
small}} rather than <small>..</small>.
Peter coxhead (
talk) 08:17, 13 October 2014 (UTC)Links:
Anna Frodesiak ( talk) 01:11, 13 October 2014 (UTC)
Rjwilmsi has been performing similar tasks in the past. -- Magioladitis ( talk) 10:10, 13 October 2014 (UTC)
See Talk:Main Page#Main page image vandalism. In short, because commons:user:KrinkleBot was inactive for a few days, images approaching their turn on our main page weren't getting protected at Commons, and someone exploited that vulnerability to change the TFA image to pornography. I have raised the need for a back-up image protection system in the past ( Wikipedia:Bot requests/Archive 57#Bot to upload main page images in November 2013), and Legoktm got as far as Wikipedia:Bots/Requests for approval/TFA Protector Bot 2 which did not complete. Legoktm, do you fancy reviving this? Or does anyone else fancy adding this to their bot rota? Thanks, Bencherlite Talk 05:36, 14 October 2014 (UTC)
Images are tagged with "Image not protected as intended." Todays featured image seems to allow uploads at commons. [1]. -- DHeyward ( talk) 17:11, 16 October 2014 (UTC)
Question: how hard would it be to extend this cascaded protection to every image used on a Wikipedia page? Maybe a "revision accepted" privilege for updates at commons? It wouldn't prevent new files and it wouldn't prevent articles from pointing to new files but it would prevent a bad revision from going live on a "protected" page and also attract vandalism patrollers when an image reference is changed to a new name. It wouldn't lock any images that aren't used by Wikipedia. -- DHeyward ( talk) 05:18, 18 October 2014 (UTC)
What's the tool / gadget / script / bot -- that automatically or semi-automatically checks for archived versions of URLs and adds them into the citations with archiveurl parameter?
Thank you for your time,
— Cirt ( talk) 00:06, 15 October 2014 (UTC)
|auto-archived=October 2014
) in the cite/citation templates, and have wikipedians working through the category, removing that parameter and any bad archive urls. -
Evad37 [
talk 17:27, 16 October 2014 (UTC)
WP:NRHP maintains lists of historic sites throughout the United States, with one or more separate lists for each of the country's 3000+ counties. These lists employ {{ NRHP row}}, which (among its many parameters) includes parameters to display latitude and longitude through {{ coord}}. For most of the project's history, the lists used an older format with manually written coords (e.g. a page would include the code {{coord|40|30|0|N|95|30|0|W|name=House}}, when today they just have |lat=40.5 |lon=95.5), and when a bot was run to add the templates to the lists, it somehow didn't address the coordinates in some lists. With this in mind, I'd like if someone could instruct a bot to discover all WP:NRHP lists that are currently using both {{ NRHP row}} and {{ coord}}. I tried to use Special:WhatLinksHere/Template:Coord, but it didn't produce good results: since {{ NRHP row}} transcludes {{ coord}} when it's correctly implemented, all of these lists have links to {{ coord}}. As a result, I was imagining that the bot would perform the following procedure:
"Results" could be a spot in the bot's userspace. Since the bot won't be doing anything except editing the results page, you won't need to worry about opening a BRFA. Nyttend ( talk) 01:49, 15 October 2014 (UTC)
I don't know how to use bots and the taskforce is too large to go one by one adding articles to WP:Tejano. Best, .jona talk 18:37, 15 October 2014 (UTC)
Tejano music, Banda, Duranguense, Jarocho, Ranchera, Mariachi, Norteño (music). Erick ( talk) 21:17, 15 October 2014 (UTC)
Done. - DePiep ( talk) 15:29, 19 October 2014 (UTC)
In December 2013, Template:Convert ( | talk | history | links | watch | logs) was converted to Lua code (680k transclusions). The old wikicode template used subpages (subtemplates) of Template:Convert, like Template:Convert/flip2. There are some 3900 subtemplates in this pattern. To manage cleanup (e.g., improve the module:convert or its /data page), we'd like to know which subtemplates still are used in mainspace.
Request: produce a list with all pages that have pagename prefix (pattern) Template:Convert
and that have transclusions in mainspace. Pages with zero transclusions in mainsp can omitted (do not list).
Example:
Note 1: True subpages are listed by requiring Template:Convert/
(with slash). However, to cast the net a bit wider that slash is omitted from the filter (we want to catch page "Template:Convertx" too).
Note 2: Format suggestion: add the number, link the template pagename, newline per page+bullet.
My bet would be: you'll find between 25 and 100 pages. - DePiep ( talk) 19:10, 18 October 2014 (UTC)
Template | Mainspace pages transcluding |
---|---|
{{ Convert}} | 658760 |
{{ Convert/CwtQtrLb_to_kg}} | 35 |
{{ Convert/E}} | 2 |
{{ Convert/TonCwt_to_t}} | 286 |
{{ Convert/numdisp}} | 1 |
{{ Convert/per}} | 1 |
{{ Convert/words}} | 12 |
{{ ConvertAbbrev}} | 40086 |
{{ ConvertAbbrev/ISO_3166-1/alpha-2}} | 40086 |
{{ ConvertAbbrev/ISO_3166-1/alpha-3}} | 629 |
{{ ConvertAbbrev/ISO_3166-2/US}} | 39457 |
{{ ConvertAbbrev/ISO_639-1}} | 629 |
{{ ConvertAbbrev/ISO_639-2}} | 1 |
{{ ConvertIPA-hu}} | 372 |
The consensus of the discussion at Wikipedia:Village pump (proposals)/Archive 110#Bot blank and template really, really, really old IP talk pages. was never followed through on, so I am following through now. We would like a bot to blank and add the {{OW}} template to all IP user talk pages for which no edits have been made by the IP within the last seven years; and the IP is not been blocked within the last five years. These time frames may be tightened further in future discussions. Cheers! bd2412 T 20:47, 22 October 2014 (UTC)
I used to really like DASHBot ( talk · contribs) operated by Tim1357.
It would scan an article, find archive links, and automatically add them to the page.
Is there a bot, or script that I could even use semi-automatically, that could perform this function?
See for example DIFF.
Any help would be appreciated,
— Cirt ( talk) 02:05, 19 October 2014 (UTC)
Today there are no Bot who only focuses on combining references /duplicate references. I think such a Bot would be really useful for article creators and older already existing articles that are added with new references as well. That is why I now request that such a Bot should be created. And that the Bot in some way goes after a List of articles with non-combined references or similar. An option could be to add this task to an already existing Bot. -- BabbaQ ( talk) 15:26, 19 October 2014 (UTC)
I think a bot that detects and adds articles that has reached over the 5,000 views threshold for DYK articles are needed (to be added to the DYK stats page). Today not even half of the articles that appears on DYK and reaches that threshold are then added to the DYK stats page. And though that page is meant for some light-hearted fun stats it still makes the lists kind of irrelevant if the articles are not added. So if there is a way to create a bot that detects this and adds the new article to DYK stats it would be a good thing.-- BabbaQ ( talk) 11:51, 26 October 2014 (UTC)
Some 1500 stub articles have recently been added to
Category:Megachile. Can any of your bots please add sortkeys to these pages so that they are sorted according to the species name like [[Category:Megachile|Mucida]]
? The operation would be quite simple: If the page name begins with "Megachile" and contains two words, take the second word and use it as sort key beginning with an upper case letter.
De728631 (
talk) 18:58, 21 October 2014 (UTC)
We previously had a bot operating from User:Theopolisme, but it has since been depreciated except for one function. There are 4 main project requirements:
Previous versions of the operating code is available to be read at https://github.com/theopolisme?tab=repositories . Thanks, -- Nick Penguin( contribs) 04:08, 27 October 2014 (UTC)
Hi, we need to replace "cdsweb" with "cdsarc" in all the pages found in this category, leaving the rest untouched. "cdsweb" is an old address for the VizieR astronomical database. Changing it with "cdsarc" will make all the references working properly again. Thank you. -- Roberto Segnali all'Indiano 15:24, 29 October 2014 (UTC)
Template:TonCwt to t now redirects to
Template:long ton. Could we get a bot to replace {{TonCwt to t|
with {{long ton|
?
Jimp 09:25, 28 October 2014 (UTC)
Some years ago I created several thousand stubs for fungal taxa (classes, orders, families, and genera), many of which used "Outline of Ascomycota - 2007" as a source. Since then the main link for the page (at http://www.fieldmuseum.org/myconet/outline.asp) has gone dead, although the source is still available at http://http://archive.fieldmuseum.org/myconet/outline.asp. I'd appreciate it if a bot could be made to replace those deadlinks with the working archive link. Sasata ( talk) 00:02, 26 October 2014 (UTC)
I also think adding the task of putting ITN tags to the talk pages of articles that appears in the ITN section to one of the bots is needed. Today articles that do appear on ITN are sometimes added with the ITN tag on the article talk page and sometimes not. Atleast since a few months back. Just for consistency such a task would benefit the Wikipedia project. And as we have a DYKupdateBot I can not see why a ITNupdateBot could not be created.-- BabbaQ ( talk) 11:51, 26 October 2014 (UTC)
Following a discussion about incorporating links to MalaCards in the Disease Box ( User:ProteinBoxBot/Phase 3#Disease) by Marilyn Safran, Alex Bateman, and Andrew Su ( Wikipedia:WikiProject Molecular and Cellular Biology/Proposals#MalaCards - www.malacards.org) in June 2013, a member of the community volunteered to write the bot ( Wikipedia:Bot requests/Archive 57), and I posted a dump as per his request (at User:Noa.rappaport/Malacard mappings). Since over a year has passed and the robot hasn’t materialized, we have decided to develop and contribute the bot ourselves, and would appreciate help with the following questions:
Thanks, Noa — Preceding unsigned comment added by Noa.rappaport ( talk • contribs) 13:28, 26 October 2014 (UTC)
can we get a bot to remove transclusions of User:HBC Archive Indexerbot/OptIn on pages with another archival bot system in place? the bot hasn't operated in years, and most of the talk pages transcluding the template already have another bot archiving (e.g., transcluding User:MiszaBot/config as well). I would say just remove all the transclusions, but it may be useful to replace it (by hand) in the cases that there is no other bot archiving. note that the only reason I noticed was while cleaning up this page where the double mask parameter was creating an entry in Category:Pages using duplicate arguments in template calls, so removing these will probably clean up some of those as well. Frietjes ( talk) 20:46, 27 October 2014 (UTC)
Good editors often make bad edits to disambiguation pages, because they don't fully appreciate the difference between dab pages and articles. Akin to BracketBot, this bot will scan changes to disambiguation pages, identify new entries that violate WP:MOSDAB as described below, and leave a polite message for the editor so they can self-correct the problematic entry/entries. I envision this bot detecting:
The talk message should be something like:
Thoughts? — Swpb talk 19:33, 30 October 2014 (UTC)
Could we get a bot to fill in cites using {{ JSTOR}}? It would help immensely. Oiyarbepsy ( talk) 00:41, 4 November 2014 (UTC)
Can a bot please fix all links in article and article-talk-page-space, to avoid redirects and point directly to article: Think of the children?
Thank you,
— Cirt ( talk) 18:14, 4 November 2014 (UTC)
Redirects are not supposed to contain anything but the redirect itself and a template explaining what sort of redirect it is (e.g. a redirect from an alternative spelling, from a plural, from a pseudonym, etc.). However, it often seems that redirects get made with all kinds of other text on the page. This, of course, is of no help to readers, who never actually see the content of redirect pages. Nevertheless, these pages show up on the various lists of errors needing repair if they contain broken templates or disambiguation links or the like. Ideally, we should have a bot come around and clear everything off the redirect page that is not supposed to be on one.
There are occasions where even a well-meaning but inexperienced editor will put a redirect on top of a page that should not be a redirect, so perhaps a bot could be directed only at older pages with such content (at least a couple months). Generally, however, these should be cleaned up. Cheers! bd2412 T 14:44, 3 November 2014 (UTC)
"The following new content (which was nominated for speedy deletion as CSD:A10) is left here temporarily so it can be merged with the target". It appears from this edit that the Sana Sheikh example was a case of an editor using a section edit, rather than a whole apge edit, as the firsts section was blanked. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:53, 4 November 2014 (UTC)
I think we need a report of redirects with unexpectd large size and not a bot to fix them. We can spot them and fix them manually. -- Magioladitis ( talk) 13:51, 4 November 2014 (UTC)
We previously had a bot operating from User:Theopolisme, but it has since been depreciated except for one function. There are 4 main project requirements:
Previous versions of the operating code is available to be read at https://github.com/theopolisme?tab=repositories . Thanks, -- Nick Penguin( contribs) 04:08, 27 October 2014 (UTC)
Moved to Wikipedia talk:Media Viewer#Media viewer URLs, as this is clearly something a bot can't fix. Oiyarbepsy ( talk) 23:37, 7 November 2014 (UTC)
A new Bot is needed to fix disambiguation links that are not properly formatted. Currently no one is active from what I can tell, atleast its not effective.-- BabbaQ ( talk) 21:35, 8 November 2014 (UTC)
User:Frietjes recently brought to my attention an issue with moving {{ navbox}}-based templates where the name parameter is not updated. See for example the recently moved {{ Asian Games Field hockey}} where the "E" edit link leads to the redirect at the old page name instead. I've moved many templates and not thought about this consequence. Frietjes specifically raises the issue of less-technically able users getting confused by the bad links and forking content to the redirect page in error. Presumably I'm not the only one who has moved a template and not thought about this consequence.
For me, going back to check all templates I've moved to confirm the name has also been updated will be an arduous and boring task to say the least. It seems like a perfect and simple bot task – check that the name parameter matches the given template title for all templates where {{ navbox}} is used. This could feasibly be a regular, scheduled task. Separately, but on a related note, maybe even an "issue" category could be automatically generated by the navbox template when these two are mis-aligned(?). SFB 18:52, 2 November 2014 (UTC)
{{
navbar}}
. This includes WikiProject banners - these get the name for the navbar in either of two ways: it might be given in full as |BANNER_NAME=Template:WikiProject Foobar
or if that be blank, it's obtained from |PROJECT=Foobar
and prepended with Template:WikiProject
Then there are the
WP:RDTs - there are several forms for those; and the name for navbar use is normally passed through one of the parameters of one of the header templates - for instance, the second positional parameter of {{
BS-header}}
, or the |navbar=
parameter of {{
BS-map}}
. Stub templates also potentially fall into this group, but at present there are none in error - the {{
asbox}}
template has the means to validate the content of its |name=
parameter, and puts the page into
Category:Stub message templates needing attention (sorted under E) where there is a discrepancy, so that it's easy for people like me to
do this. Sometimes the presence of a page in that section of the cat is an indication of a completely different problem that is not bot-fixable - like page moved in error, or vandalism. --
Redrose64 (
talk) 19:58, 2 November 2014 (UTC)
SELECT COUNT(*) FROM templatelinks WHERE tl_from_namespace = 10 AND tl_namespace = 10 AND tl_title = 'Navbox'
I will see if I can create a new
Database report to detect this issue. --
Bamyers99 (
talk) 16:09, 4 November 2014 (UTC)
The Invalid Navbar links report has been run again with more base template checks. For both WikiProject banners and stub templates, the navbar is hidden by default, so I don't see a need to check those. -- Bamyers99 ( talk) 16:26, 11 November 2014 (UTC)
Would it be possible to write a bot that automatically checks the articles on site that have citations to ensure that the articles do not cite Wikipedia for there information? Recently I've come across a few articles that have cited other Wikipedia articles in reference templates and other source specific areas, but as Wikipedia requires citations in articles to be third party sources and from Wikipedia specifically I think it may be a good idea to write a bot to check for and remove these links from articles automatically if they are found to be present in an article. TomStar81 ( Talk) 05:15, 10 November 2014 (UTC)
Bot to automatically open, edit and save media wiki page without human intervention — Preceding unsigned comment added by 202.46.23.54 ( talk) 11:09, 11 November 2014 (UTC)
Could a bot be created to convert List of public art in St Marylebone and List of public art in the City of Westminster to using {{ Public art header with long notes}} and {{ Public art row with long notes}}? (The former article uses it for the first three sections but stops there.) I started the job of converting to the templates myself but it's proving much too tedious. If it were possible this could perhaps be rolled out on a much wider scale too, but it would be necessary to decide whether to use these templates or the ones on which they're based, {{ Public art header}} and {{ Public art row}}. Ham ( talk) 14:26, 11 November 2014 (UTC)
Would it be possible for a bot to find instances of 1. 2. 3. and change them to # characters, as I did here? It's obviously an ongoing thing (people unfamiliar with # won't stop adding numbered lists this way), so if it's possible, I expect that the best idea would be to add it as a task for an existing bot. Nyttend ( talk) 13:20, 12 November 2014 (UTC)
Please can someone replace {{ HighBeam}} (which I created) in citations that use Citation Style 1 templates, as in this edit, so that the former template, now redundant, can be deleted? Approx 113 articles are involved, but there may be more than one instance in a single article. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:34, 6 November 2014 (UTC)
See wikidata:Wikidata:Bot requests#Import Persondata from English Wikipedia. Thanks!
We don't normally "fix" redirects, but when they occur in navboxes, and the user views the article in question, the link is not de-linked and emboldened as it should be.
Can a bot find and fix them, as I did in this edit? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 22:20, 12 November 2014 (UTC)
Is it possible for a bot to change the link National Highway 1 (India) form all the articles to National Highway 1 (India)(old numbering)? If yes I would provide other links too. To find the context of such a change please see Talk:National Highway 1D (India)(old numbering). Marlisco ( talk) 02:18, 17 November 2014 (UTC)
There is a large series of articles involved here that need links migrated from National Highway XXX (India) to National Highway XXX (India)(old numbering) so that new articles can be placed at the National Highway XXX (India) titles without making existing links incorrect. Still hoping for help on this one! Dekimasu よ! 20:08, 18 November 2014 (UTC)
This is a bit complicated, but it is a major headache. Frequently, articles will include a links to templates for which the templates themselves call other templates. In some cases, a parameter in the template will refer to a list of terms on a third page which will then appear linked in the article, but do not appear as a searchable linked title on the page from which they are called.
A stable example would be Template:S-rail/lines. If you look at the page, you only see a snippet of template coding. However, if you look at the page source, you can see dozens and dozens of terms that are called from the page when a particular code is used in the Template:S-rail template. An item on this page, like [[Alaska Railroad]], will therefore be called when Template:S-rail is used on a page and "AKRR" is used as a parameter in that template. The problem arises from the fact that this template does not show up on the "What links here" results for templates linking to "Alaska Railroad". Sometimes entries on pages like this are renamed or made into disambiguation pages, and because the tools for fixing disambiguation pages tend rely on searching the "What links here" results, templates from which a term can be called without appearing as a linked term can be very frustrating.
I would like a bot to add to every page that has such terms a "noinclude" section (or in the existing "noinclude" section) a list of links to all terms found on any page from which those terms can be called. That way, if one of those links changes, it will easier to find and fix the template containing the term. Cheers! bd2412 T 17:10, 18 November 2014 (UTC)
Could there be a bot that follows these steps:
Would that be too much trouble? Retartist ( talk) 03:17, 19 November 2014 (UTC)
Please can someone redirect all the documentation sub-templates of the templates listed at Template:Election box/doc#List of templates (except for the target template, {{ Election box/doc}}), as I did in this edit? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:38, 15 November 2014 (UTC)
I can't believe such a bot doesn't exist, but I can't find it: I want to change all urls of the form http://www.chess.co.uk/twic/twicX.html to the form http://www.theweekinchess.com/html/twicX.html , where X is a number (to be precise, a positive integer in the range of 1 to 1100, inclusive). (Because the chess news web site The Week in Chess has moved). e.g. as a random example, http://en.wikipedia.org/wiki/Yuri_Shabanov#cite_note-2 needs to change to point to http://www.theweekinchess.com/html/twic473.html Adpete ( talk) 08:42, 17 November 2014 (UTC)
#something
that is occasionally found on a URL is called the fragment, and it normally indicates a point on the page that is somewhere other than the very top. In the case of
http://www.theweekinchess.com/html/twic473.html#6 for example, it's the subheading "6) 13th World Championship for Seniors 2003". In such uses, the purpose is identical to the section linking used by Wikipedia. I would imagine that if the part from the second /twic
to the end is left alone, and the only change is the replacement of http://www.chess.co.uk/twic
with http://www.theweekinchess.com/html
then the URLs which have fragments should then work as intended. --
Redrose64 (
talk) 00:31, 20 November 2014 (UTC)
http://www.chess.co.uk/twic/
and http://www.chesscenter.com/twic/
with http://www.theweekinchess.com/html/
. In a few cases it will produce a dead link, but that's ok, because the existing links are all dead links anyway. If a log of all changes can be sent to me somehow (dumped on my Talk page is fine) then I can go through and fix the dead links manually.
Adpete (
talk) 04:23, 20 November 2014 (UTC)Instance of {{ Infobox Korean name}} which are underneath a biographical infobox (for example {{ Infobox person}}) need, where possible to be made a module of that infobox, as in this edit. This could perhaps be done on the basis of proximity: say, "if there is no subheading between them, make the edit"? Or "If nothing but white space separates them"?
A list (drawn up last year; feel free to edit it) of articles which use both of the infoboxes is at User:Pigsonthewing/Korean names. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 19:15, 21 November 2014 (UTC)
Hello,
I've done a little helping out at WP:MILLION, which aims to boost morale by identifying good articles or featured articles that receive a certain amount of views each year and awarding a little "million award" to the main contributors of those articles. You can read more about it on the project page, which does a much better job of explaining itself than I ever could.
In practice, identifying and awarding these articles is a big pain in the butt. I do it all manually, and I feel it has three main steps:
1. Identifying GAs and FAs that receive at least 250,000 views a year. My approach to this is anything but sophisticated. I just check Wikipedia:Good articles/recent, click on articles that look promising, then click over to their talk page to check their view counts. I take their total views for the last 90 days and multiply by four.
2. Identifying the contributors of a particular qualifying article that contributed most to its passing GA/FA. This generally involves me searching through the contributions page and often isn't as straightforward as you might think.
3. Awarding these contributors. This involves copying and pasting one of the templates from WP:MILLION onto their talk page and editing it to make it personal to them. I also have to add the article and contributors to the list at the bottom of the project page.
I feel it's important to keep morale up, and contributors do seem to appreciate receiving the award, but I hate how tedious the above process is. Would a bot be able to assist at any or all of the three stages? Are people already using them for that? I don't know, but any help would be much appreciated.
Bobnorwal ( talk) 14:11, 18 November 2014 (UTC)
I need a CSV file (or something similar) please, showing our links to ChemSider IDs. This will enable the import of Wikdiata IDs into ChemSpider.
For each instance of {{
Chembox}} with a |ChemSpiderID=
, I need the value of that parameter in one column, and the corresponding Wikidata ID in the next column. The article title should go in the next column.
e.g for Boron nitride, that would return a row with:
59612,Q410193,Boron nitride
I would like a separate report for {{ Chembox Identifiers}}, a module of the former template, where the prams are:
|ChemSpiderID=
|ChemSpiderID1=
|ChemSpiderID5=
|ChemSpiderIDOther=
with one row per parameter (and this duplicate Wikdiata IDs).
I'll request a similar report from wikdiata, but there may be values recorded here and not there, or vice versa. I'll also try to resolve those cases.
Note: ChemSpider is a product of the Royal Society of Chemistry, where I employed as Wikimedian in Residence. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 17:28, 19 November 2014 (UTC)
A SpaceRemover bot could do:
- Yellow Dingo ( talk) 11:21, 23 November 2014 (UTC)
<pre>...</pre>
<source>...</source>
and similar). Outside of preformatted sections, changing multiple spaces to single spaces is very much
WP:COSMETICBOT and if done by AWB, it would fall foul of
WP:AWB#Rules of use item 4. We have a respected admin who habitually uses double spaces not just between sentences, but between random pairs of words, something like 50% of the time. I asked him about this at
Wikimania London 2014 and I was more than satisfied with his answer (which I won't repeat here); he's aware that it happens, but it's an unconscious act. I suspect that if people were to "clean up" the posts of this editor, it would not go down well. --
Redrose64 (
talk) 16:09, 24 November 2014 (UTC)
Does someone have, or could someone gen up, a bot that would substitute all occurrences of something like WP:X to W:Y? The number of occurrences is small, like less than five hundred? I don't have a consensus for the change and just want to get a quick answer on feasibility before I go start discussing it with other folks. NE Ent 23:11, 27 November 2014 (UTC)
Hi!
I'm working on fr.wikisource and unfortunately, I have no programming knowledge nor skills. I was wondering if it was possible to create a tool that allows this kind of report:
— Within a specified category (given as parameter)
— Within the namespace Livre:name of book
— creates a report that states for each Livre with this info:
This tool/bot would be a great addition to any of the 60 languages Wikisrouce projects as it would allow us to concentrate on specific books and complete books that are almost finished.
-- Ernest-Mtl ( talk) 19:40, 28 November 2014 (UTC)
Could we have a bot to do all of the stuff described here: Template:Editnotices/Page/Wikipedia:Goings-on? Oiyarbepsy ( talk) 04:50, 5 December 2014 (UTC)
I want a robot where you can ask that robot and he'll archive some talks in the talk page for you. Ask for archiving with 60-day-or-older talks and he'll do this. Qwertyxp2000 ( talk) 07:43, 6 December 2014 (UTC)
Some articles can really be messy and I want some of the talk pages to have archives. Qwertyxp2000 ( talk) 07:50, 6 December 2014 (UTC)
Which robot does such thing? Qwertyxp2000 ( talk) 08:12, 6 December 2014 (UTC)
Help me. I don't know how. Please give me a brief way to grant want I wanted. Qwertyxp2000 ( talk) 08:22, 6 December 2014 (UTC)
{{User:MiszaBot/config | algo=old(90d) | archive={{SUBST:FULLPAGENAME}}/Archive %(counter)d | counter=1 | maxarchivesize=400K | archiveheader={{Automatic archive navigator}} | minthreadsleft=4 | minthreadstoarchive=1 }}
...and paste it at the top of the relevant talk page. Some time in the next 48 hours you should find that the older threads have been archived for you. -- John of Reading ( talk) 08:29, 6 December 2014 (UTC)
So I copy that piece of text and paste it to the page that needs archiving? Qwertyxp2000 ( talk) 08:34, 6 December 2014 (UTC)
It would be very useful to have User:SDPatrolBot revived or replaced. This one used to pick up where a creator of a new article removed a speedy deletion tag, which they're not supposed to do. Editors are doing this all the time and these are the savvier ones, some of whose pages probably need deleting more than most. The bot faded out in August 2013, can't see where this was ever commented on : Noyster (talk), 10:41, 2 December 2014 (UTC)
Could someone make a bot that strips the accountID from Proquest URLs? Here is a search to illustrate the problem that needs to be fixed. A Proquest URL by default affixes an accountID to the end which is specific to only one institution. For example: http://search.proquest.com/docview/229617956?accountid=14771 should be changed to http://search.proquest.com/docview/229617956
It is already a problem that these Proquest links are behind a paywall and only university users with a library subscription to Proquest can view them. But it's even worse when an institution-specific account ID on the URL prevents a user from another institution from accessing that resource, even if all they have to do is edit the URL to delete the accountid.
Does this seem like the kind of task a bot could do? Search for any URL that begins with http://search.proquest.com/docview/ and then remove any characters that come after the docview number? Lugevas ( talk) 18:19, 10 December 2014 (UTC)
T cedilla (Ţ) was wrongly atributed to Romanian language, which is using T comma (Ț) instead. There are 458 articles containing T cedilla in the name. They must be renamed, but also the S cedilla must be changed into S comma too (for example Căpăţâneşti has to become Căpățânești. I made the list with the articles to be renamed at User:Ark25/Robot#T Cedilla and I created a list with the current names and the correct names at User:Ark25/Robot#T Cedilla - Paired with the desired result. Most of the destination titles already exist as redirects to the respective page - they can be re-written without any problem. Thanks. — Ark25 ( talk) 22:43, 4 December 2014 (UTC)
I renamed manually about 1,500 articles containing s/S/t/T-cedilla.
In order to replace the diacritics inside articles, I made four lists with articles containing s/S/t/T-comma in title - only the Romanian language uses those letters, so all those articles refer to Romanian-language related things. In those articles, it's safe to replace cedilla diacritics with comma diacritics. Except a very few articles where they might contain Turkish words using S/T-cedilla. Therefore it requires a manually assisted robot. Turkish names are quite obvious so it's not necessary to have a bot master with knowledge of Romanian language. But I can volunteer to do that in case I my user will get the approval to become a bot - Special:Contributions/ArkBot - ro:Special:Contributions/ArkBot.
The same kind of semi-automatic replacements can be made in the articles from Category:Romanian-language surnames and other similar categories on Romanian-language topics.
@ Oiyarbepsy: I think your idea is a very good one. The template should be created and then advertised on Wikipedia:WikiProject Romania. — Ark25 ( talk) 12:13, 13 December 2014 (UTC)
@ Anomie: I noticed you created a lot of redirects like for example HC Steaua București. Can you remember where you got the list of redirects to create from? — Ark25 ( talk) 12:51, 13 December 2014 (UTC)
I found it: User:Strainu/ro - Wikipedia:Bot_requests/Archive_36#Make_redirects_from_titles_with_correct_Romanian_diacritics_to_the_currently_used_diacritics and also a sandbox — Ark25 ( talk) 13:31, 13 December 2014 (UTC)
I made a list of categories at User:Ark25/Robot#T Cedilla – Categories. They must be renamed, replacing T-cedilla (Ţ) with T-comma (Ț). Is it possible to do that it automatically (including the re-categorization of the articles)? — Ark25 ( talk) 16:13, 13 December 2014 (UTC)
At the Film Project three new task forces have been created. Please could a bot tag the {{ WikiProject Film}} banner of articles in the following categories with the appropriate tags:
|Mexican-task-force=yes
(or "Mexican=yes")|Documentary-film-task-force=yes
(or "Documentary=yes")|Silent-film-task-force=yes
(or "Silent=yes")If an article currently doesn't have a talkpage or a film project tag on an existing talkpage, to add that too. Thanks. Lugnuts Dick Laurent is dead 13:53, 3 December 2014 (UTC)
I am on it. -- Magioladitis ( talk) 23:58, 14 December 2014 (UTC)
Hi! There are currently 6,297 articles with video clips; see list here (put in 7000 as the limit to see them all). They should all be in Category:Articles containing video clips, which currently has 622 articles. Possible? Thanks, -- phoebe / ( talk to me) 06:17, 17 December 2014 (UTC)
Can a bot be developed to automatically update data in Wikipedia reference tables from external website sources? — Preceding unsigned comment added by 99.240.252.181 ( talk • contribs) 23:25, 19 December 2014
Presumably this could be an additional task for an existing bot. User:Nyttend/Pennsylvania is a long list of municipalities with no photo or poor-quality photos; it's a collaborative project to get all of them illustrated, and when we add (or discover that someone else added) a workable photo to one of the municipalities, we remove it from the userspace list and (ideally) add the image to List of municipalities in Pennsylvania. Given the length of the list (nearly 1000 items currently), it's quite likely that one will get a new photo every so often, and we won't notice.
I'm wondering if a bot could be instructed to visit User:Nyttend/Pennsylvania every so often (perhaps once per week or once every other week), examine all of the linked pages, and look to see if any image has been added since the last time. Presumably the bot could examine the edit history, ignore all pages that hadn't been edited since the last run, and examine each edit made since the last run to see if any of them had added an image. When it finds such an edit, it logs it and goes to the next article, and when it's run through all the articles, it leaves a simple note on my talk page saying something like "Images have been added to ___, ___, and ___ in the last [amount of time]". Almost all of these locations are small towns and rural areas that get very few edits (for example, before I added a photo this month, Franklin Township, Beaver County, Pennsylvania was edited just twice in the past year), so the bot won't need to check many edits. Some of the image-adding edits will likely be vandalism reversion, addition of non-photographic images (e.g. maps), and other things that I'm not looking for, but there's no need for the bot to filter for anything; after all, it's just giving me a list of pages to check, and there won't be many. Probably most runs won't find any new images; if this is the case, the bot should still leave me a message, saying something like "There weren't any new images in the last [amount of time]". Nyttend ( talk) 18:50, 20 December 2014 (UTC)
I was thinking that a bot could generate a list of articles with overlinking, but it seems a daunting task to comb through the existing articles looking for instances of overlinking. But perhaps it would not be too difficult to have a bot look at newly-created pages for multiple links to the same article? I'm thinking that the bot could look for all wikilinked strings, and find any with three or more instances. Then it could either generate a list somewhere or perhaps tag the article if, say, three or more different articles are multiply linked. The bot could ignore wikilinks in infoboxes and other templates, and perhaps also tables; then the number of overlinks that trigger the bot could be reduced to 2+. Abductive ( reasoning) 19:52, 18 December 2014 (UTC)
An automated process is requested to change the word "Category" into "Kategori" in the Indonesian wikipedia pages. Any bots available for use?
Thanks, JohnThorne ( talk) 00:23, 20 December 2014 (UTC)
See [3]. Can that be done? Seattle ( talk) 23:31, 25 December 2014 (UTC)
I would like a bot to sync transclusions of {{ Official website}} with WP:Wikidata. This is an example of what I would like to see done. If the external link matches the Wikidata entry it can be safely removed. Then we should make a list of what is left. -- Magioladitis ( talk) 07:16, 24 December 2014 (UTC)
Back in October, a request was made for a bot to fix "bgcolor" markup, so that background colours would display properly on mobile devices (see the discussion for full details). Mdann52 kindly volunteered to take up the task but encountered difficulties. Does anyone else want to have a go? FYI, I've recently noticed that Dispenser's Dab solver tool seems to incorporate the desired functionality (see this edit as an example), in case that is of any help. DH85868993 ( talk) 23:17, 29 December 2014 (UTC)
AN3 would be easier to browse and quickly determine if edit warring is occurring if entries like this
were formatted as diffs that looked like this:
Among other things, the dramatically consistent page size (first 99,800 bytes then 99,779 bytes) comes out.
I'd like to write or see written a bot that would perform this task automatically. Jsharpminor ( talk) 06:08, 1 January 2015 (UTC)
{{Make diff}}
, that takes a URL like https://en.wikipedia.org/?title=Robin_Williams&diff=next&oldid=639931566
and outputs a formatted diff? That would be useful in many other circumstances, too, and should be doable in Lua.
Andy Mabbett (Pigsonthewing);
Talk to Andy;
Andy's edits 11:37, 1 January 2015 (UTC)I just wanted some input regarding a bot idea which I am planning on implementing. The bot would monitor the requests for confirmed page to check whether:
I've already started to write this (but obviously won't run it without approval, etc.) Note that the bot won't need admin privileges because it will not be taking any actions which involve accepting requests. Hopefully this will take some of the load off the administrators there. CarnivorousBunny talk • contribs 18:11, 22 December 2014 (UTC)
You are already autoconfirmed
(or something similar). Thanks for the input.
CarnivorousBunny
talk •
contribs 17:19, 26 December 2014 (UTC)Not done We need a robot that can automatically add talk page tags that are almost always used. For example a page at this link will most likely not have a talkheader tag. EMachine03 ( talk) 12:29, 24 December 2014 (UTC)
ugh ok nvm :( EMachine03 ( talk) 20:41, 25 December 2014 (UTC)
More like requesting for AWB operation to replace the transclusion of {{ HK-MTRL color}} and {{ HK-MTRL lines}} with {{ HK-MTR color}} and {{ HK-MTR lines}} (without L) respectively because they now use the same syntax to invoke module:MTR. If you would, please also nominate the former 2 templates for speedy after the replacement. Thank you. -- Sameboat - 同舟 ( talk · contri.) 12:53, 30 December 2014 (UTC)
@ Sameboat: Done -- Magioladitis ( talk) 21:02, 30 December 2014 (UTC)
GoingBatty I still see some transclusions of the 2 templates but I am not sure why. -- Magioladitis ( talk) 21:17, 30 December 2014 (UTC)
{{
s-line|system=HK-MTRL}}
. Per testing in
my sandbox, the fix isn't as easy as removing the "L" from the template.
GoingBatty (
talk) 21:32, 30 December 2014 (UTC)
@ Sameboat: then I think it's better if you follow the right procedure and first send the templates for TfD before any other action. -- Magioladitis ( talk) 08:02, 31 December 2014 (UTC)
{{s-line|system=HK-MTRL|
with {{s-line|system=HK-MTR|
from all
MTR Light Rail stops articles which loads bunch of "HK-MTRL" templates but now moved by me under the "HK-MTR" prefix. {{s-rail|title=HK-MTRL}}
should be left intact because it's needed to call for the "
MTR
Light Rail" title, otherwise it becomes just "
MTR". --
Sameboat - 同舟 (
talk ·
contri.) 15:05, 3 January 2015 (UTC)
Currently, WP:NYC encompasses 13,478 articles. According to a recursive search of Category:History of New York City on AWB, the category and its subcategories include 22,363 articles. Category:Geography of New York City contains 57,564 articles. I started using AWB to tag the talk pages, but it's too long and cumbersome to do this through AWB, especially since a bot can automatically give the article its rating from existing project templates. Does someone have a bot they can lend to this task? – Muboshgu ( talk) 17:58, 4 January 2015 (UTC)
(sorry for my english)
This bot has stopped working for one year and more than
200 portals are no longer updated. The source code is
here. --
SleaY(
t) 05:23, 6 January 2015 (UTC)
Function: Add {{
user sandbox}} to pages of the form User:((.)*)/sandbox where the page does not already contain
Namespace: User
Run frequency: Daily
Expected run size per batch = 25-50 pages.
Remarks: I've recently been doing some New Page Patrol on user pages, (mostly subpages.), and have noted that marking sandboxs seems to be a substantial part of the process. Given it's essentially mechanical, I feel it is amenable to automation.
Userspace drafts, will still have to be identified manually as at present.
Sfan00 IMG (
talk) 13:32, 10 January 2015 (UTC)
Sfan00 IMG ( talk) 13:48, 10 January 2015 (UTC)
I hate having to work for WP:TAFIACCOMP and it is very tedious to convert the table to some other template. Could we have a new functioning robot that could finish every week's TAFI accomplishments, achievements, finish off converting the table, and do those?
Thanks, Qwertyxp2000 ( talk) 20:33, 13 January 2015 (UTC)
{{Wikipedia:Today's articles for improvement/Accomplishments/row |YYYY = |WW = |oldid = |olddate = |oldclass = |newid = |newdate = |newclass = |edits = |editors = |IPs = |bots = |reverts = |prose_before = |prose_after = |size_before = |size_after = }}
If this is what you're looking for, [13] is an example of what needs to be done. -- Ypnypn ( talk) 16:41, 15 January 2015 (UTC)
Good day!
Currently, the overall 2009 data included the word articles, however, the 2014 data to the public. Wikipedia, the updated data is famous, yet still 2009 data there are some places. The hungarian settlements categories: Category:Populated places in Hungary by county
The data are available in XLS format of the Central Statistical Office website: [14]
However, if you've updated population data, the mayors would be useful to inject into the page's data boxes. The mayors elected in 2014 valasztas.hu available on the website: [15]
Have a nice day and good luck with the expansion of Wikipedia.-- นายกเทศมนตรี ( talk) 16:06, 15 January 2015 (UTC)
Can your bots search for factors that help in stock investing? — Preceding unsigned comment added by Jdhurlbut ( talk • contribs) 03:35, 18 January 2015 (UTC)
May a bot be scheduled to move all those pages whose names include (case-insensitive) "Labelled Map" or "Labeled Map" from the categories Category:Graphic templates and
Category:Graphics templates to
Category:Labelled map templates, please?
Sardanaphalus (
talk) 12:11, 15 January 2015 (UTC)
When two Regional Indicator Symbols are combined to form a country code, some mobile devices interpret the letters by displaying the flag of the country in question. We've already created several of these combinations as redirects to the flag article, e.g. 🇷🇺 redirects Flag of Russia. Robin0van0der0vliet has proposed that all country codes be created for this purpose ( full list), and while I've created 🇳🇱 and would like to do the rest, all those page creations would take quite a while for a human. Could someone write a bot to do it? Titles that include Regional Indicator Symbols are blacklisted, so you'll need an adminbot to do this project. I've already looked through the full list and can't see any entries that I wouldn't be willing to create. Nyttend ( talk) 02:39, 10 January 2015 (UTC)
FindArticles.com used to be (I believe) a large aggregator of magazine articles from a variety of publications. We have a substantial number of articles linking to the original site - either to individual articles which were hosted at the site, or to generic "find articles by this person" searches (eg this removal) - I count around 17000 links in the main namespace, almost all of which are specific article links.
Unfortunately, it looks like the entire site now redirects to search.com, an obscure but not very useful search engine, and this material is completely gone; the links just provide traffic to a commercial search engine. Every one of these will need converted to archive.org links/plain-text magazine citations, or (for the few search links) simply removed outright. Does a bot exist that can do this? Andrew Gray ( talk) 18:41, 19 January 2015 (UTC)
There was an RFC last year on the use of date abbreviations, that closed in April 2014, the result was not to allow Sept as an abbreviation for September and also not to use full stops following the abbreviated month names. See RFC detail.
Could a BOT implement this as there seems to still be articles that fail to meet this change to the MOS. Probably could continue to run on a periodic basis to catch new non-compliances.
Detail would be to change dates that use Sept to Sep and to remove the full stop following a shortened month, assuming it is not at the end of a sentence. Though if in running text then Sept should be expanded to September and other months expanded as appropriate, as short months are not allowed in running text. Obviously the BOT should avoid quotes and reference titles when doing this.
Keith D ( talk) 22:41, 22 January 2015 (UTC)
Hello! I have a splendid idea for a good bot. I need code (probably in Python) so that there will be a bot that updates the BTC Price, on the article Bitcoin. I just put that part in the article, and I was wondering if that would be a good idea for a bot. I am ready to create another account so I can implement this bot in place when it is approved at BRFA. Let me know what you think. Yoshi24517 Chat Absent 17:15, 23 January 2015 (UTC)