From Wikipedia, the free encyclopedia
Archive 70 Archive 74 Archive 75 Archive 76 Archive 77 Archive 78 Archive 80

Bot to update the Adopt a User list

Hi- Theo's Little Bot 19 used to update the adopt a user list marking inactive adopters as such but this doesn't seem to have run since June 2016. Could someone take over the task? The request for approval can be found here, my original request here and there's a link to the source code there too if that helps. Thanks, jcc ( tea and biscuits) 01:21, 28 January 2018 (UTC)

Ok, if anyone takes this on, please ping me since I'm not watching this page. jcc ( tea and biscuits) 13:31, 30 January 2018 (UTC)
@ Jcc: BRFA filed -- Gabrielchihonglee ( talk) 01:45, 2 February 2018 (UTC)

Unreliable source? documentation

Hello, I have been pinged by User:Mattythewhite, who was informed of this by User:Helper201 that there is a documentation that the unreliable source? tags should be outside the ref tags not in them.

There may be unreliable source? tags found on articles in the references section, so a bot should be used to change the following:-

<ref>Reference {{Unreliable source?|date= }}</ref> → <ref>Reference </ref>{{Unreliable source?|date= }}
so it looks something like this [1] [2] unreliable source? Reference 1 does not abide to the documentation while ref 2 does. It is a difficult task to manually find all the articles with unreliable source? tags in the references sections. Iggy ( Swan) 16:32, 13 January 2018 (UTC)

  1. ^ Reference unreliable source?
  2. ^ Reference
Personally, I've always included any tags within the reference just before the closing tag, especially with {{ sps}}. I don't necessarily think this is a good idea. – Fredddie 16:39, 13 January 2018 (UTC)
Looking at documentation, it seems that {{sps}} should be inside the ref tags. It would then seem to me that rather than have a bot clean up the {{ Unreliable source?}} tags, we should come up with consistent rules for their usage. – Fredddie 16:44, 13 January 2018 (UTC)
I'd say the use of the unreliable source? tags in some articles within text and others in the references section would be somewhat inconsistent within the project, whether or not it agrees with the documentation is a different question. Iggy ( Swan) 16:59, 13 January 2018 (UTC)
@ Fredddie: {{ sps}} explicitly states it should be used outside ref tags. Nihlus 11:50, 14 January 2018 (UTC)

bot for creating new categorys

please make bot for creating new categorys example people birth by day — Preceding unsigned comment added by 5.75.62.30 ( talk) 07:08, 20 January 2018 (UTC)

 Not done. We do not sort people by birth day, only birth month. Primefac ( talk) 16:10, 20 January 2018 (UTC)
Year surely? Not month. -- Redrose64 🌹 ( talk) 21:43, 20 January 2018 (UTC)
You're right, I was thinking maintenance cats like {{ citation needed}}. Primefac ( talk) 21:46, 20 January 2018 (UTC)
why not creating categorys birth by day — Preceding unsigned comment added by 37.254.139.175 ( talk) 07:07, 21 January 2018 (UTC)
Because we could need anything up to 366 categories in a year, up to 36524 each century instead of the present level of 1 per year, 100 per century. You would need to show a demonstrable requirement for such a huge amount, and obtain plenty of support for it. See also WP:BOTPCAT. -- Redrose64 🌹 ( talk) 11:06, 21 January 2018 (UTC)
A single category per date, ignoring year (e.g. Category:21 January births) would not seem so harmful. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 17:05, 21 January 2018 (UTC)
If you want to find all the people born on a given day ("21 January 1966") or a given date regardless of the year ("21 January"), you can do so with a query at Wikidata. You can refine such queries, so that it only lists people with articles on a given Wikipedia, or just opera singers or whatever. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 17:05, 21 January 2018 (UTC)

no i want just birth by day not by day and year example i want november 15 birth not november 15 1994 birth — Preceding unsigned comment added by 5.22.3.31 ( talk) 05:10, 22 January 2018 (UTC)

You would need to start a discussion and reach a consensus to create such a set of categories. Primefac ( talk) 13:25, 22 January 2018 (UTC)

Where to make a discussion — Preceding unsigned comment added by 37.254.179.162 ( talk) 14:33, 22 January 2018 (UTC)

I'd go with WP:VPR. Primefac ( talk) 14:34, 22 January 2018 (UTC)

User:AarghBot , currently owned by User:Mikaey

BG19bot

The bot BG19bot was very helpful but has not been working for more than 6 months. And the bots "owner" has not been on Wikipedia since August. is there a way to start it up again, or a similar bot can be created?. BabbaQ ( talk) 23:30, 25 October 2017 (UTC)

Under circumstances not completelly investigated, BG19bot is at the moment inactive. I am willing to fill out a BRFA for all of its tasks. I will probably do in the nxt few days. -- Magioladitis ( talk) 22:51, 12 November 2017 (UTC)

us-highways.com

http://us-highways.com/ was previously used for a website called U.S. Highways: From US 1 to (US 830), a self-published site on the history of the United States Numbered Highway System. The creator of the website (Robert V. Droz) ran into some unrelated legal issues in his home state of Florida and let the site lapse. The domain name has been assumed by a commercial enterprise completely unrelated to the former site. As an SPS, the site should have never been used as a source in articles, but it was. Fredddie and I feel that it would be preferable to remove citations and links to the site at this time. Would some bot operator be amenable to replacing any citations to the site with {{ citation needed}} tags and removing any links in an external links section of the articles? Imzadi 1979  12:04, 13 January 2018 (UTC)

There are also a few links labeled "Florida in Kodachrome", but the domain is the same. – Fredddie 16:35, 13 January 2018 (UTC)
@ Imzadi1979 and Fredddie: Not taking this on yet, but do you have a consensus in hand to do this? Hasteur ( talk) 23:33, 15 January 2018 (UTC)
I personally support this. It may be AWB-able though. -- Rs chen 7754 01:21, 16 January 2018 (UTC)
@ Fredddie, Imzadi1979, Hasteur, and Rschen7754: SPS has a criterion for use, which is "Self-published expert sources may be considered reliable when produced by an established expert on the subject matter, whose work in the relevant field has previously been published by reliable third-party publications." (Emphasis original.) Does Robert V. Droz meet this criterion? If he does not, then I don't see a problem with mass removal of the references. Search indicates there are a fairly low number of links, so I would agree, this can be WP:AWBd.
If he does meet the bar for use in SPS, then what should instead happen is that InternetArchiveBot be updated (if such has not yet occurred) to understand these webpages have been usurped and to auto-archive the lot. -- Izno ( talk) 18:07, 23 February 2018 (UTC)
@ Izno: Droz doesn't meet the bar. He wasn't a historian nor employed by a highway department before his recent change in status. Imzadi 1979  23:45, 23 February 2018 (UTC)
Removal seems good to me then too. I think there are enough here that we could execute. -- Izno ( talk) 01:11, 24 February 2018 (UTC)
I've done this now on User:IznoRepeat and made something of a mess of it the first time through which I noticed about 100 pages in (repeated refs got me--note to future self). AnomieBot helped snag the repeated ones I missed. There were also, toward the end, a few references to old Rand McNally maps clearly hosted by him on a personal server, which may be valuable. There was also evidence of Geocities references hosted by him in the batch, but I can't point to any in specific without some re-digging. Question: Why are Google Maps referenced on roads? Those seem like not-great references. -- Izno ( talk) 20:14, 24 February 2018 (UTC)
Thanks for that, Izno. Re: Google Maps, I only use it in conjunction with the official printed state highway map (and maybe the Rand McNally atlas for other details), and then I use it for its satellite view to provide the extra details about the landscape/surroundings/etc, not for routing, etc. Imzadi 1979  02:16, 26 February 2018 (UTC)
There is no problem with referencing Google Maps or any other reliable map, as long as one recognizes the limitations of the source. -- Rs chen 7754 03:39, 26 February 2018 (UTC)

Move articles with the "telenovela" disambiguator to "TV series"

Per discussion at Wikipedia:Village pump (policy)#RfC: Is "telenovela" a suitable disambiguator? and updated guideline at WP:NCTV, is it possible to get a bot to move all articles with the disambiguator "(telenovela)", "(COUNTRY telenovela)" or "(YEAR telenovela)" to "(TV series)", "(COUNTRY TV series)" and "(YEAR TV series)"? -- wooden superman 16:31, 15 February 2018 (UTC)

Special:Search/intitle:"telenovela" does not indicate there are more than 600 titles that need to change, and a good chunk of those are probably false positive redirects already. -- Izno ( talk) 20:01, 21 February 2018 (UTC)
Current title Conflicting article
Araguaia (telenovela) Araguaia (TV series)
Cain and Abel (telenovela) Cain and Abel (TV series)
Caribe (telenovela) Caribe (TV series)
Claudia (telenovela) Claudia (TV series)
Esperanza (telenovela) Esperanza (TV series)
Magdalena (telenovela) Magdalena (TV series)
Vanessa (telenovela) Vanessa (TV series)
Above are the outstanding pages based on quarry:query/25043. —  JJMC89( T· C) 00:41, 26 February 2018 (UTC)
Great, thanks. I've manually moved these. -- wooden superman 12:21, 26 February 2018 (UTC)

Archiving stale reports at AIV

A consensus is emerging here that a bot to clear stale AIV reports would be desirable. Reports that have been open for more than 6-8 hours are usually considered declined by default. An edit summary along the lines of "listed for >6 hours without any admin willing to block" is appropriate. I see a couple main obstacles to this, and I was hoping this board could help with them.

  1. This task needs to be run at least every 2 hours. That way we can set the minimum archive time to 6 hours, and get all the threads before they're 8 hours old. This has been the sticking point with the currently extant archive bots that I have checked.
  2. It has to archive individual bullet points, not sections. I don't think this is a great technical hurdle but if it is we can reformat AIV to accommodate the bot.
  3. It mustn't break HBC AIV helperbot5, which archives actioned reports. Pinging the current operator, @ JamesR:.

Cheers, Tazerdadog ( talk) 23:42, 1 January 2018 (UTC)

The current bot archiving actioned reports should probably also be responsible for stale unactioned reports. -- Izno ( talk) 13:54, 2 January 2018 (UTC)
How difficult would it be to update the current bot? Tazerdadog ( talk) 22:25, 6 January 2018 (UTC)
The current not just clears to my knowledge, not archives, so I think it’d be pretty easy to add another clear condition to it. TonyBallioni ( talk) 05:23, 7 January 2018 (UTC)
Can someone with more technical knowledge update the bot to do this? Tazerdadog ( talk) 16:50, 13 January 2018 (UTC)

I meant to note this here much earlier, but I wanted to add that in addition to archiving declined/stale reports, a bot adding a note about recently declined reports or users recently off a block might be helpful as well. ~ Amory ( utc) 02:20, 22 February 2018 (UTC)

@ Amorymeltzer and Tazerdadog: You should probably go and ask the operator of HBC AIV helperbot5, who is JamesR. -- Izno ( talk) 18:27, 23 February 2018 (UTC)

Bot for automatically updating Alexa rank in infobox

I recently updated the infobox information in On-Line Encyclopedia of Integer Sequences. As part of that update, I added an Alexa parameter to the infobox. It seems that the value of this parameter requires frequent updates. I just checked the Alexa link and found that the rank information in the infobox is no longer up-to-date. I think there are probably many more infoboxes with this parameter and regular bot runs to update those parameters seem like a good idea to me. -- Toshio Yamaguchi 14:49, 6 January 2018 (UTC)

This report says that |alexa= is used 2,257 times in {{ Infobox website}}. – Jonesey95 ( talk) 15:41, 6 January 2018 (UTC)
There is an Alexabot over at Wikidata, though the operator, Tozibb, says that they have been having technical issues so the last updates are from December. I have been working on Module:Alexa/{{ Alexa/sandbox}} (credit to RexxS for writing the initial module), though it's not as useful right now since it generates arrows based on the two most recent Wikidata values (or based on four local parameters) and most Wikidata items with the Alexa rank property only have one value (furthermore, I provided an incomplete enwiki search to Tozibb for finding items to add data to). Referencing capability needs to be added before it can be used, either with local values or with the Wikidata data. Jc86035 ( talk) 16:13, 6 January 2018 (UTC)

Bot that notifies users on their talk page if a set of pages are created.

I'm looking to create a bot that can automatically use {{ AC notice}}~~~~ to inform users an article from a set of pages has been created. I noticed that I have a lot of redlinks in my Watchlist that I only have there to find out if a page is created eventually. If a bot could use addtext.py to select an article it detects has changed after a refresh of a a group of pages. If an article has been created, regardless of its contents (or lack thereof), uses it for 1= and adds the parameter to the talk pages of users that have provided it with a request to notify them upon its creation. Does this make sense? I don't know, but I hope it does. Thank you anyways! ― Matthew J. Long -Talk- 22:03, 6 January 2018 (UTC)

I'm sorry but it is Impossible because a user's watch page is not publicly available according to Help:Watchlist#Privacy -- Gabrielchihonglee ( talk) 13:00, 15 January 2018 (UTC)
This request isn't asking for a bot to access people's watchlists, so it's not impossible for the reason you claim. Anomie 21:56, 15 January 2018 (UTC)

List of non-marine molluscs of a country

I request to create lists by country "List of non-marine molluscs of COUNTRY" based on data from the http://www.iucnredlist.org/search website. It is very time consuming task, even if I can filter out freshwater / terrestrial / gastropods / bivalves of certain country at the IUCN website. If a Bot could make a list of species with the references, that would be great.

This is realizable (there onece existed a Polbot, that was able to create stubs like this [1] based on iucnredlist.org and there is possible to make various list based on the such as this one List of least concern molluscs).

Examples of the work in progress.

There are number of lists missing:

You can virtually make lists of non-marine molluscs for all countries (I will manually merge them with existing lists when needed).

It would be great, if you could at least sort those species into sections "Freshwater gastropods", "Land gastropods" and "Freshwater bivalves" (or make working lists of those three groups).

This task is suitable for non-marine molluscs. This task is not suitable for marine molluscs (that are placed in separate lists on Wikipedia), because there are not enough data for them on IUCN.

If you could pre-prepare such lists (in a User namespace), I would finish the task manually (I will sort species in systematic order, I will add families, I will update outdated info, I will generally check-out lists). Thanks. -- Snek01 ( talk) 21:11, 9 January 2018 (UTC)

Snek01, this should be brought up at Wikipedia talk:WikiProject Tree of Life first, to determine whether or not it's desirable, if it would duplicate existing info, or if it could be accomplished via the existing category structure, etc. I believe there's also a moratorium on bots mass-creating articles.   ~  Tom.Reding ( talkdgaf)  21:32, 9 January 2018 (UTC)
I requested to do it in my User namespace. If you are afraid that could happen something bad, bot operator can create one list only first. You will see what will happen. Thanks for your solicitude. There is even no need a Wikipedia approved Bot for this task. There is need knowledge how to datamine data from iucnredlist.org. -- Snek01 ( talk) 23:34, 9 January 2018 (UTC)
Oh, I misunderstood (re Polbot). Dumps into userspace would be fine. I'm currently working on another IUCN related project that's fairly large, otherwise I'd offer to help. It would still be worthwhile x-posting to WT:TREE, as there are others there that don't watch WP:BOTREQs who might also be able to help.   ~  Tom.Reding ( talkdgaf)  23:44, 9 January 2018 (UTC)

Remove red links and redirects from lists in a category

Can anyone use a bot to scan the drafts in Category:Abyssal temporary Russia cat and remove entries from the lists that are redlinks and redirects? Abyssal ( talk) 15:18, 20 February 2018 (UTC)

I can't really proceed on my big list of prehistoric life articles series until this is resolved. Could anyone please help Abyssal ( talk) 15:56, 21 February 2018 (UTC)
@ Abyssal: User:AlexTheWhovian/script-redlinks run on each of the pages will probably get you most of the way there. Will that work for you? -- Izno ( talk) 18:40, 23 February 2018 (UTC)
@ Izno:Thanks for the recommendation. I won't be able to tell until sometime after the weekend. I have some long shifts coming up. Abyssal ( talk) 18:45, 23 February 2018 (UTC)
@ Izno:Actually I managed to get this working right now and it didn't help. It only de-linked them rather than deleting the links altogether. Abyssal ( talk) 18:52, 23 February 2018 (UTC)
You wanted the entire lines to be deleted? Is there a purpose in your working flow to not having the lines at all? -- Izno ( talk) 18:54, 23 February 2018 (UTC)
I'm trying to take long lists of species and boil them down only to the more scientifically important ones. Since the most important species are the ones most likely to have pre-existing articles, I'm trying to remove all the red links. Abyssal ( talk) 19:24, 23 February 2018 (UTC)
@ Abyssal: I made an edit which should have done the job of removing the red links at Draft:List of the prehistoric life of Arkhangelsk Oblast. Does that work for you? -- Izno ( talk) 01:06, 24 February 2018 (UTC)
As for redirects fixing, redirects should become obvious if you are using the Page Previews feature, which appears in Special:Preferences#mw-prefsection-rendering in the "Reading preferences" section. -- Izno ( talk) 01:14, 24 February 2018 (UTC)
@ Izno:Yeah, that's good. How did you do it? Can you do that for the rest of the category as well? Abyssal ( talk) 15:20, 26 February 2018 (UTC)
 Doing... Manually. This is easy enough with a little regex. Tazerdadog ( talk) 19:03, 26 February 2018 (UTC)
@ Abyssal: How does Draft:List of the prehistoric life of Kaliningrad Oblast look? If that's what you wanted, I can do the same thing for the rest of the pages without much trouble. Tazerdadog ( talk) 21:21, 26 February 2018 (UTC)
@ Tazerdadog:Could you preserve the bullet structure so the species stay under their respective genera? Abyssal ( talk) 21:41, 26 February 2018 (UTC)
@ Abyssal: That shouldn't be too hard to do. How do you want me to handle cases where the species exists, but the genus is a redirect or redlink? Example: Xylolaemus sakhnovi, but Xylolaemus
You can keep the genus in that case. Abyssal ( talk) 22:07, 26 February 2018 (UTC)

Sounds good. Everything else look as you want it to? Tazerdadog ( talk) 22:15, 26 February 2018 (UTC)

@ Tazerdadog:OK, let's see how it goes. Abyssal ( talk) 22:27, 26 February 2018 (UTC)
Yeah, what I did was use the redlink script and then I pulled the output of the script into an offline .txt editor. There, I regex removed all the lines without links. -- Izno ( talk) 22:38, 26 February 2018 (UTC)
@ Tazerdadog: Kaliningrad Oblast Is still missing its genus-species hierarchy... Abyssal ( talk) 22:53, 26 February 2018 (UTC)
Working on it now. Tazerdadog ( talk) 23:10, 26 February 2018 (UTC)
That one is done Tazerdadog ( talk) 23:18, 26 February 2018 (UTC)
All undesired redlinks should be gone now. Working on redirects. Tazerdadog ( talk) 02:48, 27 February 2018 (UTC)
Sweet! Thanks, @ Tazerdadog:! Abyssal ( talk) 03:02, 27 February 2018 (UTC)

Redlinks remover

@ Abyssal: I wrote a script that removes redlinked list entries. It's called User:The Transhumanist/RedlinksRemover.js.

It's designed specifically for cleaning up outlines and lists, and I noticed your drafts are in outline format, except for the little cross at the beginning of the entries.

The script keeps nipping off the ends of branches until it reaches one that shouldn't be pruned.

It won't strip out an entry that has descendants in the tree.

After it is done pruning redlinked ends from the tree, it goes back and delinks any redlinks that are leftover and red categories (this part comes from AlexTheWhovian's script).

It would work for your lists, if you removed the little crosses first. Then you could put them back in after the script was done. That's easy to do with WikEd.

If you didn't remove the crosses, it would just delink the links, because they aren't at the beginning of the entries, and so the script would consider them to be embedded links, rather than linked entries.

To use it, you install it, and it provides a menu item in the tools menu on the sidebar. When you are ready to remove the redlinked entries from a page, just click on "Remove red links", and it will process the current page you are on.

I hope you find this does the trick for you.     — The Transhumanist   00:48, 1 March 2018 (UTC)

P.S.: I haven't added this to the user scripts list yet, because it hasn't undergone enough testing. Beware, it is alpha software. -TT

Thanks, @ The Transhumanist: I'll take a look at it. Abyssal ( talk) 14:18, 1 March 2018 (UTC)

Removing links to copyrighted videos?

Hey folks, pursuant to this discussion, I'm curious to know if it would be possible to build a bot that would remove links to copyrighted material hosted in violation of the creator's copyright, such as YouTube. Ed  [talk]  [majestic titan] 18:12, 19 February 2018 (UTC)

I'm not a botop, but I'd imagine it would be impossible. It's not a simple case of seeing if the uploader's name matches the title; a video clip from a TV show, for instance, could legitimately be released by the TV station, by the presenter of the show, by the third-party production company which made it, or by up to 200 different holders of overseas rights, depending on exactly what the terms of the contracts say. This is something even a company with the resources of Google struggles with, and is why YouTube is so reliant on rights holders flagging problematic content. ‑  Iridescent 18:19, 19 February 2018 (UTC)
Agreed. CorenSearchBot used to flag article text if it was copyvio, but I'm not sure you'd be able to determine if an elink was a link to a page violating copyright or just a page with a copyright. Primefac ( talk) 18:26, 19 February 2018 (UTC)
Does Youtube publish a list of videos it deletes for copyright reasons? Or similar  Lingzhi ♦  (talk) 11:34, 23 February 2018 (UTC)
no. Primefac ( talk) 17:09, 23 February 2018 (UTC)

Tag all remaining disambiguation links

A consensus is forming at Wikipedia talk:WikiProject Disambiguation#Proposal to tag all disambiguation links to tag all remaining disambiguation links in Wikipedia with a {{ disambiguation needed}} tag. From our most recent count, about 16,454 disambiguation links remain. Around 5,500 of these are already tagged, leaving a little under 10,000 to tag. What is needed here is, first, to get a list of all links to disambiguation pages from mainspace pages that do not already have this tag; second, wait about ten days to see if any of those are short term links that will be fixed quickly; third, re-check that list to see what links from that initial list have been fixed; and fourth, have a bot tag all remaining disambiguation links. bd2412 T 21:51, 14 December 2017 (UTC)

Note: In order to distinguish these from older uses of the tag to identify difficult links, we will actually need to tag these with the template redirect {{ Needdab}}. bd2412 T 13:35, 15 December 2017 (UTC)
@ Bd2412: From my personal experience, the remaining disambiguation links fall into 2 categories: recently created links and links that are difficult or impossible to disambiguate. In the first case, adding {{ disambiguation needed}} would be useful, in the second case, not as much. Plus there are rare cases where it actually makes sense to link to the disambiguation page (when more than one of the meanings or uses of a term are relevant to the link). Kaldari ( talk) 07:26, 2 January 2018 (UTC)
@ Kaldari: Longstanding difficult links are probably already tagged with {{ disambiguation needed}}, for the most part. For those that are not, it may still be useful because tagging may draw the attention of subject matter experts, who have the specialized knowledge to fix the link even if it is difficult for the average editor. As for intentional links to disambiguation pages, these must conform to WP:INTDABLINK. If they do, then they don't show up as errors, and there is no need to tag them. If they do not, then they need to be fixed like any other link. bd2412 T 02:48, 4 January 2018 (UTC)

Tag talk pages of articles about English with Template:WikiProject English language

WP:Article alerts recommends having a bot tag the talk pages of articles with relevant topical wikiproject banners so that the AA bot produces more meaningful results. This would also be useful for getting this barely active project rolling better; I'd been looking into manually going article to article doing this, but it looked to be a rather daunting task even with AWB, and I'm on a Mac, so I'd have to run AWB in a VM or something anyway.

Would start with Category:English languages and its subcats.

Various subcats of Category:Words are going to qualify but will probably have to be done manually (e.g. about 99% of the content of Category:Neologisms, Category:Slang, etc., are English, but a handful of articles in such categories are not and so should not be tagged as within the scope of this project. Similarly, the majority of articles under Category:Punctuation have a section on English and would get tagged, but in a few cases the English coverage has been split out into separate spinoff articles like Quotation marks in English which should get tagged while the main article on the mark would not. We'll probably want to exclude most literature-related categories, but would include Shakespeare (for having had a profound effect on English, in contributing more stock phrases than any other body of work besides the King James Bible). Category:Lexicographers and other such bios will also need manual tagging.  —  SMcCandlish ¢ >ʌⱷ҅ʌ<  19:09, 17 January 2018 (UTC)

Convert protocol relative URLs to http/https

All protocol relative links on Wikipedia should be converted to either http or https. As of June 2015, Wikipedia is 100% HTTPS only and because protocol relative links are relative to where they are hosted it will always render as HTTPS. This means any underlying website that doesn't support HTTPS will break. For example:

[2] (//americanbilliardclub.com/about/history/)

..the http version of this link works. The article American rotation shows it in action, the first three footnotes are broken because they use a protocol relative link to a HTTP only website. But Wikipedia is rendering the link as HTTPS.

More info at WP:PRURL and Wikipedia:Village_pump_(technical)#Protocol_relative_URLs. It's probably 10s of thousands of links broken. -- Green C 21:06, 8 June 2017 (UTC)

This should only be done if the existing link is proven to be broken, and where forcing it to http: conclusively fixes it. Otherwise, if the link is not dead under either protocol, it is WP:COSMETICBOT. -- Redrose64 🌹 ( talk) 21:45, 8 June 2017 (UTC)
Well let's ask, what happens if you keep them? It creates a point of failure. If the remote site stops supporting HTTPS then the link immediately breaks. There is no guarantee a bot will return years later and recheck. WP:COSMETICBOT is fine but it shouldn't prevent from removing a protocol that causes indefinite maintenance problems and MediaWiki no longer really supports. By removing it also discourages editors from further usage, which is good. -- Green C 22:07, 8 June 2017 (UTC)
That reasoning makes no sense. If a bot converts the link to https and the remote site stops supporting HTTPS, then the link immediately breaks then too. Anomie 00:22, 9 June 2017 (UTC)
Different reasoning. IABot forces HTTPS on all PR URLs since Wikipedia does too, when it analyzes the URL. It's erroneously seeing some URLs as dead as a consequence since they don't support SSL. The proposal is to convert all non-functioning PR URLs to HTTP when HTTPS doesn't work.— CYBERPOWER ( Message) 02:22, 9 June 2017 (UTC)
@ Cyberpower678: The proposal, as specified above by Green Cardamom ( talk · contribs) is not to convert all non-functioning PR URLs to HTTP when HTTPS doesn't work, but to convert all PR URLs to either http or https. No exceptions were given, not even those that are presently functioning. This seems to be on the grounds that some are broken. -- Redrose64 🌹 ( talk) 09:06, 9 June 2017 (UTC)
Do I want to get rid of PR URLs? I personally think we should because they confuse editors, confuse other bots, ugly and non-standard etc they're an unnecessary complication. If we don't want to get rid of them (all), we still need to the fix broken HTTP links either way. -- Green C 14:35, 9 June 2017 (UTC)
  • As someone who's been strongly involved with URL maintenance over the last 2 years, I think this bot should be run on Wikipedia, and should enforce protocols. It's pushing WP:COSMETICBOT but if the link ends up being broken because only HTTP works, then that will create other issues. The task can be restricted to only converting those not functional with HTTPS, but my first choice is to convert all. — Preceding unsigned comment added by Cyberpower678 ( talkcontribs) 01:38, 13 June 2017 (UTC)
Opining as a bot op: I personally don't think this can be read as having community consensus because it's going to create a lot of revisions for which there is no appreciable difference. Yes it would be nice if wikipedia was smart enough to figure out if the relative URL is accessable only via HTTP or can be accessed via https, but the link is clicked in the user's browser and therefore the user doesn't know that the content may be accessable via HTTPS or HTTP. Ideally, users entering relative URLS could be reminded via a bot that it's better to be explicit with what protocol needs to be used to get to the content. The counter is we could set a bot to hunt down all the relative URLS and put a maintanance tag/category in the reference block so that a human set of eyes can evaluate if the content is exclusively available via one route or if the content is the same on both paths.

TLDR: This request explicitly bumps against COSMETICBOT, needs further consensus, and there might be a way to have "maintenance" resolve the issue. Hasteur ( talk) 12:38, 13 June 2017 (UTC)

Those are all good ideas but too much for me to take on right now. Agree there is no community consensus about changing relative HTTPS links; However existing relative HTTP cases broken in June 2015 should be fixed asap. A bot should be able to do it as any broken-link job without specific community consensus (beyond a BRFA). Broken links should be fixed. That's something I can probably do, unless someone else wants to (I have lots of other work..). Note this fix would not interfere with any larger plans to deal with relative links. -- Green C 15:26, 13 June 2017 (UTC)
Bump. -- Green C 17:13, 9 August 2017 (UTC)
  • Strong support: This is definitely not COSMETICBOT; these URL errors directly interfere with exercise of WP:Verifiability. They also cause editwarring and article damage; various times I've had to revert people – including some long-experienced editors – removing "dead links" and inserting {{ citation needed}} tags, when all that was required was adding the characters "http:".  —  SMcCandlish ¢ >ʌⱷ҅ʌ<  21:02, 3 October 2017 (UTC)
  • Bump thread expire -- Green C 04:16, 22 October 2017 (UTC)
  • Support per GreenC and SMcCandlish Jon Kolbert ( talk) 23:59, 28 October 2017 (UTC)
  • Needs wider discussion. I happen to agree with this strongly, but this would need a very wide discussion at a village pump before being considered. This is far too many edits to do without very clear consensus, some of which fail WP:COSMETICBOT if no consensus is obtained to override it. ~ Rob13 Talk 14:50, 8 December 2017 (UTC)
  • Support per GreenC. Also this is seriously needed and would benefit the project. BabbaQ ( talk) 13:24, 30 December 2017 (UTC)
  • I'll chime in as a WP:BAG member here, that the scope of the task means we need wider discussion, if only to identify possible pitfalls and cornercases. WP:VPT/ WP:VPR would be the natural places to hold it. I personally support the task FWIW. Headbomb { t · c · p · b} 02:44, 26 January 2018 (UTC)

Bot to search and calculate coordinates

Please look at this table: Lands_administrative_divisions_of_New_South_Wales#Table_of_counties

My goal is to add a column to this table that shows the approximate geographical coordinates of each county. Those county coordinates can be derived form the parish coordinates that are found in each county article, by taking the middle of each northernmost and southernmost / easternmost and westernmost parish coordinates. Is it possible to write a script or a bot to achieve this? -- Ratzer ( talk) 21:27, 25 January 2018 (UTC)

For illustration, I did the work for the first county in the list, Argyle County, manually. The table of parishes in this article shows that they range from 34°27'54" and 35°10'54" latitude south and 149°25'04" and 150°03'04" longitude east. The respective middle is 34°49'24" and 149°44'04", which I put in the first table entry of Lands administrative divisions of New South Wales and the info-box of Argyle County.-- Ratzer ( talk) 07:39, 26 January 2018 (UTC)

Please Review...

Please review my Draft at Cpt. Alex Mason. Thanks! — Preceding unsigned comment added by Amason1930 ( talkcontribs) 23:16, 21 March 2018 (UTC)

@ Amason1930: what does this have to do with bots? Headbomb { t · c · p · b} 00:12, 22 March 2018 (UTC)

Newspaper

I would like to request for a bot that could fill in the Publisher after use of the Refill tool such as |publisher=Aftonbladet. As many Swedish subject articles uses one or two of the few main newspaper sources that are available in Sweden I would like for the bot to fill in for the sources aftonbladet.se as Aftonbladet, expressen.se as Expressen, svd.se as Svenska Dagbladet, kvp.se as Kvällsposten and dn.se As Dagens Nyheter. If those could be filled in at Publisher it would help seversl thousands of articles. BabbaQ ( talk) 13:32, 30 December 2017 (UTC)

Anything that is known to be a newspaper should use |newspaper= and definitely not |publisher=, which should instead be removed. -- NSH001 ( talk) 16:53, 21 February 2018 (UTC)
@ BabbaQ: Idea is not well explained. How would these pages be found/determined? -- TheSandDoctor ( talk) 22:30, 11 March 2018 (UTC)

Replace "IMDb award" by "IMDb event"?

There is a template which takes an IMDb page, I think, and an event name as parameters - e.g. {{ IMDb award|Venice_Film_Festival|Venice Film Festival}} - but it creates broken links. Maybe it relied on some redirect on IMDb's side and they changed their format, I do not know. There is another template which uses a IMDb event code instead of a page name - e.g. {{ IMDb event|0000681|Venice Film Festival}}, which creates a correct link. See both at work:

Is there any chance a bot could fix those? I guess it would need to search the IMDb to get the event codes, which I do not know if it is allowed... (both by us and them). Thanks. - Nabla ( talk) 17:23, 30 December 2017 (UTC)

I've done a couple of hundred mannually. There's only about 50 left now. -- WOSlinker ( talk) 01:03, 31 December 2017 (UTC)
And I have done the remaining ten or so. Thank you. - Nabla ( talk) 23:08, 24 February 2018 (UTC)
In that case, N Not done as was done manually. --- TheSandDoctor ( talk) 21:11, 11 March 2018 (UTC)

Adding category to articles

Pleas add this category to the articles related to the Children's literature portal, because I need it in arabic wikipedia. Thank you. أبو هشام ( talk) 12:10, 4 March 2018 (UTC)

@ أبو هشام: Why do you need it in the Arabic Wikipedia? If you need it there, why ask on the English Wikipedia? Also, doesn't Category:Children's literature already contain the related articles? If I am misunderstanding, I apologize (also why I am asking for clarification) -- TheSandDoctor ( talk) 16:21, 9 March 2018 (UTC)
The problem solved, thanks. أبو هشام ( talk) 00:27, 10 March 2018 (UTC)

Olympic competitors: Project tagging

Can a bot be created to add the {{WikiProject Olympics}} to the talkpage of all the articles in the sub-cats of Category:Olympic competitors by country that don't already have their TP tagged? If the tag already exists, ignore it, and if it's not there already add it with stub class and low importance, unless the article is already tagged at a higher class than stub by another project. Now the 2018 Winter Olympics are over, it would be good to catch all those athletes who are missing the tag, along with countless others that have been created/updated too. Many thanks. Lugnuts Fire Walk with Me 13:02, 8 March 2018 (UTC)

For interest, Petscan results show 104k pages in this cat and its subpages. It's likely the majority are already tagged, but that's still a hell of a lot of pages to parse. Primefac ( talk) 13:18, 8 March 2018 (UTC)
Excluding those already tagged gives 13,872 results. Are Ancient Olympians within the scope of the project though? They also fall within the category. -- Paul_012 ( talk) 14:53, 8 March 2018 (UTC)
Duh, should have thought of that. And I suppose Ancient Olympians would be in the scope of WikiProject Olympics. Primefac ( talk) 15:04, 8 March 2018 (UTC)
When requests like this are made, we normally ask for an explicit list of categories and not a blanket "plus all subcategories" approach - that way lies mistagging. -- Redrose64 🌹 ( talk) 16:12, 8 March 2018 (UTC)
Eyeballing the Petscan for all subcats looks okay, though I didn't check the entire list. -- Izno ( talk) 16:23, 8 March 2018 (UTC)
Going deeper:
  1. Checking for sports inclusion: with 1 cat/row, removing matches to the regex [\r\n][^\r\n]*?\b(archers|artists|athletes|(bi|tri|pent)athletes|bobsledders|boxers|canoeists|competitors|cricketers|curlers|cyclists|divers|equestrians|fencers|footballers|golfers|gymnasts|jumpers|lugers|managers|medall?ist stubs|medall?ists|Members of|Olympians|pilots|players|practitioners|racers|rowers|sailors|shooters|skaters|skiers|snowboarders|swimmers|weightlifters|wrestlers)\b[^\r\n]* leaves only 173 Category:Olympic judoka of Japan-type cats, Category:Olympic pelotaris by country, Category:Olympic pelotaris of France, Category:Olympic pelotaris of Spain, and Category:1980 US Olympic ice hockey team. Since, as I just found out, Judoka is one who practices Judo, and pelotaris refers to players of various court-sports (the pelotaris cats only contain people too), everything looks legit here.
  2. Checking for Olympics inclusion: with 1 cat/row, removing matches to the regex [\r\n][^\r\n]*?\b(olympics?|olympians)\b[^\r\n]* leaves only Category:Canoeists of the Republic of Macedonia, which only contains Olympic athletes.
All 5048 cats look good to me. The last canoeists cat deserves a name change though.   ~  Tom.Reding ( talkdgaf)  19:45, 8 March 2018 (UTC)
I've done project tagging before. Looks like it'd be best to take class/importance from {{ Wikiproject Biography}}.   ~  Tom.Reding ( talkdgaf)  15:34, 8 March 2018 (UTC)
There are two problems with taking importance from {{ WikiProject Biography}}: one is that the importance rating varies between WikiProjects - a topic that is high-importance to one might be low importance to another; the other is that {{ WikiProject Biography}} doesn't have importance ratings. Taking the class rating from {{ WikiProject Biography}} should be fine though. -- Redrose64 🌹 ( talk) 16:32, 8 March 2018 (UTC)
Good point. By 'best' I only meant that it seems to be the most prevalent WP banner in the lot.   ~  Tom.Reding ( talkdgaf)  16:39, 8 March 2018 (UTC)
I should qualify that. {{ WikiProject Biography}} doesn't have general importance ratings, but it does have workgroup-specific importance ratings, and these are described as priority ratings. For example, when |sports-work-group=yes is set, then |sports-priority= is recognised; but somebody who is |sports-work-group=yes|sports-priority=low for {{ WikiProject Biography}} might rate |importance=mid for {{ WikiProject Olympics}}, see for example Talk:Christopher Dean (don't forget to [show] the "WikiProject Biography / Sports and Games" row). So I still think that the importance shouldn't be copied. -- Redrose64 🌹 ( talk) 10:39, 9 March 2018 (UTC)
Thanks for everyone's input - is this likely to happen? Thanks again. Lugnuts Fire Walk with Me 09:21, 9 March 2018 (UTC)
BRFA submitted.   ~  Tom.Reding ( talkdgaf)  15:45, 9 March 2018 (UTC)
Thanks Tom! Lugnuts Fire Walk with Me 14:22, 11 March 2018 (UTC)
  Done! 13,068 pages tagged, with 0 remaining. 590 are missing {{ WP Bio}} so were excluded from the run, but I'll help tag them manually.   ~  Tom.Reding ( talkdgaf)  14:21, 17 March 2018 (UTC)
Thanks Tom! Lugnuts Fire Walk with Me 14:27, 17 March 2018 (UTC)

Xulbot

Not an issue for Bot requests. Referred elsewhere.

ShareMan 15 ( talk · contribs) 17:43, 25 March 2018 (UTC)

This is not the place to request bot approval. WP:BRFA is the appropriate venue. —  JJMC89( T· C) 18:18, 25 March 2018 (UTC)
@ JJMC89: Thank's guy for the information. ShareMan 15 ( talk · contribs) 18:24, 25 March 2018 (UTC)

Special character de-corrupter

Very often, because of encoding issues, you have situations like é → é.

This is often due to copy-pasting, or bot insertions. It would be nice if a bot could find all corrupted equivalent of all special Latin characters (possibly others too), and then do a de-corruption pass e.g. [3]/ [4].

This might be best as a manual AWB run though.. Headbomb { t · c · p · b} 13:41, 1 December 2017 (UTC)

Related problems with file names on Commons: Rename files with wonky Unicode encoding. — Dispenser 06:16, 2 December 2017 (UTC)
@ Dispenser: could you adapt your script for enwiki? Headbomb { t · c · p · b} 02:46, 26 January 2018 (UTC)
@ Headbomb: Well I had to write a dump parser. Wasted a few hours in writing a word frequency collector. Ultimately regex on the dump was the fastest (4 hour runtime). It only does UTF-8 → mojibake and we need to figure out which of the 4,018 matches across 2,166 articles actually need to be fixed. I've done some already: [5] [6]Dispenser 03:21, 29 January 2018 (UTC)
Wikiget can return a regex dump search in about 30 seconds. Only limit it maxes out at 10,000 hits (limited by Elasticsearch).
./wikiget -a "insource:/<regexcommand>/"-- Green C 05:21, 29 January 2018 (UTC)
Supposedly our Elasticsearch times out easily such that [0-9] needs to be split to properly work: [0-4], [5-9]. — Dispenser 11:45, 29 January 2018 (UTC)
See T106685, which has been marked as "Resolved", to the dismay of those of us who want to search using regexes. – Jonesey95 ( talk) 14:10, 29 January 2018 (UTC)
Someone should setup a dedicated instance just for searching with no limitations. Cirrus dumps + setup info. -- Green C 16:10, 29 January 2018 (UTC)
I'm tempted to create a web version of AWB's Database Scanner since I find it a pain to download a new dump, find a way to update AWB, take 15 minutes to decompress the dump, then try and fail to get my regexp working. Is there interest in building something better? — Dispenser 01:13, 31 January 2018 (UTC)
There would be interest but the disk space.. I wrote a fast and simple program for regex'ing XML dumps. -- Green C 02:36, 31 January 2018 (UTC)
I have 1.2TB of compressed monthly dumps for the top ten wikis going back to September 2015. For enwiki, I have early 2008, 2010, 2012, and 2014 dumps. I also have a spare 120 GB end-of-write-life SSD which could be useful in a high throughput read only mode. But this would be running on my home server/work machine, so I'd be worried about CPU usage and would have to figure out a way of limiting abuse. — Dispenser 04:52, 31 January 2018 (UTC)

Reduce all caps title to title case: BIOGRAPHICAL INDEX OF FORMER FELLOWS ...

There are over 1200 cases of "title=BIOGRAPHICAL INDEX OF FORMER FELLOWS OF THE ROYAL SOCIETY OF EDINBURGH 1783 – 2002" that should be reduced to "title=Biographical Index of Former Fellows of the Royal Society of Edinburgh 1783 – 2002". Chris the speller  yack 13:35, 6 April 2018 (UTC)

"1783–2002" would be even better, per MOS. – Jonesey95 ( talk) 15:57, 6 April 2018 (UTC)
I don't usually change spacing in quoted titles, but, after checking, the RSE's own web site shows it without the spaces, so yes, that would be even better. Chris the speller  yack 16:27, 6 April 2018 (UTC)
BRFA filed. Primefac ( talk) 16:51, 6 April 2018 (UTC)

Removing unnecessary piping

There are many instances where a link is piped to another link, but both parts of the link actually target the same article. For instance [[Chicago, Illinois|Chicago]] (where the piping simply redirects back to the original link) or [[Lakewood Amphitheatre|Aaron's Amphitheatre at Lakewood]] (where both parts of the link are redirects to the same destination), or [[MidFlorida Credit Union Amphitheatre|Live Nation Amphitheatre]] (where the visible part of the link is a redirect to the piped part. These last two types occur particularly often with sports teams, which change their name with a change of hometown or sponsor, and venues, which change as they sell naming rights, and newspapers, as they merge. Normally a redirect will be set up to point the old name to the new one, but many well-meaning editors will nonetheless pipe the old name to the new one, thinking they're doing good (and then often the piping is not updated when the name changes yet again, so even the trivial efficiency benefit of bypassing a redirect is lost). This sort of piping has many failings (as described at WP:NOTBROKEN and WP:NOPIPE). Would it be possible to set up a bot that would detect and fix these sorts of unnecessary piping? Colonies Chris ( talk) 20:00, 30 March 2018 (UTC)

Sounds like it would be largely WP:COSMETICBOT, and a bot is not well equipped to decide when the difference in tooltip might be significant in most cases. Anomie 21:12, 30 March 2018 (UTC)
I suspect that CC may be trying to circumvent this decision. -- Redrose64 🌹 ( talk) 21:37, 30 March 2018 (UTC)
This has absolutely nothing to do with states, state abbreviations or any change in the appearance of links to the reader. I suggest you reread my proposal and then withdraw this unfounded accusation. Colonies Chris ( talk) 21:50, 30 March 2018 (UTC)
I'm not sure what you mean about tooltips. And it's not just cosmetic; WP:NOTBROKEN explains how using redirects instead of piping directly benefits the encyclopaedia. Colonies Chris ( talk) 21:50, 30 March 2018 (UTC)
These changes look controversial. Please provide consensus for your requested bot (this page is not the place to generate that consensus). -- Izno ( talk) 23:03, 30 March 2018 (UTC)
Please clarify in what way you think they are controversial. And if it's just because of Redrose64's disgraceful accusation, you might wish to read the decision he's linked to, and the proposal I've made. and you'll see that they are in no way connected. I just wish Redrose64 had taken the trouble to read this bot proposal properly before interfering. Colonies Chris ( talk) 23:29, 30 March 2018 (UTC)
BAG note I agree that consensus for these changes need to be shown first. Any coder willing to take this task could very well waste their time should they agree to code this without proper consensus to back it up. It's very possible that consensus for such a task, or something close to it such as making these changes part of WP:GENFIXES, exist, but short of an RFC on the issue this cannot go to trial. Headbomb { t · c · p · b} 00:00, 31 March 2018 (UTC)
My purpose in coming here was to gauge whether there is any consensus and willingness to carry through such changes on a large scale. That's why I'm here. I think this would be a worthwhile improvement, and I hope others agree. Why accuse me of failing to show consensus when that's the very purpose for my coming here? Where else would I go? Colonies Chris ( talk) 09:48, 31 March 2018 (UTC)
See WP:BOTREQUIRE, item 4. Any bot is allowed to perform tasks only when those tasks have consensus, which is developed in an appropriate discussion forum where that sort of task is discussed. – Jonesey95 ( talk) 13:59, 31 March 2018 (UTC)
In this case, WP:VPPRO would probably be appropriate. -- Izno ( talk) 14:11, 31 March 2018 (UTC)
Much Ado auditions - Grads — Preceding unsigned comment added by Colonies Chris ( talkcontribs)
The recent thread on your behavior regarding redirect links and city names indicates you are not the right person to make the call on controversiality of what looks to be a related task, however good faith or desirable you think it might be. RR64, regardless of the text in his message here, was correct to point to the ANI thread. If you believe there is consensus for your proposed task, show it, rather than assuming it. -- Izno ( talk) 00:11, 31 March 2018 (UTC)
I'd go further and say that the community has, for years, opposed making these sort of cosmetic edits en masse as pointless and disruptive. ~ Amory ( utc) 00:18, 31 March 2018 (UTC)
These are not cosmetic changes. None of them would affect the reader's view at all. This is behind-the-scenes stuff, designed to improve the overall usability of the encyclopedia by eliminating some of the problems listed at WP:NOTBROKEN. Colonies Chris ( talk) 09:48, 31 March 2018 (UTC)
Exactly my point. Cosmetic to editors, invisible to readers. ~ Amory ( utc) 13:19, 31 March 2018 (UTC)
So, to summarise: I came here with an idea, based on 12 years' gnoming experience, for a way to improve the encyclopaedia. I expected some scepticism, requests for clarification, discussion of possible drawbacks and how to avoid them, and hopefully a plan of action emerging from the discussion. What I got, however, was unrelenting hostility. A false accusation, from someone who hasn't the decency to come back and withdraw it; an objection from someone who couldn't be bothered to explain it when asked; being told that even though the accusation was untrue, I was nonetheless not to be trusted; a rejection from someone who evidently hadn't bothered to read the links I provided. In short, I have been told: it's a bad idea; I'm a bad person; and I shouldn't have come here at all. In the midst of all this there was exactly one helpful suggestion (thank you, Izno). You guys really need to change the poisonously negative culture you have here. I certainly won't be back. Colonies Chris ( talk) 08:26, 2 April 2018 (UTC)
it's a bad idea; I'm a bad person; and I shouldn't have come here at all Decidedly not. Your takeaway from this discussion should be that you should seek consensus for the task--because regardless of any of the other words said here, the task may still be valuable but it doesn't have an obvious consensus. It is normal for people to come here with not-obviously uncontroversial bot requests; when we receive such requests, we ask for consensus. This does two things: a) makes it very obvious to anyone inspecting the bot while it is running that they should not contest the edits without a similar consensus, and b) stops the task from getting to the WP:BRFA process, where the WP:BAG would request the same (because, as one BAGger above commented, this task does not look uncontroversial). -- Izno ( talk) 12:34, 2 April 2018 (UTC)
Eleven words, and you blow it up out of all proportion. -- Redrose64 🌹 ( talk) 20:08, 2 April 2018 (UTC)
@ Colonies Chris: More than just Izno provided constructive feedback. You were told by multiple editors that that idea doesn't have obvious consensus behind it. It's possible that it does, it's possible that something like it, but not exactly-as proposed has consensus (e.g. this might be a good idea for a subset of all such redirects, but a bad idea in other cases), and it's possible that it has nowhere near consensus. Go to WP:VPR, start an WP:RFC, and if there's consensus for something specific, we can move to a bot task. Headbomb { t · c · p · b} 19:06, 4 April 2018 (UTC)

I support this task which will reduce WP:OVERLINKING. WP:AWB is a popular semi-automated tool that can help in doing this task. Many editors use AWB for similar tasks. -- Magioladitis ( talk)

Further anatomy infobox series help

I have another request for a bot to help tighten up our {{ Infobox anatomy}} series. Ping to Nihlus who helped out last time.

Request is to:

  1. In all articles that use {{ Infobox anatomy}} and all subtemplates* remove the empty |MapCaption=, and |ImageMap= (which have been integrated into the "image" parameter)
  2. In all articles that use {{ Infobox muscle}} remove deprecated parameters |NerveRoot=, |PhysicalExam=
  3. In all articles that use {{ Infobox anatomy}}, {{ Infobox brain}}, {{ Infobox neuron}} and {{ Infobox nerve}} remove the parameters: |BrainInfoType=, |BrainInfoNumber=, |NeuroLexID=, |NeuroLex= (now moved to Wikidata)
  4. In all articles that use {{ Infobox anatomy}} and all subtemplates* remove from pages the field |Code= which I have gone through and checked, and duplicated other fields.

I would be very grateful for this, it will significantly help tidy up both our articles and the infoboxes.-- Tom (LT) ( talk) 00:32, 3 March 2018 (UTC)

@ Tom (LT): I'll look into this. When it comes to point #1, do you just want |MapCaption= and |ImageMap= removed or their values integrated elsewhere? If so, where? (Would MapCaption just have its value put in |Caption=?) -- TheSandDoctor ( talk) 17:07, 9 March 2018 (UTC)
This is an extension of something my bot already did. I just need to alter the settings and run it for this. I can run it some time this weekend. Nihlus 21:11, 9 March 2018 (UTC)
@ TheSandDoctor I have already replaced ImageMap with Image in all articles that used the parameter. Now there's just stacks of empty parameters sitting around (which will display an error message when I finally remove it from the infobox in totalis). -- Tom (LT) ( talk) 21:57, 9 March 2018 (UTC)
Thanks very much Nihlus. To let you know, I notice I have made a spelling mistake above and have now corrected it (|NerveRoom= -> |NerveRoot=)). Like last time, once the bot runs I'll be able to remove the parameters, then I will manually go through all articles that have parameter problems and fix them. -- Tom (LT) ( talk) 21:57, 9 March 2018 (UTC)
@ Tom (LT): @ Nihlus: Roger. Had started writing it, but hadn't finished & was good practice anyways . Surprised that this page was not on my watchlist, have solved that problem now. --All the best, TheSandDoctor ( talk) 00:11, 10 March 2018 (UTC)
@ Tom (LT): Can you clarify point 4? Should |Code= be removed from all of those templates or do you have a separate list of affected pages somewhere else? Nihlus 11:25, 11 March 2018 (UTC)
@ Nihlus the bot will need to run through all pages that use those templates and remove the blank parameters - See eg [7] - I removed the "ImageMap" and "MapCaption" parameters which are blank. Point 4 is that the bot will also need to run through and remove any blank "Code" parameters, too (eg as I have done here [8]). Once that's done I'll remove it from the templates -- Tom (LT) ( talk) 19:17, 11 March 2018 (UTC)
@ Nihlus how are you going?? very eager to finish up my editing of this infobox series, hoping you might have time this weekend? -- Tom (LT) ( talk) 23:14, 16 March 2018 (UTC)
Hmm... It seems that currently Nihlus is bit busy with something. Although I'm not so experienced like Nihlus, I have basic knowledge. So if there are no response from him, I'm going to try this task at next weekend (April 7). -- Was a bee ( talk) 17:43, 1 April 2018 (UTC)
Thanks Was a bee, that would be wonderful. Happy also to check some edits on April 7th before you do a full run.-- Tom (LT) ( talk) 21:42, 1 April 2018 (UTC)
Tom Here is 20 test edits [9]. By the way, I interpreted the request simply as follows.
  1. Removing these 9 deprecated parameters: |MapCaption=, |ImageMap=, |NerveRoot=, |PhysicalExam=|BrainInfoType=, |BrainInfoNumber=, |NeuroLexID=, |NeuroLex= and |Code=
  2. From {{ Infobox anatomy}} and its all subtemplate.
So, for example, in this edit [10], I removed |NerveRoot= and |PhysicalExam= from {{ Infobox anatomy}} (not from {{ Infobox muscle}}). Is my interpretation right? -- Was a bee ( talk) 21:17, 2 April 2018 (UTC)
@ Was a bee my request was clearly phrased in a complicated way given how you've simply and accurately rephrased it :). And have had a look at your edits - have checked through and they're great! Can't wait, and thank you again!! -- Tom (LT) ( talk) 11:42, 3 April 2018 (UTC)

Thanks for this Was a bee. I consider this task Y Done. -- Tom (LT) ( talk) 04:59, 7 April 2018 (UTC)

Mass editing {{DEFAULTSORT}} values

Related to but more generic than #Fixing sort keys for biographies of Thai people above, I'm looking for a bot to make mass edits to {{DEFAULTSORT}} keys (or add them if they don't exist) for a pre-defined list of articles, i.e. Special:PermaLink/829542720. These are articles that may have previously been tagged with incorrect defaultsort keys. Optimally, the bot should also skip the edits if changes are made only in capitalisation. Edits which result in no changes would of course be skipped. -- Paul_012 ( talk) 08:23, 9 March 2018 (UTC)

@ Paul 012: Is it just for those articles in that version of the sandbox? Is it just for Thai people? How would these be found exactly? -- TheSandDoctor ( talk) 00:46, 10 March 2018 (UTC)
@ Paul 012: I have a functional proof of concept now (for changing the defaultsort value), just need the confirmation on details above & will file BFRA. -- TheSandDoctor ( talk) 03:50, 10 March 2018 (UTC)
This would be a one-time edit for just the 215 listed articles (which are not part of the Thai name sort task above.) -- Paul_012 ( talk) 09:59, 10 March 2018 (UTC)

As said, now explicitly, at Wikipedia talk:Categorization of people#Thai names, I'm opposing this bot operation. Since only two people commented there thus far (the OP and me), with a 50%/50% division of opinions, this needs more time for discussion, with let's hope a bit more input from other editors, before firing up a bot. The same goes for the #Fixing sort keys for biographies of Thai people BotReq proposal above, although that one might be more in line with current guidance (can't really get my head around it yet). -- Francis Schonken ( talk) 17:00, 10 March 2018 (UTC)

Still something else, bot-assisted insertion of a {{ DEFAULTSORT}} value that is exactly equal to the article title of the page where it is inserted would be a WP:COSMETICBOT infringement, as far as I understand the applicable policy. -- Francis Schonken ( talk) 17:13, 10 March 2018 (UTC)

Thanks for the comments, Francis Schonken. This request (for the 215 articles) is in accordance with the current guidelines. All the listed articles here are multi-word names which do not contain surnames, which is why comma-separated sort keys would be incorrect. This bot task is to rectify those that have been mistakenly added. Regarding your concerns of the difference between Luang Pu Sodh Candasaro and Luang Pu Thuat, this is because all the other Luang Pus are titles preceding the person's name, but Luang Pu Thuat is a specific name in and of itself—the subject's name wasn't Thuat. (I think this is quite like how Lady Gaga isn't sorted Gaga, Lady because she isn't a lady named Gaga.) -- Paul_012 ( talk) 17:29, 10 March 2018 (UTC)
Sorry, no, the request goes beyond what is mandated by the applicable guidance afaics, so would need to find consensus elsewhere first. -- Francis Schonken ( talk) 17:33, 10 March 2018 (UTC)
Specifically, the guidance only mentions per-category sort keys for Thai categories (clearly assuming that the {{ DEFAULTSORT}} is defined as "surname, given name(s)"), and does not mandate to set the {{ DEFAULTSORT}} to "given name(s) surname", which should not normally be done for any article with an actual title in that format, and is thus not mandated by any policy, and is an infringement on WP:COSMETICBOT if done by bot. -- Francis Schonken ( talk) 17:40, 10 March 2018 (UTC)
You might want to re-read my above comment. None of the articles titles here contain a surname. Most of them are royalty and nobility, and are covered by WP:PEERS. As for the WP:COSMETICBOT concerns, one could also argue that manually inputting any DEFAULTSORT would be unnecessary, as it results in no changes in the sorting. But the point here is to prevent unknowing editors from inserting incorrect values. I don't think this violates the spirit of WP:COSMETICBOT. -- Paul_012 ( talk) 17:45, 10 March 2018 (UTC)

I originally thought manually going through all those articles would be an unnecessary waste of time and effort. Seeing the difficulty I'm having in explaining the task, however, it has become clear that further discussions would actually waste more time and effort on everybody's part than just manually performing the edits. I have gone ahead and done so. Thanks to TheSandDoctor for the assistance, but this is now moot. Marking as N Not done. -- Paul_012 ( talk) 10:26, 11 March 2018 (UTC)

@ Paul 012: So is it N Not done for both this and the above? Should I move the template back to my userspace & tag U1 then? (Sorry for the delay in my response, this was sent at around 3:30am & the previous one at around 1:10am.) -- TheSandDoctor ( talk) 13:22, 11 March 2018 (UTC)
@ TheSandDoctor Just this one. The above is still pending further discussion, so please keep the template for now. -- Paul_012 ( talk) 13:26, 11 March 2018 (UTC)

Bot to convert New York Times abstract URLs to archive PDF URLs

There are a lot of New York Times URLs that begin with http[s]://query.nytimes.com/gst/abstract.html?res= or http[s]://select.nytimes.com/gst/abstract.html?res=. However, all this does is take people to the abstract page. If these Wikipedia readers aren't NYT members, they encounter a paywall, and if they are members, they are allowed to select a PDF/TimesMachine version to continue reading the article. Either way, they have to click at least one more time once they reach the abstract page.

Would it be practical to convert these to http://query.nytimes.com/mem/archive/pdf?res= URLs? These PDF versions can be seen by everyone, even non-members, and is much easier to verify. The hexadecimal string after the equals sign will remain the same before and after, but it does have to be an HTTP URL for these NY Times PDF links to work. epicgenius ( talk) 22:01, 4 February 2018 (UTC)

So, just to double-check this, you're doing a find/replace of gst/abstract and converting to mem/archive, as well as changing any select. into query.
And this will work for all the articles? Primefac ( talk) 12:50, 5 February 2018 (UTC)
Actually, it's gst/abstract.html to mem/archive/pdf, select. into query., and all HTTPS to HTTP. Yes, this will work for all articles. However, KolbertBot is converting http://nytimes.com URLs to https://nytimes.com, so I will ping Jon Kolbert for feedback.
I am requesting that HTTPS be converted specifically to HTTP, because http://query.nytimes.com/mem/archive/pdf?res=9801E7DF1330E333A25755C0A96E9C94669ED7CF&legacy=true (for instance) will work, but not https://query.nytimes.com/mem/archive/pdf?res=9801E7DF1330E333A25755C0A96E9C94669ED7CF&legacy=true, which displays an empty frame. epicgenius ( talk) 18:42, 5 February 2018 (UTC)
@ Epicgenius: A few weeks ago there were reported issues with query.nytimes.com links, I had fixed issues with links to query.nytimes.com/mem/archive/pdf?res= in response. KolbertBot doesn't act on select.nytimes.com links. Is the desired outcome to have select.nytimes.com/gst/abstract.html?res= and query.nytimes.com/gst/abstract.html?res= changed to query.nytimes.com/mem/archive/pdf?res=? That shouldn't be too hard to do with KolbertBot, I can create a new bot task to do this job. Jon Kolbert ( talk) 00:46, 6 February 2018 (UTC)
@ Jon Kolbert: Yes, that is what I am trying to do. epicgenius ( talk) 00:48, 6 February 2018 (UTC)
This strikes me as a "needs consensus" task given it goes from https -> http. Additionally, this strikes me as something which may be a temporary workaround. Whether TNYT allows this deliberately or through some failure of configuration is obviously unknown--but I would guess that if they notice persons jumping straight to their PDF versions from external to their website, they'll cut off the access (which leaves us in a definitely worse spot than current). -- Izno ( talk) 18:22, 23 February 2018 (UTC)

Please.     — The Transhumanist    10:20, 5 March 2018 (UTC)

Fixing sort keys for biographies of Thai people

According to WP:NAMESORT (and expanded upon at WP:MOSTHAI), biographical articles about Thai people should be sorted like this:

{{DEFAULTSORT:Surname, Forename}}
[[Category:International people]]
[[Category:Thai people|Forename Surname]]

However, this has very inconsistently been adhered to, with some articles specifying the Thai order in the DEFAULTSORT and some not following the Thai order at all.

Would it be plausible for a bot to help fix this? A possible process I have in mind is something along the lines of:

  1. Manually compile a list of all Thai people categories.
  2. Manually compile a list of all biographies of Thai people.
  3. Manually list preferred DEFAULTSORT and Thai sort keys for all of them.
  4. Have a bot go through all the articles, adding and/or replacing the DEFAULTSORT and sort keys according to the listed values.

And, for the long term:

  1. The bot, during the aforementioend run, also adds a {{Thai name sort}} template, which does nothing but notes the correct Thai sort key for future reference.
  2. During periodical runs, a bot looks up the sort key in the {{Thai name sort}} template and adds it to any Thai people categories (from the previous list, which would have to be manually updated) which have been later added and are missing the sort key.

I realise this is pretty labour-intensive, but a more automated process would likely not be able to identify names which don't follow the Forename Surname format. I'd like to know that a bot was available for the task before attempting to review all the names. -- Paul_012 ( talk) 10:09, 9 February 2018 (UTC)

Paul 012, Category:Thai people states Note on sorting: Thailand people are usually called by the first name, even telephone books are sorted by the first name. This of course also applies to the subcategories.. Could you point to the relevant passages in the guides you mentioned?   ~  Tom.Reding ( talkdgaf)  17:44, 11 February 2018 (UTC)
Tom.Reding, sorry but I'm not sure you're reading the request correctly. It's asking to add sort keys so that Thai people categories will be sorted by first name. Anyway, the quotes are:

Thai names have only contained a family name since 1915 and the name follows the western pattern of "Forename Surname". However, people in Thailand are known and addressed by their forename. In categories mostly containing articles about Thai people, Thai names should be sorted as they are written with the forename first. Thaksin Shinawatra is sorted [[Category:Thai people|Thaksin Shinawatra]].

and

When categorizing biography articles, do not specify sort keys to sort by surname in Thai people categories. However, sorting by surname is still desirable for non-country-specific people categories, and this is done with the DEFAULTSORT magic word. A biography article for Given-name Surname should therefore be categorized like this:

{{DEFAULTSORT:Surname, Given-name}}
[[Category:International people]]
[[Category:Thai people|Given-name Surname]]

-- Paul_012 ( talk) 19:21, 11 February 2018 (UTC)
Paul 012, this should be doable, as long as all of the special given-name-first-sortkey cats are identified and appropriately not affected by {{ DEFAULTSORT}}. I'm not available to do this, unfortunately, but no reason someone else can't pick it up. In the meantime, you could compile the list of all such special cats, to do some of the legwork and to entice a potential bot op.   ~  Tom.Reding ( talkdgaf)  20:49, 16 February 2018 (UTC)
Hi there Paul_012, do you have an idea of what {{ Thai name sort}} would include/what it would look like? Would it be substituted? Would it have parameters? (How would it note the preferred format?)
As for compiling a list of all Thai people categories and biographies, I see two viable approaches to that part of the problem:
  1. A bot runs through Category:Thai people and just works off of that category
  2. A bot runs through Category:Thai people and compiles a list (easily writeable to a local text file; one article per line) and uses that to work off of, updating periodically (in this case, that part wouldn't even have to be part of a bot's regular function, I could theoretically make generating said list its own program and run periodically for simplicity's sake)
Once I have some more details (above), I would be happy to consider working on this program and already have a rough idea of how I would do it (shouldn't take that long once things are clarified). --All the best, TheSandDoctor ( talk) 16:40, 9 March 2018 (UTC)
I just saw the discussion here, I should clarify that I am happy to considering moving ahead with this one clarifications above are made and consensus is reached. I would consider this a fun little project, but will not move unless adequate consensus and discussion has taken place. -- TheSandDoctor ( talk) 16:42, 9 March 2018 (UTC)
Thanks a lot for offering to help, TheSandDoctor. If a bot becomes available then there should be no need to modify the guideline; I hope it can be settled soon.
I'm imagining the template as taking only one parameter, which is the desired sort key, with no visible output. (In most cases it would be identical to the article title, but there may be some exceptions.) So for the Abhisit Vejjajiva article, the desired outcome would be:
{{DEFAULTSORT:Vejjajiva, Abhisit}}
{{Thai name sort|Abhisit Vejjajiva}}
[[Category:Prime Ministers of Thailand|Abhisit Vejjajiva]]
etc.
[[Category:People educated at Eton College]]
etc.
The list of articles would be only needed for the initial run. I've already begun compiling it at Special:Permalink/829616472—It's still a work in progress, and will need to be double-checked. I'm expecting that subsequent periodical runs will identify the articles by looking up inclusions of {{ Thai name sort}}. This way, new articles can easily be picked up.
I was thinking that the list of categories would also need to be manually compiled in order to avoid false positives such as expatriates, whose names wouldn't be relevant to this task. But then again, expatriates are already excluded from the article list, so it shouldn't make any difference. Having the bot periodically go through Category:Thai people would require less maintenance, and would be preferable. -- Paul_012 ( talk) 19:44, 9 March 2018 (UTC)
Oh, wait. Automatically going down the category would include categories like Category:American people of Thai descent, so this approach wouldn't work. I'll see if I can make a list of the specific categories that should be browsed instead. -- Paul_012 ( talk) 20:04, 9 March 2018 (UTC)
@ Paul 012: I went and created a template in my userspace to have it ready to go (feel free to edit it, just please leave moving to me or others with the page mover user right as I don't want a redirect there).
Do you want {{ DEFAULTSORT}} modified (on pages) to also match "Given Surname"? Adding of the Thai name sort template should be easy as the bot could (theoretically) just take the page name and "plop" it in as the parameter. It won't always work right (ie House of Abhaiwongse), but should (most likely will) work the majority of the time and might see about creating a "blacklist" of titles, where if the page title is equal to X, then it will skip it). I assume that the sub-categories of categories within the list your sandbox are also meant to be included (recursive)? -- TheSandDoctor ( talk) 00:39, 10 March 2018 (UTC)

Okay, TheSandDoctor, here's a newer summary of the task:

  1. A one-time task performed on the articles listed at Special:Permalink/829616472* Special:Permalink/829756891 which, entails, for each article:
    1. Modify {{DEFAULTSORT}} to match that given in the table.
    2. Add {{Thai name sort}}, with the desired sort key** as the parameter.
    3. For each [[Category:...]] in the article, see if it is included in the list***. If it is, add the same sort key to the category, but don't replace existing values.
  2. A recurring task performed on articles which are tagged with {{Thai name sort}}:****
    1. For each [[Category:...]] in the article, see if it is included in the list***. If it is, copy the sort key listed in {{Thai name sort}}, and add it to the category, but don't replace existing values.
  • *As previously mentioned, please wait for a finalised version. House of Abhaiwongse and similar non-person-name articles won't be affected as they're already excluded from the table.
  • **I'll add it to the table—there may be some exceptions that don't exactly follow the article title.
  • ***This list should be automatically compiled by scanning through Category:Thai people, with the exception of Category:Thai diaspora‎ and its subcategories, plus Category:Orders, decorations, and medals of Thailand. Please disregard the list currently in my sandbox.
  • ****I not sure how often this should be run, but once a month would probably be plenty often enough.

-- Paul_012 ( talk) 10:44, 10 March 2018 (UTC)

I just realised that there should also be a fallback in case {{Thai name sort}} is called with missing/empty parameters. In such cases, the article title should be used as the sort key. -- Paul_012 ( talk) 11:29, 10 March 2018 (UTC)
@ Paul 012: Alright, let me know when you're ready. One last clarification: "with the exception of Category:Thai diaspora‎ and its subcategories, plus Category:Orders, decorations, and medals of Thailand" means to exclude Category:Orders, decorations, and medals of Thailand as well, correct? -- TheSandDoctor ( talk) 15:40, 10 March 2018 (UTC)
Oops, sorry. What I meant was to (1) Include Category:Thai people. (2) For all subcategories of Category:Thai people except Category:Thai diaspora, also include them and all their subcategories (recursive). (3) Include Category:Orders, decorations, and medals of Thailand and all its subcategories (recursive). -- Paul_012 ( talk) 16:12, 10 March 2018 (UTC)
@ Paul 012: Okay, thanks for the clarification. Sorry to be asking so many questions and to be somewhat anal retentive about this, just trying to make sure we are on the same page and that I know what you want etc (need detailed details to be able to make bot and to ensure it does what you want).
Let me know when you are ready. I have to head out for a bit, but when I get back I will see about continuing to work on the bot (& then file a BFRA if things are looking good). -- TheSandDoctor ( talk) 16:30, 10 March 2018 (UTC)
Final list is at Special:Permalink/829756891. Still pending further discussion to address Francis Schonken's concerns below. -- Paul_012 ( talk) 18:44, 10 March 2018 (UTC)

TheSandDoctor, Francis Schonken has requested that manual placement of the template be manually trialled on article pages first. Could you go ahead and move your sandbox version into the template space? Thanks. -- Paul_012 ( talk) 09:07, 11 March 2018 (UTC)

Well, err, no, that's not what I suggested (and I certainly didn't "request" anything). In the approach I suggested {{ Thai name sort}} (or a template with a different name) would be applied to *category* pages (i.e. Categories of Thai people where the collation should be according to actual article titles), not a template that would be inserted in mainspace. Anyhow, such templates—whether according to the original idea or according to my suggested scheme—should be experimented with, would have needed to have found consensus, and would have needed to be explained in the WP:SUR guidance (etc) before any sort of bot operation. This is not a page where to request manual operations, nor a page to find consensus about things that go beyond the mandate of current guidance and particular consensuses. -- Francis Schonken ( talk) 10:43, 11 March 2018 (UTC)
Sorry, I misread. Thanks for the clarification. The point about moving the sandbox template into template space is still valid anyway. I'll continue at Wikipedia talk:Categorization of people‎. -- Paul_012 ( talk) 10:57, 11 March 2018 (UTC)
@ Paul 012: Template moved to Template:Thai name sort.

Requests from Amirh123

Previous requests include: Wikipedia:Bot requests/Archive 75#make a translate bot; Wikipedia:Bot requests/Archive 75#please make bot for adding articles for footballdatabase.eu; Wikipedia:Bot requests/Archive 75#wwikia bot; Wikipedia:Bot requests/Archive 75#bot for upbayt articles; Wikipedia:Bot requests/Archive 75#Bot to update articles; Wikipedia:Bot requests/Archive 75#geoname bot; Wikipedia:Bot requests/Archive 75#catalogueoflife bot; Wikipedia:Bot requests/Archive 76#bot for creating new categorys (plus some that were deleted without being archived). -- Redrose64 🌹 ( talk) 23:01, 28 March 2018 (UTC)

rsssf

hi rsssf.com have many articles about soccer please make bot to added articles from rsssf.com — Preceding unsigned comment added by Amirh123 ( talkcontribs) 15:48, 8 March 2018 (UTC)

Hi there Amirh123, I think that WP:MASSCREATION might apply in this case. Also, due to copyright restrictions, Wikipedia could not just take content from other sites. If you have any questions, please feel free to let me know. -- TheSandDoctor ( talk) 16:24, 9 March 2018 (UTC)
hi rsssf.com is free content please make bot to adding articles for this site thanks — Preceding unsigned comment added by Amirh123 ( talkcontribs) 08:19, 11 March 2018 (UTC)
@ Amirh123:, please define "free" in this context ("free" to view, or open license/public domain?). Please also see Wikipedia:Copying text from other sources. It is seldom ever appropriate to directly copy content from sources as doing so (in most cases) would be a copyright violation and would be speedily deleted. -- TheSandDoctor ( talk) 13:29, 11 March 2018 (UTC)
hi copy write see this link — Preceding unsigned comment added by Amirh123 ( talkcontribs) 14:01, 13 March 2018 (UTC)
N Not done/ Declined Not a good task for a bot. @ Amirh123: Please see the last sentence in section 2 of the charter you sent, "The data made available on the WWW in the RSSSF Archive are subject to copyright". That means that the text is free to view but that it is still subject to copyright, meaning that it could not be copied directly to Wikipedia, therefore meaning that it is not a suitable task for a bot. Please also keep future responses to this here, rather than posting on my talk page, as it keeps discussions together/centralized. -- TheSandDoctor ( talk) 15:47, 13 March 2018 (UTC)

bot

hi some articles on english wiktionary add links to Wikipedia german But there are also english Wikipedia example Berndorf link the Wikipedia german but english wiktionary must link to English Wikipedia thanks — Preceding unsigned comment added by Amirh123 ( talkcontribs) 13:30, 18 March 2018 (UTC)

@ Amirh123: I have no idea what you are asking for. -- Redrose64 🌹 ( talk) 18:10, 18 March 2018 (UTC)
I think Amirh123 is saying that on the English Wiktionary, there are sometimes cross-wiki links to articles at the German Wikipedia. That they should be converted to point to the English Wikipedia version, if it exists. Amirh123 gives the example Berndorf which has a link to de:Berndorf instead of Berndorf. -- Green C 18:29, 18 March 2018 (UTC)
It looks like a lot of these may be the result of one editor whose talk page is full of complaints about poor quality work. -- Green C 18:37, 18 March 2018 (UTC)
ok Please edit these edits — Preceding unsigned comment added by Amirh123 ( talkcontribs) 06:39, 19 March 2018 (UTC)

years

hi please make bot to adding years articles automaticly example make 1432 in iran or 528 in india thanks — Preceding unsigned comment added by Amirh123 ( talkcontribs) 17:57, 24 March 2018 (UTC)

Would there be any content to these pages. I'm feeling that this would possibly by "Bots to create massive lists of stubs" which is on the list of Frequently Denied Bots Pi (Talk to me! ) 20:15, 24 March 2018 (UTC)

categorys

hi very articles don't described categorys example Selenophorus pedicularius described in 1829 but not any category — Preceding unsigned comment added by Amirh123 ( talkcontribs) 14:55, 26 March 2018 (UTC)

Declined Not a good task for a bot. Adding categories to articles requires (at least some level of) individual evaluation (unless there is consensus for a list of articles that X category should be added; your request appears to deal with all uncategorized articles in current wording). Bots aren't suited for this. -- TheSandDoctor Talk 15:11, 26 March 2018 (UTC)
hi I want see all years in one category example Category:Insects by century of formal description but I want see all years in one category example Category:Video games by year — Preceding unsigned comment added by Amirh123 ( talkcontribs) 08:08, 27 March 2018 (UTC)
@ Amirh123: I am not 100% sure what you mean, but from what I gather it is still not feasible by a bot as it could require some level of individual evaluation (unless the years are all in a specific infobox parameter?) It is probably a better suited job to do manually than with a bot. -- TheSandDoctor Talk 16:07, 27 March 2018 (UTC)

catalogueoflife.org

hi catalogueoflife.org have very articles about species animals planet and more this articles not in Wikipedia please make bot to adding this articles to Wikipedia — Preceding unsigned comment added by Amirh123 ( talkcontribs) 12:25, 28 March 2018 (UTC)

@ Amirh123: Declined Not a good task for a bot. (bots creating articles en-masse is generally not approved) and there would also be copyright concerns (as there was the last time you requested a bot do this for a different website). -- TheSandDoctor Talk 13:23, 28 March 2018 (UTC)

but in ceb.Wikipedia.org and sv.Wikipedia.org use catalogueoflife.org and adding very articles — Preceding unsigned comment added by Amirh123 ( talkcontribs) 14:37, 28 March 2018 (UTC)

This is not ceb.wikipedia or sv.wikipedia. We have our own standards and expections. Headbomb { t · c · p · b} 17:49, 28 March 2018 (UTC)
@ Amirh123: I refer you to the responses left at Wikipedia:Bot requests/Archive 75#please make bot for adding articles for footballdatabase.eu by myself and others. -- Redrose64 🌹 ( talk) 22:43, 28 March 2018 (UTC)

Removal of repetitive internal links in Wikipedia article

Does there exist bots able to detect and remove repetitive internal links in a Wikipedia article? Thanks! -- It's gonna be awesome!Talk♬ 03:48, 3 April 2018 (UTC)

Declined Not a good task for a bot. This sort of thing is very context dependent. It requires a human editor to review. ~ Rob13 Talk 17:58, 3 April 2018 (UTC)
Maybe you're right but I feel it time-consuming to tackle {{ overlinked}} tagged with a long but complete article. -- It's gonna be awesome!Talk♬ 21:26, 3 April 2018 (UTC)
AWB can help with this, to a degree. A rule that changes "\[\[([-\w ,\(\)–]+)\]\]([^\n]+)\[\[\1\]\]" to "[[$1]]$2$1" will unlink the last duplicate link in a paragraph. For example, it will change "We needed to draw yellow bananas and yellow canaries. But there were no yellow crayons." to "We needed to draw yellow bananas and yellow canaries. But there were no yellow crayons." If you use two or three such rules, it will also delink " yellow canaries". The rule will not remove a link in the seventh section of an article that also exists in the second section, but that would often be unhelpful; a reader may look at the lead section and then jump to the seventh section, not seeing the link in the second section. Use this rule with care. Chris the speller  yack 14:34, 6 April 2018 (UTC)
Thanks! I will try! -- It's gonna be awesome!Talk♬ 14:44, 7 April 2018 (UTC)

IW-ref template

I would appreciate if someone could please remove {{ Iw-ref}} from all article pages and add {{ translated page}}, with the same parameters, to the corresponding talk page. The Iw-ref template was deprecated quite a long time ago, but still remains on a lot of pages. Please note that there are a couple redirects to the template, {{ Translation/Ref}} and {{ Translation/ref}}. This should be a one time task, since once done, the old template can be deleted.

Thanks in advance, Oiyarbepsy ( talk) 05:47, 20 April 2018 (UTC)

@ Oiyarbepsy: - Doesn't look like there are any transclusions of that template, or it's redirects in mainspace. Possible that someone's gotten to it already. SQL Query me! 21:24, 23 April 2018 (UTC)
 Done courtesy of Plastikspork and AnomieBOT. —  JJMC89( T· C) 01:22, 24 April 2018 (UTC)
Yes, I made it substitute cleanly and added the {{ subst only}} to have AnomieBot replace it. After that, AnomieBot is already programmed to move it to the talk page. Thanks! Plastikspork ―Œ(talk) 02:36, 24 April 2018 (UTC)

Remove redundant links from See Also sections

A huge number of articles have See Also links that are already linked from the bodies of the articles. Per MOS:NOTSEEALSO, these redundant links should be removed. If there are no non-redundant links in a See Also section, the entire See Also section should be removed. Kaldari ( talk) 21:51, 21 March 2018 (UTC)

This strikes me as something that could easily find objection if someone runs a bot to try to make it happen, and "as a general rule" doesn't sound very convincing for a bot task. If someone decides to take this on, they should be prepared for pushback. Anomie 22:56, 21 March 2018 (UTC)

Vital Articles Bot

Hello, is it possible to create or edit a current bot to help with Level 5 vital articles? My idea is that it will take all articles of top importance in a Wikiproject, check to see if they are already in the list, and if not, tag them as Level 5 vital articles and add them to the list. There are already bots controlling lists so this may be feasible. Please reply! — Preceding unsigned comment added by SuperTurboChampionshipEdition ( talkcontribs) 16:22, 10 April 2018 (UTC)

I think it is Declined Not a good task for a bot.. Vital articles shouldn't be simply a result of wikiproject results. Many articles are "top-importance" for that specific project, but overally for Wikipedia it may be low importance. -- Edgars2007 ( talk/ contribs) 16:01, 11 April 2018 (UTC)

Add a textual remark to all pages of a private wiki

Hello, I'm owner of Westmärker Wiki, a small Mediawiki which I copied from a predecessor.

I want to add "Dieser Artikel wurde am 21.04 2018 aus dem Hombruch-Wiki kopiert." to the bottom of each article and media page.

Is there any bot I can use and possibly a person who can run it there? Wschroedter ( talk) 11:24, 29 April 2018 (UTC)

Semi -automatic change of categories of articles in private wiki

Hello, I'm the owner of Westmärker Wiki (a small Mediawiki) and I want to change certain categories.

I'm looking for a tool by which I can change several articles from a list or all articles of a category.

Any ideas or hints? Wschroedter ( talk) 11:29, 29 April 2018 (UTC)

@ Wschroedter: You should try WP:AWB. -- Izno ( talk) 13:53, 29 April 2018 (UTC)
Looks good, I'll try it out. Thanks a lot, Izno! Wschroedter ( talk) 17:08, 30 April 2018 (UTC)

AndBot

Could someone (@ Tokenzero:?) create this bot

You can tell if something is a journal or magazine by looking for the 'journal'/'magazine' string in the categories of the article, or the presence of {{ infobox journal}}/{{ infobox magazine}} on the page. Headbomb { t · c · p · b} 23:56, 26 March 2018 (UTC)

@ Headbomb: The IFF is for both ways ('&' -> 'and', 'and' -> '&'), right? -- TheSandDoctor Talk 16:09, 27 March 2018 (UTC)
I don't think it needs to be both ways, but I suppose it's better to be safe and restrict this to publications, yes. Some further digging would be required before knowing if this would be done across the board. Headbomb { t · c · p · b} 16:31, 27 March 2018 (UTC)
Oh, one thing, this should be spaced ampersands only (i.e. 'A&A' should be left alone and not converted to say, AandA). Headbomb { t · c · p · b} 16:33, 27 March 2018 (UTC)

@ Headbomb: Coding... Basically done. Should they be categorized as {{ R from modification}}? There doesn't seem to be anything more relevant (except {{ R from railroad name with ampersand}}, curiously); examples I checked (both journal and general) are somehow almost always without any rcat. The first run on infobox-journals would create ~1500 redirects. Do you want to have it run once or eg. monthly? Minor remark: there is a chance it'll create a dumb redirect when the title is in another language, say Ora and labora, but I can't find any actual example and I don't think it's a significant problem anyway. Also, some would remove a serial comma when replacing ', and' with an ampersand, but some style guides advocate keeping the comma, so I would just keep it. Tokenzero ( talk) 20:32, 6 May 2018 (UTC)

Yeah, might as well got for {{ R from modification}} since we don't have something more specific like {{ R from and}} or {{ R to and}}. The bot could run Daily/Weekly/Monthly, the exact time period doesn't really matter, but monthly seems too long. I'd go weekly at the longest. Headbomb { t · c · p · b} 17:20, 7 May 2018 (UTC)
BRFA filed Tokenzero ( talk) 18:28, 9 May 2018 (UTC)

Update links to www.fiu.edu/~mirandas and www2.fiu.edu/~mirandas

Hi,

The links to http://www.fiu.edu/~mirandas ( 1453 links) and http://www2.fiu.edu/~mirandas ( 896 links) do not work anymore. The content is now available on http://webdept.fiu.edu/~mirandas. Could a bot update those links?

Most pages should work after updating the domain. However, the alphabetical index ( http://webdept.fiu.edu/~mirandas/bios-a.htm, http://webdept.fiu.edu/~mirandas/bios-b.htm, ...) only contain empty pages. Fixing this is more complicated, but can be partially automated by matching article names and/or old link anchors with entries in http://webdept.fiu.edu/~mirandas/494-2017-a-z-all.htm. I fixed them on frwiki, so you can also try to take them from the French page when there is one.

Orlodrim ( talk) 21:51, 21 March 2018 (UTC)

The easy part is the non-bios entries, which can be solved via this query and this query. SQL Query me! 02:58, 22 March 2018 (UTC)

Who Was Who link formatting error

We seem to have a large (three-figures, at least) number of external links to Who Was Who, formatted with an extraneous comma at the end of the URL, like the one I fixed in this edit. Can someone fix them all, please?

Better still would be to apply the {{ Who's Who}} template, like this, but I appreciate that may not be so straightforward. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 16:52, 21 January 2018 (UTC)

@ Pigsonthewing: This search drags up about 1500. -- Izno ( talk) 18:15, 23 February 2018 (UTC)
I've got this one written - but since the URL works with or without the comma - would this pass WP:COSMETICBOT? SQL Query me! 18:05, 22 March 2018 (UTC)

Flagicon to flagdeco in country year navboxes

I went through part of Category:Country year navigational boxes, replacing {{ flagicon}} with {{ flagdeco}} to remove the double link and double alternative text — flagicon's alt attribute repeats the country name in nearby visible text. Example diff. I'd like to request a bot finish the rest of the category, making the same flagicon to flagdeco change. Matt Fitzpatrick ( talk) 22:02, 15 May 2018 (UTC)

Matt Fitzpatrick, how many have been done so far? If it's more than 20 or so an AWB run would probably suffice. Primefac ( talk) 13:41, 16 May 2018 (UTC)
I did everything under "*" and "A". I think most of the rest contain a flagicon, though some don't. Matt Fitzpatrick ( talk) 22:06, 16 May 2018 (UTC)
Matt Fitzpatrick, it's not really a large enough task for a bot run, but if you don't have AWB access I can take care of it. Primefac ( talk) 01:46, 17 May 2018 (UTC)
N Not done Thanks for the offer. I went ahead and made all the changes manually, so this request can be closed. Matt Fitzpatrick ( talk) 10:51, 21 May 2018 (UTC)

Automatically pull data from various wiki pages to create a table?

Hello! I'm sure there's already a bot in existence for the task that I'm trying to do, but I'm new to using Wiki bots so I'm not sure which one I'm looking for or how to use it. Basically, I'm trying to automatically copy from a particular set of wiki pages all of the sentences that contain a particular word, then paste those sentences into a spreadsheet or word document so I can look it over manually.

The specific purpose is that I'm trying to find a list of plebeian tribunes of the Roman Republic, but a quick search around the internet doesn't furnish many promising results. There is a wiki page for a list of all types of tribunes here ( /info/en/?search=List_of_Roman_tribunes), but it looks like the author had only just started this article, since it's not very comprehensive (for example, there were supposed to be ten plebeian tribunes elected every year from 457 to about 48 BC). Obviously it would in all likelihood be impossible to furnish a complete list of every plebeian tribune given the enormous number of these office holders and the relative scarcity of primary sources we have from the time, but considering that only a small fraction of the tribunes were noteworthy enough to make it into the history books (many of whom have their own wiki pages already), I think it's possible to get a reasonably well-represented list by just trawling the existing wiki pages to see which articles are about people who served as plebeian tribunes. One particularly helpful place to start would be the Wikipedia page that lists all Roman gentes (family names, at /info/en/?search=List_of_Roman_gentes). Each family name on that list links to a page that lists all of the notable members of that family, along with a short description of their careers. So the bot would start on the page for gens Abronia, do a word search for "tribun" (so that it catches variations of the word like "tribune," "tribunate," or "tribuneship"), and find nothing. Then it would go to the next page for gens Aburia, search for "tribun" again, and copy the sentence that says "Marcus Aburius, as tribune of the plebs in 187 BC, opposed Marcus Fulvius Nobilior's request for a triumph, but was persuaded to withdraw his objection by his colleague, Tiberius Sempronius Gracchus," paste that as a line in a word document or spreadsheet, continue searching for other occurrences of the phrase in the gens Aburia page until it's out of results, and then moves on to the page for gens Accia, and so forth.

Again, this probably won't furnish a comprehensive list of tribunes, but it'll at least give us a good start. Depending on how many results it returns, I might be able to just go through the resulting data and manually add each name, date of office, and link to the relevant tribune's wiki page as entries on the table at /info/en/?search=List_of_Roman_tribunes, so the bot would only need to read text from existing Wikipedia pages, and not need to write anything to them automatically. I appreciate any help or feedback that you can provide. Thanks! Dfault ( talk) 02:51, 22 March 2018 (UTC)Dfault

@ Dfault: Sounds interesting. If you have access to Unix command-line, this should work:
./wikiget -F "List of Roman gentes" > list (manually edit list to remove any unwanted pages)
  • Download plain-text (wikiget -w -p) and extract sentences containing "tribun" (case-insensitive)
awk '{print "./wikiget -w \"" $0 "\" -p | awk -v titl=\"" $0 "\" \x27{IGNORECASE=1; split($0,a,\".\"); for(b in a){if(a[b] ~ /tribun/) print titl \" : \" a[b]} }\x27" }' list | sh
(above is a single unbroken line)
For each match, it will give the article title followed by a ":" then the sentence containing "tribun". Do you want to do it, or should I post the script output? -- Green C 21:29, 24 March 2018 (UTC)

Brilliant! Just ran it now, worked perfectly! Looks like there are about 750 results; I'll get to work formatting them now and let you know if I run into any trouble or have any updates. Thanks for your help! Dfault ( talk) 23:11, 25 March 2018 (UTC)

Great! Glad it is useful. The script uses metaprogramming (generative programming) with awk emitting awk code, thus the \x27 (ie. '). -- Green C 16:09, 26 March 2018 (UTC)

Could anyone remove all lines beginning with two bullets from these articles?

Could anyone remove all lines beginning with two bullets from the commented-out list of articles? I'm trying to remove all the species entries listed under the genera. Abyssal ( talk) 12:48, 16 April 2018 (UTC)

Just copy the text into a text editor and do a find and replace. Easy. ··· 日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 17:59, 17 April 2018 (UTC)
For 112 articles? I think the number of articles involved is the reason for this request, not the difficulty of editing each individual article. (That being said, could someone not do this with WP:AWB? I don't actually know…) - dcljr ( talk) 00:26, 22 April 2018 (UTC)
Yes, this is a trivial task in AWB. -- Izno ( talk) 18:41, 22 April 2018 (UTC)
@ Abyssal: does this still need doing now the pages are in mainspace? ƒirefly ( t · c · who? ) 19:13, 25 May 2018 (UTC)
@ Firefly:Nah, this request can be canceled. Abyssal ( talk) 04:42, 26 May 2018 (UTC)

Wikiproject task force tagging (Reality TV)

Can I request a bot to tag the articles that are in Category:Reality television series with "|reality-tv=yes|reality-tv-importance="(importance can be assessed manually afterwards). I have created a list of each individual category here after removing certain subcategories, mostly participant and container categories. Please let me know if you think that needs more refining. A lot of the articles already have WPTV but are just missing the Reality TV task force label, and some older articles don't have the project at all. So would need the bot to add the full WPTV+realitytv tag to any that are missing the project, and only add the task force parameter to those that are already under WPTV. WikiVirus C (talk) 16:30, 9 March 2018 (UTC)


Coding...

Hi, can I just clarify the requirement. What I understand is necessary is:
  • All the articles that are in any of the categories here need to have the WPTV template on their talkpage. Most already have it, and this tag should be added to those that don't.
  • Within the WPTV template, the parameter "|reality-tv=yes|reality-tv-importance=" should be added it it is not already added?
I am working on a test script to do this. I am looking to use my first bot to do this, so I'd greatly appreciate a little patience Pi (Talk to me! ) 15:37, 25 March 2018 (UTC)
@ Pi: Yeah that about sums it up. There is no rush, so take your time. Thanks for the response. WikiVirus C (talk) 17:28, 25 March 2018 (UTC)
@ WikiVirusC: I've looked at the data. I ran a script through the categories that you provided, and found a total of about 1900 subcategories. Within these subcategories are approx 16,000 articles. I can't help thinking that the list might be too broad. I don't want to overload the WikiProject with too many articles which might only be loosely related to Reality TV Pi (Talk to me! ) 22:21, 25 March 2018 (UTC)
Although if you do want all these articles tagged I do now have a bot that I am ready to put in a BRFA Pi (Talk to me! ) 22:24, 25 March 2018 (UTC)
The list includes several subcategories, I took out the ones I didn't want, can you just run it on the ~600 categories that are listed? Not everything beneath them, unless it's listed as well. How many total pages is it with just those? WikiVirus C (talk) 23:02, 25 March 2018 (UTC)
That makes sense I'll run it when I get home from work and see how many articles it is Pi (Talk to me! ) 11:32, 26 March 2018 (UTC)
On just the categories you provided there are 8,752 pages. I have made a list here of the pages. If you are happy with this then I will put in the request for bot approval. Pi (Talk to me! ) 23:31, 26 March 2018 (UTC)
I have adjusted the list of categories a bit [11] [12]. Mostly removing Category:Dance competition television shows and related categories since some of those were strictly dance competition and not reality shows. I will manually go through those afterwards, but from the few I checked, the ones that belong fall under another relevant category anyways. The list of individual pages, had a few that were redlinked, and I'm not sure if that was just a parsing error or not. I'm sure there shouldn't be any big issues, but if you want to updated the list I can look through it again to see if any more refining is needed. WikiVirus C (talk) 15:54, 27 March 2018 (UTC)
Hi, I've run the list again. It's now 8,493 pages, and I've resolved the redlink problem (that was about non-Ascii characters). The list is here Pi (Talk to me!) 23:59, 27 March 2018 (UTC)
Looks fine to me. I will go over it again in morning, but I don't see any reason not to request approval when you are ready. WikiVirus C (talk) 04:34, 28 March 2018 (UTC)

Please head to WP:VG/ViGoR for article-of-the-day improvement project

Hi! I recently made a proposal at WP:VG regarding a "featured article of the day" premise. Please head to that page to add your expertise reading the automation nature of my proposal. We're discussing the validity of it too, but I think if I have the automation nature sorted it will be much easier to prove I will have a successful implementation. Essentially I want a bot to: -- Coin945 ( talk) 22:01, 30 March 2018 (UTC)

  1. Randomly choose an article from a list of WP:VG stubs
  2. Place it in on a page
  3. (Possibly notify people - still discussing)
  4. Do this for every day of the week until it has seven
  5. After seven days remove the oldest one and archive it (the page always has seven listed articles)
  6. Have a table ala this one taht shows the accomplishments made to each article.
  7. Note: This is a better system than at TAFI methinks because it does not require humans to nominate articles and such - it's all automated.

See Wikipedia talk:List of Wikipedians by article count#Updating?.     — The Transhumanist    10:18, 5 March 2018 (UTC)

It seems to be working again.     — The Transhumanist    07:32, 20 March 2018 (UTC)
Are you sure? The main list hasn't been updated since October. Lugnuts Fire Walk with Me 08:10, 31 March 2018 (UTC)

Updating vital article counts

Wikipedia:Vital articles/Level/5, the list of Level 5 vital articles, is currently under construction, with editors adding articles to the list. Right now, article counts for each subpage and section are updated manually, which is quite inconvenient and often leads to human errors. I would like to request a bot to update the article counts in each section of each sublist (such as Wikipedia:Vital articles/Level/5/Arts), the total article count at the top of each sublist, and the table listed on the main Wikipedia:Vital articles/Level/5 page. I think such a bot would ideally update the article counts once daily. feminist ( talk) 04:27, 25 May 2018 (UTC)

This should also apply to Levels 1/2/3/4. And if the bot could also automatically update the assessment icons, that would be nice. Headbomb { t · c · p · b} 09:37, 25 May 2018 (UTC)
I'll take a look at this tonight - can't be too hard, and I can run it on Toolforge. ƒirefly ( t · c · who? ) 10:07, 25 May 2018 (UTC)
Coding... I can definitely do this - proof of concept for counts. Just working on the assessment icon updating. ƒirefly ( t · c · who? ) 23:46, 25 May 2018 (UTC)
BRFA filed ƒirefly ( t · c · who? ) 19:48, 26 May 2018 (UTC)

question

hi I find csv files do make bot to adding articles of csv files with csv loder in auto wiki browser this — Preceding unsigned comment added by Amirh123 ( talkcontribs) 13:53, 6 May 2018 (UTC)

Do you mean mass-creating articles from a CSV file? That's a bad idea. Richard 0612 22:30, 14 May 2018 (UTC)

A category I created about World Series-winning managers.

I'm trying to properly alphabetize Category:World Series-winning managers. It was saved from deletion, but now, for some reason, the category isn't in proper alphabetical order. I did get some help regarding the sort key, but to no avail. Perhaps a bot can kindly help me. Thank you. Mr. Brain ( talk)

This is a known problem caused by a configuration change. Patience will be required. It is supposed to be fixed in a few days. – Jonesey95 ( talk) 02:29, 14 April 2018 (UTC)

Y Done

GAR archiving

Hi all. I used to work on community Good Article reassessments area years ago and have recently returned. Old community reassessments are archived by User:VeblenBot. However Veblenbot has been inactive for a while now and the unarchived GARs are building up, Category:GAR/60 has 134 entries in it. Could someone please take over it or find another way to do the archiving. I found two relevant discussions at the Bot noticeboard ( Wikipedia:Bots/Noticeboard/Archive 8#New operator needed for VeblenBot and PeerReviewBot and Wikipedia:Bots/Noticeboard/Archive 10#User:VeblenBot) . I also tried Wikipedia:Village pump (technical)#Archiving community Good Article reassessments. It does not seem like a difficult task. Regards AIRcorn  (talk) 23:40, 25 March 2018 (UTC)

@ TheSandDoctor: in case you want to expand the GA bot you are working on. Kees08 (Talk) 23:44, 25 March 2018 (UTC)

Thanks for the ping Kees08. I will certainly look into this, just need to research more on exactly what it did (but it is late now, so will do that tomorrow). I wish the source code was still available. -- TheSandDoctor Talk 06:17, 26 March 2018 (UTC)
There are links from the bots user page. @ CBM and Ruhrfisch: have been very good also, they should be able to help out with code. AIRcorn  (talk) 07:06, 26 March 2018 (UTC)
@ Aircorn: The links on the user page are dead (for me at least they are 404 errors), that is why I said that. I will reach out to them (then again, you have pinged them, so to avoid pestering, I shall wait a couple days). -- TheSandDoctor Talk 15:06, 26 March 2018 (UTC)
Thanks much appreciated. AIRcorn  (talk) 19:34, 26 March 2018 (UTC)
CBM is the author of Veblenbot. I did not write the bot or modify its code - I only took it over trying to find someone with more knowledge than I to hand it off too. SOrry I cannot be of more help, Ruhrfisch ><>°° 11:25, 4 April 2018 (UTC)

Updating inaccurate unreferenced templates

In articles like this one, I often find {{ unreferenced}} templates in sections that contain references. Is there a Wikipedia bot that can be configured to replace Template:Unreferenced with Template:Refimprove? Jarble ( talk) 03:01, 29 March 2018 (UTC)

This isn't really possible without some pre-processing because {{ unreferenced}} can be used to indicate an unreferenced section rather than an entirely unreferenced article. -- Izno ( talk) 04:48, 29 March 2018 (UTC)
@ Izno: Then it would be relatively easy to automatically replace {{ Unreferenced section}} with {{ refimprove section}}. Jarble ( talk) 23:42, 29 March 2018 (UTC) Jarble ( talk) 23:39, 29 March 2018 (UTC)
I brain-farted on this one--of course it's possible even without pre-processing--I just was thinking of trying to get some numbers for affected articles. -- Izno ( talk) 00:11, 30 March 2018 (UTC)

For a similar bot action check Wikipedia:Bots/Requests for approval/Yobot 11. Yobot could do this as part of general tagging actions but it will need a new BRFA approved since this BRFA is not valid anymore. -- Magioladitis ( talk) 23:32, 4 April 2018 (UTC)

Maps on sawiki

I tried importing Module:Location map to sawiki (latest code) but it has 1000s of dependencies. It is very difficult to import all associated countries / states / cities to sawiki manually. Can someone please help. Capankajsmilyo ( talk) 12:44, 6 April 2018 (UTC)

Bot to search and calculate coordinates (where did my previous request go? nothing was done to it)

Please look at this table: Lands_administrative_divisions_of_New_South_Wales#Table_of_counties

My goal is to add a column to this table that shows the approximate geographical coordinates of each county. Those county coordinates can be derived form the parish coordinates that are found in each county article, by taking the middle of each northernmost and southernmost / easternmost and westernmost parish coordinates. Is it possible to write a script or a bot to achieve this?

For illustration, I did the work for the first county in the list, Argyle County, manually. The table of parishes in this article shows that they range from 34°27'54" and 35°10'54" latitude south and 149°25'04" and 150°03'04" longitude east. The respective middle is 34°49'24" and 149°44'04", which I put in the first table entry of Lands administrative divisions of New South Wales and the info-box of Argyle County. -- Ratzer ( talk) 11:35, 12 April 2018 (UTC)

Your request was automatically archived approximately 3 weeks ago. In over 2 months no one had responded to the task. That suggests no operator is interested in performing the work. -- Izno ( talk) 13:33, 12 April 2018 (UTC)

Misplaced punctuation: add this to an existing task

Could someone already running a cleanup bot request permission to add a simple task, punctuation before cleanup tags? Example, which judging by the date on the tag, had been there since 2015. Inline cleanup tags, e.g. {{ fact}} and {{ which}}, should go after punctuation, so {{cleanup tag|date=whenever}}, and {{cleanup tag|date=whenever}}. are wrong. It's quite visible to the reader, and a bit jarring, so this isn't WP:COSMETICBOT. Nyttend ( talk) 12:59, 5 April 2018 (UTC)

You might want to make a request to bundle this into WP:AWB / WP:GENFIXES as well. Headbomb { t · c · p · b} 14:42, 12 April 2018 (UTC)

AWB already fixes this. And it's also part of CHECKWIKI error 61. -- Magioladitis ( talk) 15:01, 12 April 2018 (UTC)

Clearing articles in the backlogs 'articles without infoboxes' that now have an infobox

I would propose that to help clear the 'articles without infoboxes' maintenance categories, a bot could go through these pages and see if an infobox has already been added to the page, then if it has, remove the |needs-infobox=y from the WikiProject banner templates.

This would allow editors who wanted to add infoboxes to articles that are in these categories not to have to sift through articles that already have infoboxes on them so that they can clear the backlog more quickly.

Thanks. Wpgbrown ( talk) 17:39, 3 April 2018 (UTC)

@ Wpgbrown: Could you provide the category these are in, please? ~ Rob13 Talk 17:58, 3 April 2018 (UTC)
@ Bu Rob13: Categories (to name a few) are Category:Biography_articles_without_infoboxes, Category:Ship_articles_without_infoboxes, Category:Song articles without infoboxes. I can find several extra categories by searching on this page [13]. I would assume however it would make sense for a potential bot to only crawl through categories with large enough backlogs. Wpgbrown ( talk) 18:38, 3 April 2018 (UTC)
The categories vary by WikiProject. Some of these cats are subcats of Category:Wikipedia articles with an infobox request, but by no means all. Examples:
Notice that the parameter name varies, also that some have variant values - this can be confusing, since {{ WikiProject Derbyshire|ibox=yes}} means that that article already has an infobox and doesn't need another. -- Redrose64 🌹 ( talk) 18:43, 3 April 2018 (UTC)

There are cases where I have suggest an article should not have an infobox, but this typically means is there is no current infobox that would make the article better than having none. If a human editor can't deal with that, how will a bot do it? Furthermore, you can't have missed that we've just had a huge Arbcom case about conduct around infoboxes. I think if the bot stuck an infobox on Buckingham Palace, all hell would break loose and there would probably be an ANI thread requesting said bot be blocked. Sorry, bots should only work on uncontroversial and boring stuff, and this just isn't that. Ritchie333 (talk) (cont) 12:33, 4 April 2018 (UTC)

Ritchie333, obviously infobox changes are controversial, but I think you may have missed the intended effect here. Unless I'm the one who is confused, this request would not lead to any infoboxes being added or removed; it would eliminate the request for an infobox from articles that already have one. I suppose that could be seen as controversial in that it would ease the task of adding infoboxes to the remaining articles with requests, but it doesn't sound like that's what you're commenting on. Mike Christie ( talk - contribs - library) 13:21, 4 April 2018 (UTC)
@ Ritchie333: Yes, Wpgbrown ( talk · contribs) is asking for the removal of |needs-infobox=yes (or equivalent) where this is no longer applicable. It's pure WP:GNOMEing. -- Redrose64 🌹 ( talk) 18:35, 4 April 2018 (UTC)

I used to go this as part of my gnomish actions. I can do it agai using WP:AWB which is a powerful tool to make repeative actions and the tool resposnible for a large amount of edits in English Wikipedia. -- Magioladitis ( talk) 23:26, 4 April 2018 (UTC)

Are you sure you can do that?Jonesey95 ( talk) 03:42, 5 April 2018 (UTC)
Jonesey95 Yes, I only need to use my bot account after approval. This is what I plan to do. -- Magioladitis ( talk) 07:14, 5 April 2018 (UTC)
Jonesey95 Wikipedia:Bots/Requests for approval/Yobot 17. -- Magioladitis ( talk) 07:15, 5 April 2018 (UTC)
Wikipedia:Bots/Requests for approval/Yobot 60 for that. -- Magioladitis ( talk) 07:27, 5 April 2018 (UTC)
@ Jonesey95:, as long as it's BRFA'd and properly trialed and reviewed, there shouldn't be any issue. Headbomb { t · c · p · b} 14:36, 5 April 2018 (UTC)
It might also be worth removing {{reqinfobox}} or {{Infobox requested}} if an infobox has been added as this template also adds them to a maintenance category. Wpgbrown ( talk) 22:12, 14 April 2018 (UTC)

Can anyone use a bot to scan many lists for red links and then remove them from the articles?

I have a series of lists of prehistoric life articles that I need to condense. Can anyone use a bot to scan those articles for red links and remove the entries that contain them from the lists? Abyssal ( talk) 12:50, 2 April 2018 (UTC)

Yes, but you're going to have to tell us which lists. — Dispenser 10:43, 3 April 2018 (UTC)
Thanks, @ Dispenser:! I've hidden the list here in a comment. Abyssal ( talk) 12:38, 3 April 2018 (UTC)
@ Dispenser: Still interested in helping me condense these lists? Abyssal ( talk) 03:49, 5 April 2018 (UTC)
Your question is ambiguous. Am I to be removing red links or blue links? Or delete each draft since they each contain some red links (as stated)? Are you looking for the full list of red links? — Dispenser 10:45, 5 April 2018 (UTC)
Remove the items containing red links from the lists altogether, eg:

to

Abyssal ( talk) 14:19, 5 April 2018 (UTC)

@ Dispenser: Tried to clarify as best I could. Abyssal ( talk) 03:19, 8 April 2018 (UTC)
Done [14]Dispenser 00:35, 9 April 2018 (UTC)
Thanks, @ Dispenser:, that looks great. While we're at it, could you remove all the lines starting with two bullets from the same lists? Some of them are still a bit long. Abyssal ( talk) 16:15, 9 April 2018 (UTC)
I was only able to find one set on Draft:List of the Paleozoic life of South Dakota. — Dispenser 01:46, 10 April 2018 (UTC)
@ Dispenser:Could you adjust it to remove the Caseodus eatoni line? I'm trying to get rid of all of the species even if they're blue links. Abyssal ( talk) 13:47, 10 April 2018 (UTC)
@ Dispenser: Abyssal ( talk) 12:03, 12 April 2018 (UTC)
The script I wrote only works for pages with live red links (uses the Database). And frankly this doesn't seem like a productive use of my time. — Dispenser 12:34, 17 April 2018 (UTC)

Implement article history after a good article reassessment

Hi. I asked this years ago at User talk:Gimmetrow#Update the article history following Good article reassessments. Since I am back in this area I thought I would try again. I am not sure what bot updates article histories now so am posting this here instead of at an individuals page.

The {{ GAR/link}} template renders the following {{GAR/link|~~~~~|page=|GARpage=|status=}}. Status can be changed to kept, delisted or a number of other similar positions. An example of a delisted template is here and a kept one here. The issue is that any reassessment has been preceded by at least one assessment, so that template is not ideal. It really needs to be integrated into the {{ articlehistory}} template.

So far the only way to do that is manually. [15] This requires finding the oldid, copying the reassessment page, dates and updating GA to DGA. It would be useful if a bot did this like it does for other similar article history processes. There is a complication however as a reassessment can be opened as a community reassessment or an individual one. As far as I can tell this will only affect the link parameter. Individual reassessments will link to a talk subpage Talk:Foo/GA?, while community ones use a WP:GAR subpage Wikipedia:Good article reassessment/Foo/?. Foo being the name of the article and ? being the number of the reassessment.

For delisted articles, it would also be useful if the bot could change or remove the GA class from the wikiproject template (changing to C is probably the best, but since the difference is mostly arbitrary B would do). Another useful feature would be the removal of the {{ good article}} template from the article itself (it produces the green spot at the top of the page).

There are 2 280 delisted articles (I don't think this includes ones that were delisted and then later regained good or featured status so the true number may be higher), so this feature could save editors quite a bit of manual work. Thanks in advance. AIRcorn  (talk) 00:29, 18 April 2018 (UTC)

@ TheSandDoctor: This would be a good addition to the GA bot once it is working properly. I did not do GARs before because it was a little daunting to close them (turns out it is not too bad, but pretty annoying still). Kees08 (Talk) 22:05, 19 April 2018 (UTC)

I just thought of another useful feature. Updating the lists at Wikipedia:Good articles/all. When an article is delisted it would need to be removed from there. AIRcorn  (talk) 22:37, 19 April 2018 (UTC)

Removing succession boxes from song and album articles

The consensus during a recent RFC was to remove succession boxes from song and album articles. Since these appear in over 4,200 song [16] and 2,000 album articles, [17] it seems that this may be a good job for a bot. — Ojorojo ( talk) 14:33, 30 May 2018 (UTC)

BRFA filed Ronhjones   (Talk) 15:36, 10 June 2018 (UTC)
Y Done Ronhjones   (Talk) 20:23, 23 June 2018 (UTC)

Can anyone bulk undo edits by a single user?

Can anyone bulk undo the most recent edit by User:Dispenser tot he commented out list of articles? They were fine edits, but I need the previous state of the article to show up for the public and the information from those edits can be gotten later out of the article history. Abyssal ( talk) 20:30, 4 May 2018 (UTC)

I would recommend asking Dispenser directly to see if they'd be able to mass-rollback the edits in question. If not, ping me and I'll look into it. Richard 0612 22:34, 14 May 2018 (UTC)
Declined Not a good task for a bot. It was only 111 edits not worth getting a bot operator involved. — Dispenser 19:03, 16 June 2018 (UTC)

Tag covers of academic journals and magazines with Template:WikiProject Academic Journals / Template:WikiProject Magazines

The task is "simply"

I believe in both cases, the parameters may be simply the name of the file (e.g. File.svg), or a full [[File/Image:....]] thing.

The task would need to be run daily/weekly. Headbomb { t · c · p · b} 14:31, 24 May 2018 (UTC)

Not to ask the stupid question, but why not just check the talk page of any article using the above infoboxes and place the appropriate talk page tag if necessary? Primefac ( talk) 17:42, 24 May 2018 (UTC)
Because it's the (non-free) image files (i.e. pages in the File namespace) that need to be tagged, not the articles. feminist ( talk) 03:04, 25 May 2018 (UTC)
Oh, right, my apologies. I misread "the associated file" as "the associated talk page" for some reason. Primefac ( talk) 12:49, 25 May 2018 (UTC)
Coding... should hopefully get this done. Dat Guy Talk Contribs 07:16, 29 May 2018 (UTC)
Headbomb want me to also use the class from any wikiprojects in the talk page? Dat Guy Talk Contribs 16:13, 5 June 2018 (UTC)
Never mind, stupid question. Dat Guy Talk Contribs 16:26, 5 June 2018 (UTC)
BRFA filed. Dat Guy Talk Contribs 16:18, 15 June 2018 (UTC)

MeetUp: Women of Library History

Hello There, I would like to send a MeetUp invitation to all active Wikipedians in New Orleans (particularly librarians)--here is our MeetUp page: Wikipedia:Meetup/New_Orleans/WomeninLibraryHistory Please let me know if I need to do anything else--thanks! RachelWex ( talk) 01:16, 19 May 2018 (UTC)

@ RachelWex: Well the first thing you need is to craft the message to be sent and have a list of people/pages to notify. Then several people can sent those notices. Headbomb { t · c · p · b} 01:24, 19 May 2018 (UTC)
@ Headbomb: I can craft the message, but I do not know how to locate the active Wikipedians in New Orleans. Any suggestions? RachelWex ( talk) 01:37, 19 May 2018 (UTC)
I'd suggest lookling at Wikipedia:WikiProject Louisiana/ Wikipedia:WikiProject New Orleans. Headbomb { t · c · p · b} 01:40, 19 May 2018 (UTC)
Not sure why this is at BOTREQ - it seems more of a task for WP:MMS (if you have a list of recipients) or for WP:GN (if you have a geographical area to target). -- Redrose64 🌹 ( talk) 07:37, 19 May 2018 (UTC)

WikiSpaces wikis linked from all Wikipedias

Hello! WikiSpaces is closing on July 2018. It would be helpful having a list of all "subdomain.wikispaces.com" from external-links table in all Wikipedias (and sister projects too, why not). In WikiTeam we will try to preserve all these open-knowledge sites. Thanks. emijrp ( talk) 13:02, 5 May 2018 (UTC)

@ Emijrp: You can get this yourself with a really simple search. @ Cyberpower678: may want to do a botjob for the domain. -- Izno ( talk) 13:55, 5 May 2018 (UTC)
The domain needs archiving first. I've submitted a list of discovered Wikispaces URLs that IABot found during the course of its runs to the maintainers of the Wayback Machine for mass archiving.— CYBERPOWER ( Chat) 18:11, 5 May 2018 (UTC)
@ Cyberpower678: Can you send me a copy of that URLs? Wayback Machine is good (archive HTMLs), but I am coding a bot to export the wikicode from the wikis. emijrp ( talk) 07:39, 6 May 2018 (UTC)
Here you go.— CYBERPOWER ( Chat) 13:32, 6 May 2018 (UTC)

WP:RESTRICT archive bot

I asked for this over a year ago, and one bot op said they would do it... but they never did, so I’m asking again.

WP:RESTRICT is an incredibly bloated list of everyone who is currently sanctioned by arbcom or the community as well as those under “last chance” unblock conditions. In order to reduce the size of these lists and make them easier to navigate, it was decided that any sanction on a user who had been inactive or blocked for more than two years be moved to an archive. The sanction is still valid, just not displayed on the main page anymore, and can be moved back if the user returns to editing.

I did the initial archiving myself 14 months ago. It ranks as pretty much the most tedious thing I have ever done in nearly 11 years of contributing here. I would therefore like to again request that some bot or other be instructed to review listings there once a month or so and remove any fitting the criteria to the archive. If it could move back those that have returned to editing that would be amazing. We seem to be able to auto-generate such data for inactive admins so I am guessing (as someone who admittedly knows nothing at all about programming bots) that this should be fairly straightforward. Thanks for your time. Beeblebrox ( talk) 03:32, 8 June 2018 (UTC)

For now, I gave the page WP:RESTRICT#Active_editing_restrictions a spitshine with collapsible tables. Headbomb { t · c · p · b} 13:46, 8 June 2018 (UTC)
@ Beeblebrox: Looking at it. Will message you on your talk page later. Ronhjones   (Talk) 15:43, 24 June 2018 (UTC)
Coding... Ronhjones   (Talk) 21:15, 26 June 2018 (UTC)
BRFA filed Ronhjones   (Talk) 15:51, 3 July 2018 (UTC)
Y Done Ronhjones   (Talk) 21:48, 3 July 2018 (UTC)

Invalid fair use media

How are we doing on getting a bot together that detects improper use of non-free media (if not the actual removal from the articles)? That is, the use of non-free media on articles for which the file description page lacks a valid WP:FUR specific to that article - I've just found and removed this, 366 days after this image was added lacking a valid FUR for the article, contrary to WP:NFCCP#10c. -- Redrose64 🌹 ( talk) 19:10, 20 April 2018 (UTC)

My bot is approved for this, however, there was far too much whining during the brief time that it was running. —  JJMC89( T· C) 00:03, 21 April 2018 (UTC)
Whining at being told about copyright issues is a tradition almost as old as Wikipedia itself. I can recall many heated debates and people threatening to quit the project... Someguy1221 ( talk) 00:19, 21 April 2018 (UTC)
  • I think that a proposal on a larger forum would not find consensus for this. On the other, producing a list of articles and what images are problematic is still helpful and no one would object to a list. Oiyarbepsy ( talk) 00:07, 21 April 2018 (UTC)
    OK, take my original post and ignore the parenthesis "(if not the actual removal from the articles)". Can we at least do the detection? I don't mind if it's a list, a notice placed on the talk page of the file, or a notice on the talk page of the article. The latter two would need some sort of tracking category. -- Redrose64 🌹 ( talk) 08:19, 21 April 2018 (UTC)

JJMC89, could your bot task be modified to simply log these uses instead of removing them? Oiyarbepsy ( talk) 01:12, 22 April 2018 (UTC)

It is easier to write something new than to modify the other script. The issue will be the time needed to check the 602,549+ files. I'm doing some testing. —  JJMC89( T· C) 05:28, 23 April 2018 (UTC)
Already at about 5,000 violations and only in the G's. —  JJMC89( T· C) 01:23, 24 April 2018 (UTC)
Report available at User:JJMC89 bot/report/NFCC violations (Warning: large page). —  JJMC89( T· C) 01:06, 26 April 2018 (UTC)
Thank you, I'll look at it next time I have a day off work (Saturday?) -- Redrose64 🌹 ( talk) 07:06, 26 April 2018 (UTC)
  • Redrose64 I've been going thru the resulting list (starting at the top), but the large size of the page has left me unable to edit to remove the ones I've completed. If you do work on it, consider starting from the bottom so we don't duplicate each others work. Oiyarbepsy ( talk) 04:20, 28 April 2018 (UTC)
    OK, will do... unfortunately, although today is Saturday, I've been called in to work to cover an absence. Will get round to it ASAP. -- Redrose64 🌹 ( talk) 10:51, 28 April 2018 (UTC)
    I've updated the report to only list 1000 files at a time to make the page size manageable. This is configurable, so let me know if you want a different limit. —  JJMC89( T· C) 07:37, 6 May 2018 (UTC)

Athletics piped links

There is a historical link issue that needs sorting out for the article Sport of athletics.

Would it be possible to amend all piped links to Athletics (sport) (an old title and currently a redirect) to point directly to Sport of athletics? The old title is still ambiguous with Athletics (physical culture), which was the reason for the subsequent move. 99% of the incoming links are valid, as it's a non-natural title choice.

There is also a sub-sport distinction link issue with track and field. I've seen many links in the style [[track and field|athletics]] and [[track and field athletics|athletics]] – these piped links should also be piped to sport of athletics to remove the WP:EASTEREGG aspect. Similarly, links like [[sport of athletics|track and field]] should simply point to track and field. SFB 19:22, 4 May 2018 (UTC)

@ Sillyfolkboy: Has consensus been established for this change? It makes sense on its face, but it's best to ask on the article talk page or at the appropriate WikiProject first. Richard 0612 22:36, 14 May 2018 (UTC)
@ Richard0612: I've added this to the Wikiproject talk page and at Talk:Sport of athletics.
  • [[track and field|athletics]] → [[sport of athletics|athletics]]
  • [[track and field athletics|athletics]] → [[sport of athletics|athletics]]
  • [[sport of athletics|track and field]] → [[track and field]]
  • [[sport of athletics|track and field athletics]] → [[track and field]]
  • [[athletics (sport)|track and field]] → [[track and field]]
  • [[athletics (sport)|track and field athletics]] → [[track and field]]
  • [[athletics (sport)|athletics]] → [[sport of athletics|athletics]]
Better specified the targeted changes too SFB 15:00, 15 May 2018 (UTC)

Replace architecture= parameter value in Infobox religious building post-merge

{{ Infobox Mandir}} and {{ Infobox Hindu temple}}, and maybe a couple of other related templates, have been merged into {{ Infobox religious building}}. As part of the conversion, the value of the |architecture= parameter in the merged templates has been assigned a different meaning.

In the pre-merge templates, |architecture= could take a value like " Dravidian architecture". In {{ Infobox religious building}}, |architecture= takes a value of "yes" to indicate that the infobox should have an Architecture section, and the actual architectural style is placed in |architecture_style=.

In Category:Pages using infobox religious building with unsupported parameters, templates with an unsupported value for |architecture= are listed under the "Α" section heading (note that "Α" is a Greek letter that is listed after "Z" in the category listing).

I am looking for someone who would be willing to run through that section of the tracking category with AWB and replace this:

| architecture = [any value] |

with this:

| architecture = yes | architecture_style = [any value] |

The "[any value]" string should be preserved in each infobox. For example, | architecture = Dravidian architecture | would be changed to | architecture = yes | architecture_style = Dravidian architecture |

Here's a sample edit.

This will have to be a supervised run, since there could be some strange stuff in the parameter values. It looks like there are about 1,000 pages to fix. – Jonesey95 ( talk) 16:04, 12 April 2018 (UTC)

That seems very poor template design. Why isn't the |architecture= automatically set to yes (or something functionally equivalent) when there's a non-null |architecture_style=? Headbomb { t · c · p · b} 01:36, 19 May 2018 (UTC)
That sounds like a good idea, but we need these 800 or so pages fixed first so that the merge can be completed. – Jonesey95 ( talk) 14:26, 19 May 2018 (UTC)
Well it seems to me no bots need to be involved here if I understand the situation correctly. Just treat |architecture= as an alias of |architecture_style=. A bot could replace |architecture= with |architecture_style= if the old parameter is to be deprecated, but it seems good to update the template before the bot, rather than after the bot. Headbomb { t · c · p · b} 14:37, 19 May 2018 (UTC)

Upon review the template, I think your original course of action is better and that my suggestion above isn't adequate for the current functionality. |architecture=yes enables a whole section of the infobox. I still think it'd be good to have the section be displayed on whether its parameters are empty or present, but that's a different discussion entirely. Headbomb { t · c · p · b} 14:41, 19 May 2018 (UTC)

Friendly Search Suggestions

Hi, suggesting that the Template:Friendly search suggestions be added by bot to every stub article talk page to aid the improvement of the articles, thanks Atlantic306 ( talk) 20:53, 6 June 2018 (UTC)

That seems like an incredible waste of time and effort, as well as the patience of the community. Is there a consensus that this should be done? Primefac ( talk) 12:42, 7 June 2018 (UTC)
  • Disagree, its completely uncontroversial and helpful to the community as the template has a large number of search options to improve stub articles and surely thats a very good use of time and effort to improve the Encyclopedia, for something so minor is consensus really needed? thanks Atlantic306 ( talk) 20:30, 7 June 2018 (UTC)
Every stub article talk page - you're talking hundreds of thousands if not millions of stubs (Just checked the cat, which is at 2+ million). InternetArchiveBot and Cyberpower got harassed simply for placing (in my opinion completely relevant) talk page messages on a fraction of that. So yes, I do think you need consensus. Primefac ( talk) 22:08, 7 June 2018 (UTC)
I would oppose that with tooth and nail. Absolutely not suitable for a bot task. Headbomb { t · c · p · b} 03:03, 8 June 2018 (UTC)
    • Well, its certainly too much for a human editor - if it was limited to 300 articles a day it would not cause much disruption, Atlantic306 ( talk) 19:17, 14 June 2018 (UTC)
    • Will start an RFC when I have more time, thanks Atlantic306 ( talk) 20:38, 16 June 2018 (UTC)

Sort Pages Needing Attention by Popularity/daily views

I suggest, for example, that someone sort the items on this page Category:Wikipedia_requested_photographs by page popularity, similar to how this page is sorted: Wikipedia:WikiProject_Computer_science/Popular_pages Instead of clicking through random obscure pages, a sorted table would allow people to prioritize pages that need attention the most. The example bot is found here User:Community_Tech_bot. Turbo pencil ( talk) 00:57, 8 June 2018 (UTC)

@ Turbo pencil: Try Massviews. -- Izno ( talk) 02:23, 8 June 2018 (UTC)
@ Izno: Thanks a lot Izno. Super helpful! — Preceding unsigned comment added by Turbo pencil ( talkcontribs) 04:12, 8 June 2018 (UTC)

Suspicious User Watcher

Watches suspicious users because they might wreak havoc on the wiki. Bot reports back to the operator(s) so they know what the user is doing, just in case the user is committing vandalism, or anything else. Bot finds suspicious users by seeing if they vandalized (or as I mentioned before, anything else) past the 2nd warning. Manual bot. — Preceding unsigned comment added by SandSsandwich ( talkcontribs) 08:19, 9 July 2018 (UTC)

Idea is not well explained.. I have a funny feeling this is a joke request anyway, but whatever. Primefac ( talk) 12:02, 9 July 2018 (UTC)
@ SandSsandwich: that would be a lot of work. But to begin with, how should the bot decide/recognise which users are suspicious? —usernamekiran (talk) 13:01, 11 July 2018 (UTC)
I mean, technically speaking, we already have an anti-vandal bot. Primefac ( talk) 17:13, 11 July 2018 (UTC)
Do you mean cluebot or actual anti-vandal bot? Who we had to retire, cuz he was getting extraordinarily intelligent ( special:diff/83573345). I mean, he had unlimited access to entire wikipedia afterall. Anyways, this idea is not much feasible: a bot generating list of users (obviously after observing contribution history of every non/newly auto confirmed users. Then posting this list somewhere, other humans examining these users. Too much work for nothing. Too many resources would be wasted. Current huggle/cluebot/RCP/watchlist/AIV pattern is better than this. —usernamekiran (talk) 01:25, 12 July 2018 (UTC)
We already have multiple tools that are used to give increasing attention to editors after 1, 2, 3 or 4 warnings. I'm not sure whether an additional process is required, or why 2 warnings is such a significant threshold. Ϣere SpielChequers 09:30, 16 July 2018 (UTC)

Missing big end tags in Books and Bytes newsletters

Around 1300 pages linking to Wikipedia:The Wikipedia Library/Newsletter/October2013 contain <center><big><big><big>'''''[[Wikipedia:The_Wikipedia_Library/Newsletter/October2013|Books and Bytes]]'''''</big> after a misformatted issue 1 of a newsletter. [18] I guess it looked OK before Remex but now it gives an annoying large font on the rest of the page. A few of the pages have been fixed with missing end tags. It happened again in issue 4 (only around 200 cases) linking to Wikipedia:The Wikipedia Library/Newsletter/February2014 with <center><big><big><big>'''''[[Wikipedia:The_Wikipedia_Library/Newsletter/February2014|Books and Bytes]]'''''</big>. [19] None of the other issues have the error. The 200 issue 4 cases could be done with AWB but a bot would be nice for the 1300 issue 1 cases. Many of the issue 4 cases are on pages which also have issue 1 so a bot could fix both at the same time. PrimeHunter ( talk) 00:36, 19 July 2018 (UTC)

I like how this problem magnifies with each newsletter addition; (see User_talk:Geraki#Books_and_Bytes:_The_Wikipedia_Library_Newsletter). Wonder why issue 5 is bigger than issue 4? -- Green C 20:48, 19 July 2018 (UTC)
wikiget -f -w <article name> | awk '{sub(/Books[ ]and[ ]Bytes[ ]*(\]){2}[ ]*(\x27){5}[ ]*[<][ ]*\/[ ]*big[ ]*[>][ ]*$/,"Books and Bytes]]\x27\x27\x27\x27\x27</big></big></big>",$0); print $0}' | wikiget -E <article name> -S "Fix missing </big> tags, per [[Wikipedia:Bot_requests#Missing_big_end_tags_in_Books_and_Bytes_newsletters|discussion]]" -P STDIN
It looks uncontroversial, and 1500 talk pages isn't that much. Will wait a day or so to make sure no one objects. -- Green C 22:45, 19 July 2018 (UTC)
test edit. -- Green C 13:26, 20 July 2018 (UTC)
Small trout, well perhaps a minnow, for not having the bot check that each instance it was "fixing" was actually broken. Special:Diff/850668962/851324159 not only serves no purpose, but is wrong to boot. If you're going to fix a triple tag, it would have been trivial to check for a triple tag in the regex. Storkk ( talk) 14:35, 21 July 2018 (UTC)

Y Done -- Green C 15:21, 21 July 2018 (UTC)

Update Ontario Restructuring Map URLs in Citations

Request replacing existing instances of the URLs for Ontario Restructuring Maps in citations. While the old URLs work, the new maps employ a new URL nomenclature system at the Ministry of Municipal Affairs and Housing (Ontario) and have corrected format errors that make the new versions easier to read. The URLs should be replaced as follows:

Map # Old URL New URL
Map 1 http://www.mah.gov.on.ca/Asset1605.aspx http://www.mah.gov.on.ca/AssetFactory.aspx?did=6572
Map 2 http://www.mah.gov.on.ca/Asset1612.aspx http://www.mah.gov.on.ca/AssetFactory.aspx?did=6573
Map 3 http://www.mah.gov.on.ca/Asset1608.aspx http://www.mah.gov.on.ca/AssetFactory.aspx?did=6574
Map 4 http://www.mah.gov.on.ca/Asset1606.aspx http://www.mah.gov.on.ca/AssetFactory.aspx?did=6575
Map 5 http://www.mah.gov.on.ca/Asset1607.aspx http://www.mah.gov.on.ca/AssetFactory.aspx?did=6576
Map 6 http://www.mah.gov.on.ca/Asset1611.aspx http://www.mah.gov.on.ca/AssetFactory.aspx?did=6577

Thanks. -- papageno ( talk) 23:41, 5 July 2018 (UTC)

Qui1che, are these six URLs the sum total of the links that need to be changed? Primefac ( talk) 23:43, 5 July 2018 (UTC)
That is correct. There are only six maps in the series. -- papageno ( talk) 01:29, 6 July 2018 (UTC)

Y Done Green C 16:38, 20 July 2018 (UTC)

Redirects of OMICS journals

Here's one for Tokenzero ( talk · contribs)

OMICS Publishing Group is an insidious predatory open access publisher, which often deceptively names it journals (e.g. the junk Clinical Infectious Diseases: Open Access vs the legit Clinical Infectious Diseases). To help catch citations to its predatory journals with WP:JCW/TAR and Special:WhatLinksHere, redirects should be created. I have extracted the list of OMICS journals from its website, which I've put at User:Headbomb/OMICS. What should be done is take every of those entries and:

  • If Foobar doesn't exist, create it with
#REDIRECT[[OMICS Publishing Group]]
[[Category:OMICS Publishing Group academic journals]]
{{Confused|text=[[Foobar: Open Access]], published by the OMICS Publishing Group}}

There likely will be some misfires, but I can easily clean them up afterwards. Headbomb { t · c · p · b} 04:38, 29 June 2018 (UTC)

  • Would it be possible to tag the talk pages of these redirects with {{WPJournals}}? -- Randykitty ( talk) 07:46, 29 June 2018 (UTC)
It's not really needed, but that could be done, sure. Headbomb { t · c · p · b} 14:11, 29 June 2018 (UTC)
  • Would it be over-linking to make the "OMICS Publishing Group" in the hatnote into a wiki-link? XOR'easter ( talk) 15:30, 29 June 2018 (UTC)
    • I'd be OK with that, personally. Foobar will point to OMICS Publishing Group so that wouldn't be super useful. But if Foobar is ever created, then the links would point to different places, and that might be useful. Headbomb { t · c · p · b} 15:38, 29 June 2018 (UTC)

OK, Coding... Tokenzero ( talk) 13:34, 30 June 2018 (UTC)

BRFA filed. See also pastebin log of simulated run. Two questions: should the talk page with {{WPJournals}} be created for each redirect, or only the main one (without and/& or abbreviated variants)? Should the redirects be given any rcats? Tokenzero ( talk) 19:28, 21 July 2018 (UTC)
I don't know that any rcats need to be added. I can't think of any worth adding (beyond {{ R from ISO 4}}). As for redirects, @ Randykitty:'s the one that asked for them, so maybe he can eludicate (probably all). I don't really see the point in tagging those redirects myself, but it doesn't do any harm to tag them either. Headbomb { t · c · p · b} 21:08, 21 July 2018 (UTC)
I'd appreciate if any new redirects could be tagged with {{WPJournals}} on their talk pages. This ensure that the journals wikiproject get's notified if they go to RfD, for example (it's rare, but it happens). Thanks. -- Randykitty ( talk) 02:19, 22 July 2018 (UTC)
@ Randykitty: if you're thinking about WP:AALERTS, what's important is that the target of the redirect is tagged. Tagging redirects is only useful if the target itself isn't tagged. Headbomb { t · c · p · b} 18:45, 26 July 2018 (UTC)
Some of the fullnames could be {{ R without mention}} but I don't think it'd be necessary. ~ Amory ( utc) 10:05, 22 July 2018 (UTC)

Y Done The bot finished (2739 redirects and 21 hatnotes) and I did the few outliers by hand. Tokenzero ( talk) 09:55, 26 July 2018 (UTC)

Bot needed for updating introduction section of portals

Many portals lack human editors, and need automated support to avoid going stale.

Most portals have an introduction section with an excerpt from the lead of the root article corresponding to the portal. The content for that section is transcluded from a subpage entitled "Intro".

The problem is that the excerpts are static, and grow outdated over time. Some are many years out of date.

What is needed is a bot to periodically update subscribed portals, by refreshing the excerpts from the corresponding root article leads.

Each excerpt should end similar to this:

...except that the link should go to the corresponding root article, rather than aviation.

There are over 1500 portals, and so it would be quite tedious for a human editor to do this. Some portals are supported, while others aren't updated for years.

Portals are in turmoil, and so, this is needed sooner rather than later.

Of course, they need greater support than this. But, we've got to start somewhere. As the intros are at the tops of the portal pages, it seemed like the best place to start.    — The Transhumanist   07:06, 14 April 2018 (UTC)

Probably better to do section transclusion, i.e, like Portal:Donald Trump/Intro Galobtter ( pingó mió) 07:09, 14 April 2018 (UTC)
I tried various forms of transclusion of the lead, and they all require intrusive coding of the source. Either section markers, or noinclude tags.
I think an excerpt-updater would be better, as there would be zero impact in the source pages in article space. Cumulatively, portals include tens of thousands of excerpts. Injecting code for each of those into article space would be unnecessary clutter, when we could have a bot update the portal subpages instead.    — The Transhumanist   00:38, 15 April 2018 (UTC)
Adding code to the mainspace pages to facilitate translations is a really bad idea. GF editors will just strip the coding. I don't support automatically changing the text of portals to match the ledes I'd rather see them redirected to the matching articles. Short excerpts don't really help the reader especially for broad concept articles whicb is what most portals try to cover. Legacypac ( talk) 04:19, 15 April 2018 (UTC)
Such coding generally has comments included with it so that GF editors don't remove it. As for support/oppose, that's irrelevant, as it is allowable code, like all the other wikicode we use. They added an entire extension to MediaWiki, available on all MediaWiki sites, for transcluding content based on inserted code, and it's already a standard feature on Wikipedia. I think such code makes the source less readable, and think it is best practice to avoid it. As long as there is an alternative, like bot-updated excerpts in portals.
Redirects would be links. Portals with just links are lists, not portals. To go to merely redirects, the portal design itself would need to be changed via a new consensus. Portals display content by transcluding excerpted content, that's their core design element. One of the biggest problems with portals is that there aren't enough editors to refresh the excerpts manually. Hence, the bot request.
Short excerpts are exactly the point of portals. To let editors dip in to the subtopics of a subject, in exactly the same way the main page does that for the entire scope of Wikipedia. While you may not find them useful, I find the main page highly useful and entertaining. I rarely follow the links to the rest of the article, but am glad I read the excerpts. The thing I love about it most is that the content changes daily. If portals were set up like that, I would visit the portals for my favorite subjects often. I might even assign one as my home page. Bots can accomplish this. But rather than tackling the whole thing at once, focusing on a bot for updating the portion at the topmost part of the page, the intro, seems like a good place to start.    — The Transhumanist   05:49, 15 April 2018 (UTC)
For a way to avoid that, see this revision I did on Portal:Water. Only problem is that it transcludes the entire page which is pretty heavy..then uses regex to find the first section... But yeah, I do agree with you - I don't see how the excerpts help much. Galobtter ( pingó mió) 06:04, 15 April 2018 (UTC)
Excerpts are the current design standard of portals. Changing the practice of using excerpts would be a change in the design standard of portals, which is outside the scope of this venue. Bots are for automating routine and tedious tasks. The method for updating excerpts has been for the most part to do it manually. A bot is needed to help with this onerous chore.    — The Transhumanist   06:09, 15 April 2018 (UTC)
Excerpts not helping much is part of the my general position that portals don't help much in general haha Galobtter ( pingó mió) 06:26, 15 April 2018 (UTC)
Based on the replies, the strongest exception to portals was that they are out of date and unmaintained. Both of which problems can be solved with bots. So, I've come to the experts. I'm sure they can find an automatable solution.    — The Transhumanist   23:21, 15 April 2018 (UTC)
Excerpts are part of the problem, not a solution. Portals are a failed idea amd no amount of bot mucking around is going to fix them. Legacypac ( talk) 18:08, 16 April 2018 (UTC)
Please keep in mind when transcluding anything from mainspace, fair use media is currently restricted to "articles" and should not be transcluded to Portal space. — xaosflux Talk 19:15, 17 April 2018 (UTC)
You mean, like pictures of book covers, logos, and the like?    — The Transhumanist   04:17, 18 April 2018 (UTC)
I tend to think both bot updating and transclusion of content from article space are problematic approaches. The stated problem that this request is trying to fix is that portal intros become stale over time because no one is paying attention. If some automated process is adopted, portal pages could well become broken and stay that way for long periods of time because no one is paying attention. I'd take stale over broken any day. Of course, the risk of such breakage depends on how the automation is done, but isn't a better solution to simply avoid potentially dated language/information in portal intros, or to mark such stuff with, say, {{ as of}} or {{ update after}}? This would require an initial round of assessments to add such templates (/fix problematic wording, etc.), but it looks like with all the attention portals are getting there will be a concerted effort to review portals once the current RFC fails is closed. - dcljr ( talk) 22:38, 19 April 2018 (UTC)
To reduce the "brokenness rate", the bot could first add {{ historical}} to all the portals which have less than a certain threshold of edits in a certain period, then after a week perform the proposed edit to the existing "intro" section/subpage of the portals which are not marked historical. -- Nemo 12:15, 16 May 2018 (UTC)
  • At this point the vast majority of portals have been updated with a variety of templates which transclude content from mainspace directly. This was a good idea at the time, but does not seem to have been the prefered option. JLJ001 ( talk) 15:25, 29 May 2018 (UTC)

Em dashes

I find myself removing spaces around em dashes frequently. Per the MOS, "An em dash is always unspaced (that is, without a space on either side)".

Example of occurrence

Since this is such a black and white issue, a bot to automatically clean this up as it happens would be useful. Kees08 (Talk) 05:49, 31 May 2018 (UTC)

This is a context-sensitive editing task, since some spaced em dashes should be converted to en dashes, not to unspaced em dashes. Others, such as those in file names, should be left alone. – Jonesey95 ( talk) 12:46, 31 May 2018 (UTC)
True there are cases it matters. Cases such as file names will require exceptions that can be written in the code. As for em dashes that should be en dashes, since and dashes can be spaced or unspaced, switching to spaced will not hury anything, unless there is a specific context I am missing. Kees08 (Talk) 03:40, 1 June 2018 (UTC)

Association footballers not categorized by position

Would it be possible to fill Category:Association footballers not categorized by position with the intersection of:

  • AND all players not in the following 15 football position categories:

Some members of WP:FOOTY have been working on adding missing positions, this would be much appreciated in order see all players which are missing a position category. Thanks, S.A. Julio ( talk) 04:36, 14 July 2018 (UTC)

@ S.A. Julio: Is this task expected to be a "one-off" run, or do you see it being a regular task? A one-off could possibly be several runs on AWB. The number of players involved could also be be quite big - do you know the total in the 9 categories? - I got over 9000 for the first one. Ronhjones   (Talk) 15:57, 14 July 2018 (UTC)
@ Ronhjones: I think probably a one-off. I can periodically check if any new articles are missing positions in the future, but currently there are far too many which would need to be added to Category:Association footballers not categorized by position. I've currently counted ~160,000 articles (some of which are likely not players, however), with two categories still running (though most of these are likely duplicates of what I've already counted). There are just over 113,000 players already categorised by position. S.A. Julio ( talk) 16:58, 14 July 2018 (UTC)
Coding... @ S.A. Julio: That's quite a few pages - too many for a semi-automated run(s). I think I'll skip the AWB option. I'll probably do it, so it can be re-run, say quarterly. I think I'll do it in stages - make a local list of players, and then process that file one line at a time. I assume if I combine all those categories, then we will end up with duplicates, which will need to be removed? Ronhjones   (Talk) 18:59, 14 July 2018 (UTC)
@ Ronhjones: I realised a simpler option might just to use only Category:Association football players by nationality and Category:Women's association football players, and look to the players one level down. Theoretically, every player should be in the top level category of their nationality (even if they fall under a subcategory as well). In reality I'm sure there are some pages which are improperly categorised, though likely a very low amount. And most articles should be players (only a few outliers such as Footballer of the Year in Germany and List of naturalised Spanish international football players). S.A. Julio ( talk) 19:26, 14 July 2018 (UTC)
@ S.A. Julio: If the bot was going to run daily, then maybe a change might be beneficial, but not much as an Api call can only get 5000 page titles per call, so it needs multiple calls anyway. For an occasional running bot, then it might be better to ensure you get them all. A test for "Association football player categories" above has given me 162,233 pages in 14642 categories - does that sound right? How can we eliminate the non-player articles? I can see a few answers (you might know better)...
  1. Do nothing. Just ignore the non-player pages that gets added to the category
  2. Find the pages and add a "Nobots" template to the page to deny RonBot access
  3. Find the pages and put them in a category, say, "Category:Association football support page", we can then add that category to the exclusion list. I like this one, it means that if someone creates a new page and it gets added by the bot to the category, then adding the new cat will ensure it gets removed on the next run.
  4. I did think of a search ' insource: "Infobox football biography" ' - that gives 153,074 pages, can one be sure that every page has that code - looks like a big difference from the pages I found? If so we could have used that to get the first list! :-)
Other than that, after seeing how well the code works, the overall plan for processing will be...
  1. Get all in "Association football position categories" and keep as a list in memory
  2. Get "Category:Association footballers not categorized by position", and check that they all should be there (i.e. no match with above list) - if there is a match then remove the cat from that page, if not add to the list, so we don't have to try to add it again.
  3. Get all in "Association football player categories", and only keep only those that don't match the other list
  4. From the resulting "small" list, edit all the pages to add the required category.
Ronhjones   (Talk) 15:24, 15 July 2018 (UTC)
@ Ronhjones: I think option #3 sounds best, this could also be useful for other operations (like finding players not categorised by nationality). With option #4 the issue is that a small percentage of these articles are missing infoboxes (I even started Category:German footballers needing infoboxes a while back for myself to work on). I've started to gather a list of articles which should be excluded, I could begin adding them to a category, would it go on the bottom of the page or on the talk page (similar to categories like Low-importance Germany articles)? And sounds like a good plan, thanks for the help! S.A. Julio ( talk) 20:02, 15 July 2018 (UTC)
@ S.A. Julio:It will be easier to code with it on the article page - I use the same call over and over again, just keep changing the cat name ( User:RonBot/7/Source1) - which returns me the page names. I'll crack on with the plan. Let me know what you call the new category. I'll probably do a trial for the second set, and see how many we get, tonight. The comparison of the two numbers will give you the true number of pages that will end up in Category:Association footballers not categorized by position. Ronhjones   (Talk) 20:22, 15 July 2018 (UTC)
17 cats found, 113279 pages contained. That means 162233-113279=48954 pages to examine. Ronhjones   (Talk) 21:02, 15 July 2018 (UTC)
Dummy run comes up with a similar figure. I've put the data in User:Ronhjones/Sandbox5 - NB: I used Excel to sort it, so it may have corrupted some accented characters. Ronhjones   (Talk) 02:35, 16 July 2018 (UTC)
@ Ronhjones: Alright, I've adjusted some categories which shouldn't have been under players, the next run should have a few less articles. I created Category:Association football player support pages, and now have a list at User:S.A. Julio/sandbox of pages which need to be added. Could a bot categorise these? S.A. Julio ( talk) 04:10, 16 July 2018 (UTC)
I can do a semi-auto AWB run for that list, later Ronhjones   (Talk) 12:38, 16 July 2018 (UTC)
930 pages done - one was already there, and I did not do the two categories - I will only be adding the category to pages in main-space, so cats, templates, drafts, userpages, etc. don't need to go into Category:Association football player support pages. You can have them there if you want - it's not an issue, just less to add. Ronhjones   (Talk) 15:56, 16 July 2018 (UTC)

@ Ronhjones: Alright, thanks! I think the issue is that there were two articles redirecting to the category mainspace. List of Eastleigh F.C. players was inadvertently categorised (missing a colon), and List of Australia national association football team players should have redirected to an already existing article. Now fixed. S.A. Julio ( talk) 16:38, 16 July 2018 (UTC)

@ S.A. Julio: OK, I'll try another dummy run later, with the new category in the "exclusion" list. We should end up with about 1000 less matches! I'll sort the list in python before exporting, then put in my sandbox again. I'll also do a couple of tests of the "add cat" and "remove cat" subroutines in user-space, just to check the code works OK. Then once you are happy that you have put all the "odd" pages in Category:Association football player support pages, then it will be time to apply for bot approval. Ronhjones   (Talk) 17:54, 16 July 2018 (UTC)
Please also see User_talk:Ronhjones#Category:Association_football_player_support_pages. Do have a re-think about name, and let me know. maybe "Association football non-biographical" Ronhjones   (Talk) 19:55, 16 July 2018 (UTC)
@ Ronhjones: Sounds good. I've been going through some category inconsistencies, one of which is stub sorting. For example, Category:English women's football biography stubs is categorised under Category:English women's footballers, yet "football biographies" is not necessarily limited to players (can include managers, referees, officials/administrators etc.). I'm working on fixing the category structure, hopefully should be finished relatively soon. Regarding the name, what about Category:Association football player non-biographical articles? Or another title? S.A. Julio ( talk) 07:12, 17 July 2018 (UTC)
@ S.A. Julio:Tweaked code to only retrieve articles and categories. Now there are 161259 articles. 47013 are not matching - User:Ronhjones/Sandbox5 (not excel sorted this time, names look OK). I agree on cat name - will add request on cat move later (and let bot move them) Ronhjones   (Talk) 12:47, 17 July 2018 (UTC)
Now listed Wikipedia:Categories_for_discussion#Current_nominations. As soon as the move is finished, I will file the BRFA. Ronhjones   (Talk) 15:46, 17 July 2018 (UTC)
@ Ronhjones: Alright, perfect. The other day I added the position category for ~1700 articles, so the list should be slightly shorter now. I'll now finish working on fixing the stub category structure, hopefully there will be less results for the next run (like A. H. Albut, who was only a manager). S.A. Julio ( talk) 17:01, 18 July 2018 (UTC)
@ S.A. Julio: Less is better :-) I will add that my bot task 3, just adds a template to pages, and usually manages 8-10 pages a minute, so expect a quite long run when we get approval - 8 pages a minute is 4 days for 45000 articles - but of course only for the first run, subsequent runs will be much faster as there will be a lot less pages to tend to (plus the basic overhead of getting the page lists - 2h) Ronhjones   (Talk) 17:38, 18 July 2018 (UTC)

BRFA filed Ronhjones   (Talk) 19:53, 19 July 2018 (UTC)

Y Done Ronhjones   (Talk) 00:50, 2 August 2018 (UTC)

Someone to take over User:HasteurBot

Hasteur ( talk · contribs) has retired, it would be good if someone could take over the bot, that would be nice

The code can be found at is at https://github.com/hasteur/g13bot_tools_new, with hasteur stipulating "All I ask is that the credit for the work remains."

@ Firefly: Hasteur posted this on your talk page, any interest in taking over? Headbomb { t · c · p · b} 10:40, 4 June 2018 (UTC)

@ Headbomb: Yep, I'm happy to do this. Will look at it and submit a BRFA tonight (hopefully!) ƒirefly ( t · c · who? ) 13:27, 4 June 2018 (UTC)

Orphan tags

Hi, could you please give a bot an extra task of removing orphan tags from articles that have at least one incoming link from mainspace articles, lists and index pages but not disambig pages or redirects as per WP:Orphan. The category is Category:All orphaned articles but exclude Category:Orphaned articles from February 2009 as an admin is checking those. A rough estimate is there are at least 10,000 misplaced tags, thanks Atlantic306 ( talk) 17:07, 2 June 2018 (UTC)

JL-Bot already removes the orphan tag, but based on the original discussion it requires 5 or more links (ignoring type). This was done as checking the type of link is not always straightforward and adds processing time. The 5 links was a community agreed compromise. The only exception is dab pages which should never be tagged as orphans (it will de-tag those regardless of number of links). That task runs ever week or two. If someone wants to build a fancier checking, let me know and I will discontinue mine. -- JLaTondre ( talk) 22:44, 2 June 2018 (UTC)
This botreq started on my talk page, I suggested posting here first, glad as I didn't know about JL-Bot. I wouldn't know how to improve on JL-Bot other than by using API:Backlinks but it's a wash in terms of functionality. BTW I wrote a command-line utility wikiget (github) that can be hooked through a system call eg. "wikiget -b Ocean -t t" will output all transcluded backlinks for Ocean. It handles all the paging and various API:Backlink options. -- Green C 23:15, 2 June 2018 (UTC)
Atlantic306, how is this different from the request you made a month ago at Wikipedia:AutoWikiBrowser/Tasks#AWB Request 2, which was decidedly a non-starter? Pinging the other contributors from that discussion, Premeditated Chaos & Sadads. If a large # of orphans have already been manually checked and all that remains for that group is the busywork of removing the tag, then that might be ok if others agree, but we need to see a link to such a discussion.
JLaTondre, do you have a link to the 5+ link discussion?   ~  Tom.Reding ( talkdgaf)  12:03, 4 June 2018 (UTC)
Hi, this is different to the AWB proposal as that was for the early category of 9000 articles whereas this proposal leaves that category out as it is being manually checked and referrs to all of the remaining orphan categories. As above, a bot is already removing tags but I think this needs to be set at one valid link as per WP:Orphan as the JPL bot approval was back in 2008 and now consensus has changed that one valid link is sufficient for the tag removal, thanks Atlantic306 ( talk) 12:11, 4 June 2018 (UTC)
Discussion is shown here and here. GreenC ( talk · contribs) has said his bot can differentiate the links so perhaps his bot could take over the task, thanks Atlantic306 ( talk) 12:25, 4 June 2018 (UTC)
WP:ORPHAN says "Although a single, relevant incoming link is sufficient to remove the tag, three or more is ideal...", I would object to a bot removing orphan tags on articles with fewer than 3 links on this basis alone. Headbomb { t · c · p · b} 12:25, 4 June 2018 (UTC)
The conclusions reached at WP:AWB/Tasks#AWB Request 2 apply to all most orphans, from 2009 up until some arbitrary time in the near-past.   ~  Tom.Reding ( talkdgaf)  12:30, 4 June 2018 (UTC)
( edit conflict) I still think automated removal of orphan tags in general is a bad idea. To me, going through the orphan categories isn't just about making sure something else points there. Orphan-tagged articles often suffer other issues, so the tag is kind of a heads-up that the article needs to be looked at. It's like a sneeze. It could be nothing, but it could mean you have allergies, or a cold.
Same thing with an orphan-tagged article. It could be a great but under-loved topic. But maybe it's a duplicate article or sub-topic and can be merge/redirected. Maybe it's a copyvio that flew under the radar. Maybe it's not actually notable and should be deleted. Maybe the title is wrong and it's orphaned because all the links point to the right (redlinked) title. Maybe the incoming links are incorrect and are trying to point to something else, and need to be changed.
If you just strip the tags without checking the article, you're getting rid of the symptom without checking to see if there's an underlying illness, which essentially reduces the value of the tag in the first place. ♠ PMC(talk) 12:55, 4 June 2018 (UTC)
Echoing this from PMC. We don't suffer from having a neverending backlog, and the current bots (per discussion above) and AWB minor fixes already remove templates from pages that are already in the clear. I would much rather that we take the time to go through and find merges or deletes, get these pages added to WikiProjects, and generally do other minor cleanup that happens when human eyes are on the pages. Anything that has lived with few or no links for 9+ years, suggests to me that it hasn't been integrated into the Wiki adequately. If we just remove the tag, we remove the likilihood of it's discovery again. Sadads ( talk) 14:35, 4 June 2018 (UTC) 

Popular pages - indexing and WikiProject banners

Could someone help with doing the following to the pages in Category:Lists of popular pages by WikiProject?:

  • add the name of the WikiProject as a sort key to Category:Lists of popular pages by WikiProject
  • add the corresponding WikiProject category, with sort key "Popular Pages"
  • create a talk page (if it doesn't exist) and add the corresponding WikiProject banner

Oornery ( talk) 05:14, 6 June 2018 (UTC)

WikiProject Athletics tagging

It's been four years since this project last had a tagging run and I'm looking to get Article Alerts to cover the many relevant articles that have not been tagged since. Anyone interested in doing a tagging run of the articles and categories under Category:Sport of athletics? SFB 19:03, 4 May 2018 (UTC)

  Working on this tagging part over the next few days.   ~  Tom.Reding ( talkdgaf)  22:11, 4 May 2018 (UTC)
Sillyfolkboy, 2 questions:
  1. there are ~6000 ~5600 pages to tag. I will propagate the |class= of other WikiProjects, if available. Should I leave |importance= blank, or use |importance=Low? The idea being that if importance were > "Low", it probably would have been tagged as such by now. I can also do this for articles less than a certain size instead.
  2. I'll leave pages alone (for now) which do not have any WikiProject tagged. To make classification of the resulting unclassified pages faster, I can apply |class=Stub to all pages less than 1000, 2000, 3000, etc. bytes. Please take a look at that list of ~6000 ~5600 and let me know what threshold below which to tag pages as stubs (if at all).
WP Athletics notified for input as well.   ~  Tom.Reding ( talkdgaf)  23:47, 4 May 2018 (UTC)
@ Tom.Reding: I would recommend propagation of other project's class if available, or mark as stub if under 2000 bytes. You can place importance as low by default. The project is quite well developed now, so the vast majority of important content is already tagged. These will mainly be recent articles on lower level athletes and events.
Category:Triathlon, Category:Duathlon, Category:Foot orienteers‎, Category:Athletics in ancient Greece and Category:Boston Marathon bombing need to be manually excluded. Thanks SFB 01:41, 5 May 2018 (UTC)
PetScan link updated to exclude those cats, ~400 removed. Won't start on this for a few days for possible comments.   ~  Tom.Reding ( talkdgaf)  03:15, 5 May 2018 (UTC)
Orienteers were not excluded on the previous run, and consequently a lot of orienteering articles are now tagged as being within the scope of WikiProject Athletics, even though (with some exceptions) they're actually not. Would it be possible to untag them by bot? Sideways713 ( talk) 16:20, 5 May 2018 (UTC)
Sideways713, pages < 2000 b mostly done. Will leave |class= blank for those >= 2000 b. Let me know if there's any desired change to the above guidance. Can do the untagging after.   ~  Tom.Reding ( talkdgaf)  13:16, 18 May 2018 (UTC)
  Done.
Re: Orienteering+Athletics, this scan shows 459 which are tagged as both. However, just because someone is in Orienteering doesn't mean they shouldn't be in Athletics, only that they're a candidate for removal. So it's probably best to do this manually, unless there's some rigorous exclusion criteria available?   ~  Tom.Reding ( talkdgaf)  17:28, 19 May 2018 (UTC)
If you exclude those in subcategories of Track and field athletes (at any level), and those in subcategories of Sports clubs at level 3 or lower, and possibly those in subcategories of Mountain running (I'm not entirely sure about this one - how does @ Sillyfolkboy feel?), and untag the rest, that should be good enough. (That's only a few dozen exclusions.) There will probably still be some false removals - orienteers who dabble in running enough they could be marked as runners on wiki, but aren't yet - but it's a lot less effort to happen upon those later and tag them manually than it is to untag the other 400 pages manually, and the false removals should all be of athletes whose main claim to fame is orienteering and whose articles will be more naturally developed by members of that wikiproject. Sideways713 ( talk) 22:43, 19 May 2018 (UTC)
I'm good with the above. There isn't actually a whole lot of crossover between orienteering and elite long-distance running, probably because the later is much better paying than the former so it isn't something, say, a marathon specialist would consider normally. SFB 23:43, 19 May 2018 (UTC)
Sideways713 & SFB: here is the PetScan (434 results) for these doubly-tagged pages with Category:Track and field athletes & Category:Mountain running, both fully recursed, removed. I've tried removing Category:Sports clubs at level 3 or lower via PetScan and locally via AWB's variably-recursive category utility, but both timeout at depths of 5 and greater. The tree grows very quickly, with ~13,000 unique mainspace pages at a depth of 2, to ~311,000 at a depth of 4. D2's pages subtracted from D4's pages gives a ~298,000 pool of pages to try to remove from the 434, but only 2 pages are removed ( Brit Volden & Øyvin Thon), leaving 432, so this isn't a practical approach.   ~  Tom.Reding ( talkdgaf)  14:31, 20 May 2018 (UTC)
@ Tom.Reding: On that basis, I would leave this to a manual task. Given the small article base, there aren't any major downsides to the accidental inclusion in scope, especially as WikiProject Orienteering seems inactive at the moment. SFB 14:47, 20 May 2018 (UTC)
I meant exclude levels 1, 2 and 3 but don't exclude 4 and up, rather than the opposite. Sorry if that was unclear. Sideways713 ( talk) 16:24, 20 May 2018 (UTC)
Right, but it's a distinction without a real difference. It would return a subset of the ~300k I found (since I lumped level 3 into those 300k instead of excluding them), so I decided to not be any more precise, since there's no need - the result would be either the same (i.e. I'd still find those same 2 to be removed from the 434) or worse (I'd find 0 or 1 of those same 2); basically a way for programmers to rationalize exerting least effort...   ~  Tom.Reding ( talkdgaf)  19:45, 20 May 2018 (UTC)
No, what I meant is this, which gives 420 results. Sorry if there's a communication problem, Sideways713 ( talk) 21:53, 20 May 2018 (UTC)
Sideways713, sorry for the delay. Just to be sure: those 420 results need to have {{ WikiProject Athletics}} removed?   ~  Tom.Reding ( talkdgaf)  14:25, 5 June 2018 (UTC)
SFB, can you confirm instead?   ~  Tom.Reding ( talkdgaf)  21:11, 6 June 2018 (UTC)
@ Tom.Reding: above link is down so I can't see the results, but I still think this action is better done manually, given the cross-over in the sports (i.e. just because an orienteer isn't currently in a track athlete category doesn't necessarily mean the athlete has not competed in track). Happy for you to proceed on your rationalized approach per above. SFB 22:27, 6 June 2018 (UTC)

Bot to correct common ", ". and "? typos

A very common typo I see all the time is when end quotation marks are placed before a comma (like this: ",) or a period when at the end of a sentence (like this: ".), etc. The rule is that commas, periods, and question marks are placed inside quotation marks, like this: ."/,"/?"

I see these mistakes everywhere I go, and it seems that no one bothers to correct them. Perhaps there should be a bot that swaps the quotation marks and punctuation marks to the position that they should be in. Radioactive Pixie Dust ( talk) 05:36, 26 July 2018 (UTC)

Are they really "common typo[s]"? After all, the quotation mark should only include punctuation if it is part of the quoted text and how will a bot be able to determine this? Our Manual of Style recommends using the logical quotation style, see MOS:QUOTEMARKS and MOS:LQ for more details. Regards So Why 07:21, 26 July 2018 (UTC)
@ Radioactive Pixie Dust: There are many things like this that are taught in schools as "rules" but are in reality merely recommendations preferred by the given national variety or house style that do not have a strong basis in actual usage of the language by its users (some of which are tallied here). Wikipedia, being an international project which strives for a neutral point of view and weighs heavily on community consensus, has built its own series of recommendations on style which tries to be objective and is always subject to scrutiny. Nardog ( talk) 08:20, 26 July 2018 (UTC)
  • Declined Not a good task for a bot. Per WP:CONTEXTBOT. See also MOS:LQ - if the punctuation is part of the material that is being quoted, it goes inside the quote marks; if it is not, it goes outside. -- Redrose64 🌹 ( talk) 11:51, 26 July 2018 (UTC)

Replace WikiProject History of Photography templates with WikiProject Photography

With these templates successfully approved for merging, I am request a bot that will replace any WikiProject History of Photography templates with the WikiProject Photography template with the "history=yes" parameter.

If a page already has the WikiProject Photography template, the "history=yes" parameter should be added (if it's not there already). If the page already has the WikiProject Photography template with the "history=yes" parameter, then the WikiProject History of Photography template simply needs to be removed.

If there are differing quality ratings between these two templates, the rating given by the WikiProject Photography template should be applied. If WikiProject Photography template has not given a quality rating and the WikiProject History of Photography template has, the WikiProject Photography template should inherit the WikiProject History of Photography's quality rating. Qono ( talk) 23:07, 24 July 2018 (UTC)

My bot has approval for this sort of task; I'll handle it. Primefac ( talk) 23:52, 24 July 2018 (UTC)

Replace links to AP news hosted by Google with AP website links

Can anyone create a bot to replace links matching the regex https://www.google.com/hostednews/ap/.*\?docID=([0-9a-f]{32}) with https://apnews.com/$1. There are about 2800 links to AP news hosted by Google and all the links are dead. I estimate about 20–30% of these links have the docId tag and can be rewritten to link to AP's website. This doesn't always work, but it works often enough to make this worth the effort. You'll need to download the page first and check for absence of the string "The page you’re looking for doesn’t exist. Try searching for a topic." and the presence of a non-empty div of articleBody class. You'll also have to flip the deadurl tag to no after replacement and avoid references that have already been archived. Some examples:

Gazoth ( talk) 13:09, 8 June 2018 (UTC)

Bot to tag all remaining disambiguation links.

We developed a consensus a while back to tag all remaining disambiguation links in the project with a {{ dn}} tag. In order to avoid excessive tagging, the idea is to generate a list of all links, let it sit for a few weeks, then recheck it and tag everything that has still not been fixed after that interval. Any takers? bd2412 T 22:20, 17 April 2018 (UTC)

@ BD2412: - I think I can write this one. Basically, we're looking for links to pages in Category:Disambiguation pages inside articles. Generate a list based on that, and after a couple weeks - rerun with tagging enabled for the links in that list. The query to find those pages should be: quarry:query/26624 if I understood right (Quarry's taking a long time to run it - DBeaver came back with 20,000+ hits) SQL Query me! 21:56, 23 April 2018 (UTC)
There should be fewer than 8,200 total disambiguation links at this time, per The Daily Disambig; of those, at least 2,100 should already be tagged (you can exclude pages that already have such a tag on them), although many of the articles with tags are likely to include multiple tagged links, so I would think that the task should involve no more than 6,000 links to be tagged. bd2412 T 22:29, 23 April 2018 (UTC)
Good point, I'll rewrite the query to exclude {{ dn}}. SQL Query me! 22:32, 23 April 2018 (UTC)
@ SQL: Hi, just following up on this. Cheers! bd2412 T 22:51, 9 June 2018 (UTC)

Change external links for Colombian municipalities

Recently the links to the official websites of the municipalities of Colombia have been changed, e.g. Zipaquirá (old, dead) to Zipaquirá (new, live). The only difference I saw with checking some of the links is the removal of "index.shtml". I changed it manually for Zipaquirá, but there are 1200+ municipalities to be done, so best done by a bot. Thanks in advance! Tisquesusa ( talk) 17:04, 7 August 2018 (UTC)

Y Done. I have updated the links using AWB. Rcsprinter123 (discourse) 15:17, 8 August 2018 (UTC)

Abbreviations and machine generated typos at uz.wikipedia

Can someone here please create a bot to take care of the abundance of encyclopedia abbreviations uzbek wikipedia? While some abbreviations are rather easy to figure out, other abbreviations may be challenging for unfamiliar or inexperienced readers. This task is incredibly tedious to do manually. Here are some of the most common abbreviations (or errors) and their needed replacements. Please take note of common Uzbek suffixes such as -lar, -i, -si, -da, ning, etc

  1. yanv. →‎ yanvar (and yanv.da or yanv. da →‎ yanvarda)
  2. fev. →‎ fevral (and fev.da or fev. da →‎ fevralda)
  3. apr. →‎ aprel (and apr.da or apr. da →‎ aprelda)
  4. avg. →‎ avgust (and avg.da or avg. da →‎ avgustda; Commonly miswritten by bot as "avg .")
  5. sent. →‎ sentabr (and sent.da or sent. da →‎ sentabrda)
  6. okt. →‎ oktabr (and okt.da or okt. da →‎ oktabrda)
  7. noyab. →‎ noyabr (and noyab.da or noyab. da →‎ noyabrda)
  8. dek. →‎ dekabr (and dek.da or dek. da →‎ dekabrda)
  9. -a. →‎ -asr (and -a.lar →‎ -asrlar)
  10. b-n →‎ bilan (only if "b-n" alone, NOT inside another word)
  11. va b. →‎ va boshqalar
  12. d-r → doktor
  13. f-k → fabrika (may be followed by several suffixes, but do not change if another letter directly in front of "f")
  14. f-t →‎ fakultet (may be followed by several suffixes, but do not change if another letter directly in front of "f")
  15. hoz. →‎ hozirgi
  16. FA →‎ fanlar akademiyasi (Only if "FA" by itself and capitalized)
  17. i.ch. →‎ ishlab chiqarish
  18. in-t →‎ institut (may be followed by several suffixes, but do not change if another letter directly in front of "in-t")
  19. i.t. →‎ ilmiy tadqiqot (may be followed by several suffixes, but do not change if another letter directly in front of "i.t." or if both are capitalized. Often written as i. t.)
  20. k-z → kolxoz (may be followed by several suffixes, but do not change if another letter directly in front of "k")
  21. kVt-soat -> Kilovatt-soat
  22. k-t → kombinat (may be followed by several suffixes, but do not change if another letter directly in front of "k")
  23. lab. → laboratoriya (may be followed by several suffixes, but do not change if another letter directly in front of "l")
  24. mayd. →‎ maydon ("M" will probably be capitalized and should remain so)
  25. prof. →‎ professor (may be followed by several suffixes, but do not change if another letter directly in front of "p")
  26. qad. →‎ qadimgi
  27. q.x. →‎ qishloq xoʻjaligi (Commonly miswritten by bot as "q. x.")
  28. r-n → rayon (may be followed by several suffixes, but do not change if another letter directly in front of "r")
  29. rej. → rejissyor (may be followed by several suffixes, but do not change if another letter directly in front of "r")
  30. RF → Rossiya Federatsiyasi (Only if "RF" by itself and capitalized)
  31. radiost-ya →‎ radiostansiya
  32. telest-ya →‎ telestansiya
  33. sh. →‎ shahri (only if it is "sh." alone; if written as "sh.lar", then it should be "shaharlar")
  34. s-z → sovxoz (may be followed by several suffixes, but do not change if another letter directly in front of "s")
  35. taxm. →‎ taxminan
  36. t-ra →‎ temperatura (may be followed by several suffixes, but do not change if another letter directly in front of "t", or if the "a" is followed by an "n", ie, transport)
  37. t.y. →‎ temir yoʻl (may be followed by several suffixes, but do not change if another letter directly in front of "t". DON'T change the "y." to "yil")
  38. un-t →‎ universitet (may be followed by several suffixes, but do not change if another letter directly in front of "u")
  39. y.lar →‎ yillar (Do not allow bot to perform function if the article title starts with "Y")
  40. y.da →‎ yilda (Do not allow bot to perform function if the article title starts with "Y")
  41. ya.o. →‎ yarim orol (may be followed by several suffixes, but do not change if another letter directly in front of "y". Abbreviation sometimes written as "ya. o.")
  42. z-d →‎ zavod (may be followed by several suffixes, but do not change if another letter directly in front of "z")
  43. ` → ʻ (this is the correct punctuation; do not change if in a template, infobox, or category; only change if in the main text of a page)
  44. 1-jahon urushi →‎ Birinchi jahon urushi (Sometimes comes up as 1jahon urushi ; may be followed by several suffixes)
  45. 2-jahon urushi →‎ Ikkinchi jahon urushi (Sometimes comes up as 2jahon urushi ; may be followed by several suffixes)
  46. Axolisi →‎ Aholisi (Bot error due to similarity of the cyrillic letters)
  47. jan.-sharqida →‎ janubi-sharqida
  48. shim.-sharqida →‎ shimoli-sharqida
  49. jan.gʻarbida →‎ janubi-gʻarbida
  50. shim.gʻarbida →‎ shimoli-gʻarbida
  51. jan.dagi →‎ janubidagi
  52. shim.dagi →‎ shimolidagi
  53. jan.da →‎ janubida
  54. shim.da →‎ shimolida
  55. jan.dan →‎ janubidan
  56. shim.dan →‎ shimolidan

and typos:

  1. axoliyey →‎ aholisi
  2. poytahti →‎ poytaxti
  3. shaxri →‎ shahri
  4. xalk →‎ xalq
  5. yil da →‎ yilda
  6. yil lar →‎ yillar
  7. katnashchisi →‎ qatnashchisi
  8. suyuklanish → suyuqlanish
  9. Kozogʻiston → Qozogʻiston
  10. Koraqalpogʻiston → Qoraqalpogʻiston

Please try not to change capitalization in the process.

If a bot could take care of these things, it would be absolutely fantastic. Thank you so much to anyone who can make a bot to take care of these. Thank you. If you have any questions about suffixes or anything like that, please don't hesitate to ping me.-- PlanespotterA320 ( talk) 23:50, 23 July 2018 (UTC)

I would think you need a person who is familiar with the language. It seems like a good task for AWB. I see uz:Wikipédia:AutoWikiBrowser/CheckPage is not created, but I would think that issue can be overcome. uz is a site option in the AWB menus, but won't work without the user name in the appropriate Check Page. Then it's just a matter in running a wikisearch, in time with a suitable "search and replace" - then each proposed change can be seen before you click "Save". I tried https://uz.wikipedia.org/?search=insource%3A+yanv.&title=Maxsus:Search&profile=advanced&fulltext=1&ns0=1 - looking for "yanv." in the source of any article and got 464 hits. Ronhjones   (Talk) 21:04, 28 July 2018 (UTC)
I have familiarity with the abbreviations and commons errors, and over 10,000 edits to uz.wikipedia. I've tried to download AWB but it won't work on my computer, and I can't run JWB on uz. wikipedia. We used to have a bot (Foydalanuvchi:Ximik1991Bot), but user Ximik1991 left the wiki a while ago. I can compile a complete list of words with abbreviations, including suffixes. I know nothing about bot-writing and can't use AWB...if there is a person or bot that would help overcome this issue, that would be much appreciated. I am literally fixing abbreviations and typos article by article, it's taking a long time.-- PlanespotterA320 ( talk) 23:19, 29 July 2018 (UTC)
There's no JWB either.-- PlanespotterA320 ( talk) 23:21, 29 July 2018 (UTC)

Will take this one. Can move to uzwiki. -- Edgars2007 ( talk/ contribs) 17:31, 30 July 2018 (UTC)

[r] → [ɾ] in IPA for Spanish

A consensus was reached at Help talk:IPA/Spanish#About R to change all instances of r that either occur at the end of a word or precede a consonant (i.e. any symbol except a, e, i, o, or u) to ɾ inside the first parameter of {{ IPA-es}}. There currently appear to be about 1,190 articles in need of this change. Could someone help with this task with a bot? Nardog ( talk) 19:24, 12 June 2018 (UTC)

Create list based on size of article

Hopefully a simple request. I would like a list of all pages that are tagged with {{ WikiProject Green Bay Packers}}, assessed as a stub, and assessed as low-importance listed out in a table ( Low-class stubs). The table would be two columns, one listing the article's name and the other the article size in bytes ( User:Gonzo fan2007/Stubs would be a fine place to put it). As long as the table is sortable, I don't care what order the articles are in the table. I am looking to review all of the WikiProject's stubs and reassess as start or C-class if necessary and would like to start by looking at the largest articles (and thus the most likely to no longer be a stub).

Let me know if there are any questions. Thank you for any assistance. « Gonzo fan2007 (talk) @ 18:50, 15 August 2018 (UTC)

Coding .. Green C 22:46, 15 August 2018 (UTC)
@ Gonzo fan2007: - done. I left a 1-line unix command there in case you want to try the same with other templates or arguments, looks like a good method for de-stubbing. -- Green C 23:40, 15 August 2018 (UTC)
Awesome! Appreciate the work GreenC! If it is too much of a pain, don't worry about it, but how hard would it be to wikilink the article titles in the table? « Gonzo fan2007 (talk) @ 23:41, 15 August 2018 (UTC)
Done. -- Green C 23:50, 15 August 2018 (UTC)
You're awesome GreenC! Thanks for the quick turnaround. « Gonzo fan2007 (talk) @ 02:14, 16 August 2018 (UTC)

creating minor planets articles

hi in fr.Wikipedia.org a bot creat thousands articles about minor planets with good quality please creating this articles for English Wikipedia — Preceding unsigned comment added by Amirh123 ( talkcontribs) 11:22, 10 August 2018 (UTC)

See WP:MASSCREATION, this would need a strong consensus on WP:VPR or the like. Anomie 12:25, 10 August 2018 (UTC)
We actually did the opposite - redirect a bunch of articles on minor planets into lists; see WP:DWMP: "Before 2012, when this notability guideline did not yet exist, approximately 20,000 asteroid stubs were mass-created by bots and human editors. This created a considerable backlog of articles to be cleaned up, redirected, merged, or deleted." Galobtter ( pingó mió) 12:30, 10 August 2018 (UTC)
@ Amirh123: You have made similar requests before, for a variety of topics. I refer you to some of the replies left at Wikipedia:Bot requests/Archive 75#please make bot for adding articles for footballdatabase.eu and Wikipedia:Bot requests/Archive 76#Requests from Amirh123. -- Redrose64 🌹 ( talk) 12:36, 10 August 2018 (UTC)
@ Amirh123: please stop making bot requests that have no chance of being adopted, or for which you haven't demonstrated consensus. Headbomb { t · c · p · b} 13:15, 10 August 2018 (UTC)

Move WikiProject Articles for creation to below other WikiProject templates

In Special:Diff/845715301, PRehse moved WikiProject Articles for creation to the bottom and updated the class for WikiProject Video games from "Stub" to "Start". Then, in Special:Diff/845730267, I updated the class for WikiProject Articles for creation, and moved WikiProject Articles for creation back to the top. But then, in Special:Diff/845730984, PRehse decided to move WikiProject Articles for creation to the bottom again. For consistency, we should have a bot move all {{ WikiProject Articles for creation}} templates on talk pages to below other WikiProject templates. If the WikiProject templates are within {{ WikiProject banner shell}}, then {{ WikiProject Articles for creation}} will stay within the shell along with other WikiProject templates. GeoffreyT2000 ( talk) 16:44, 14 June 2018 (UTC)

Needs wider discussion. That sounds like a lot of bot edits for questionable benefit. Seek approval at one of the village pumps. Anomie 17:35, 14 June 2018 (UTC)

This change should be fine per Wikipedia:Talk page layout. -- Magioladitis ( talk) 18:24, 14 June 2018 (UTC)

If anything, this could be bundled in AWB, assuming it has consensus, so that AWB bots make the change when they do other task. However, this very likely wouldn't get consensus to be done on its own. Headbomb { t · c · p · b} 20:08, 14 June 2018 (UTC)
There are no bots doing tasks in this direction. Unless, we decide that wikiproject tagging bots should also be doing this. Only Yobot ued to do this but right now there is no guideline to ask bot owners to perform this action. So we have two ways: Form a strategy or approve a sole task for this. I would certainly support the task to be done if ther was a discussion held somewhere about this task of similar tasks. -- Magioladitis ( talk) 22:54, 14 June 2018 (UTC)
  • I concur this needs wider discussion. Why does the order of the WikiProject banners matter? Primefac ( talk) 02:19, 15 June 2018 (UTC)
For instance, we have a lose rule that WikiProjct Biography "comes before any other WikiProject banners". At the moment, I do not see why WikiProject Articles for creation should be at the bottom of all Projects but there is a place to discuss this. If this get support we should then create bots to do it. It's about 60,000 talk pages with this template. -- Magioladitis ( talk) 07:31, 15 June 2018 (UTC)
Dedicated WP:TPL bots never had support as far as I recall. Maybe there was one shoving banners into the metabanner after a certain threshold, but that'd be the only one if it ever was a thing. I don't see what'd be different here. Headbomb { t · c · p · b} 13:05, 15 June 2018 (UTC)
There was a bot that was adding WPBS and was doing that task and Yobot was doing it as part of WikiProject tagging. My main questions are: a) whether we have a guarantee that current BAG will continue to accept this as a secondary task and b) is there a need to actully do it as a sole task? WP:TPL bots did not have much luck in the past due to not conrecte rules (which now we have; I decicated a lot of time in this direction) and not built in AWB tools (which now we have since at some point I did some thousands of edits to rename templates to standard names). -- Magioladitis ( talk) 14:12, 15 June 2018 (UTC)
BAG cannot guarantee that any specific thing will be accepted by the community. If a task is proposed and there is consensus for it (or at least a lack of objections after a call for comments/trial), it'll be approved. If there is no consensus for the task to be done, it won't be approved. Headbomb { t · c · p · b} 20:19, 18 June 2018 (UTC)

Indexing talk page

User:Legobot has stopped indexing talk pages and archives and User:HBC Archive Indexerbot is deactivated. I would like a replacement for that task. -- Tyw7  ( 🗣️ Talk to me •  ✍️ Contributions) 20:06, 12 June 2018 (UTC)

Any these work? Category:Wikipedia_archive_bots -- Green C 14:31, 17 June 2018 (UTC)
Most of them are archive bots. Looking for index bots to tame over Legobot which has developed a big and doesn't index all talk pages. -- Tyw7  ( 🗣️ Talk to me •  ✍️ Contributions) 17:52, 17 June 2018 (UTC)
User:Legobot has 33 tasks not sure which one (Task #15?). Did Legobot say why they stopped or was it abandoned without a word? -- Green C 20:41, 18 June 2018 (UTC)
it's working on random page and many people had reported it but it hadn't been fixed. See the discussion at User talk:Legobot -- Tyw7  ( 🗣️ Talk to me •  ✍️ Contributions) 21:05, 18 June 2018 (UTC)
Looks like Legobot is on vacation until July 7. They should either fix the bugs (if serious) or give permission for someone else to take it over, should anyone wish. -- Green C 21:17, 18 June 2018 (UTC)
Legobot ( talk · contribs) is not on vacation, it is still running (there would be chaos on several fronts if it had stopped completely). It is Legoktm ( talk · contribs) that is on vacation, and if you have been following both User talk:Legobot and User talk:Legoktm, you'll know that Legoktm has not been responding to questions concerning Legobot (other than one or two specifics on this page such as #Take over GAN functions from Legobot above) for well over two years. -- Redrose64 🌹 ( talk) 17:41, 19 June 2018 (UTC)

Automatically add protection templates to protected pages

I tried bringing this up on the noticeboard, but I got no response. I am now convinced that there is no bot (or maybe there used to be one but it no longer works) that automatically adds the appropriate template to a page that has been protected by an administrator. This means that the template has to be added manually, and many admins forget to do this. The bot would put the following things on the template:

  • The level of protection the page has: "semi-protected", "fully protected", "pending changes", ect. "Move protected" would only be shown if that is the only type of protection added to the page.
  • When the protection expires, if it is set to expire.
  • The reason for the protection: "vandalism", "editing disputes", "promoting policy on BLPs", ect., if applicable.

These are all tasks that the protecting admin (or someone else who is able to edit the page) has to put in themselves. I think it would be perfectly possible for a bot to do this. If there is indeed a bot that is supposed to do this it should probably be fixed. funplussmart ( talk) 17:21, 15 August 2018 (UTC)

It was originally Lowercase sigmabot, that stopped and was taken over by TheMagikBOT2 - Wikipedia:Bot_requests/Archive_73#Add_protection_templates_to_recently_protected_articles - That seems to have ceased on the 27July. ping bot owner - @ TheMagikCow:. Ronhjones   (Talk) 22:00, 15 August 2018 (UTC)
The first two bullets are redundant since the protection templates now have this functionality built in. In fact, if you use the |expiry= parameter on any prot template, it will be ignored. It is also not necessary to add a prot template to certain kinds of page, such as templates that have either {{ documentation}} or {{ collapsible option}} since those also autodetect the setting of edit protection, and add the padlock template where appropriate. -- Redrose64 🌹 ( talk) 08:03, 16 August 2018 (UTC)
Redundant See User_talk:TheMagikCow#Stalled_Bot? - Api changes have stalled bot. repair in progress. Ronhjones   (Talk) 18:33, 16 August 2018 (UTC)
@ Funplussmart: Ronhjones   (Talk) 20:42, 16 August 2018 (UTC)

Vandalism from user:194.199.4.202

Hello this anonymous user is changing verified articles left and right. This is a vandalism

User:194.199.4.202 — Preceding unsigned comment added by Heraldique21 ( talkcontribs) 16:28, 22 August 2018 (UTC)

Declined Not a good task for a bot. @ Heraldique21: I think you in the wrong place - try WP:AIV Ronhjones   (Talk) 19:37, 22 August 2018 (UTC)

Removing date headers

Many pages, such as the help desk, and all of the reference desks, have level 1 date headers for each day questions are asked. Scsbot automatically adds these headers at the beginning of each day. However, if no questions are asked a certain day, users have to manually remove the headers, which has to be done quite often for reference desks with less traffic. So I'm wondering, would it be possible to have a bot who removes these headers at the end of a day, if no questions were asked then?-- SkyGazer 512 Oh no, what did I do this time? 01:39, 21 August 2018 (UTC)

Doing... @ SkyGazer 512: It should be possible. Looks like they are all just a Heading 1 tags (single =) with a date, then a newline (from the next date addition). I'll start an in-depth look tonight. Ronhjones   (Talk) 15:25, 21 August 2018 (UTC)
@ SkyGazer 512: I've created Category:Wikpedia Help pages with dated sections - it saves having to hard code page names (makes any later updating of any new help pages a dream), I'll just get the list of pages. Is that all the (current) pages that need fixing? How quick do you actually want the date removed? I see two options, examples...
  1. August 1 (no edits); August 2 (some edits); August 3 (current day) - remove August 1 (don't have to worry about when Scsbot adds the August 3 heading
  2. August 1 (no edits); August 2 (current day) - remove August 1 - need to make sure that Scsbot has been and updated the page (I note sometimes he can be a few hours late).
Obviously I would prefer option 1 - but 2 is possible, if necessary. Option 1 can be set to be not long after 00:00 UTC. I suspect option 2 would have to be 06:00 to ensure the other bot has edited (obviously if it hasn't edited, then it would skip) Ronhjones   (Talk) 18:46, 21 August 2018 (UTC)
If Scsbot ( talk · contribs) adds them, would it not be possible to ask the botop ( Scs) to amend that bot for this new request? What I suggest is that when Scsbot is about to add a date heading, it checks to see if the page presently ends with an unused level 1 date heading and if so, removes that before adding the new heading. -- Redrose64 🌹 ( talk) 18:52, 21 August 2018 (UTC)
That might work, I've not started any coding yet, I'll wait until he comments - I think he's using a shell script to add the date - not sure how well that will work on analysing the page and removing the unwanted dates. Ronhjones   (Talk) 19:28, 21 August 2018 (UTC)
I've pinged him by e-mail as he does not appear to log on often. Ronhjones   (Talk) 19:31, 21 August 2018 (UTC)
Imo, sooner's better. I personally would support option 2 (that is, remove the August 1 header as soon as the August 2 header is added), although if the first is easier, that would certainly be better than nothing.-- SkyGazer 512 Oh no, what did I do this time? 20:43, 21 August 2018 (UTC)
No problem. We'll plan it for option 2, and see how it pans out. We'll wait for ( Scs) to comment first, in case he can kill two birds with one stone. Ronhjones   (Talk) 23:04, 21 August 2018 (UTC)
Sounds great! Thank you.-- SkyGazer 512 Oh no, what did I do this time? 23:11, 21 August 2018 (UTC)
No time for long explanations, but mods to Scsbot for this purpose are unlikely, so do carry on with Plan B. — Steve Summit ( talk) 03:59, 22 August 2018 (UTC)
Coding... Thanks, Steve. Ronhjones   (Talk) 16:28, 22 August 2018 (UTC)
BRFA filed Ronhjones   (Talk) 00:13, 23 August 2018 (UTC)
Y Done Ronhjones   (Talk) 14:39, 23 August 2018 (UTC)

Placement of cursor after a Search

I would like a modification made to the Search facility. Simply, I would like the cursor to be placed after the text of the first instance of the search. The reason for this is that it would make editing much quicker in that you don't have to search for the text (which is highlighted) and then place the cursor after it to make an update. — Preceding unsigned comment added by Ralph23 ( talkcontribs) 02:44, 11 August 2018 (UTC)

Ralph23, this has nothing at all to do with bots or bot editing. Heck, I'm not even sure that it's a Wikipedia thing; I'm pretty sure this is browser-determined. Primefac ( talk) 02:45, 11 August 2018 (UTC)
From Wikipedia, the free encyclopedia
Archive 70 Archive 74 Archive 75 Archive 76 Archive 77 Archive 78 Archive 80

Bot to update the Adopt a User list

Hi- Theo's Little Bot 19 used to update the adopt a user list marking inactive adopters as such but this doesn't seem to have run since June 2016. Could someone take over the task? The request for approval can be found here, my original request here and there's a link to the source code there too if that helps. Thanks, jcc ( tea and biscuits) 01:21, 28 January 2018 (UTC)

Ok, if anyone takes this on, please ping me since I'm not watching this page. jcc ( tea and biscuits) 13:31, 30 January 2018 (UTC)
@ Jcc: BRFA filed -- Gabrielchihonglee ( talk) 01:45, 2 February 2018 (UTC)

Unreliable source? documentation

Hello, I have been pinged by User:Mattythewhite, who was informed of this by User:Helper201 that there is a documentation that the unreliable source? tags should be outside the ref tags not in them.

There may be unreliable source? tags found on articles in the references section, so a bot should be used to change the following:-

<ref>Reference {{Unreliable source?|date= }}</ref> → <ref>Reference </ref>{{Unreliable source?|date= }}
so it looks something like this [1] [2] unreliable source? Reference 1 does not abide to the documentation while ref 2 does. It is a difficult task to manually find all the articles with unreliable source? tags in the references sections. Iggy ( Swan) 16:32, 13 January 2018 (UTC)

  1. ^ Reference unreliable source?
  2. ^ Reference
Personally, I've always included any tags within the reference just before the closing tag, especially with {{ sps}}. I don't necessarily think this is a good idea. – Fredddie 16:39, 13 January 2018 (UTC)
Looking at documentation, it seems that {{sps}} should be inside the ref tags. It would then seem to me that rather than have a bot clean up the {{ Unreliable source?}} tags, we should come up with consistent rules for their usage. – Fredddie 16:44, 13 January 2018 (UTC)
I'd say the use of the unreliable source? tags in some articles within text and others in the references section would be somewhat inconsistent within the project, whether or not it agrees with the documentation is a different question. Iggy ( Swan) 16:59, 13 January 2018 (UTC)
@ Fredddie: {{ sps}} explicitly states it should be used outside ref tags. Nihlus 11:50, 14 January 2018 (UTC)

bot for creating new categorys

please make bot for creating new categorys example people birth by day — Preceding unsigned comment added by 5.75.62.30 ( talk) 07:08, 20 January 2018 (UTC)

 Not done. We do not sort people by birth day, only birth month. Primefac ( talk) 16:10, 20 January 2018 (UTC)
Year surely? Not month. -- Redrose64 🌹 ( talk) 21:43, 20 January 2018 (UTC)
You're right, I was thinking maintenance cats like {{ citation needed}}. Primefac ( talk) 21:46, 20 January 2018 (UTC)
why not creating categorys birth by day — Preceding unsigned comment added by 37.254.139.175 ( talk) 07:07, 21 January 2018 (UTC)
Because we could need anything up to 366 categories in a year, up to 36524 each century instead of the present level of 1 per year, 100 per century. You would need to show a demonstrable requirement for such a huge amount, and obtain plenty of support for it. See also WP:BOTPCAT. -- Redrose64 🌹 ( talk) 11:06, 21 January 2018 (UTC)
A single category per date, ignoring year (e.g. Category:21 January births) would not seem so harmful. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 17:05, 21 January 2018 (UTC)
If you want to find all the people born on a given day ("21 January 1966") or a given date regardless of the year ("21 January"), you can do so with a query at Wikidata. You can refine such queries, so that it only lists people with articles on a given Wikipedia, or just opera singers or whatever. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 17:05, 21 January 2018 (UTC)

no i want just birth by day not by day and year example i want november 15 birth not november 15 1994 birth — Preceding unsigned comment added by 5.22.3.31 ( talk) 05:10, 22 January 2018 (UTC)

You would need to start a discussion and reach a consensus to create such a set of categories. Primefac ( talk) 13:25, 22 January 2018 (UTC)

Where to make a discussion — Preceding unsigned comment added by 37.254.179.162 ( talk) 14:33, 22 January 2018 (UTC)

I'd go with WP:VPR. Primefac ( talk) 14:34, 22 January 2018 (UTC)

User:AarghBot , currently owned by User:Mikaey

BG19bot

The bot BG19bot was very helpful but has not been working for more than 6 months. And the bots "owner" has not been on Wikipedia since August. is there a way to start it up again, or a similar bot can be created?. BabbaQ ( talk) 23:30, 25 October 2017 (UTC)

Under circumstances not completelly investigated, BG19bot is at the moment inactive. I am willing to fill out a BRFA for all of its tasks. I will probably do in the nxt few days. -- Magioladitis ( talk) 22:51, 12 November 2017 (UTC)

us-highways.com

http://us-highways.com/ was previously used for a website called U.S. Highways: From US 1 to (US 830), a self-published site on the history of the United States Numbered Highway System. The creator of the website (Robert V. Droz) ran into some unrelated legal issues in his home state of Florida and let the site lapse. The domain name has been assumed by a commercial enterprise completely unrelated to the former site. As an SPS, the site should have never been used as a source in articles, but it was. Fredddie and I feel that it would be preferable to remove citations and links to the site at this time. Would some bot operator be amenable to replacing any citations to the site with {{ citation needed}} tags and removing any links in an external links section of the articles? Imzadi 1979  12:04, 13 January 2018 (UTC)

There are also a few links labeled "Florida in Kodachrome", but the domain is the same. – Fredddie 16:35, 13 January 2018 (UTC)
@ Imzadi1979 and Fredddie: Not taking this on yet, but do you have a consensus in hand to do this? Hasteur ( talk) 23:33, 15 January 2018 (UTC)
I personally support this. It may be AWB-able though. -- Rs chen 7754 01:21, 16 January 2018 (UTC)
@ Fredddie, Imzadi1979, Hasteur, and Rschen7754: SPS has a criterion for use, which is "Self-published expert sources may be considered reliable when produced by an established expert on the subject matter, whose work in the relevant field has previously been published by reliable third-party publications." (Emphasis original.) Does Robert V. Droz meet this criterion? If he does not, then I don't see a problem with mass removal of the references. Search indicates there are a fairly low number of links, so I would agree, this can be WP:AWBd.
If he does meet the bar for use in SPS, then what should instead happen is that InternetArchiveBot be updated (if such has not yet occurred) to understand these webpages have been usurped and to auto-archive the lot. -- Izno ( talk) 18:07, 23 February 2018 (UTC)
@ Izno: Droz doesn't meet the bar. He wasn't a historian nor employed by a highway department before his recent change in status. Imzadi 1979  23:45, 23 February 2018 (UTC)
Removal seems good to me then too. I think there are enough here that we could execute. -- Izno ( talk) 01:11, 24 February 2018 (UTC)
I've done this now on User:IznoRepeat and made something of a mess of it the first time through which I noticed about 100 pages in (repeated refs got me--note to future self). AnomieBot helped snag the repeated ones I missed. There were also, toward the end, a few references to old Rand McNally maps clearly hosted by him on a personal server, which may be valuable. There was also evidence of Geocities references hosted by him in the batch, but I can't point to any in specific without some re-digging. Question: Why are Google Maps referenced on roads? Those seem like not-great references. -- Izno ( talk) 20:14, 24 February 2018 (UTC)
Thanks for that, Izno. Re: Google Maps, I only use it in conjunction with the official printed state highway map (and maybe the Rand McNally atlas for other details), and then I use it for its satellite view to provide the extra details about the landscape/surroundings/etc, not for routing, etc. Imzadi 1979  02:16, 26 February 2018 (UTC)
There is no problem with referencing Google Maps or any other reliable map, as long as one recognizes the limitations of the source. -- Rs chen 7754 03:39, 26 February 2018 (UTC)

Move articles with the "telenovela" disambiguator to "TV series"

Per discussion at Wikipedia:Village pump (policy)#RfC: Is "telenovela" a suitable disambiguator? and updated guideline at WP:NCTV, is it possible to get a bot to move all articles with the disambiguator "(telenovela)", "(COUNTRY telenovela)" or "(YEAR telenovela)" to "(TV series)", "(COUNTRY TV series)" and "(YEAR TV series)"? -- wooden superman 16:31, 15 February 2018 (UTC)

Special:Search/intitle:"telenovela" does not indicate there are more than 600 titles that need to change, and a good chunk of those are probably false positive redirects already. -- Izno ( talk) 20:01, 21 February 2018 (UTC)
Current title Conflicting article
Araguaia (telenovela) Araguaia (TV series)
Cain and Abel (telenovela) Cain and Abel (TV series)
Caribe (telenovela) Caribe (TV series)
Claudia (telenovela) Claudia (TV series)
Esperanza (telenovela) Esperanza (TV series)
Magdalena (telenovela) Magdalena (TV series)
Vanessa (telenovela) Vanessa (TV series)
Above are the outstanding pages based on quarry:query/25043. —  JJMC89( T· C) 00:41, 26 February 2018 (UTC)
Great, thanks. I've manually moved these. -- wooden superman 12:21, 26 February 2018 (UTC)

Archiving stale reports at AIV

A consensus is emerging here that a bot to clear stale AIV reports would be desirable. Reports that have been open for more than 6-8 hours are usually considered declined by default. An edit summary along the lines of "listed for >6 hours without any admin willing to block" is appropriate. I see a couple main obstacles to this, and I was hoping this board could help with them.

  1. This task needs to be run at least every 2 hours. That way we can set the minimum archive time to 6 hours, and get all the threads before they're 8 hours old. This has been the sticking point with the currently extant archive bots that I have checked.
  2. It has to archive individual bullet points, not sections. I don't think this is a great technical hurdle but if it is we can reformat AIV to accommodate the bot.
  3. It mustn't break HBC AIV helperbot5, which archives actioned reports. Pinging the current operator, @ JamesR:.

Cheers, Tazerdadog ( talk) 23:42, 1 January 2018 (UTC)

The current bot archiving actioned reports should probably also be responsible for stale unactioned reports. -- Izno ( talk) 13:54, 2 January 2018 (UTC)
How difficult would it be to update the current bot? Tazerdadog ( talk) 22:25, 6 January 2018 (UTC)
The current not just clears to my knowledge, not archives, so I think it’d be pretty easy to add another clear condition to it. TonyBallioni ( talk) 05:23, 7 January 2018 (UTC)
Can someone with more technical knowledge update the bot to do this? Tazerdadog ( talk) 16:50, 13 January 2018 (UTC)

I meant to note this here much earlier, but I wanted to add that in addition to archiving declined/stale reports, a bot adding a note about recently declined reports or users recently off a block might be helpful as well. ~ Amory ( utc) 02:20, 22 February 2018 (UTC)

@ Amorymeltzer and Tazerdadog: You should probably go and ask the operator of HBC AIV helperbot5, who is JamesR. -- Izno ( talk) 18:27, 23 February 2018 (UTC)

Bot for automatically updating Alexa rank in infobox

I recently updated the infobox information in On-Line Encyclopedia of Integer Sequences. As part of that update, I added an Alexa parameter to the infobox. It seems that the value of this parameter requires frequent updates. I just checked the Alexa link and found that the rank information in the infobox is no longer up-to-date. I think there are probably many more infoboxes with this parameter and regular bot runs to update those parameters seem like a good idea to me. -- Toshio Yamaguchi 14:49, 6 January 2018 (UTC)

This report says that |alexa= is used 2,257 times in {{ Infobox website}}. – Jonesey95 ( talk) 15:41, 6 January 2018 (UTC)
There is an Alexabot over at Wikidata, though the operator, Tozibb, says that they have been having technical issues so the last updates are from December. I have been working on Module:Alexa/{{ Alexa/sandbox}} (credit to RexxS for writing the initial module), though it's not as useful right now since it generates arrows based on the two most recent Wikidata values (or based on four local parameters) and most Wikidata items with the Alexa rank property only have one value (furthermore, I provided an incomplete enwiki search to Tozibb for finding items to add data to). Referencing capability needs to be added before it can be used, either with local values or with the Wikidata data. Jc86035 ( talk) 16:13, 6 January 2018 (UTC)

Bot that notifies users on their talk page if a set of pages are created.

I'm looking to create a bot that can automatically use {{ AC notice}}~~~~ to inform users an article from a set of pages has been created. I noticed that I have a lot of redlinks in my Watchlist that I only have there to find out if a page is created eventually. If a bot could use addtext.py to select an article it detects has changed after a refresh of a a group of pages. If an article has been created, regardless of its contents (or lack thereof), uses it for 1= and adds the parameter to the talk pages of users that have provided it with a request to notify them upon its creation. Does this make sense? I don't know, but I hope it does. Thank you anyways! ― Matthew J. Long -Talk- 22:03, 6 January 2018 (UTC)

I'm sorry but it is Impossible because a user's watch page is not publicly available according to Help:Watchlist#Privacy -- Gabrielchihonglee ( talk) 13:00, 15 January 2018 (UTC)
This request isn't asking for a bot to access people's watchlists, so it's not impossible for the reason you claim. Anomie 21:56, 15 January 2018 (UTC)

List of non-marine molluscs of a country

I request to create lists by country "List of non-marine molluscs of COUNTRY" based on data from the http://www.iucnredlist.org/search website. It is very time consuming task, even if I can filter out freshwater / terrestrial / gastropods / bivalves of certain country at the IUCN website. If a Bot could make a list of species with the references, that would be great.

This is realizable (there onece existed a Polbot, that was able to create stubs like this [1] based on iucnredlist.org and there is possible to make various list based on the such as this one List of least concern molluscs).

Examples of the work in progress.

There are number of lists missing:

You can virtually make lists of non-marine molluscs for all countries (I will manually merge them with existing lists when needed).

It would be great, if you could at least sort those species into sections "Freshwater gastropods", "Land gastropods" and "Freshwater bivalves" (or make working lists of those three groups).

This task is suitable for non-marine molluscs. This task is not suitable for marine molluscs (that are placed in separate lists on Wikipedia), because there are not enough data for them on IUCN.

If you could pre-prepare such lists (in a User namespace), I would finish the task manually (I will sort species in systematic order, I will add families, I will update outdated info, I will generally check-out lists). Thanks. -- Snek01 ( talk) 21:11, 9 January 2018 (UTC)

Snek01, this should be brought up at Wikipedia talk:WikiProject Tree of Life first, to determine whether or not it's desirable, if it would duplicate existing info, or if it could be accomplished via the existing category structure, etc. I believe there's also a moratorium on bots mass-creating articles.   ~  Tom.Reding ( talkdgaf)  21:32, 9 January 2018 (UTC)
I requested to do it in my User namespace. If you are afraid that could happen something bad, bot operator can create one list only first. You will see what will happen. Thanks for your solicitude. There is even no need a Wikipedia approved Bot for this task. There is need knowledge how to datamine data from iucnredlist.org. -- Snek01 ( talk) 23:34, 9 January 2018 (UTC)
Oh, I misunderstood (re Polbot). Dumps into userspace would be fine. I'm currently working on another IUCN related project that's fairly large, otherwise I'd offer to help. It would still be worthwhile x-posting to WT:TREE, as there are others there that don't watch WP:BOTREQs who might also be able to help.   ~  Tom.Reding ( talkdgaf)  23:44, 9 January 2018 (UTC)

Remove red links and redirects from lists in a category

Can anyone use a bot to scan the drafts in Category:Abyssal temporary Russia cat and remove entries from the lists that are redlinks and redirects? Abyssal ( talk) 15:18, 20 February 2018 (UTC)

I can't really proceed on my big list of prehistoric life articles series until this is resolved. Could anyone please help Abyssal ( talk) 15:56, 21 February 2018 (UTC)
@ Abyssal: User:AlexTheWhovian/script-redlinks run on each of the pages will probably get you most of the way there. Will that work for you? -- Izno ( talk) 18:40, 23 February 2018 (UTC)
@ Izno:Thanks for the recommendation. I won't be able to tell until sometime after the weekend. I have some long shifts coming up. Abyssal ( talk) 18:45, 23 February 2018 (UTC)
@ Izno:Actually I managed to get this working right now and it didn't help. It only de-linked them rather than deleting the links altogether. Abyssal ( talk) 18:52, 23 February 2018 (UTC)
You wanted the entire lines to be deleted? Is there a purpose in your working flow to not having the lines at all? -- Izno ( talk) 18:54, 23 February 2018 (UTC)
I'm trying to take long lists of species and boil them down only to the more scientifically important ones. Since the most important species are the ones most likely to have pre-existing articles, I'm trying to remove all the red links. Abyssal ( talk) 19:24, 23 February 2018 (UTC)
@ Abyssal: I made an edit which should have done the job of removing the red links at Draft:List of the prehistoric life of Arkhangelsk Oblast. Does that work for you? -- Izno ( talk) 01:06, 24 February 2018 (UTC)
As for redirects fixing, redirects should become obvious if you are using the Page Previews feature, which appears in Special:Preferences#mw-prefsection-rendering in the "Reading preferences" section. -- Izno ( talk) 01:14, 24 February 2018 (UTC)
@ Izno:Yeah, that's good. How did you do it? Can you do that for the rest of the category as well? Abyssal ( talk) 15:20, 26 February 2018 (UTC)
 Doing... Manually. This is easy enough with a little regex. Tazerdadog ( talk) 19:03, 26 February 2018 (UTC)
@ Abyssal: How does Draft:List of the prehistoric life of Kaliningrad Oblast look? If that's what you wanted, I can do the same thing for the rest of the pages without much trouble. Tazerdadog ( talk) 21:21, 26 February 2018 (UTC)
@ Tazerdadog:Could you preserve the bullet structure so the species stay under their respective genera? Abyssal ( talk) 21:41, 26 February 2018 (UTC)
@ Abyssal: That shouldn't be too hard to do. How do you want me to handle cases where the species exists, but the genus is a redirect or redlink? Example: Xylolaemus sakhnovi, but Xylolaemus
You can keep the genus in that case. Abyssal ( talk) 22:07, 26 February 2018 (UTC)

Sounds good. Everything else look as you want it to? Tazerdadog ( talk) 22:15, 26 February 2018 (UTC)

@ Tazerdadog:OK, let's see how it goes. Abyssal ( talk) 22:27, 26 February 2018 (UTC)
Yeah, what I did was use the redlink script and then I pulled the output of the script into an offline .txt editor. There, I regex removed all the lines without links. -- Izno ( talk) 22:38, 26 February 2018 (UTC)
@ Tazerdadog: Kaliningrad Oblast Is still missing its genus-species hierarchy... Abyssal ( talk) 22:53, 26 February 2018 (UTC)
Working on it now. Tazerdadog ( talk) 23:10, 26 February 2018 (UTC)
That one is done Tazerdadog ( talk) 23:18, 26 February 2018 (UTC)
All undesired redlinks should be gone now. Working on redirects. Tazerdadog ( talk) 02:48, 27 February 2018 (UTC)
Sweet! Thanks, @ Tazerdadog:! Abyssal ( talk) 03:02, 27 February 2018 (UTC)

Redlinks remover

@ Abyssal: I wrote a script that removes redlinked list entries. It's called User:The Transhumanist/RedlinksRemover.js.

It's designed specifically for cleaning up outlines and lists, and I noticed your drafts are in outline format, except for the little cross at the beginning of the entries.

The script keeps nipping off the ends of branches until it reaches one that shouldn't be pruned.

It won't strip out an entry that has descendants in the tree.

After it is done pruning redlinked ends from the tree, it goes back and delinks any redlinks that are leftover and red categories (this part comes from AlexTheWhovian's script).

It would work for your lists, if you removed the little crosses first. Then you could put them back in after the script was done. That's easy to do with WikEd.

If you didn't remove the crosses, it would just delink the links, because they aren't at the beginning of the entries, and so the script would consider them to be embedded links, rather than linked entries.

To use it, you install it, and it provides a menu item in the tools menu on the sidebar. When you are ready to remove the redlinked entries from a page, just click on "Remove red links", and it will process the current page you are on.

I hope you find this does the trick for you.     — The Transhumanist   00:48, 1 March 2018 (UTC)

P.S.: I haven't added this to the user scripts list yet, because it hasn't undergone enough testing. Beware, it is alpha software. -TT

Thanks, @ The Transhumanist: I'll take a look at it. Abyssal ( talk) 14:18, 1 March 2018 (UTC)

Removing links to copyrighted videos?

Hey folks, pursuant to this discussion, I'm curious to know if it would be possible to build a bot that would remove links to copyrighted material hosted in violation of the creator's copyright, such as YouTube. Ed  [talk]  [majestic titan] 18:12, 19 February 2018 (UTC)

I'm not a botop, but I'd imagine it would be impossible. It's not a simple case of seeing if the uploader's name matches the title; a video clip from a TV show, for instance, could legitimately be released by the TV station, by the presenter of the show, by the third-party production company which made it, or by up to 200 different holders of overseas rights, depending on exactly what the terms of the contracts say. This is something even a company with the resources of Google struggles with, and is why YouTube is so reliant on rights holders flagging problematic content. ‑  Iridescent 18:19, 19 February 2018 (UTC)
Agreed. CorenSearchBot used to flag article text if it was copyvio, but I'm not sure you'd be able to determine if an elink was a link to a page violating copyright or just a page with a copyright. Primefac ( talk) 18:26, 19 February 2018 (UTC)
Does Youtube publish a list of videos it deletes for copyright reasons? Or similar  Lingzhi ♦  (talk) 11:34, 23 February 2018 (UTC)
no. Primefac ( talk) 17:09, 23 February 2018 (UTC)

Tag all remaining disambiguation links

A consensus is forming at Wikipedia talk:WikiProject Disambiguation#Proposal to tag all disambiguation links to tag all remaining disambiguation links in Wikipedia with a {{ disambiguation needed}} tag. From our most recent count, about 16,454 disambiguation links remain. Around 5,500 of these are already tagged, leaving a little under 10,000 to tag. What is needed here is, first, to get a list of all links to disambiguation pages from mainspace pages that do not already have this tag; second, wait about ten days to see if any of those are short term links that will be fixed quickly; third, re-check that list to see what links from that initial list have been fixed; and fourth, have a bot tag all remaining disambiguation links. bd2412 T 21:51, 14 December 2017 (UTC)

Note: In order to distinguish these from older uses of the tag to identify difficult links, we will actually need to tag these with the template redirect {{ Needdab}}. bd2412 T 13:35, 15 December 2017 (UTC)
@ Bd2412: From my personal experience, the remaining disambiguation links fall into 2 categories: recently created links and links that are difficult or impossible to disambiguate. In the first case, adding {{ disambiguation needed}} would be useful, in the second case, not as much. Plus there are rare cases where it actually makes sense to link to the disambiguation page (when more than one of the meanings or uses of a term are relevant to the link). Kaldari ( talk) 07:26, 2 January 2018 (UTC)
@ Kaldari: Longstanding difficult links are probably already tagged with {{ disambiguation needed}}, for the most part. For those that are not, it may still be useful because tagging may draw the attention of subject matter experts, who have the specialized knowledge to fix the link even if it is difficult for the average editor. As for intentional links to disambiguation pages, these must conform to WP:INTDABLINK. If they do, then they don't show up as errors, and there is no need to tag them. If they do not, then they need to be fixed like any other link. bd2412 T 02:48, 4 January 2018 (UTC)

Tag talk pages of articles about English with Template:WikiProject English language

WP:Article alerts recommends having a bot tag the talk pages of articles with relevant topical wikiproject banners so that the AA bot produces more meaningful results. This would also be useful for getting this barely active project rolling better; I'd been looking into manually going article to article doing this, but it looked to be a rather daunting task even with AWB, and I'm on a Mac, so I'd have to run AWB in a VM or something anyway.

Would start with Category:English languages and its subcats.

Various subcats of Category:Words are going to qualify but will probably have to be done manually (e.g. about 99% of the content of Category:Neologisms, Category:Slang, etc., are English, but a handful of articles in such categories are not and so should not be tagged as within the scope of this project. Similarly, the majority of articles under Category:Punctuation have a section on English and would get tagged, but in a few cases the English coverage has been split out into separate spinoff articles like Quotation marks in English which should get tagged while the main article on the mark would not. We'll probably want to exclude most literature-related categories, but would include Shakespeare (for having had a profound effect on English, in contributing more stock phrases than any other body of work besides the King James Bible). Category:Lexicographers and other such bios will also need manual tagging.  —  SMcCandlish ¢ >ʌⱷ҅ʌ<  19:09, 17 January 2018 (UTC)

Convert protocol relative URLs to http/https

All protocol relative links on Wikipedia should be converted to either http or https. As of June 2015, Wikipedia is 100% HTTPS only and because protocol relative links are relative to where they are hosted it will always render as HTTPS. This means any underlying website that doesn't support HTTPS will break. For example:

[2] (//americanbilliardclub.com/about/history/)

..the http version of this link works. The article American rotation shows it in action, the first three footnotes are broken because they use a protocol relative link to a HTTP only website. But Wikipedia is rendering the link as HTTPS.

More info at WP:PRURL and Wikipedia:Village_pump_(technical)#Protocol_relative_URLs. It's probably 10s of thousands of links broken. -- Green C 21:06, 8 June 2017 (UTC)

This should only be done if the existing link is proven to be broken, and where forcing it to http: conclusively fixes it. Otherwise, if the link is not dead under either protocol, it is WP:COSMETICBOT. -- Redrose64 🌹 ( talk) 21:45, 8 June 2017 (UTC)
Well let's ask, what happens if you keep them? It creates a point of failure. If the remote site stops supporting HTTPS then the link immediately breaks. There is no guarantee a bot will return years later and recheck. WP:COSMETICBOT is fine but it shouldn't prevent from removing a protocol that causes indefinite maintenance problems and MediaWiki no longer really supports. By removing it also discourages editors from further usage, which is good. -- Green C 22:07, 8 June 2017 (UTC)
That reasoning makes no sense. If a bot converts the link to https and the remote site stops supporting HTTPS, then the link immediately breaks then too. Anomie 00:22, 9 June 2017 (UTC)
Different reasoning. IABot forces HTTPS on all PR URLs since Wikipedia does too, when it analyzes the URL. It's erroneously seeing some URLs as dead as a consequence since they don't support SSL. The proposal is to convert all non-functioning PR URLs to HTTP when HTTPS doesn't work.— CYBERPOWER ( Message) 02:22, 9 June 2017 (UTC)
@ Cyberpower678: The proposal, as specified above by Green Cardamom ( talk · contribs) is not to convert all non-functioning PR URLs to HTTP when HTTPS doesn't work, but to convert all PR URLs to either http or https. No exceptions were given, not even those that are presently functioning. This seems to be on the grounds that some are broken. -- Redrose64 🌹 ( talk) 09:06, 9 June 2017 (UTC)
Do I want to get rid of PR URLs? I personally think we should because they confuse editors, confuse other bots, ugly and non-standard etc they're an unnecessary complication. If we don't want to get rid of them (all), we still need to the fix broken HTTP links either way. -- Green C 14:35, 9 June 2017 (UTC)
  • As someone who's been strongly involved with URL maintenance over the last 2 years, I think this bot should be run on Wikipedia, and should enforce protocols. It's pushing WP:COSMETICBOT but if the link ends up being broken because only HTTP works, then that will create other issues. The task can be restricted to only converting those not functional with HTTPS, but my first choice is to convert all. — Preceding unsigned comment added by Cyberpower678 ( talkcontribs) 01:38, 13 June 2017 (UTC)
Opining as a bot op: I personally don't think this can be read as having community consensus because it's going to create a lot of revisions for which there is no appreciable difference. Yes it would be nice if wikipedia was smart enough to figure out if the relative URL is accessable only via HTTP or can be accessed via https, but the link is clicked in the user's browser and therefore the user doesn't know that the content may be accessable via HTTPS or HTTP. Ideally, users entering relative URLS could be reminded via a bot that it's better to be explicit with what protocol needs to be used to get to the content. The counter is we could set a bot to hunt down all the relative URLS and put a maintanance tag/category in the reference block so that a human set of eyes can evaluate if the content is exclusively available via one route or if the content is the same on both paths.

TLDR: This request explicitly bumps against COSMETICBOT, needs further consensus, and there might be a way to have "maintenance" resolve the issue. Hasteur ( talk) 12:38, 13 June 2017 (UTC)

Those are all good ideas but too much for me to take on right now. Agree there is no community consensus about changing relative HTTPS links; However existing relative HTTP cases broken in June 2015 should be fixed asap. A bot should be able to do it as any broken-link job without specific community consensus (beyond a BRFA). Broken links should be fixed. That's something I can probably do, unless someone else wants to (I have lots of other work..). Note this fix would not interfere with any larger plans to deal with relative links. -- Green C 15:26, 13 June 2017 (UTC)
Bump. -- Green C 17:13, 9 August 2017 (UTC)
  • Strong support: This is definitely not COSMETICBOT; these URL errors directly interfere with exercise of WP:Verifiability. They also cause editwarring and article damage; various times I've had to revert people – including some long-experienced editors – removing "dead links" and inserting {{ citation needed}} tags, when all that was required was adding the characters "http:".  —  SMcCandlish ¢ >ʌⱷ҅ʌ<  21:02, 3 October 2017 (UTC)
  • Bump thread expire -- Green C 04:16, 22 October 2017 (UTC)
  • Support per GreenC and SMcCandlish Jon Kolbert ( talk) 23:59, 28 October 2017 (UTC)
  • Needs wider discussion. I happen to agree with this strongly, but this would need a very wide discussion at a village pump before being considered. This is far too many edits to do without very clear consensus, some of which fail WP:COSMETICBOT if no consensus is obtained to override it. ~ Rob13 Talk 14:50, 8 December 2017 (UTC)
  • Support per GreenC. Also this is seriously needed and would benefit the project. BabbaQ ( talk) 13:24, 30 December 2017 (UTC)
  • I'll chime in as a WP:BAG member here, that the scope of the task means we need wider discussion, if only to identify possible pitfalls and cornercases. WP:VPT/ WP:VPR would be the natural places to hold it. I personally support the task FWIW. Headbomb { t · c · p · b} 02:44, 26 January 2018 (UTC)

Bot to search and calculate coordinates

Please look at this table: Lands_administrative_divisions_of_New_South_Wales#Table_of_counties

My goal is to add a column to this table that shows the approximate geographical coordinates of each county. Those county coordinates can be derived form the parish coordinates that are found in each county article, by taking the middle of each northernmost and southernmost / easternmost and westernmost parish coordinates. Is it possible to write a script or a bot to achieve this? -- Ratzer ( talk) 21:27, 25 January 2018 (UTC)

For illustration, I did the work for the first county in the list, Argyle County, manually. The table of parishes in this article shows that they range from 34°27'54" and 35°10'54" latitude south and 149°25'04" and 150°03'04" longitude east. The respective middle is 34°49'24" and 149°44'04", which I put in the first table entry of Lands administrative divisions of New South Wales and the info-box of Argyle County.-- Ratzer ( talk) 07:39, 26 January 2018 (UTC)

Please Review...

Please review my Draft at Cpt. Alex Mason. Thanks! — Preceding unsigned comment added by Amason1930 ( talkcontribs) 23:16, 21 March 2018 (UTC)

@ Amason1930: what does this have to do with bots? Headbomb { t · c · p · b} 00:12, 22 March 2018 (UTC)

Newspaper

I would like to request for a bot that could fill in the Publisher after use of the Refill tool such as |publisher=Aftonbladet. As many Swedish subject articles uses one or two of the few main newspaper sources that are available in Sweden I would like for the bot to fill in for the sources aftonbladet.se as Aftonbladet, expressen.se as Expressen, svd.se as Svenska Dagbladet, kvp.se as Kvällsposten and dn.se As Dagens Nyheter. If those could be filled in at Publisher it would help seversl thousands of articles. BabbaQ ( talk) 13:32, 30 December 2017 (UTC)

Anything that is known to be a newspaper should use |newspaper= and definitely not |publisher=, which should instead be removed. -- NSH001 ( talk) 16:53, 21 February 2018 (UTC)
@ BabbaQ: Idea is not well explained. How would these pages be found/determined? -- TheSandDoctor ( talk) 22:30, 11 March 2018 (UTC)

Replace "IMDb award" by "IMDb event"?

There is a template which takes an IMDb page, I think, and an event name as parameters - e.g. {{ IMDb award|Venice_Film_Festival|Venice Film Festival}} - but it creates broken links. Maybe it relied on some redirect on IMDb's side and they changed their format, I do not know. There is another template which uses a IMDb event code instead of a page name - e.g. {{ IMDb event|0000681|Venice Film Festival}}, which creates a correct link. See both at work:

Is there any chance a bot could fix those? I guess it would need to search the IMDb to get the event codes, which I do not know if it is allowed... (both by us and them). Thanks. - Nabla ( talk) 17:23, 30 December 2017 (UTC)

I've done a couple of hundred mannually. There's only about 50 left now. -- WOSlinker ( talk) 01:03, 31 December 2017 (UTC)
And I have done the remaining ten or so. Thank you. - Nabla ( talk) 23:08, 24 February 2018 (UTC)
In that case, N Not done as was done manually. --- TheSandDoctor ( talk) 21:11, 11 March 2018 (UTC)

Adding category to articles

Pleas add this category to the articles related to the Children's literature portal, because I need it in arabic wikipedia. Thank you. أبو هشام ( talk) 12:10, 4 March 2018 (UTC)

@ أبو هشام: Why do you need it in the Arabic Wikipedia? If you need it there, why ask on the English Wikipedia? Also, doesn't Category:Children's literature already contain the related articles? If I am misunderstanding, I apologize (also why I am asking for clarification) -- TheSandDoctor ( talk) 16:21, 9 March 2018 (UTC)
The problem solved, thanks. أبو هشام ( talk) 00:27, 10 March 2018 (UTC)

Olympic competitors: Project tagging

Can a bot be created to add the {{WikiProject Olympics}} to the talkpage of all the articles in the sub-cats of Category:Olympic competitors by country that don't already have their TP tagged? If the tag already exists, ignore it, and if it's not there already add it with stub class and low importance, unless the article is already tagged at a higher class than stub by another project. Now the 2018 Winter Olympics are over, it would be good to catch all those athletes who are missing the tag, along with countless others that have been created/updated too. Many thanks. Lugnuts Fire Walk with Me 13:02, 8 March 2018 (UTC)

For interest, Petscan results show 104k pages in this cat and its subpages. It's likely the majority are already tagged, but that's still a hell of a lot of pages to parse. Primefac ( talk) 13:18, 8 March 2018 (UTC)
Excluding those already tagged gives 13,872 results. Are Ancient Olympians within the scope of the project though? They also fall within the category. -- Paul_012 ( talk) 14:53, 8 March 2018 (UTC)
Duh, should have thought of that. And I suppose Ancient Olympians would be in the scope of WikiProject Olympics. Primefac ( talk) 15:04, 8 March 2018 (UTC)
When requests like this are made, we normally ask for an explicit list of categories and not a blanket "plus all subcategories" approach - that way lies mistagging. -- Redrose64 🌹 ( talk) 16:12, 8 March 2018 (UTC)
Eyeballing the Petscan for all subcats looks okay, though I didn't check the entire list. -- Izno ( talk) 16:23, 8 March 2018 (UTC)
Going deeper:
  1. Checking for sports inclusion: with 1 cat/row, removing matches to the regex [\r\n][^\r\n]*?\b(archers|artists|athletes|(bi|tri|pent)athletes|bobsledders|boxers|canoeists|competitors|cricketers|curlers|cyclists|divers|equestrians|fencers|footballers|golfers|gymnasts|jumpers|lugers|managers|medall?ist stubs|medall?ists|Members of|Olympians|pilots|players|practitioners|racers|rowers|sailors|shooters|skaters|skiers|snowboarders|swimmers|weightlifters|wrestlers)\b[^\r\n]* leaves only 173 Category:Olympic judoka of Japan-type cats, Category:Olympic pelotaris by country, Category:Olympic pelotaris of France, Category:Olympic pelotaris of Spain, and Category:1980 US Olympic ice hockey team. Since, as I just found out, Judoka is one who practices Judo, and pelotaris refers to players of various court-sports (the pelotaris cats only contain people too), everything looks legit here.
  2. Checking for Olympics inclusion: with 1 cat/row, removing matches to the regex [\r\n][^\r\n]*?\b(olympics?|olympians)\b[^\r\n]* leaves only Category:Canoeists of the Republic of Macedonia, which only contains Olympic athletes.
All 5048 cats look good to me. The last canoeists cat deserves a name change though.   ~  Tom.Reding ( talkdgaf)  19:45, 8 March 2018 (UTC)
I've done project tagging before. Looks like it'd be best to take class/importance from {{ Wikiproject Biography}}.   ~  Tom.Reding ( talkdgaf)  15:34, 8 March 2018 (UTC)
There are two problems with taking importance from {{ WikiProject Biography}}: one is that the importance rating varies between WikiProjects - a topic that is high-importance to one might be low importance to another; the other is that {{ WikiProject Biography}} doesn't have importance ratings. Taking the class rating from {{ WikiProject Biography}} should be fine though. -- Redrose64 🌹 ( talk) 16:32, 8 March 2018 (UTC)
Good point. By 'best' I only meant that it seems to be the most prevalent WP banner in the lot.   ~  Tom.Reding ( talkdgaf)  16:39, 8 March 2018 (UTC)
I should qualify that. {{ WikiProject Biography}} doesn't have general importance ratings, but it does have workgroup-specific importance ratings, and these are described as priority ratings. For example, when |sports-work-group=yes is set, then |sports-priority= is recognised; but somebody who is |sports-work-group=yes|sports-priority=low for {{ WikiProject Biography}} might rate |importance=mid for {{ WikiProject Olympics}}, see for example Talk:Christopher Dean (don't forget to [show] the "WikiProject Biography / Sports and Games" row). So I still think that the importance shouldn't be copied. -- Redrose64 🌹 ( talk) 10:39, 9 March 2018 (UTC)
Thanks for everyone's input - is this likely to happen? Thanks again. Lugnuts Fire Walk with Me 09:21, 9 March 2018 (UTC)
BRFA submitted.   ~  Tom.Reding ( talkdgaf)  15:45, 9 March 2018 (UTC)
Thanks Tom! Lugnuts Fire Walk with Me 14:22, 11 March 2018 (UTC)
  Done! 13,068 pages tagged, with 0 remaining. 590 are missing {{ WP Bio}} so were excluded from the run, but I'll help tag them manually.   ~  Tom.Reding ( talkdgaf)  14:21, 17 March 2018 (UTC)
Thanks Tom! Lugnuts Fire Walk with Me 14:27, 17 March 2018 (UTC)

Xulbot

Not an issue for Bot requests. Referred elsewhere.

ShareMan 15 ( talk · contribs) 17:43, 25 March 2018 (UTC)

This is not the place to request bot approval. WP:BRFA is the appropriate venue. —  JJMC89( T· C) 18:18, 25 March 2018 (UTC)
@ JJMC89: Thank's guy for the information. ShareMan 15 ( talk · contribs) 18:24, 25 March 2018 (UTC)

Special character de-corrupter

Very often, because of encoding issues, you have situations like é → é.

This is often due to copy-pasting, or bot insertions. It would be nice if a bot could find all corrupted equivalent of all special Latin characters (possibly others too), and then do a de-corruption pass e.g. [3]/ [4].

This might be best as a manual AWB run though.. Headbomb { t · c · p · b} 13:41, 1 December 2017 (UTC)

Related problems with file names on Commons: Rename files with wonky Unicode encoding. — Dispenser 06:16, 2 December 2017 (UTC)
@ Dispenser: could you adapt your script for enwiki? Headbomb { t · c · p · b} 02:46, 26 January 2018 (UTC)
@ Headbomb: Well I had to write a dump parser. Wasted a few hours in writing a word frequency collector. Ultimately regex on the dump was the fastest (4 hour runtime). It only does UTF-8 → mojibake and we need to figure out which of the 4,018 matches across 2,166 articles actually need to be fixed. I've done some already: [5] [6]Dispenser 03:21, 29 January 2018 (UTC)
Wikiget can return a regex dump search in about 30 seconds. Only limit it maxes out at 10,000 hits (limited by Elasticsearch).
./wikiget -a "insource:/<regexcommand>/"-- Green C 05:21, 29 January 2018 (UTC)
Supposedly our Elasticsearch times out easily such that [0-9] needs to be split to properly work: [0-4], [5-9]. — Dispenser 11:45, 29 January 2018 (UTC)
See T106685, which has been marked as "Resolved", to the dismay of those of us who want to search using regexes. – Jonesey95 ( talk) 14:10, 29 January 2018 (UTC)
Someone should setup a dedicated instance just for searching with no limitations. Cirrus dumps + setup info. -- Green C 16:10, 29 January 2018 (UTC)
I'm tempted to create a web version of AWB's Database Scanner since I find it a pain to download a new dump, find a way to update AWB, take 15 minutes to decompress the dump, then try and fail to get my regexp working. Is there interest in building something better? — Dispenser 01:13, 31 January 2018 (UTC)
There would be interest but the disk space.. I wrote a fast and simple program for regex'ing XML dumps. -- Green C 02:36, 31 January 2018 (UTC)
I have 1.2TB of compressed monthly dumps for the top ten wikis going back to September 2015. For enwiki, I have early 2008, 2010, 2012, and 2014 dumps. I also have a spare 120 GB end-of-write-life SSD which could be useful in a high throughput read only mode. But this would be running on my home server/work machine, so I'd be worried about CPU usage and would have to figure out a way of limiting abuse. — Dispenser 04:52, 31 January 2018 (UTC)

Reduce all caps title to title case: BIOGRAPHICAL INDEX OF FORMER FELLOWS ...

There are over 1200 cases of "title=BIOGRAPHICAL INDEX OF FORMER FELLOWS OF THE ROYAL SOCIETY OF EDINBURGH 1783 – 2002" that should be reduced to "title=Biographical Index of Former Fellows of the Royal Society of Edinburgh 1783 – 2002". Chris the speller  yack 13:35, 6 April 2018 (UTC)

"1783–2002" would be even better, per MOS. – Jonesey95 ( talk) 15:57, 6 April 2018 (UTC)
I don't usually change spacing in quoted titles, but, after checking, the RSE's own web site shows it without the spaces, so yes, that would be even better. Chris the speller  yack 16:27, 6 April 2018 (UTC)
BRFA filed. Primefac ( talk) 16:51, 6 April 2018 (UTC)

Removing unnecessary piping

There are many instances where a link is piped to another link, but both parts of the link actually target the same article. For instance [[Chicago, Illinois|Chicago]] (where the piping simply redirects back to the original link) or [[Lakewood Amphitheatre|Aaron's Amphitheatre at Lakewood]] (where both parts of the link are redirects to the same destination), or [[MidFlorida Credit Union Amphitheatre|Live Nation Amphitheatre]] (where the visible part of the link is a redirect to the piped part. These last two types occur particularly often with sports teams, which change their name with a change of hometown or sponsor, and venues, which change as they sell naming rights, and newspapers, as they merge. Normally a redirect will be set up to point the old name to the new one, but many well-meaning editors will nonetheless pipe the old name to the new one, thinking they're doing good (and then often the piping is not updated when the name changes yet again, so even the trivial efficiency benefit of bypassing a redirect is lost). This sort of piping has many failings (as described at WP:NOTBROKEN and WP:NOPIPE). Would it be possible to set up a bot that would detect and fix these sorts of unnecessary piping? Colonies Chris ( talk) 20:00, 30 March 2018 (UTC)

Sounds like it would be largely WP:COSMETICBOT, and a bot is not well equipped to decide when the difference in tooltip might be significant in most cases. Anomie 21:12, 30 March 2018 (UTC)
I suspect that CC may be trying to circumvent this decision. -- Redrose64 🌹 ( talk) 21:37, 30 March 2018 (UTC)
This has absolutely nothing to do with states, state abbreviations or any change in the appearance of links to the reader. I suggest you reread my proposal and then withdraw this unfounded accusation. Colonies Chris ( talk) 21:50, 30 March 2018 (UTC)
I'm not sure what you mean about tooltips. And it's not just cosmetic; WP:NOTBROKEN explains how using redirects instead of piping directly benefits the encyclopaedia. Colonies Chris ( talk) 21:50, 30 March 2018 (UTC)
These changes look controversial. Please provide consensus for your requested bot (this page is not the place to generate that consensus). -- Izno ( talk) 23:03, 30 March 2018 (UTC)
Please clarify in what way you think they are controversial. And if it's just because of Redrose64's disgraceful accusation, you might wish to read the decision he's linked to, and the proposal I've made. and you'll see that they are in no way connected. I just wish Redrose64 had taken the trouble to read this bot proposal properly before interfering. Colonies Chris ( talk) 23:29, 30 March 2018 (UTC)
BAG note I agree that consensus for these changes need to be shown first. Any coder willing to take this task could very well waste their time should they agree to code this without proper consensus to back it up. It's very possible that consensus for such a task, or something close to it such as making these changes part of WP:GENFIXES, exist, but short of an RFC on the issue this cannot go to trial. Headbomb { t · c · p · b} 00:00, 31 March 2018 (UTC)
My purpose in coming here was to gauge whether there is any consensus and willingness to carry through such changes on a large scale. That's why I'm here. I think this would be a worthwhile improvement, and I hope others agree. Why accuse me of failing to show consensus when that's the very purpose for my coming here? Where else would I go? Colonies Chris ( talk) 09:48, 31 March 2018 (UTC)
See WP:BOTREQUIRE, item 4. Any bot is allowed to perform tasks only when those tasks have consensus, which is developed in an appropriate discussion forum where that sort of task is discussed. – Jonesey95 ( talk) 13:59, 31 March 2018 (UTC)
In this case, WP:VPPRO would probably be appropriate. -- Izno ( talk) 14:11, 31 March 2018 (UTC)
Much Ado auditions - Grads — Preceding unsigned comment added by Colonies Chris ( talkcontribs)
The recent thread on your behavior regarding redirect links and city names indicates you are not the right person to make the call on controversiality of what looks to be a related task, however good faith or desirable you think it might be. RR64, regardless of the text in his message here, was correct to point to the ANI thread. If you believe there is consensus for your proposed task, show it, rather than assuming it. -- Izno ( talk) 00:11, 31 March 2018 (UTC)
I'd go further and say that the community has, for years, opposed making these sort of cosmetic edits en masse as pointless and disruptive. ~ Amory ( utc) 00:18, 31 March 2018 (UTC)
These are not cosmetic changes. None of them would affect the reader's view at all. This is behind-the-scenes stuff, designed to improve the overall usability of the encyclopedia by eliminating some of the problems listed at WP:NOTBROKEN. Colonies Chris ( talk) 09:48, 31 March 2018 (UTC)
Exactly my point. Cosmetic to editors, invisible to readers. ~ Amory ( utc) 13:19, 31 March 2018 (UTC)
So, to summarise: I came here with an idea, based on 12 years' gnoming experience, for a way to improve the encyclopaedia. I expected some scepticism, requests for clarification, discussion of possible drawbacks and how to avoid them, and hopefully a plan of action emerging from the discussion. What I got, however, was unrelenting hostility. A false accusation, from someone who hasn't the decency to come back and withdraw it; an objection from someone who couldn't be bothered to explain it when asked; being told that even though the accusation was untrue, I was nonetheless not to be trusted; a rejection from someone who evidently hadn't bothered to read the links I provided. In short, I have been told: it's a bad idea; I'm a bad person; and I shouldn't have come here at all. In the midst of all this there was exactly one helpful suggestion (thank you, Izno). You guys really need to change the poisonously negative culture you have here. I certainly won't be back. Colonies Chris ( talk) 08:26, 2 April 2018 (UTC)
it's a bad idea; I'm a bad person; and I shouldn't have come here at all Decidedly not. Your takeaway from this discussion should be that you should seek consensus for the task--because regardless of any of the other words said here, the task may still be valuable but it doesn't have an obvious consensus. It is normal for people to come here with not-obviously uncontroversial bot requests; when we receive such requests, we ask for consensus. This does two things: a) makes it very obvious to anyone inspecting the bot while it is running that they should not contest the edits without a similar consensus, and b) stops the task from getting to the WP:BRFA process, where the WP:BAG would request the same (because, as one BAGger above commented, this task does not look uncontroversial). -- Izno ( talk) 12:34, 2 April 2018 (UTC)
Eleven words, and you blow it up out of all proportion. -- Redrose64 🌹 ( talk) 20:08, 2 April 2018 (UTC)
@ Colonies Chris: More than just Izno provided constructive feedback. You were told by multiple editors that that idea doesn't have obvious consensus behind it. It's possible that it does, it's possible that something like it, but not exactly-as proposed has consensus (e.g. this might be a good idea for a subset of all such redirects, but a bad idea in other cases), and it's possible that it has nowhere near consensus. Go to WP:VPR, start an WP:RFC, and if there's consensus for something specific, we can move to a bot task. Headbomb { t · c · p · b} 19:06, 4 April 2018 (UTC)

I support this task which will reduce WP:OVERLINKING. WP:AWB is a popular semi-automated tool that can help in doing this task. Many editors use AWB for similar tasks. -- Magioladitis ( talk)

Further anatomy infobox series help

I have another request for a bot to help tighten up our {{ Infobox anatomy}} series. Ping to Nihlus who helped out last time.

Request is to:

  1. In all articles that use {{ Infobox anatomy}} and all subtemplates* remove the empty |MapCaption=, and |ImageMap= (which have been integrated into the "image" parameter)
  2. In all articles that use {{ Infobox muscle}} remove deprecated parameters |NerveRoot=, |PhysicalExam=
  3. In all articles that use {{ Infobox anatomy}}, {{ Infobox brain}}, {{ Infobox neuron}} and {{ Infobox nerve}} remove the parameters: |BrainInfoType=, |BrainInfoNumber=, |NeuroLexID=, |NeuroLex= (now moved to Wikidata)
  4. In all articles that use {{ Infobox anatomy}} and all subtemplates* remove from pages the field |Code= which I have gone through and checked, and duplicated other fields.

I would be very grateful for this, it will significantly help tidy up both our articles and the infoboxes.-- Tom (LT) ( talk) 00:32, 3 March 2018 (UTC)

@ Tom (LT): I'll look into this. When it comes to point #1, do you just want |MapCaption= and |ImageMap= removed or their values integrated elsewhere? If so, where? (Would MapCaption just have its value put in |Caption=?) -- TheSandDoctor ( talk) 17:07, 9 March 2018 (UTC)
This is an extension of something my bot already did. I just need to alter the settings and run it for this. I can run it some time this weekend. Nihlus 21:11, 9 March 2018 (UTC)
@ TheSandDoctor I have already replaced ImageMap with Image in all articles that used the parameter. Now there's just stacks of empty parameters sitting around (which will display an error message when I finally remove it from the infobox in totalis). -- Tom (LT) ( talk) 21:57, 9 March 2018 (UTC)
Thanks very much Nihlus. To let you know, I notice I have made a spelling mistake above and have now corrected it (|NerveRoom= -> |NerveRoot=)). Like last time, once the bot runs I'll be able to remove the parameters, then I will manually go through all articles that have parameter problems and fix them. -- Tom (LT) ( talk) 21:57, 9 March 2018 (UTC)
@ Tom (LT): @ Nihlus: Roger. Had started writing it, but hadn't finished & was good practice anyways . Surprised that this page was not on my watchlist, have solved that problem now. --All the best, TheSandDoctor ( talk) 00:11, 10 March 2018 (UTC)
@ Tom (LT): Can you clarify point 4? Should |Code= be removed from all of those templates or do you have a separate list of affected pages somewhere else? Nihlus 11:25, 11 March 2018 (UTC)
@ Nihlus the bot will need to run through all pages that use those templates and remove the blank parameters - See eg [7] - I removed the "ImageMap" and "MapCaption" parameters which are blank. Point 4 is that the bot will also need to run through and remove any blank "Code" parameters, too (eg as I have done here [8]). Once that's done I'll remove it from the templates -- Tom (LT) ( talk) 19:17, 11 March 2018 (UTC)
@ Nihlus how are you going?? very eager to finish up my editing of this infobox series, hoping you might have time this weekend? -- Tom (LT) ( talk) 23:14, 16 March 2018 (UTC)
Hmm... It seems that currently Nihlus is bit busy with something. Although I'm not so experienced like Nihlus, I have basic knowledge. So if there are no response from him, I'm going to try this task at next weekend (April 7). -- Was a bee ( talk) 17:43, 1 April 2018 (UTC)
Thanks Was a bee, that would be wonderful. Happy also to check some edits on April 7th before you do a full run.-- Tom (LT) ( talk) 21:42, 1 April 2018 (UTC)
Tom Here is 20 test edits [9]. By the way, I interpreted the request simply as follows.
  1. Removing these 9 deprecated parameters: |MapCaption=, |ImageMap=, |NerveRoot=, |PhysicalExam=|BrainInfoType=, |BrainInfoNumber=, |NeuroLexID=, |NeuroLex= and |Code=
  2. From {{ Infobox anatomy}} and its all subtemplate.
So, for example, in this edit [10], I removed |NerveRoot= and |PhysicalExam= from {{ Infobox anatomy}} (not from {{ Infobox muscle}}). Is my interpretation right? -- Was a bee ( talk) 21:17, 2 April 2018 (UTC)
@ Was a bee my request was clearly phrased in a complicated way given how you've simply and accurately rephrased it :). And have had a look at your edits - have checked through and they're great! Can't wait, and thank you again!! -- Tom (LT) ( talk) 11:42, 3 April 2018 (UTC)

Thanks for this Was a bee. I consider this task Y Done. -- Tom (LT) ( talk) 04:59, 7 April 2018 (UTC)

Mass editing {{DEFAULTSORT}} values

Related to but more generic than #Fixing sort keys for biographies of Thai people above, I'm looking for a bot to make mass edits to {{DEFAULTSORT}} keys (or add them if they don't exist) for a pre-defined list of articles, i.e. Special:PermaLink/829542720. These are articles that may have previously been tagged with incorrect defaultsort keys. Optimally, the bot should also skip the edits if changes are made only in capitalisation. Edits which result in no changes would of course be skipped. -- Paul_012 ( talk) 08:23, 9 March 2018 (UTC)

@ Paul 012: Is it just for those articles in that version of the sandbox? Is it just for Thai people? How would these be found exactly? -- TheSandDoctor ( talk) 00:46, 10 March 2018 (UTC)
@ Paul 012: I have a functional proof of concept now (for changing the defaultsort value), just need the confirmation on details above & will file BFRA. -- TheSandDoctor ( talk) 03:50, 10 March 2018 (UTC)
This would be a one-time edit for just the 215 listed articles (which are not part of the Thai name sort task above.) -- Paul_012 ( talk) 09:59, 10 March 2018 (UTC)

As said, now explicitly, at Wikipedia talk:Categorization of people#Thai names, I'm opposing this bot operation. Since only two people commented there thus far (the OP and me), with a 50%/50% division of opinions, this needs more time for discussion, with let's hope a bit more input from other editors, before firing up a bot. The same goes for the #Fixing sort keys for biographies of Thai people BotReq proposal above, although that one might be more in line with current guidance (can't really get my head around it yet). -- Francis Schonken ( talk) 17:00, 10 March 2018 (UTC)

Still something else, bot-assisted insertion of a {{ DEFAULTSORT}} value that is exactly equal to the article title of the page where it is inserted would be a WP:COSMETICBOT infringement, as far as I understand the applicable policy. -- Francis Schonken ( talk) 17:13, 10 March 2018 (UTC)

Thanks for the comments, Francis Schonken. This request (for the 215 articles) is in accordance with the current guidelines. All the listed articles here are multi-word names which do not contain surnames, which is why comma-separated sort keys would be incorrect. This bot task is to rectify those that have been mistakenly added. Regarding your concerns of the difference between Luang Pu Sodh Candasaro and Luang Pu Thuat, this is because all the other Luang Pus are titles preceding the person's name, but Luang Pu Thuat is a specific name in and of itself—the subject's name wasn't Thuat. (I think this is quite like how Lady Gaga isn't sorted Gaga, Lady because she isn't a lady named Gaga.) -- Paul_012 ( talk) 17:29, 10 March 2018 (UTC)
Sorry, no, the request goes beyond what is mandated by the applicable guidance afaics, so would need to find consensus elsewhere first. -- Francis Schonken ( talk) 17:33, 10 March 2018 (UTC)
Specifically, the guidance only mentions per-category sort keys for Thai categories (clearly assuming that the {{ DEFAULTSORT}} is defined as "surname, given name(s)"), and does not mandate to set the {{ DEFAULTSORT}} to "given name(s) surname", which should not normally be done for any article with an actual title in that format, and is thus not mandated by any policy, and is an infringement on WP:COSMETICBOT if done by bot. -- Francis Schonken ( talk) 17:40, 10 March 2018 (UTC)
You might want to re-read my above comment. None of the articles titles here contain a surname. Most of them are royalty and nobility, and are covered by WP:PEERS. As for the WP:COSMETICBOT concerns, one could also argue that manually inputting any DEFAULTSORT would be unnecessary, as it results in no changes in the sorting. But the point here is to prevent unknowing editors from inserting incorrect values. I don't think this violates the spirit of WP:COSMETICBOT. -- Paul_012 ( talk) 17:45, 10 March 2018 (UTC)

I originally thought manually going through all those articles would be an unnecessary waste of time and effort. Seeing the difficulty I'm having in explaining the task, however, it has become clear that further discussions would actually waste more time and effort on everybody's part than just manually performing the edits. I have gone ahead and done so. Thanks to TheSandDoctor for the assistance, but this is now moot. Marking as N Not done. -- Paul_012 ( talk) 10:26, 11 March 2018 (UTC)

@ Paul 012: So is it N Not done for both this and the above? Should I move the template back to my userspace & tag U1 then? (Sorry for the delay in my response, this was sent at around 3:30am & the previous one at around 1:10am.) -- TheSandDoctor ( talk) 13:22, 11 March 2018 (UTC)
@ TheSandDoctor Just this one. The above is still pending further discussion, so please keep the template for now. -- Paul_012 ( talk) 13:26, 11 March 2018 (UTC)

Bot to convert New York Times abstract URLs to archive PDF URLs

There are a lot of New York Times URLs that begin with http[s]://query.nytimes.com/gst/abstract.html?res= or http[s]://select.nytimes.com/gst/abstract.html?res=. However, all this does is take people to the abstract page. If these Wikipedia readers aren't NYT members, they encounter a paywall, and if they are members, they are allowed to select a PDF/TimesMachine version to continue reading the article. Either way, they have to click at least one more time once they reach the abstract page.

Would it be practical to convert these to http://query.nytimes.com/mem/archive/pdf?res= URLs? These PDF versions can be seen by everyone, even non-members, and is much easier to verify. The hexadecimal string after the equals sign will remain the same before and after, but it does have to be an HTTP URL for these NY Times PDF links to work. epicgenius ( talk) 22:01, 4 February 2018 (UTC)

So, just to double-check this, you're doing a find/replace of gst/abstract and converting to mem/archive, as well as changing any select. into query.
And this will work for all the articles? Primefac ( talk) 12:50, 5 February 2018 (UTC)
Actually, it's gst/abstract.html to mem/archive/pdf, select. into query., and all HTTPS to HTTP. Yes, this will work for all articles. However, KolbertBot is converting http://nytimes.com URLs to https://nytimes.com, so I will ping Jon Kolbert for feedback.
I am requesting that HTTPS be converted specifically to HTTP, because http://query.nytimes.com/mem/archive/pdf?res=9801E7DF1330E333A25755C0A96E9C94669ED7CF&legacy=true (for instance) will work, but not https://query.nytimes.com/mem/archive/pdf?res=9801E7DF1330E333A25755C0A96E9C94669ED7CF&legacy=true, which displays an empty frame. epicgenius ( talk) 18:42, 5 February 2018 (UTC)
@ Epicgenius: A few weeks ago there were reported issues with query.nytimes.com links, I had fixed issues with links to query.nytimes.com/mem/archive/pdf?res= in response. KolbertBot doesn't act on select.nytimes.com links. Is the desired outcome to have select.nytimes.com/gst/abstract.html?res= and query.nytimes.com/gst/abstract.html?res= changed to query.nytimes.com/mem/archive/pdf?res=? That shouldn't be too hard to do with KolbertBot, I can create a new bot task to do this job. Jon Kolbert ( talk) 00:46, 6 February 2018 (UTC)
@ Jon Kolbert: Yes, that is what I am trying to do. epicgenius ( talk) 00:48, 6 February 2018 (UTC)
This strikes me as a "needs consensus" task given it goes from https -> http. Additionally, this strikes me as something which may be a temporary workaround. Whether TNYT allows this deliberately or through some failure of configuration is obviously unknown--but I would guess that if they notice persons jumping straight to their PDF versions from external to their website, they'll cut off the access (which leaves us in a definitely worse spot than current). -- Izno ( talk) 18:22, 23 February 2018 (UTC)

Please.     — The Transhumanist    10:20, 5 March 2018 (UTC)

Fixing sort keys for biographies of Thai people

According to WP:NAMESORT (and expanded upon at WP:MOSTHAI), biographical articles about Thai people should be sorted like this:

{{DEFAULTSORT:Surname, Forename}}
[[Category:International people]]
[[Category:Thai people|Forename Surname]]

However, this has very inconsistently been adhered to, with some articles specifying the Thai order in the DEFAULTSORT and some not following the Thai order at all.

Would it be plausible for a bot to help fix this? A possible process I have in mind is something along the lines of:

  1. Manually compile a list of all Thai people categories.
  2. Manually compile a list of all biographies of Thai people.
  3. Manually list preferred DEFAULTSORT and Thai sort keys for all of them.
  4. Have a bot go through all the articles, adding and/or replacing the DEFAULTSORT and sort keys according to the listed values.

And, for the long term:

  1. The bot, during the aforementioend run, also adds a {{Thai name sort}} template, which does nothing but notes the correct Thai sort key for future reference.
  2. During periodical runs, a bot looks up the sort key in the {{Thai name sort}} template and adds it to any Thai people categories (from the previous list, which would have to be manually updated) which have been later added and are missing the sort key.

I realise this is pretty labour-intensive, but a more automated process would likely not be able to identify names which don't follow the Forename Surname format. I'd like to know that a bot was available for the task before attempting to review all the names. -- Paul_012 ( talk) 10:09, 9 February 2018 (UTC)

Paul 012, Category:Thai people states Note on sorting: Thailand people are usually called by the first name, even telephone books are sorted by the first name. This of course also applies to the subcategories.. Could you point to the relevant passages in the guides you mentioned?   ~  Tom.Reding ( talkdgaf)  17:44, 11 February 2018 (UTC)
Tom.Reding, sorry but I'm not sure you're reading the request correctly. It's asking to add sort keys so that Thai people categories will be sorted by first name. Anyway, the quotes are:

Thai names have only contained a family name since 1915 and the name follows the western pattern of "Forename Surname". However, people in Thailand are known and addressed by their forename. In categories mostly containing articles about Thai people, Thai names should be sorted as they are written with the forename first. Thaksin Shinawatra is sorted [[Category:Thai people|Thaksin Shinawatra]].

and

When categorizing biography articles, do not specify sort keys to sort by surname in Thai people categories. However, sorting by surname is still desirable for non-country-specific people categories, and this is done with the DEFAULTSORT magic word. A biography article for Given-name Surname should therefore be categorized like this:

{{DEFAULTSORT:Surname, Given-name}}
[[Category:International people]]
[[Category:Thai people|Given-name Surname]]

-- Paul_012 ( talk) 19:21, 11 February 2018 (UTC)
Paul 012, this should be doable, as long as all of the special given-name-first-sortkey cats are identified and appropriately not affected by {{ DEFAULTSORT}}. I'm not available to do this, unfortunately, but no reason someone else can't pick it up. In the meantime, you could compile the list of all such special cats, to do some of the legwork and to entice a potential bot op.   ~  Tom.Reding ( talkdgaf)  20:49, 16 February 2018 (UTC)
Hi there Paul_012, do you have an idea of what {{ Thai name sort}} would include/what it would look like? Would it be substituted? Would it have parameters? (How would it note the preferred format?)
As for compiling a list of all Thai people categories and biographies, I see two viable approaches to that part of the problem:
  1. A bot runs through Category:Thai people and just works off of that category
  2. A bot runs through Category:Thai people and compiles a list (easily writeable to a local text file; one article per line) and uses that to work off of, updating periodically (in this case, that part wouldn't even have to be part of a bot's regular function, I could theoretically make generating said list its own program and run periodically for simplicity's sake)
Once I have some more details (above), I would be happy to consider working on this program and already have a rough idea of how I would do it (shouldn't take that long once things are clarified). --All the best, TheSandDoctor ( talk) 16:40, 9 March 2018 (UTC)
I just saw the discussion here, I should clarify that I am happy to considering moving ahead with this one clarifications above are made and consensus is reached. I would consider this a fun little project, but will not move unless adequate consensus and discussion has taken place. -- TheSandDoctor ( talk) 16:42, 9 March 2018 (UTC)
Thanks a lot for offering to help, TheSandDoctor. If a bot becomes available then there should be no need to modify the guideline; I hope it can be settled soon.
I'm imagining the template as taking only one parameter, which is the desired sort key, with no visible output. (In most cases it would be identical to the article title, but there may be some exceptions.) So for the Abhisit Vejjajiva article, the desired outcome would be:
{{DEFAULTSORT:Vejjajiva, Abhisit}}
{{Thai name sort|Abhisit Vejjajiva}}
[[Category:Prime Ministers of Thailand|Abhisit Vejjajiva]]
etc.
[[Category:People educated at Eton College]]
etc.
The list of articles would be only needed for the initial run. I've already begun compiling it at Special:Permalink/829616472—It's still a work in progress, and will need to be double-checked. I'm expecting that subsequent periodical runs will identify the articles by looking up inclusions of {{ Thai name sort}}. This way, new articles can easily be picked up.
I was thinking that the list of categories would also need to be manually compiled in order to avoid false positives such as expatriates, whose names wouldn't be relevant to this task. But then again, expatriates are already excluded from the article list, so it shouldn't make any difference. Having the bot periodically go through Category:Thai people would require less maintenance, and would be preferable. -- Paul_012 ( talk) 19:44, 9 March 2018 (UTC)
Oh, wait. Automatically going down the category would include categories like Category:American people of Thai descent, so this approach wouldn't work. I'll see if I can make a list of the specific categories that should be browsed instead. -- Paul_012 ( talk) 20:04, 9 March 2018 (UTC)
@ Paul 012: I went and created a template in my userspace to have it ready to go (feel free to edit it, just please leave moving to me or others with the page mover user right as I don't want a redirect there).
Do you want {{ DEFAULTSORT}} modified (on pages) to also match "Given Surname"? Adding of the Thai name sort template should be easy as the bot could (theoretically) just take the page name and "plop" it in as the parameter. It won't always work right (ie House of Abhaiwongse), but should (most likely will) work the majority of the time and might see about creating a "blacklist" of titles, where if the page title is equal to X, then it will skip it). I assume that the sub-categories of categories within the list your sandbox are also meant to be included (recursive)? -- TheSandDoctor ( talk) 00:39, 10 March 2018 (UTC)

Okay, TheSandDoctor, here's a newer summary of the task:

  1. A one-time task performed on the articles listed at Special:Permalink/829616472* Special:Permalink/829756891 which, entails, for each article:
    1. Modify {{DEFAULTSORT}} to match that given in the table.
    2. Add {{Thai name sort}}, with the desired sort key** as the parameter.
    3. For each [[Category:...]] in the article, see if it is included in the list***. If it is, add the same sort key to the category, but don't replace existing values.
  2. A recurring task performed on articles which are tagged with {{Thai name sort}}:****
    1. For each [[Category:...]] in the article, see if it is included in the list***. If it is, copy the sort key listed in {{Thai name sort}}, and add it to the category, but don't replace existing values.
  • *As previously mentioned, please wait for a finalised version. House of Abhaiwongse and similar non-person-name articles won't be affected as they're already excluded from the table.
  • **I'll add it to the table—there may be some exceptions that don't exactly follow the article title.
  • ***This list should be automatically compiled by scanning through Category:Thai people, with the exception of Category:Thai diaspora‎ and its subcategories, plus Category:Orders, decorations, and medals of Thailand. Please disregard the list currently in my sandbox.
  • ****I not sure how often this should be run, but once a month would probably be plenty often enough.

-- Paul_012 ( talk) 10:44, 10 March 2018 (UTC)

I just realised that there should also be a fallback in case {{Thai name sort}} is called with missing/empty parameters. In such cases, the article title should be used as the sort key. -- Paul_012 ( talk) 11:29, 10 March 2018 (UTC)
@ Paul 012: Alright, let me know when you're ready. One last clarification: "with the exception of Category:Thai diaspora‎ and its subcategories, plus Category:Orders, decorations, and medals of Thailand" means to exclude Category:Orders, decorations, and medals of Thailand as well, correct? -- TheSandDoctor ( talk) 15:40, 10 March 2018 (UTC)
Oops, sorry. What I meant was to (1) Include Category:Thai people. (2) For all subcategories of Category:Thai people except Category:Thai diaspora, also include them and all their subcategories (recursive). (3) Include Category:Orders, decorations, and medals of Thailand and all its subcategories (recursive). -- Paul_012 ( talk) 16:12, 10 March 2018 (UTC)
@ Paul 012: Okay, thanks for the clarification. Sorry to be asking so many questions and to be somewhat anal retentive about this, just trying to make sure we are on the same page and that I know what you want etc (need detailed details to be able to make bot and to ensure it does what you want).
Let me know when you are ready. I have to head out for a bit, but when I get back I will see about continuing to work on the bot (& then file a BFRA if things are looking good). -- TheSandDoctor ( talk) 16:30, 10 March 2018 (UTC)
Final list is at Special:Permalink/829756891. Still pending further discussion to address Francis Schonken's concerns below. -- Paul_012 ( talk) 18:44, 10 March 2018 (UTC)

TheSandDoctor, Francis Schonken has requested that manual placement of the template be manually trialled on article pages first. Could you go ahead and move your sandbox version into the template space? Thanks. -- Paul_012 ( talk) 09:07, 11 March 2018 (UTC)

Well, err, no, that's not what I suggested (and I certainly didn't "request" anything). In the approach I suggested {{ Thai name sort}} (or a template with a different name) would be applied to *category* pages (i.e. Categories of Thai people where the collation should be according to actual article titles), not a template that would be inserted in mainspace. Anyhow, such templates—whether according to the original idea or according to my suggested scheme—should be experimented with, would have needed to have found consensus, and would have needed to be explained in the WP:SUR guidance (etc) before any sort of bot operation. This is not a page where to request manual operations, nor a page to find consensus about things that go beyond the mandate of current guidance and particular consensuses. -- Francis Schonken ( talk) 10:43, 11 March 2018 (UTC)
Sorry, I misread. Thanks for the clarification. The point about moving the sandbox template into template space is still valid anyway. I'll continue at Wikipedia talk:Categorization of people‎. -- Paul_012 ( talk) 10:57, 11 March 2018 (UTC)
@ Paul 012: Template moved to Template:Thai name sort.

Requests from Amirh123

Previous requests include: Wikipedia:Bot requests/Archive 75#make a translate bot; Wikipedia:Bot requests/Archive 75#please make bot for adding articles for footballdatabase.eu; Wikipedia:Bot requests/Archive 75#wwikia bot; Wikipedia:Bot requests/Archive 75#bot for upbayt articles; Wikipedia:Bot requests/Archive 75#Bot to update articles; Wikipedia:Bot requests/Archive 75#geoname bot; Wikipedia:Bot requests/Archive 75#catalogueoflife bot; Wikipedia:Bot requests/Archive 76#bot for creating new categorys (plus some that were deleted without being archived). -- Redrose64 🌹 ( talk) 23:01, 28 March 2018 (UTC)

rsssf

hi rsssf.com have many articles about soccer please make bot to added articles from rsssf.com — Preceding unsigned comment added by Amirh123 ( talkcontribs) 15:48, 8 March 2018 (UTC)

Hi there Amirh123, I think that WP:MASSCREATION might apply in this case. Also, due to copyright restrictions, Wikipedia could not just take content from other sites. If you have any questions, please feel free to let me know. -- TheSandDoctor ( talk) 16:24, 9 March 2018 (UTC)
hi rsssf.com is free content please make bot to adding articles for this site thanks — Preceding unsigned comment added by Amirh123 ( talkcontribs) 08:19, 11 March 2018 (UTC)
@ Amirh123:, please define "free" in this context ("free" to view, or open license/public domain?). Please also see Wikipedia:Copying text from other sources. It is seldom ever appropriate to directly copy content from sources as doing so (in most cases) would be a copyright violation and would be speedily deleted. -- TheSandDoctor ( talk) 13:29, 11 March 2018 (UTC)
hi copy write see this link — Preceding unsigned comment added by Amirh123 ( talkcontribs) 14:01, 13 March 2018 (UTC)
N Not done/ Declined Not a good task for a bot. @ Amirh123: Please see the last sentence in section 2 of the charter you sent, "The data made available on the WWW in the RSSSF Archive are subject to copyright". That means that the text is free to view but that it is still subject to copyright, meaning that it could not be copied directly to Wikipedia, therefore meaning that it is not a suitable task for a bot. Please also keep future responses to this here, rather than posting on my talk page, as it keeps discussions together/centralized. -- TheSandDoctor ( talk) 15:47, 13 March 2018 (UTC)

bot

hi some articles on english wiktionary add links to Wikipedia german But there are also english Wikipedia example Berndorf link the Wikipedia german but english wiktionary must link to English Wikipedia thanks — Preceding unsigned comment added by Amirh123 ( talkcontribs) 13:30, 18 March 2018 (UTC)

@ Amirh123: I have no idea what you are asking for. -- Redrose64 🌹 ( talk) 18:10, 18 March 2018 (UTC)
I think Amirh123 is saying that on the English Wiktionary, there are sometimes cross-wiki links to articles at the German Wikipedia. That they should be converted to point to the English Wikipedia version, if it exists. Amirh123 gives the example Berndorf which has a link to de:Berndorf instead of Berndorf. -- Green C 18:29, 18 March 2018 (UTC)
It looks like a lot of these may be the result of one editor whose talk page is full of complaints about poor quality work. -- Green C 18:37, 18 March 2018 (UTC)
ok Please edit these edits — Preceding unsigned comment added by Amirh123 ( talkcontribs) 06:39, 19 March 2018 (UTC)

years

hi please make bot to adding years articles automaticly example make 1432 in iran or 528 in india thanks — Preceding unsigned comment added by Amirh123 ( talkcontribs) 17:57, 24 March 2018 (UTC)

Would there be any content to these pages. I'm feeling that this would possibly by "Bots to create massive lists of stubs" which is on the list of Frequently Denied Bots Pi (Talk to me! ) 20:15, 24 March 2018 (UTC)

categorys

hi very articles don't described categorys example Selenophorus pedicularius described in 1829 but not any category — Preceding unsigned comment added by Amirh123 ( talkcontribs) 14:55, 26 March 2018 (UTC)

Declined Not a good task for a bot. Adding categories to articles requires (at least some level of) individual evaluation (unless there is consensus for a list of articles that X category should be added; your request appears to deal with all uncategorized articles in current wording). Bots aren't suited for this. -- TheSandDoctor Talk 15:11, 26 March 2018 (UTC)
hi I want see all years in one category example Category:Insects by century of formal description but I want see all years in one category example Category:Video games by year — Preceding unsigned comment added by Amirh123 ( talkcontribs) 08:08, 27 March 2018 (UTC)
@ Amirh123: I am not 100% sure what you mean, but from what I gather it is still not feasible by a bot as it could require some level of individual evaluation (unless the years are all in a specific infobox parameter?) It is probably a better suited job to do manually than with a bot. -- TheSandDoctor Talk 16:07, 27 March 2018 (UTC)

catalogueoflife.org

hi catalogueoflife.org have very articles about species animals planet and more this articles not in Wikipedia please make bot to adding this articles to Wikipedia — Preceding unsigned comment added by Amirh123 ( talkcontribs) 12:25, 28 March 2018 (UTC)

@ Amirh123: Declined Not a good task for a bot. (bots creating articles en-masse is generally not approved) and there would also be copyright concerns (as there was the last time you requested a bot do this for a different website). -- TheSandDoctor Talk 13:23, 28 March 2018 (UTC)

but in ceb.Wikipedia.org and sv.Wikipedia.org use catalogueoflife.org and adding very articles — Preceding unsigned comment added by Amirh123 ( talkcontribs) 14:37, 28 March 2018 (UTC)

This is not ceb.wikipedia or sv.wikipedia. We have our own standards and expections. Headbomb { t · c · p · b} 17:49, 28 March 2018 (UTC)
@ Amirh123: I refer you to the responses left at Wikipedia:Bot requests/Archive 75#please make bot for adding articles for footballdatabase.eu by myself and others. -- Redrose64 🌹 ( talk) 22:43, 28 March 2018 (UTC)

Removal of repetitive internal links in Wikipedia article

Does there exist bots able to detect and remove repetitive internal links in a Wikipedia article? Thanks! -- It's gonna be awesome!Talk♬ 03:48, 3 April 2018 (UTC)

Declined Not a good task for a bot. This sort of thing is very context dependent. It requires a human editor to review. ~ Rob13 Talk 17:58, 3 April 2018 (UTC)
Maybe you're right but I feel it time-consuming to tackle {{ overlinked}} tagged with a long but complete article. -- It's gonna be awesome!Talk♬ 21:26, 3 April 2018 (UTC)
AWB can help with this, to a degree. A rule that changes "\[\[([-\w ,\(\)–]+)\]\]([^\n]+)\[\[\1\]\]" to "[[$1]]$2$1" will unlink the last duplicate link in a paragraph. For example, it will change "We needed to draw yellow bananas and yellow canaries. But there were no yellow crayons." to "We needed to draw yellow bananas and yellow canaries. But there were no yellow crayons." If you use two or three such rules, it will also delink " yellow canaries". The rule will not remove a link in the seventh section of an article that also exists in the second section, but that would often be unhelpful; a reader may look at the lead section and then jump to the seventh section, not seeing the link in the second section. Use this rule with care. Chris the speller  yack 14:34, 6 April 2018 (UTC)
Thanks! I will try! -- It's gonna be awesome!Talk♬ 14:44, 7 April 2018 (UTC)

IW-ref template

I would appreciate if someone could please remove {{ Iw-ref}} from all article pages and add {{ translated page}}, with the same parameters, to the corresponding talk page. The Iw-ref template was deprecated quite a long time ago, but still remains on a lot of pages. Please note that there are a couple redirects to the template, {{ Translation/Ref}} and {{ Translation/ref}}. This should be a one time task, since once done, the old template can be deleted.

Thanks in advance, Oiyarbepsy ( talk) 05:47, 20 April 2018 (UTC)

@ Oiyarbepsy: - Doesn't look like there are any transclusions of that template, or it's redirects in mainspace. Possible that someone's gotten to it already. SQL Query me! 21:24, 23 April 2018 (UTC)
 Done courtesy of Plastikspork and AnomieBOT. —  JJMC89( T· C) 01:22, 24 April 2018 (UTC)
Yes, I made it substitute cleanly and added the {{ subst only}} to have AnomieBot replace it. After that, AnomieBot is already programmed to move it to the talk page. Thanks! Plastikspork ―Œ(talk) 02:36, 24 April 2018 (UTC)

Remove redundant links from See Also sections

A huge number of articles have See Also links that are already linked from the bodies of the articles. Per MOS:NOTSEEALSO, these redundant links should be removed. If there are no non-redundant links in a See Also section, the entire See Also section should be removed. Kaldari ( talk) 21:51, 21 March 2018 (UTC)

This strikes me as something that could easily find objection if someone runs a bot to try to make it happen, and "as a general rule" doesn't sound very convincing for a bot task. If someone decides to take this on, they should be prepared for pushback. Anomie 22:56, 21 March 2018 (UTC)

Vital Articles Bot

Hello, is it possible to create or edit a current bot to help with Level 5 vital articles? My idea is that it will take all articles of top importance in a Wikiproject, check to see if they are already in the list, and if not, tag them as Level 5 vital articles and add them to the list. There are already bots controlling lists so this may be feasible. Please reply! — Preceding unsigned comment added by SuperTurboChampionshipEdition ( talkcontribs) 16:22, 10 April 2018 (UTC)

I think it is Declined Not a good task for a bot.. Vital articles shouldn't be simply a result of wikiproject results. Many articles are "top-importance" for that specific project, but overally for Wikipedia it may be low importance. -- Edgars2007 ( talk/ contribs) 16:01, 11 April 2018 (UTC)

Add a textual remark to all pages of a private wiki

Hello, I'm owner of Westmärker Wiki, a small Mediawiki which I copied from a predecessor.

I want to add "Dieser Artikel wurde am 21.04 2018 aus dem Hombruch-Wiki kopiert." to the bottom of each article and media page.

Is there any bot I can use and possibly a person who can run it there? Wschroedter ( talk) 11:24, 29 April 2018 (UTC)

Semi -automatic change of categories of articles in private wiki

Hello, I'm the owner of Westmärker Wiki (a small Mediawiki) and I want to change certain categories.

I'm looking for a tool by which I can change several articles from a list or all articles of a category.

Any ideas or hints? Wschroedter ( talk) 11:29, 29 April 2018 (UTC)

@ Wschroedter: You should try WP:AWB. -- Izno ( talk) 13:53, 29 April 2018 (UTC)
Looks good, I'll try it out. Thanks a lot, Izno! Wschroedter ( talk) 17:08, 30 April 2018 (UTC)

AndBot

Could someone (@ Tokenzero:?) create this bot

You can tell if something is a journal or magazine by looking for the 'journal'/'magazine' string in the categories of the article, or the presence of {{ infobox journal}}/{{ infobox magazine}} on the page. Headbomb { t · c · p · b} 23:56, 26 March 2018 (UTC)

@ Headbomb: The IFF is for both ways ('&' -> 'and', 'and' -> '&'), right? -- TheSandDoctor Talk 16:09, 27 March 2018 (UTC)
I don't think it needs to be both ways, but I suppose it's better to be safe and restrict this to publications, yes. Some further digging would be required before knowing if this would be done across the board. Headbomb { t · c · p · b} 16:31, 27 March 2018 (UTC)
Oh, one thing, this should be spaced ampersands only (i.e. 'A&A' should be left alone and not converted to say, AandA). Headbomb { t · c · p · b} 16:33, 27 March 2018 (UTC)

@ Headbomb: Coding... Basically done. Should they be categorized as {{ R from modification}}? There doesn't seem to be anything more relevant (except {{ R from railroad name with ampersand}}, curiously); examples I checked (both journal and general) are somehow almost always without any rcat. The first run on infobox-journals would create ~1500 redirects. Do you want to have it run once or eg. monthly? Minor remark: there is a chance it'll create a dumb redirect when the title is in another language, say Ora and labora, but I can't find any actual example and I don't think it's a significant problem anyway. Also, some would remove a serial comma when replacing ', and' with an ampersand, but some style guides advocate keeping the comma, so I would just keep it. Tokenzero ( talk) 20:32, 6 May 2018 (UTC)

Yeah, might as well got for {{ R from modification}} since we don't have something more specific like {{ R from and}} or {{ R to and}}. The bot could run Daily/Weekly/Monthly, the exact time period doesn't really matter, but monthly seems too long. I'd go weekly at the longest. Headbomb { t · c · p · b} 17:20, 7 May 2018 (UTC)
BRFA filed Tokenzero ( talk) 18:28, 9 May 2018 (UTC)

Update links to www.fiu.edu/~mirandas and www2.fiu.edu/~mirandas

Hi,

The links to http://www.fiu.edu/~mirandas ( 1453 links) and http://www2.fiu.edu/~mirandas ( 896 links) do not work anymore. The content is now available on http://webdept.fiu.edu/~mirandas. Could a bot update those links?

Most pages should work after updating the domain. However, the alphabetical index ( http://webdept.fiu.edu/~mirandas/bios-a.htm, http://webdept.fiu.edu/~mirandas/bios-b.htm, ...) only contain empty pages. Fixing this is more complicated, but can be partially automated by matching article names and/or old link anchors with entries in http://webdept.fiu.edu/~mirandas/494-2017-a-z-all.htm. I fixed them on frwiki, so you can also try to take them from the French page when there is one.

Orlodrim ( talk) 21:51, 21 March 2018 (UTC)

The easy part is the non-bios entries, which can be solved via this query and this query. SQL Query me! 02:58, 22 March 2018 (UTC)

Who Was Who link formatting error

We seem to have a large (three-figures, at least) number of external links to Who Was Who, formatted with an extraneous comma at the end of the URL, like the one I fixed in this edit. Can someone fix them all, please?

Better still would be to apply the {{ Who's Who}} template, like this, but I appreciate that may not be so straightforward. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 16:52, 21 January 2018 (UTC)

@ Pigsonthewing: This search drags up about 1500. -- Izno ( talk) 18:15, 23 February 2018 (UTC)
I've got this one written - but since the URL works with or without the comma - would this pass WP:COSMETICBOT? SQL Query me! 18:05, 22 March 2018 (UTC)

Flagicon to flagdeco in country year navboxes

I went through part of Category:Country year navigational boxes, replacing {{ flagicon}} with {{ flagdeco}} to remove the double link and double alternative text — flagicon's alt attribute repeats the country name in nearby visible text. Example diff. I'd like to request a bot finish the rest of the category, making the same flagicon to flagdeco change. Matt Fitzpatrick ( talk) 22:02, 15 May 2018 (UTC)

Matt Fitzpatrick, how many have been done so far? If it's more than 20 or so an AWB run would probably suffice. Primefac ( talk) 13:41, 16 May 2018 (UTC)
I did everything under "*" and "A". I think most of the rest contain a flagicon, though some don't. Matt Fitzpatrick ( talk) 22:06, 16 May 2018 (UTC)
Matt Fitzpatrick, it's not really a large enough task for a bot run, but if you don't have AWB access I can take care of it. Primefac ( talk) 01:46, 17 May 2018 (UTC)
N Not done Thanks for the offer. I went ahead and made all the changes manually, so this request can be closed. Matt Fitzpatrick ( talk) 10:51, 21 May 2018 (UTC)

Automatically pull data from various wiki pages to create a table?

Hello! I'm sure there's already a bot in existence for the task that I'm trying to do, but I'm new to using Wiki bots so I'm not sure which one I'm looking for or how to use it. Basically, I'm trying to automatically copy from a particular set of wiki pages all of the sentences that contain a particular word, then paste those sentences into a spreadsheet or word document so I can look it over manually.

The specific purpose is that I'm trying to find a list of plebeian tribunes of the Roman Republic, but a quick search around the internet doesn't furnish many promising results. There is a wiki page for a list of all types of tribunes here ( /info/en/?search=List_of_Roman_tribunes), but it looks like the author had only just started this article, since it's not very comprehensive (for example, there were supposed to be ten plebeian tribunes elected every year from 457 to about 48 BC). Obviously it would in all likelihood be impossible to furnish a complete list of every plebeian tribune given the enormous number of these office holders and the relative scarcity of primary sources we have from the time, but considering that only a small fraction of the tribunes were noteworthy enough to make it into the history books (many of whom have their own wiki pages already), I think it's possible to get a reasonably well-represented list by just trawling the existing wiki pages to see which articles are about people who served as plebeian tribunes. One particularly helpful place to start would be the Wikipedia page that lists all Roman gentes (family names, at /info/en/?search=List_of_Roman_gentes). Each family name on that list links to a page that lists all of the notable members of that family, along with a short description of their careers. So the bot would start on the page for gens Abronia, do a word search for "tribun" (so that it catches variations of the word like "tribune," "tribunate," or "tribuneship"), and find nothing. Then it would go to the next page for gens Aburia, search for "tribun" again, and copy the sentence that says "Marcus Aburius, as tribune of the plebs in 187 BC, opposed Marcus Fulvius Nobilior's request for a triumph, but was persuaded to withdraw his objection by his colleague, Tiberius Sempronius Gracchus," paste that as a line in a word document or spreadsheet, continue searching for other occurrences of the phrase in the gens Aburia page until it's out of results, and then moves on to the page for gens Accia, and so forth.

Again, this probably won't furnish a comprehensive list of tribunes, but it'll at least give us a good start. Depending on how many results it returns, I might be able to just go through the resulting data and manually add each name, date of office, and link to the relevant tribune's wiki page as entries on the table at /info/en/?search=List_of_Roman_tribunes, so the bot would only need to read text from existing Wikipedia pages, and not need to write anything to them automatically. I appreciate any help or feedback that you can provide. Thanks! Dfault ( talk) 02:51, 22 March 2018 (UTC)Dfault

@ Dfault: Sounds interesting. If you have access to Unix command-line, this should work:
./wikiget -F "List of Roman gentes" > list (manually edit list to remove any unwanted pages)
  • Download plain-text (wikiget -w -p) and extract sentences containing "tribun" (case-insensitive)
awk '{print "./wikiget -w \"" $0 "\" -p | awk -v titl=\"" $0 "\" \x27{IGNORECASE=1; split($0,a,\".\"); for(b in a){if(a[b] ~ /tribun/) print titl \" : \" a[b]} }\x27" }' list | sh
(above is a single unbroken line)
For each match, it will give the article title followed by a ":" then the sentence containing "tribun". Do you want to do it, or should I post the script output? -- Green C 21:29, 24 March 2018 (UTC)

Brilliant! Just ran it now, worked perfectly! Looks like there are about 750 results; I'll get to work formatting them now and let you know if I run into any trouble or have any updates. Thanks for your help! Dfault ( talk) 23:11, 25 March 2018 (UTC)

Great! Glad it is useful. The script uses metaprogramming (generative programming) with awk emitting awk code, thus the \x27 (ie. '). -- Green C 16:09, 26 March 2018 (UTC)

Could anyone remove all lines beginning with two bullets from these articles?

Could anyone remove all lines beginning with two bullets from the commented-out list of articles? I'm trying to remove all the species entries listed under the genera. Abyssal ( talk) 12:48, 16 April 2018 (UTC)

Just copy the text into a text editor and do a find and replace. Easy. ··· 日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 17:59, 17 April 2018 (UTC)
For 112 articles? I think the number of articles involved is the reason for this request, not the difficulty of editing each individual article. (That being said, could someone not do this with WP:AWB? I don't actually know…) - dcljr ( talk) 00:26, 22 April 2018 (UTC)
Yes, this is a trivial task in AWB. -- Izno ( talk) 18:41, 22 April 2018 (UTC)
@ Abyssal: does this still need doing now the pages are in mainspace? ƒirefly ( t · c · who? ) 19:13, 25 May 2018 (UTC)
@ Firefly:Nah, this request can be canceled. Abyssal ( talk) 04:42, 26 May 2018 (UTC)

Wikiproject task force tagging (Reality TV)

Can I request a bot to tag the articles that are in Category:Reality television series with "|reality-tv=yes|reality-tv-importance="(importance can be assessed manually afterwards). I have created a list of each individual category here after removing certain subcategories, mostly participant and container categories. Please let me know if you think that needs more refining. A lot of the articles already have WPTV but are just missing the Reality TV task force label, and some older articles don't have the project at all. So would need the bot to add the full WPTV+realitytv tag to any that are missing the project, and only add the task force parameter to those that are already under WPTV. WikiVirus C (talk) 16:30, 9 March 2018 (UTC)


Coding...

Hi, can I just clarify the requirement. What I understand is necessary is:
  • All the articles that are in any of the categories here need to have the WPTV template on their talkpage. Most already have it, and this tag should be added to those that don't.
  • Within the WPTV template, the parameter "|reality-tv=yes|reality-tv-importance=" should be added it it is not already added?
I am working on a test script to do this. I am looking to use my first bot to do this, so I'd greatly appreciate a little patience Pi (Talk to me! ) 15:37, 25 March 2018 (UTC)
@ Pi: Yeah that about sums it up. There is no rush, so take your time. Thanks for the response. WikiVirus C (talk) 17:28, 25 March 2018 (UTC)
@ WikiVirusC: I've looked at the data. I ran a script through the categories that you provided, and found a total of about 1900 subcategories. Within these subcategories are approx 16,000 articles. I can't help thinking that the list might be too broad. I don't want to overload the WikiProject with too many articles which might only be loosely related to Reality TV Pi (Talk to me! ) 22:21, 25 March 2018 (UTC)
Although if you do want all these articles tagged I do now have a bot that I am ready to put in a BRFA Pi (Talk to me! ) 22:24, 25 March 2018 (UTC)
The list includes several subcategories, I took out the ones I didn't want, can you just run it on the ~600 categories that are listed? Not everything beneath them, unless it's listed as well. How many total pages is it with just those? WikiVirus C (talk) 23:02, 25 March 2018 (UTC)
That makes sense I'll run it when I get home from work and see how many articles it is Pi (Talk to me! ) 11:32, 26 March 2018 (UTC)
On just the categories you provided there are 8,752 pages. I have made a list here of the pages. If you are happy with this then I will put in the request for bot approval. Pi (Talk to me! ) 23:31, 26 March 2018 (UTC)
I have adjusted the list of categories a bit [11] [12]. Mostly removing Category:Dance competition television shows and related categories since some of those were strictly dance competition and not reality shows. I will manually go through those afterwards, but from the few I checked, the ones that belong fall under another relevant category anyways. The list of individual pages, had a few that were redlinked, and I'm not sure if that was just a parsing error or not. I'm sure there shouldn't be any big issues, but if you want to updated the list I can look through it again to see if any more refining is needed. WikiVirus C (talk) 15:54, 27 March 2018 (UTC)
Hi, I've run the list again. It's now 8,493 pages, and I've resolved the redlink problem (that was about non-Ascii characters). The list is here Pi (Talk to me!) 23:59, 27 March 2018 (UTC)
Looks fine to me. I will go over it again in morning, but I don't see any reason not to request approval when you are ready. WikiVirus C (talk) 04:34, 28 March 2018 (UTC)

Please head to WP:VG/ViGoR for article-of-the-day improvement project

Hi! I recently made a proposal at WP:VG regarding a "featured article of the day" premise. Please head to that page to add your expertise reading the automation nature of my proposal. We're discussing the validity of it too, but I think if I have the automation nature sorted it will be much easier to prove I will have a successful implementation. Essentially I want a bot to: -- Coin945 ( talk) 22:01, 30 March 2018 (UTC)

  1. Randomly choose an article from a list of WP:VG stubs
  2. Place it in on a page
  3. (Possibly notify people - still discussing)
  4. Do this for every day of the week until it has seven
  5. After seven days remove the oldest one and archive it (the page always has seven listed articles)
  6. Have a table ala this one taht shows the accomplishments made to each article.
  7. Note: This is a better system than at TAFI methinks because it does not require humans to nominate articles and such - it's all automated.

See Wikipedia talk:List of Wikipedians by article count#Updating?.     — The Transhumanist    10:18, 5 March 2018 (UTC)

It seems to be working again.     — The Transhumanist    07:32, 20 March 2018 (UTC)
Are you sure? The main list hasn't been updated since October. Lugnuts Fire Walk with Me 08:10, 31 March 2018 (UTC)

Updating vital article counts

Wikipedia:Vital articles/Level/5, the list of Level 5 vital articles, is currently under construction, with editors adding articles to the list. Right now, article counts for each subpage and section are updated manually, which is quite inconvenient and often leads to human errors. I would like to request a bot to update the article counts in each section of each sublist (such as Wikipedia:Vital articles/Level/5/Arts), the total article count at the top of each sublist, and the table listed on the main Wikipedia:Vital articles/Level/5 page. I think such a bot would ideally update the article counts once daily. feminist ( talk) 04:27, 25 May 2018 (UTC)

This should also apply to Levels 1/2/3/4. And if the bot could also automatically update the assessment icons, that would be nice. Headbomb { t · c · p · b} 09:37, 25 May 2018 (UTC)
I'll take a look at this tonight - can't be too hard, and I can run it on Toolforge. ƒirefly ( t · c · who? ) 10:07, 25 May 2018 (UTC)
Coding... I can definitely do this - proof of concept for counts. Just working on the assessment icon updating. ƒirefly ( t · c · who? ) 23:46, 25 May 2018 (UTC)
BRFA filed ƒirefly ( t · c · who? ) 19:48, 26 May 2018 (UTC)

question

hi I find csv files do make bot to adding articles of csv files with csv loder in auto wiki browser this — Preceding unsigned comment added by Amirh123 ( talkcontribs) 13:53, 6 May 2018 (UTC)

Do you mean mass-creating articles from a CSV file? That's a bad idea. Richard 0612 22:30, 14 May 2018 (UTC)

A category I created about World Series-winning managers.

I'm trying to properly alphabetize Category:World Series-winning managers. It was saved from deletion, but now, for some reason, the category isn't in proper alphabetical order. I did get some help regarding the sort key, but to no avail. Perhaps a bot can kindly help me. Thank you. Mr. Brain ( talk)

This is a known problem caused by a configuration change. Patience will be required. It is supposed to be fixed in a few days. – Jonesey95 ( talk) 02:29, 14 April 2018 (UTC)

Y Done

GAR archiving

Hi all. I used to work on community Good Article reassessments area years ago and have recently returned. Old community reassessments are archived by User:VeblenBot. However Veblenbot has been inactive for a while now and the unarchived GARs are building up, Category:GAR/60 has 134 entries in it. Could someone please take over it or find another way to do the archiving. I found two relevant discussions at the Bot noticeboard ( Wikipedia:Bots/Noticeboard/Archive 8#New operator needed for VeblenBot and PeerReviewBot and Wikipedia:Bots/Noticeboard/Archive 10#User:VeblenBot) . I also tried Wikipedia:Village pump (technical)#Archiving community Good Article reassessments. It does not seem like a difficult task. Regards AIRcorn  (talk) 23:40, 25 March 2018 (UTC)

@ TheSandDoctor: in case you want to expand the GA bot you are working on. Kees08 (Talk) 23:44, 25 March 2018 (UTC)

Thanks for the ping Kees08. I will certainly look into this, just need to research more on exactly what it did (but it is late now, so will do that tomorrow). I wish the source code was still available. -- TheSandDoctor Talk 06:17, 26 March 2018 (UTC)
There are links from the bots user page. @ CBM and Ruhrfisch: have been very good also, they should be able to help out with code. AIRcorn  (talk) 07:06, 26 March 2018 (UTC)
@ Aircorn: The links on the user page are dead (for me at least they are 404 errors), that is why I said that. I will reach out to them (then again, you have pinged them, so to avoid pestering, I shall wait a couple days). -- TheSandDoctor Talk 15:06, 26 March 2018 (UTC)
Thanks much appreciated. AIRcorn  (talk) 19:34, 26 March 2018 (UTC)
CBM is the author of Veblenbot. I did not write the bot or modify its code - I only took it over trying to find someone with more knowledge than I to hand it off too. SOrry I cannot be of more help, Ruhrfisch ><>°° 11:25, 4 April 2018 (UTC)

Updating inaccurate unreferenced templates

In articles like this one, I often find {{ unreferenced}} templates in sections that contain references. Is there a Wikipedia bot that can be configured to replace Template:Unreferenced with Template:Refimprove? Jarble ( talk) 03:01, 29 March 2018 (UTC)

This isn't really possible without some pre-processing because {{ unreferenced}} can be used to indicate an unreferenced section rather than an entirely unreferenced article. -- Izno ( talk) 04:48, 29 March 2018 (UTC)
@ Izno: Then it would be relatively easy to automatically replace {{ Unreferenced section}} with {{ refimprove section}}. Jarble ( talk) 23:42, 29 March 2018 (UTC) Jarble ( talk) 23:39, 29 March 2018 (UTC)
I brain-farted on this one--of course it's possible even without pre-processing--I just was thinking of trying to get some numbers for affected articles. -- Izno ( talk) 00:11, 30 March 2018 (UTC)

For a similar bot action check Wikipedia:Bots/Requests for approval/Yobot 11. Yobot could do this as part of general tagging actions but it will need a new BRFA approved since this BRFA is not valid anymore. -- Magioladitis ( talk) 23:32, 4 April 2018 (UTC)

Maps on sawiki

I tried importing Module:Location map to sawiki (latest code) but it has 1000s of dependencies. It is very difficult to import all associated countries / states / cities to sawiki manually. Can someone please help. Capankajsmilyo ( talk) 12:44, 6 April 2018 (UTC)

Bot to search and calculate coordinates (where did my previous request go? nothing was done to it)

Please look at this table: Lands_administrative_divisions_of_New_South_Wales#Table_of_counties

My goal is to add a column to this table that shows the approximate geographical coordinates of each county. Those county coordinates can be derived form the parish coordinates that are found in each county article, by taking the middle of each northernmost and southernmost / easternmost and westernmost parish coordinates. Is it possible to write a script or a bot to achieve this?

For illustration, I did the work for the first county in the list, Argyle County, manually. The table of parishes in this article shows that they range from 34°27'54" and 35°10'54" latitude south and 149°25'04" and 150°03'04" longitude east. The respective middle is 34°49'24" and 149°44'04", which I put in the first table entry of Lands administrative divisions of New South Wales and the info-box of Argyle County. -- Ratzer ( talk) 11:35, 12 April 2018 (UTC)

Your request was automatically archived approximately 3 weeks ago. In over 2 months no one had responded to the task. That suggests no operator is interested in performing the work. -- Izno ( talk) 13:33, 12 April 2018 (UTC)

Misplaced punctuation: add this to an existing task

Could someone already running a cleanup bot request permission to add a simple task, punctuation before cleanup tags? Example, which judging by the date on the tag, had been there since 2015. Inline cleanup tags, e.g. {{ fact}} and {{ which}}, should go after punctuation, so {{cleanup tag|date=whenever}}, and {{cleanup tag|date=whenever}}. are wrong. It's quite visible to the reader, and a bit jarring, so this isn't WP:COSMETICBOT. Nyttend ( talk) 12:59, 5 April 2018 (UTC)

You might want to make a request to bundle this into WP:AWB / WP:GENFIXES as well. Headbomb { t · c · p · b} 14:42, 12 April 2018 (UTC)

AWB already fixes this. And it's also part of CHECKWIKI error 61. -- Magioladitis ( talk) 15:01, 12 April 2018 (UTC)

Clearing articles in the backlogs 'articles without infoboxes' that now have an infobox

I would propose that to help clear the 'articles without infoboxes' maintenance categories, a bot could go through these pages and see if an infobox has already been added to the page, then if it has, remove the |needs-infobox=y from the WikiProject banner templates.

This would allow editors who wanted to add infoboxes to articles that are in these categories not to have to sift through articles that already have infoboxes on them so that they can clear the backlog more quickly.

Thanks. Wpgbrown ( talk) 17:39, 3 April 2018 (UTC)

@ Wpgbrown: Could you provide the category these are in, please? ~ Rob13 Talk 17:58, 3 April 2018 (UTC)
@ Bu Rob13: Categories (to name a few) are Category:Biography_articles_without_infoboxes, Category:Ship_articles_without_infoboxes, Category:Song articles without infoboxes. I can find several extra categories by searching on this page [13]. I would assume however it would make sense for a potential bot to only crawl through categories with large enough backlogs. Wpgbrown ( talk) 18:38, 3 April 2018 (UTC)
The categories vary by WikiProject. Some of these cats are subcats of Category:Wikipedia articles with an infobox request, but by no means all. Examples:
Notice that the parameter name varies, also that some have variant values - this can be confusing, since {{ WikiProject Derbyshire|ibox=yes}} means that that article already has an infobox and doesn't need another. -- Redrose64 🌹 ( talk) 18:43, 3 April 2018 (UTC)

There are cases where I have suggest an article should not have an infobox, but this typically means is there is no current infobox that would make the article better than having none. If a human editor can't deal with that, how will a bot do it? Furthermore, you can't have missed that we've just had a huge Arbcom case about conduct around infoboxes. I think if the bot stuck an infobox on Buckingham Palace, all hell would break loose and there would probably be an ANI thread requesting said bot be blocked. Sorry, bots should only work on uncontroversial and boring stuff, and this just isn't that. Ritchie333 (talk) (cont) 12:33, 4 April 2018 (UTC)

Ritchie333, obviously infobox changes are controversial, but I think you may have missed the intended effect here. Unless I'm the one who is confused, this request would not lead to any infoboxes being added or removed; it would eliminate the request for an infobox from articles that already have one. I suppose that could be seen as controversial in that it would ease the task of adding infoboxes to the remaining articles with requests, but it doesn't sound like that's what you're commenting on. Mike Christie ( talk - contribs - library) 13:21, 4 April 2018 (UTC)
@ Ritchie333: Yes, Wpgbrown ( talk · contribs) is asking for the removal of |needs-infobox=yes (or equivalent) where this is no longer applicable. It's pure WP:GNOMEing. -- Redrose64 🌹 ( talk) 18:35, 4 April 2018 (UTC)

I used to go this as part of my gnomish actions. I can do it agai using WP:AWB which is a powerful tool to make repeative actions and the tool resposnible for a large amount of edits in English Wikipedia. -- Magioladitis ( talk) 23:26, 4 April 2018 (UTC)

Are you sure you can do that?Jonesey95 ( talk) 03:42, 5 April 2018 (UTC)
Jonesey95 Yes, I only need to use my bot account after approval. This is what I plan to do. -- Magioladitis ( talk) 07:14, 5 April 2018 (UTC)
Jonesey95 Wikipedia:Bots/Requests for approval/Yobot 17. -- Magioladitis ( talk) 07:15, 5 April 2018 (UTC)
Wikipedia:Bots/Requests for approval/Yobot 60 for that. -- Magioladitis ( talk) 07:27, 5 April 2018 (UTC)
@ Jonesey95:, as long as it's BRFA'd and properly trialed and reviewed, there shouldn't be any issue. Headbomb { t · c · p · b} 14:36, 5 April 2018 (UTC)
It might also be worth removing {{reqinfobox}} or {{Infobox requested}} if an infobox has been added as this template also adds them to a maintenance category. Wpgbrown ( talk) 22:12, 14 April 2018 (UTC)

Can anyone use a bot to scan many lists for red links and then remove them from the articles?

I have a series of lists of prehistoric life articles that I need to condense. Can anyone use a bot to scan those articles for red links and remove the entries that contain them from the lists? Abyssal ( talk) 12:50, 2 April 2018 (UTC)

Yes, but you're going to have to tell us which lists. — Dispenser 10:43, 3 April 2018 (UTC)
Thanks, @ Dispenser:! I've hidden the list here in a comment. Abyssal ( talk) 12:38, 3 April 2018 (UTC)
@ Dispenser: Still interested in helping me condense these lists? Abyssal ( talk) 03:49, 5 April 2018 (UTC)
Your question is ambiguous. Am I to be removing red links or blue links? Or delete each draft since they each contain some red links (as stated)? Are you looking for the full list of red links? — Dispenser 10:45, 5 April 2018 (UTC)
Remove the items containing red links from the lists altogether, eg:

to

Abyssal ( talk) 14:19, 5 April 2018 (UTC)

@ Dispenser: Tried to clarify as best I could. Abyssal ( talk) 03:19, 8 April 2018 (UTC)
Done [14]Dispenser 00:35, 9 April 2018 (UTC)
Thanks, @ Dispenser:, that looks great. While we're at it, could you remove all the lines starting with two bullets from the same lists? Some of them are still a bit long. Abyssal ( talk) 16:15, 9 April 2018 (UTC)
I was only able to find one set on Draft:List of the Paleozoic life of South Dakota. — Dispenser 01:46, 10 April 2018 (UTC)
@ Dispenser:Could you adjust it to remove the Caseodus eatoni line? I'm trying to get rid of all of the species even if they're blue links. Abyssal ( talk) 13:47, 10 April 2018 (UTC)
@ Dispenser: Abyssal ( talk) 12:03, 12 April 2018 (UTC)
The script I wrote only works for pages with live red links (uses the Database). And frankly this doesn't seem like a productive use of my time. — Dispenser 12:34, 17 April 2018 (UTC)

Implement article history after a good article reassessment

Hi. I asked this years ago at User talk:Gimmetrow#Update the article history following Good article reassessments. Since I am back in this area I thought I would try again. I am not sure what bot updates article histories now so am posting this here instead of at an individuals page.

The {{ GAR/link}} template renders the following {{GAR/link|~~~~~|page=|GARpage=|status=}}. Status can be changed to kept, delisted or a number of other similar positions. An example of a delisted template is here and a kept one here. The issue is that any reassessment has been preceded by at least one assessment, so that template is not ideal. It really needs to be integrated into the {{ articlehistory}} template.

So far the only way to do that is manually. [15] This requires finding the oldid, copying the reassessment page, dates and updating GA to DGA. It would be useful if a bot did this like it does for other similar article history processes. There is a complication however as a reassessment can be opened as a community reassessment or an individual one. As far as I can tell this will only affect the link parameter. Individual reassessments will link to a talk subpage Talk:Foo/GA?, while community ones use a WP:GAR subpage Wikipedia:Good article reassessment/Foo/?. Foo being the name of the article and ? being the number of the reassessment.

For delisted articles, it would also be useful if the bot could change or remove the GA class from the wikiproject template (changing to C is probably the best, but since the difference is mostly arbitrary B would do). Another useful feature would be the removal of the {{ good article}} template from the article itself (it produces the green spot at the top of the page).

There are 2 280 delisted articles (I don't think this includes ones that were delisted and then later regained good or featured status so the true number may be higher), so this feature could save editors quite a bit of manual work. Thanks in advance. AIRcorn  (talk) 00:29, 18 April 2018 (UTC)

@ TheSandDoctor: This would be a good addition to the GA bot once it is working properly. I did not do GARs before because it was a little daunting to close them (turns out it is not too bad, but pretty annoying still). Kees08 (Talk) 22:05, 19 April 2018 (UTC)

I just thought of another useful feature. Updating the lists at Wikipedia:Good articles/all. When an article is delisted it would need to be removed from there. AIRcorn  (talk) 22:37, 19 April 2018 (UTC)

Removing succession boxes from song and album articles

The consensus during a recent RFC was to remove succession boxes from song and album articles. Since these appear in over 4,200 song [16] and 2,000 album articles, [17] it seems that this may be a good job for a bot. — Ojorojo ( talk) 14:33, 30 May 2018 (UTC)

BRFA filed Ronhjones   (Talk) 15:36, 10 June 2018 (UTC)
Y Done Ronhjones   (Talk) 20:23, 23 June 2018 (UTC)

Can anyone bulk undo edits by a single user?

Can anyone bulk undo the most recent edit by User:Dispenser tot he commented out list of articles? They were fine edits, but I need the previous state of the article to show up for the public and the information from those edits can be gotten later out of the article history. Abyssal ( talk) 20:30, 4 May 2018 (UTC)

I would recommend asking Dispenser directly to see if they'd be able to mass-rollback the edits in question. If not, ping me and I'll look into it. Richard 0612 22:34, 14 May 2018 (UTC)
Declined Not a good task for a bot. It was only 111 edits not worth getting a bot operator involved. — Dispenser 19:03, 16 June 2018 (UTC)

Tag covers of academic journals and magazines with Template:WikiProject Academic Journals / Template:WikiProject Magazines

The task is "simply"

I believe in both cases, the parameters may be simply the name of the file (e.g. File.svg), or a full [[File/Image:....]] thing.

The task would need to be run daily/weekly. Headbomb { t · c · p · b} 14:31, 24 May 2018 (UTC)

Not to ask the stupid question, but why not just check the talk page of any article using the above infoboxes and place the appropriate talk page tag if necessary? Primefac ( talk) 17:42, 24 May 2018 (UTC)
Because it's the (non-free) image files (i.e. pages in the File namespace) that need to be tagged, not the articles. feminist ( talk) 03:04, 25 May 2018 (UTC)
Oh, right, my apologies. I misread "the associated file" as "the associated talk page" for some reason. Primefac ( talk) 12:49, 25 May 2018 (UTC)
Coding... should hopefully get this done. Dat Guy Talk Contribs 07:16, 29 May 2018 (UTC)
Headbomb want me to also use the class from any wikiprojects in the talk page? Dat Guy Talk Contribs 16:13, 5 June 2018 (UTC)
Never mind, stupid question. Dat Guy Talk Contribs 16:26, 5 June 2018 (UTC)
BRFA filed. Dat Guy Talk Contribs 16:18, 15 June 2018 (UTC)

MeetUp: Women of Library History

Hello There, I would like to send a MeetUp invitation to all active Wikipedians in New Orleans (particularly librarians)--here is our MeetUp page: Wikipedia:Meetup/New_Orleans/WomeninLibraryHistory Please let me know if I need to do anything else--thanks! RachelWex ( talk) 01:16, 19 May 2018 (UTC)

@ RachelWex: Well the first thing you need is to craft the message to be sent and have a list of people/pages to notify. Then several people can sent those notices. Headbomb { t · c · p · b} 01:24, 19 May 2018 (UTC)
@ Headbomb: I can craft the message, but I do not know how to locate the active Wikipedians in New Orleans. Any suggestions? RachelWex ( talk) 01:37, 19 May 2018 (UTC)
I'd suggest lookling at Wikipedia:WikiProject Louisiana/ Wikipedia:WikiProject New Orleans. Headbomb { t · c · p · b} 01:40, 19 May 2018 (UTC)
Not sure why this is at BOTREQ - it seems more of a task for WP:MMS (if you have a list of recipients) or for WP:GN (if you have a geographical area to target). -- Redrose64 🌹 ( talk) 07:37, 19 May 2018 (UTC)

WikiSpaces wikis linked from all Wikipedias

Hello! WikiSpaces is closing on July 2018. It would be helpful having a list of all "subdomain.wikispaces.com" from external-links table in all Wikipedias (and sister projects too, why not). In WikiTeam we will try to preserve all these open-knowledge sites. Thanks. emijrp ( talk) 13:02, 5 May 2018 (UTC)

@ Emijrp: You can get this yourself with a really simple search. @ Cyberpower678: may want to do a botjob for the domain. -- Izno ( talk) 13:55, 5 May 2018 (UTC)
The domain needs archiving first. I've submitted a list of discovered Wikispaces URLs that IABot found during the course of its runs to the maintainers of the Wayback Machine for mass archiving.— CYBERPOWER ( Chat) 18:11, 5 May 2018 (UTC)
@ Cyberpower678: Can you send me a copy of that URLs? Wayback Machine is good (archive HTMLs), but I am coding a bot to export the wikicode from the wikis. emijrp ( talk) 07:39, 6 May 2018 (UTC)
Here you go.— CYBERPOWER ( Chat) 13:32, 6 May 2018 (UTC)

WP:RESTRICT archive bot

I asked for this over a year ago, and one bot op said they would do it... but they never did, so I’m asking again.

WP:RESTRICT is an incredibly bloated list of everyone who is currently sanctioned by arbcom or the community as well as those under “last chance” unblock conditions. In order to reduce the size of these lists and make them easier to navigate, it was decided that any sanction on a user who had been inactive or blocked for more than two years be moved to an archive. The sanction is still valid, just not displayed on the main page anymore, and can be moved back if the user returns to editing.

I did the initial archiving myself 14 months ago. It ranks as pretty much the most tedious thing I have ever done in nearly 11 years of contributing here. I would therefore like to again request that some bot or other be instructed to review listings there once a month or so and remove any fitting the criteria to the archive. If it could move back those that have returned to editing that would be amazing. We seem to be able to auto-generate such data for inactive admins so I am guessing (as someone who admittedly knows nothing at all about programming bots) that this should be fairly straightforward. Thanks for your time. Beeblebrox ( talk) 03:32, 8 June 2018 (UTC)

For now, I gave the page WP:RESTRICT#Active_editing_restrictions a spitshine with collapsible tables. Headbomb { t · c · p · b} 13:46, 8 June 2018 (UTC)
@ Beeblebrox: Looking at it. Will message you on your talk page later. Ronhjones   (Talk) 15:43, 24 June 2018 (UTC)
Coding... Ronhjones   (Talk) 21:15, 26 June 2018 (UTC)
BRFA filed Ronhjones   (Talk) 15:51, 3 July 2018 (UTC)
Y Done Ronhjones   (Talk) 21:48, 3 July 2018 (UTC)

Invalid fair use media

How are we doing on getting a bot together that detects improper use of non-free media (if not the actual removal from the articles)? That is, the use of non-free media on articles for which the file description page lacks a valid WP:FUR specific to that article - I've just found and removed this, 366 days after this image was added lacking a valid FUR for the article, contrary to WP:NFCCP#10c. -- Redrose64 🌹 ( talk) 19:10, 20 April 2018 (UTC)

My bot is approved for this, however, there was far too much whining during the brief time that it was running. —  JJMC89( T· C) 00:03, 21 April 2018 (UTC)
Whining at being told about copyright issues is a tradition almost as old as Wikipedia itself. I can recall many heated debates and people threatening to quit the project... Someguy1221 ( talk) 00:19, 21 April 2018 (UTC)
  • I think that a proposal on a larger forum would not find consensus for this. On the other, producing a list of articles and what images are problematic is still helpful and no one would object to a list. Oiyarbepsy ( talk) 00:07, 21 April 2018 (UTC)
    OK, take my original post and ignore the parenthesis "(if not the actual removal from the articles)". Can we at least do the detection? I don't mind if it's a list, a notice placed on the talk page of the file, or a notice on the talk page of the article. The latter two would need some sort of tracking category. -- Redrose64 🌹 ( talk) 08:19, 21 April 2018 (UTC)

JJMC89, could your bot task be modified to simply log these uses instead of removing them? Oiyarbepsy ( talk) 01:12, 22 April 2018 (UTC)

It is easier to write something new than to modify the other script. The issue will be the time needed to check the 602,549+ files. I'm doing some testing. —  JJMC89( T· C) 05:28, 23 April 2018 (UTC)
Already at about 5,000 violations and only in the G's. —  JJMC89( T· C) 01:23, 24 April 2018 (UTC)
Report available at User:JJMC89 bot/report/NFCC violations (Warning: large page). —  JJMC89( T· C) 01:06, 26 April 2018 (UTC)
Thank you, I'll look at it next time I have a day off work (Saturday?) -- Redrose64 🌹 ( talk) 07:06, 26 April 2018 (UTC)
  • Redrose64 I've been going thru the resulting list (starting at the top), but the large size of the page has left me unable to edit to remove the ones I've completed. If you do work on it, consider starting from the bottom so we don't duplicate each others work. Oiyarbepsy ( talk) 04:20, 28 April 2018 (UTC)
    OK, will do... unfortunately, although today is Saturday, I've been called in to work to cover an absence. Will get round to it ASAP. -- Redrose64 🌹 ( talk) 10:51, 28 April 2018 (UTC)
    I've updated the report to only list 1000 files at a time to make the page size manageable. This is configurable, so let me know if you want a different limit. —  JJMC89( T· C) 07:37, 6 May 2018 (UTC)

Athletics piped links

There is a historical link issue that needs sorting out for the article Sport of athletics.

Would it be possible to amend all piped links to Athletics (sport) (an old title and currently a redirect) to point directly to Sport of athletics? The old title is still ambiguous with Athletics (physical culture), which was the reason for the subsequent move. 99% of the incoming links are valid, as it's a non-natural title choice.

There is also a sub-sport distinction link issue with track and field. I've seen many links in the style [[track and field|athletics]] and [[track and field athletics|athletics]] – these piped links should also be piped to sport of athletics to remove the WP:EASTEREGG aspect. Similarly, links like [[sport of athletics|track and field]] should simply point to track and field. SFB 19:22, 4 May 2018 (UTC)

@ Sillyfolkboy: Has consensus been established for this change? It makes sense on its face, but it's best to ask on the article talk page or at the appropriate WikiProject first. Richard 0612 22:36, 14 May 2018 (UTC)
@ Richard0612: I've added this to the Wikiproject talk page and at Talk:Sport of athletics.
  • [[track and field|athletics]] → [[sport of athletics|athletics]]
  • [[track and field athletics|athletics]] → [[sport of athletics|athletics]]
  • [[sport of athletics|track and field]] → [[track and field]]
  • [[sport of athletics|track and field athletics]] → [[track and field]]
  • [[athletics (sport)|track and field]] → [[track and field]]
  • [[athletics (sport)|track and field athletics]] → [[track and field]]
  • [[athletics (sport)|athletics]] → [[sport of athletics|athletics]]
Better specified the targeted changes too SFB 15:00, 15 May 2018 (UTC)

Replace architecture= parameter value in Infobox religious building post-merge

{{ Infobox Mandir}} and {{ Infobox Hindu temple}}, and maybe a couple of other related templates, have been merged into {{ Infobox religious building}}. As part of the conversion, the value of the |architecture= parameter in the merged templates has been assigned a different meaning.

In the pre-merge templates, |architecture= could take a value like " Dravidian architecture". In {{ Infobox religious building}}, |architecture= takes a value of "yes" to indicate that the infobox should have an Architecture section, and the actual architectural style is placed in |architecture_style=.

In Category:Pages using infobox religious building with unsupported parameters, templates with an unsupported value for |architecture= are listed under the "Α" section heading (note that "Α" is a Greek letter that is listed after "Z" in the category listing).

I am looking for someone who would be willing to run through that section of the tracking category with AWB and replace this:

| architecture = [any value] |

with this:

| architecture = yes | architecture_style = [any value] |

The "[any value]" string should be preserved in each infobox. For example, | architecture = Dravidian architecture | would be changed to | architecture = yes | architecture_style = Dravidian architecture |

Here's a sample edit.

This will have to be a supervised run, since there could be some strange stuff in the parameter values. It looks like there are about 1,000 pages to fix. – Jonesey95 ( talk) 16:04, 12 April 2018 (UTC)

That seems very poor template design. Why isn't the |architecture= automatically set to yes (or something functionally equivalent) when there's a non-null |architecture_style=? Headbomb { t · c · p · b} 01:36, 19 May 2018 (UTC)
That sounds like a good idea, but we need these 800 or so pages fixed first so that the merge can be completed. – Jonesey95 ( talk) 14:26, 19 May 2018 (UTC)
Well it seems to me no bots need to be involved here if I understand the situation correctly. Just treat |architecture= as an alias of |architecture_style=. A bot could replace |architecture= with |architecture_style= if the old parameter is to be deprecated, but it seems good to update the template before the bot, rather than after the bot. Headbomb { t · c · p · b} 14:37, 19 May 2018 (UTC)

Upon review the template, I think your original course of action is better and that my suggestion above isn't adequate for the current functionality. |architecture=yes enables a whole section of the infobox. I still think it'd be good to have the section be displayed on whether its parameters are empty or present, but that's a different discussion entirely. Headbomb { t · c · p · b} 14:41, 19 May 2018 (UTC)

Friendly Search Suggestions

Hi, suggesting that the Template:Friendly search suggestions be added by bot to every stub article talk page to aid the improvement of the articles, thanks Atlantic306 ( talk) 20:53, 6 June 2018 (UTC)

That seems like an incredible waste of time and effort, as well as the patience of the community. Is there a consensus that this should be done? Primefac ( talk) 12:42, 7 June 2018 (UTC)
  • Disagree, its completely uncontroversial and helpful to the community as the template has a large number of search options to improve stub articles and surely thats a very good use of time and effort to improve the Encyclopedia, for something so minor is consensus really needed? thanks Atlantic306 ( talk) 20:30, 7 June 2018 (UTC)
Every stub article talk page - you're talking hundreds of thousands if not millions of stubs (Just checked the cat, which is at 2+ million). InternetArchiveBot and Cyberpower got harassed simply for placing (in my opinion completely relevant) talk page messages on a fraction of that. So yes, I do think you need consensus. Primefac ( talk) 22:08, 7 June 2018 (UTC)
I would oppose that with tooth and nail. Absolutely not suitable for a bot task. Headbomb { t · c · p · b} 03:03, 8 June 2018 (UTC)
    • Well, its certainly too much for a human editor - if it was limited to 300 articles a day it would not cause much disruption, Atlantic306 ( talk) 19:17, 14 June 2018 (UTC)
    • Will start an RFC when I have more time, thanks Atlantic306 ( talk) 20:38, 16 June 2018 (UTC)

Sort Pages Needing Attention by Popularity/daily views

I suggest, for example, that someone sort the items on this page Category:Wikipedia_requested_photographs by page popularity, similar to how this page is sorted: Wikipedia:WikiProject_Computer_science/Popular_pages Instead of clicking through random obscure pages, a sorted table would allow people to prioritize pages that need attention the most. The example bot is found here User:Community_Tech_bot. Turbo pencil ( talk) 00:57, 8 June 2018 (UTC)

@ Turbo pencil: Try Massviews. -- Izno ( talk) 02:23, 8 June 2018 (UTC)
@ Izno: Thanks a lot Izno. Super helpful! — Preceding unsigned comment added by Turbo pencil ( talkcontribs) 04:12, 8 June 2018 (UTC)

Suspicious User Watcher

Watches suspicious users because they might wreak havoc on the wiki. Bot reports back to the operator(s) so they know what the user is doing, just in case the user is committing vandalism, or anything else. Bot finds suspicious users by seeing if they vandalized (or as I mentioned before, anything else) past the 2nd warning. Manual bot. — Preceding unsigned comment added by SandSsandwich ( talkcontribs) 08:19, 9 July 2018 (UTC)

Idea is not well explained.. I have a funny feeling this is a joke request anyway, but whatever. Primefac ( talk) 12:02, 9 July 2018 (UTC)
@ SandSsandwich: that would be a lot of work. But to begin with, how should the bot decide/recognise which users are suspicious? —usernamekiran (talk) 13:01, 11 July 2018 (UTC)
I mean, technically speaking, we already have an anti-vandal bot. Primefac ( talk) 17:13, 11 July 2018 (UTC)
Do you mean cluebot or actual anti-vandal bot? Who we had to retire, cuz he was getting extraordinarily intelligent ( special:diff/83573345). I mean, he had unlimited access to entire wikipedia afterall. Anyways, this idea is not much feasible: a bot generating list of users (obviously after observing contribution history of every non/newly auto confirmed users. Then posting this list somewhere, other humans examining these users. Too much work for nothing. Too many resources would be wasted. Current huggle/cluebot/RCP/watchlist/AIV pattern is better than this. —usernamekiran (talk) 01:25, 12 July 2018 (UTC)
We already have multiple tools that are used to give increasing attention to editors after 1, 2, 3 or 4 warnings. I'm not sure whether an additional process is required, or why 2 warnings is such a significant threshold. Ϣere SpielChequers 09:30, 16 July 2018 (UTC)

Missing big end tags in Books and Bytes newsletters

Around 1300 pages linking to Wikipedia:The Wikipedia Library/Newsletter/October2013 contain <center><big><big><big>'''''[[Wikipedia:The_Wikipedia_Library/Newsletter/October2013|Books and Bytes]]'''''</big> after a misformatted issue 1 of a newsletter. [18] I guess it looked OK before Remex but now it gives an annoying large font on the rest of the page. A few of the pages have been fixed with missing end tags. It happened again in issue 4 (only around 200 cases) linking to Wikipedia:The Wikipedia Library/Newsletter/February2014 with <center><big><big><big>'''''[[Wikipedia:The_Wikipedia_Library/Newsletter/February2014|Books and Bytes]]'''''</big>. [19] None of the other issues have the error. The 200 issue 4 cases could be done with AWB but a bot would be nice for the 1300 issue 1 cases. Many of the issue 4 cases are on pages which also have issue 1 so a bot could fix both at the same time. PrimeHunter ( talk) 00:36, 19 July 2018 (UTC)

I like how this problem magnifies with each newsletter addition; (see User_talk:Geraki#Books_and_Bytes:_The_Wikipedia_Library_Newsletter). Wonder why issue 5 is bigger than issue 4? -- Green C 20:48, 19 July 2018 (UTC)
wikiget -f -w <article name> | awk '{sub(/Books[ ]and[ ]Bytes[ ]*(\]){2}[ ]*(\x27){5}[ ]*[<][ ]*\/[ ]*big[ ]*[>][ ]*$/,"Books and Bytes]]\x27\x27\x27\x27\x27</big></big></big>",$0); print $0}' | wikiget -E <article name> -S "Fix missing </big> tags, per [[Wikipedia:Bot_requests#Missing_big_end_tags_in_Books_and_Bytes_newsletters|discussion]]" -P STDIN
It looks uncontroversial, and 1500 talk pages isn't that much. Will wait a day or so to make sure no one objects. -- Green C 22:45, 19 July 2018 (UTC)
test edit. -- Green C 13:26, 20 July 2018 (UTC)
Small trout, well perhaps a minnow, for not having the bot check that each instance it was "fixing" was actually broken. Special:Diff/850668962/851324159 not only serves no purpose, but is wrong to boot. If you're going to fix a triple tag, it would have been trivial to check for a triple tag in the regex. Storkk ( talk) 14:35, 21 July 2018 (UTC)

Y Done -- Green C 15:21, 21 July 2018 (UTC)

Update Ontario Restructuring Map URLs in Citations

Request replacing existing instances of the URLs for Ontario Restructuring Maps in citations. While the old URLs work, the new maps employ a new URL nomenclature system at the Ministry of Municipal Affairs and Housing (Ontario) and have corrected format errors that make the new versions easier to read. The URLs should be replaced as follows:

Map # Old URL New URL
Map 1 http://www.mah.gov.on.ca/Asset1605.aspx http://www.mah.gov.on.ca/AssetFactory.aspx?did=6572
Map 2 http://www.mah.gov.on.ca/Asset1612.aspx http://www.mah.gov.on.ca/AssetFactory.aspx?did=6573
Map 3 http://www.mah.gov.on.ca/Asset1608.aspx http://www.mah.gov.on.ca/AssetFactory.aspx?did=6574
Map 4 http://www.mah.gov.on.ca/Asset1606.aspx http://www.mah.gov.on.ca/AssetFactory.aspx?did=6575
Map 5 http://www.mah.gov.on.ca/Asset1607.aspx http://www.mah.gov.on.ca/AssetFactory.aspx?did=6576
Map 6 http://www.mah.gov.on.ca/Asset1611.aspx http://www.mah.gov.on.ca/AssetFactory.aspx?did=6577

Thanks. -- papageno ( talk) 23:41, 5 July 2018 (UTC)

Qui1che, are these six URLs the sum total of the links that need to be changed? Primefac ( talk) 23:43, 5 July 2018 (UTC)
That is correct. There are only six maps in the series. -- papageno ( talk) 01:29, 6 July 2018 (UTC)

Y Done Green C 16:38, 20 July 2018 (UTC)

Redirects of OMICS journals

Here's one for Tokenzero ( talk · contribs)

OMICS Publishing Group is an insidious predatory open access publisher, which often deceptively names it journals (e.g. the junk Clinical Infectious Diseases: Open Access vs the legit Clinical Infectious Diseases). To help catch citations to its predatory journals with WP:JCW/TAR and Special:WhatLinksHere, redirects should be created. I have extracted the list of OMICS journals from its website, which I've put at User:Headbomb/OMICS. What should be done is take every of those entries and:

  • If Foobar doesn't exist, create it with
#REDIRECT[[OMICS Publishing Group]]
[[Category:OMICS Publishing Group academic journals]]
{{Confused|text=[[Foobar: Open Access]], published by the OMICS Publishing Group}}

There likely will be some misfires, but I can easily clean them up afterwards. Headbomb { t · c · p · b} 04:38, 29 June 2018 (UTC)

  • Would it be possible to tag the talk pages of these redirects with {{WPJournals}}? -- Randykitty ( talk) 07:46, 29 June 2018 (UTC)
It's not really needed, but that could be done, sure. Headbomb { t · c · p · b} 14:11, 29 June 2018 (UTC)
  • Would it be over-linking to make the "OMICS Publishing Group" in the hatnote into a wiki-link? XOR'easter ( talk) 15:30, 29 June 2018 (UTC)
    • I'd be OK with that, personally. Foobar will point to OMICS Publishing Group so that wouldn't be super useful. But if Foobar is ever created, then the links would point to different places, and that might be useful. Headbomb { t · c · p · b} 15:38, 29 June 2018 (UTC)

OK, Coding... Tokenzero ( talk) 13:34, 30 June 2018 (UTC)

BRFA filed. See also pastebin log of simulated run. Two questions: should the talk page with {{WPJournals}} be created for each redirect, or only the main one (without and/& or abbreviated variants)? Should the redirects be given any rcats? Tokenzero ( talk) 19:28, 21 July 2018 (UTC)
I don't know that any rcats need to be added. I can't think of any worth adding (beyond {{ R from ISO 4}}). As for redirects, @ Randykitty:'s the one that asked for them, so maybe he can eludicate (probably all). I don't really see the point in tagging those redirects myself, but it doesn't do any harm to tag them either. Headbomb { t · c · p · b} 21:08, 21 July 2018 (UTC)
I'd appreciate if any new redirects could be tagged with {{WPJournals}} on their talk pages. This ensure that the journals wikiproject get's notified if they go to RfD, for example (it's rare, but it happens). Thanks. -- Randykitty ( talk) 02:19, 22 July 2018 (UTC)
@ Randykitty: if you're thinking about WP:AALERTS, what's important is that the target of the redirect is tagged. Tagging redirects is only useful if the target itself isn't tagged. Headbomb { t · c · p · b} 18:45, 26 July 2018 (UTC)
Some of the fullnames could be {{ R without mention}} but I don't think it'd be necessary. ~ Amory ( utc) 10:05, 22 July 2018 (UTC)

Y Done The bot finished (2739 redirects and 21 hatnotes) and I did the few outliers by hand. Tokenzero ( talk) 09:55, 26 July 2018 (UTC)

Bot needed for updating introduction section of portals

Many portals lack human editors, and need automated support to avoid going stale.

Most portals have an introduction section with an excerpt from the lead of the root article corresponding to the portal. The content for that section is transcluded from a subpage entitled "Intro".

The problem is that the excerpts are static, and grow outdated over time. Some are many years out of date.

What is needed is a bot to periodically update subscribed portals, by refreshing the excerpts from the corresponding root article leads.

Each excerpt should end similar to this:

...except that the link should go to the corresponding root article, rather than aviation.

There are over 1500 portals, and so it would be quite tedious for a human editor to do this. Some portals are supported, while others aren't updated for years.

Portals are in turmoil, and so, this is needed sooner rather than later.

Of course, they need greater support than this. But, we've got to start somewhere. As the intros are at the tops of the portal pages, it seemed like the best place to start.    — The Transhumanist   07:06, 14 April 2018 (UTC)

Probably better to do section transclusion, i.e, like Portal:Donald Trump/Intro Galobtter ( pingó mió) 07:09, 14 April 2018 (UTC)
I tried various forms of transclusion of the lead, and they all require intrusive coding of the source. Either section markers, or noinclude tags.
I think an excerpt-updater would be better, as there would be zero impact in the source pages in article space. Cumulatively, portals include tens of thousands of excerpts. Injecting code for each of those into article space would be unnecessary clutter, when we could have a bot update the portal subpages instead.    — The Transhumanist   00:38, 15 April 2018 (UTC)
Adding code to the mainspace pages to facilitate translations is a really bad idea. GF editors will just strip the coding. I don't support automatically changing the text of portals to match the ledes I'd rather see them redirected to the matching articles. Short excerpts don't really help the reader especially for broad concept articles whicb is what most portals try to cover. Legacypac ( talk) 04:19, 15 April 2018 (UTC)
Such coding generally has comments included with it so that GF editors don't remove it. As for support/oppose, that's irrelevant, as it is allowable code, like all the other wikicode we use. They added an entire extension to MediaWiki, available on all MediaWiki sites, for transcluding content based on inserted code, and it's already a standard feature on Wikipedia. I think such code makes the source less readable, and think it is best practice to avoid it. As long as there is an alternative, like bot-updated excerpts in portals.
Redirects would be links. Portals with just links are lists, not portals. To go to merely redirects, the portal design itself would need to be changed via a new consensus. Portals display content by transcluding excerpted content, that's their core design element. One of the biggest problems with portals is that there aren't enough editors to refresh the excerpts manually. Hence, the bot request.
Short excerpts are exactly the point of portals. To let editors dip in to the subtopics of a subject, in exactly the same way the main page does that for the entire scope of Wikipedia. While you may not find them useful, I find the main page highly useful and entertaining. I rarely follow the links to the rest of the article, but am glad I read the excerpts. The thing I love about it most is that the content changes daily. If portals were set up like that, I would visit the portals for my favorite subjects often. I might even assign one as my home page. Bots can accomplish this. But rather than tackling the whole thing at once, focusing on a bot for updating the portion at the topmost part of the page, the intro, seems like a good place to start.    — The Transhumanist   05:49, 15 April 2018 (UTC)
For a way to avoid that, see this revision I did on Portal:Water. Only problem is that it transcludes the entire page which is pretty heavy..then uses regex to find the first section... But yeah, I do agree with you - I don't see how the excerpts help much. Galobtter ( pingó mió) 06:04, 15 April 2018 (UTC)
Excerpts are the current design standard of portals. Changing the practice of using excerpts would be a change in the design standard of portals, which is outside the scope of this venue. Bots are for automating routine and tedious tasks. The method for updating excerpts has been for the most part to do it manually. A bot is needed to help with this onerous chore.    — The Transhumanist   06:09, 15 April 2018 (UTC)
Excerpts not helping much is part of the my general position that portals don't help much in general haha Galobtter ( pingó mió) 06:26, 15 April 2018 (UTC)
Based on the replies, the strongest exception to portals was that they are out of date and unmaintained. Both of which problems can be solved with bots. So, I've come to the experts. I'm sure they can find an automatable solution.    — The Transhumanist   23:21, 15 April 2018 (UTC)
Excerpts are part of the problem, not a solution. Portals are a failed idea amd no amount of bot mucking around is going to fix them. Legacypac ( talk) 18:08, 16 April 2018 (UTC)
Please keep in mind when transcluding anything from mainspace, fair use media is currently restricted to "articles" and should not be transcluded to Portal space. — xaosflux Talk 19:15, 17 April 2018 (UTC)
You mean, like pictures of book covers, logos, and the like?    — The Transhumanist   04:17, 18 April 2018 (UTC)
I tend to think both bot updating and transclusion of content from article space are problematic approaches. The stated problem that this request is trying to fix is that portal intros become stale over time because no one is paying attention. If some automated process is adopted, portal pages could well become broken and stay that way for long periods of time because no one is paying attention. I'd take stale over broken any day. Of course, the risk of such breakage depends on how the automation is done, but isn't a better solution to simply avoid potentially dated language/information in portal intros, or to mark such stuff with, say, {{ as of}} or {{ update after}}? This would require an initial round of assessments to add such templates (/fix problematic wording, etc.), but it looks like with all the attention portals are getting there will be a concerted effort to review portals once the current RFC fails is closed. - dcljr ( talk) 22:38, 19 April 2018 (UTC)
To reduce the "brokenness rate", the bot could first add {{ historical}} to all the portals which have less than a certain threshold of edits in a certain period, then after a week perform the proposed edit to the existing "intro" section/subpage of the portals which are not marked historical. -- Nemo 12:15, 16 May 2018 (UTC)
  • At this point the vast majority of portals have been updated with a variety of templates which transclude content from mainspace directly. This was a good idea at the time, but does not seem to have been the prefered option. JLJ001 ( talk) 15:25, 29 May 2018 (UTC)

Em dashes

I find myself removing spaces around em dashes frequently. Per the MOS, "An em dash is always unspaced (that is, without a space on either side)".

Example of occurrence

Since this is such a black and white issue, a bot to automatically clean this up as it happens would be useful. Kees08 (Talk) 05:49, 31 May 2018 (UTC)

This is a context-sensitive editing task, since some spaced em dashes should be converted to en dashes, not to unspaced em dashes. Others, such as those in file names, should be left alone. – Jonesey95 ( talk) 12:46, 31 May 2018 (UTC)
True there are cases it matters. Cases such as file names will require exceptions that can be written in the code. As for em dashes that should be en dashes, since and dashes can be spaced or unspaced, switching to spaced will not hury anything, unless there is a specific context I am missing. Kees08 (Talk) 03:40, 1 June 2018 (UTC)

Association footballers not categorized by position

Would it be possible to fill Category:Association footballers not categorized by position with the intersection of:

  • AND all players not in the following 15 football position categories:

Some members of WP:FOOTY have been working on adding missing positions, this would be much appreciated in order see all players which are missing a position category. Thanks, S.A. Julio ( talk) 04:36, 14 July 2018 (UTC)

@ S.A. Julio: Is this task expected to be a "one-off" run, or do you see it being a regular task? A one-off could possibly be several runs on AWB. The number of players involved could also be be quite big - do you know the total in the 9 categories? - I got over 9000 for the first one. Ronhjones   (Talk) 15:57, 14 July 2018 (UTC)
@ Ronhjones: I think probably a one-off. I can periodically check if any new articles are missing positions in the future, but currently there are far too many which would need to be added to Category:Association footballers not categorized by position. I've currently counted ~160,000 articles (some of which are likely not players, however), with two categories still running (though most of these are likely duplicates of what I've already counted). There are just over 113,000 players already categorised by position. S.A. Julio ( talk) 16:58, 14 July 2018 (UTC)
Coding... @ S.A. Julio: That's quite a few pages - too many for a semi-automated run(s). I think I'll skip the AWB option. I'll probably do it, so it can be re-run, say quarterly. I think I'll do it in stages - make a local list of players, and then process that file one line at a time. I assume if I combine all those categories, then we will end up with duplicates, which will need to be removed? Ronhjones   (Talk) 18:59, 14 July 2018 (UTC)
@ Ronhjones: I realised a simpler option might just to use only Category:Association football players by nationality and Category:Women's association football players, and look to the players one level down. Theoretically, every player should be in the top level category of their nationality (even if they fall under a subcategory as well). In reality I'm sure there are some pages which are improperly categorised, though likely a very low amount. And most articles should be players (only a few outliers such as Footballer of the Year in Germany and List of naturalised Spanish international football players). S.A. Julio ( talk) 19:26, 14 July 2018 (UTC)
@ S.A. Julio: If the bot was going to run daily, then maybe a change might be beneficial, but not much as an Api call can only get 5000 page titles per call, so it needs multiple calls anyway. For an occasional running bot, then it might be better to ensure you get them all. A test for "Association football player categories" above has given me 162,233 pages in 14642 categories - does that sound right? How can we eliminate the non-player articles? I can see a few answers (you might know better)...
  1. Do nothing. Just ignore the non-player pages that gets added to the category
  2. Find the pages and add a "Nobots" template to the page to deny RonBot access
  3. Find the pages and put them in a category, say, "Category:Association football support page", we can then add that category to the exclusion list. I like this one, it means that if someone creates a new page and it gets added by the bot to the category, then adding the new cat will ensure it gets removed on the next run.
  4. I did think of a search ' insource: "Infobox football biography" ' - that gives 153,074 pages, can one be sure that every page has that code - looks like a big difference from the pages I found? If so we could have used that to get the first list! :-)
Other than that, after seeing how well the code works, the overall plan for processing will be...
  1. Get all in "Association football position categories" and keep as a list in memory
  2. Get "Category:Association footballers not categorized by position", and check that they all should be there (i.e. no match with above list) - if there is a match then remove the cat from that page, if not add to the list, so we don't have to try to add it again.
  3. Get all in "Association football player categories", and only keep only those that don't match the other list
  4. From the resulting "small" list, edit all the pages to add the required category.
Ronhjones   (Talk) 15:24, 15 July 2018 (UTC)
@ Ronhjones: I think option #3 sounds best, this could also be useful for other operations (like finding players not categorised by nationality). With option #4 the issue is that a small percentage of these articles are missing infoboxes (I even started Category:German footballers needing infoboxes a while back for myself to work on). I've started to gather a list of articles which should be excluded, I could begin adding them to a category, would it go on the bottom of the page or on the talk page (similar to categories like Low-importance Germany articles)? And sounds like a good plan, thanks for the help! S.A. Julio ( talk) 20:02, 15 July 2018 (UTC)
@ S.A. Julio:It will be easier to code with it on the article page - I use the same call over and over again, just keep changing the cat name ( User:RonBot/7/Source1) - which returns me the page names. I'll crack on with the plan. Let me know what you call the new category. I'll probably do a trial for the second set, and see how many we get, tonight. The comparison of the two numbers will give you the true number of pages that will end up in Category:Association footballers not categorized by position. Ronhjones   (Talk) 20:22, 15 July 2018 (UTC)
17 cats found, 113279 pages contained. That means 162233-113279=48954 pages to examine. Ronhjones   (Talk) 21:02, 15 July 2018 (UTC)
Dummy run comes up with a similar figure. I've put the data in User:Ronhjones/Sandbox5 - NB: I used Excel to sort it, so it may have corrupted some accented characters. Ronhjones   (Talk) 02:35, 16 July 2018 (UTC)
@ Ronhjones: Alright, I've adjusted some categories which shouldn't have been under players, the next run should have a few less articles. I created Category:Association football player support pages, and now have a list at User:S.A. Julio/sandbox of pages which need to be added. Could a bot categorise these? S.A. Julio ( talk) 04:10, 16 July 2018 (UTC)
I can do a semi-auto AWB run for that list, later Ronhjones   (Talk) 12:38, 16 July 2018 (UTC)
930 pages done - one was already there, and I did not do the two categories - I will only be adding the category to pages in main-space, so cats, templates, drafts, userpages, etc. don't need to go into Category:Association football player support pages. You can have them there if you want - it's not an issue, just less to add. Ronhjones   (Talk) 15:56, 16 July 2018 (UTC)

@ Ronhjones: Alright, thanks! I think the issue is that there were two articles redirecting to the category mainspace. List of Eastleigh F.C. players was inadvertently categorised (missing a colon), and List of Australia national association football team players should have redirected to an already existing article. Now fixed. S.A. Julio ( talk) 16:38, 16 July 2018 (UTC)

@ S.A. Julio: OK, I'll try another dummy run later, with the new category in the "exclusion" list. We should end up with about 1000 less matches! I'll sort the list in python before exporting, then put in my sandbox again. I'll also do a couple of tests of the "add cat" and "remove cat" subroutines in user-space, just to check the code works OK. Then once you are happy that you have put all the "odd" pages in Category:Association football player support pages, then it will be time to apply for bot approval. Ronhjones   (Talk) 17:54, 16 July 2018 (UTC)
Please also see User_talk:Ronhjones#Category:Association_football_player_support_pages. Do have a re-think about name, and let me know. maybe "Association football non-biographical" Ronhjones   (Talk) 19:55, 16 July 2018 (UTC)
@ Ronhjones: Sounds good. I've been going through some category inconsistencies, one of which is stub sorting. For example, Category:English women's football biography stubs is categorised under Category:English women's footballers, yet "football biographies" is not necessarily limited to players (can include managers, referees, officials/administrators etc.). I'm working on fixing the category structure, hopefully should be finished relatively soon. Regarding the name, what about Category:Association football player non-biographical articles? Or another title? S.A. Julio ( talk) 07:12, 17 July 2018 (UTC)
@ S.A. Julio:Tweaked code to only retrieve articles and categories. Now there are 161259 articles. 47013 are not matching - User:Ronhjones/Sandbox5 (not excel sorted this time, names look OK). I agree on cat name - will add request on cat move later (and let bot move them) Ronhjones   (Talk) 12:47, 17 July 2018 (UTC)
Now listed Wikipedia:Categories_for_discussion#Current_nominations. As soon as the move is finished, I will file the BRFA. Ronhjones   (Talk) 15:46, 17 July 2018 (UTC)
@ Ronhjones: Alright, perfect. The other day I added the position category for ~1700 articles, so the list should be slightly shorter now. I'll now finish working on fixing the stub category structure, hopefully there will be less results for the next run (like A. H. Albut, who was only a manager). S.A. Julio ( talk) 17:01, 18 July 2018 (UTC)
@ S.A. Julio: Less is better :-) I will add that my bot task 3, just adds a template to pages, and usually manages 8-10 pages a minute, so expect a quite long run when we get approval - 8 pages a minute is 4 days for 45000 articles - but of course only for the first run, subsequent runs will be much faster as there will be a lot less pages to tend to (plus the basic overhead of getting the page lists - 2h) Ronhjones   (Talk) 17:38, 18 July 2018 (UTC)

BRFA filed Ronhjones   (Talk) 19:53, 19 July 2018 (UTC)

Y Done Ronhjones   (Talk) 00:50, 2 August 2018 (UTC)

Someone to take over User:HasteurBot

Hasteur ( talk · contribs) has retired, it would be good if someone could take over the bot, that would be nice

The code can be found at is at https://github.com/hasteur/g13bot_tools_new, with hasteur stipulating "All I ask is that the credit for the work remains."

@ Firefly: Hasteur posted this on your talk page, any interest in taking over? Headbomb { t · c · p · b} 10:40, 4 June 2018 (UTC)

@ Headbomb: Yep, I'm happy to do this. Will look at it and submit a BRFA tonight (hopefully!) ƒirefly ( t · c · who? ) 13:27, 4 June 2018 (UTC)

Orphan tags

Hi, could you please give a bot an extra task of removing orphan tags from articles that have at least one incoming link from mainspace articles, lists and index pages but not disambig pages or redirects as per WP:Orphan. The category is Category:All orphaned articles but exclude Category:Orphaned articles from February 2009 as an admin is checking those. A rough estimate is there are at least 10,000 misplaced tags, thanks Atlantic306 ( talk) 17:07, 2 June 2018 (UTC)

JL-Bot already removes the orphan tag, but based on the original discussion it requires 5 or more links (ignoring type). This was done as checking the type of link is not always straightforward and adds processing time. The 5 links was a community agreed compromise. The only exception is dab pages which should never be tagged as orphans (it will de-tag those regardless of number of links). That task runs ever week or two. If someone wants to build a fancier checking, let me know and I will discontinue mine. -- JLaTondre ( talk) 22:44, 2 June 2018 (UTC)
This botreq started on my talk page, I suggested posting here first, glad as I didn't know about JL-Bot. I wouldn't know how to improve on JL-Bot other than by using API:Backlinks but it's a wash in terms of functionality. BTW I wrote a command-line utility wikiget (github) that can be hooked through a system call eg. "wikiget -b Ocean -t t" will output all transcluded backlinks for Ocean. It handles all the paging and various API:Backlink options. -- Green C 23:15, 2 June 2018 (UTC)
Atlantic306, how is this different from the request you made a month ago at Wikipedia:AutoWikiBrowser/Tasks#AWB Request 2, which was decidedly a non-starter? Pinging the other contributors from that discussion, Premeditated Chaos & Sadads. If a large # of orphans have already been manually checked and all that remains for that group is the busywork of removing the tag, then that might be ok if others agree, but we need to see a link to such a discussion.
JLaTondre, do you have a link to the 5+ link discussion?   ~  Tom.Reding ( talkdgaf)  12:03, 4 June 2018 (UTC)
Hi, this is different to the AWB proposal as that was for the early category of 9000 articles whereas this proposal leaves that category out as it is being manually checked and referrs to all of the remaining orphan categories. As above, a bot is already removing tags but I think this needs to be set at one valid link as per WP:Orphan as the JPL bot approval was back in 2008 and now consensus has changed that one valid link is sufficient for the tag removal, thanks Atlantic306 ( talk) 12:11, 4 June 2018 (UTC)
Discussion is shown here and here. GreenC ( talk · contribs) has said his bot can differentiate the links so perhaps his bot could take over the task, thanks Atlantic306 ( talk) 12:25, 4 June 2018 (UTC)
WP:ORPHAN says "Although a single, relevant incoming link is sufficient to remove the tag, three or more is ideal...", I would object to a bot removing orphan tags on articles with fewer than 3 links on this basis alone. Headbomb { t · c · p · b} 12:25, 4 June 2018 (UTC)
The conclusions reached at WP:AWB/Tasks#AWB Request 2 apply to all most orphans, from 2009 up until some arbitrary time in the near-past.   ~  Tom.Reding ( talkdgaf)  12:30, 4 June 2018 (UTC)
( edit conflict) I still think automated removal of orphan tags in general is a bad idea. To me, going through the orphan categories isn't just about making sure something else points there. Orphan-tagged articles often suffer other issues, so the tag is kind of a heads-up that the article needs to be looked at. It's like a sneeze. It could be nothing, but it could mean you have allergies, or a cold.
Same thing with an orphan-tagged article. It could be a great but under-loved topic. But maybe it's a duplicate article or sub-topic and can be merge/redirected. Maybe it's a copyvio that flew under the radar. Maybe it's not actually notable and should be deleted. Maybe the title is wrong and it's orphaned because all the links point to the right (redlinked) title. Maybe the incoming links are incorrect and are trying to point to something else, and need to be changed.
If you just strip the tags without checking the article, you're getting rid of the symptom without checking to see if there's an underlying illness, which essentially reduces the value of the tag in the first place. ♠ PMC(talk) 12:55, 4 June 2018 (UTC)
Echoing this from PMC. We don't suffer from having a neverending backlog, and the current bots (per discussion above) and AWB minor fixes already remove templates from pages that are already in the clear. I would much rather that we take the time to go through and find merges or deletes, get these pages added to WikiProjects, and generally do other minor cleanup that happens when human eyes are on the pages. Anything that has lived with few or no links for 9+ years, suggests to me that it hasn't been integrated into the Wiki adequately. If we just remove the tag, we remove the likilihood of it's discovery again. Sadads ( talk) 14:35, 4 June 2018 (UTC) 

Popular pages - indexing and WikiProject banners

Could someone help with doing the following to the pages in Category:Lists of popular pages by WikiProject?:

  • add the name of the WikiProject as a sort key to Category:Lists of popular pages by WikiProject
  • add the corresponding WikiProject category, with sort key "Popular Pages"
  • create a talk page (if it doesn't exist) and add the corresponding WikiProject banner

Oornery ( talk) 05:14, 6 June 2018 (UTC)

WikiProject Athletics tagging

It's been four years since this project last had a tagging run and I'm looking to get Article Alerts to cover the many relevant articles that have not been tagged since. Anyone interested in doing a tagging run of the articles and categories under Category:Sport of athletics? SFB 19:03, 4 May 2018 (UTC)

  Working on this tagging part over the next few days.   ~  Tom.Reding ( talkdgaf)  22:11, 4 May 2018 (UTC)
Sillyfolkboy, 2 questions:
  1. there are ~6000 ~5600 pages to tag. I will propagate the |class= of other WikiProjects, if available. Should I leave |importance= blank, or use |importance=Low? The idea being that if importance were > "Low", it probably would have been tagged as such by now. I can also do this for articles less than a certain size instead.
  2. I'll leave pages alone (for now) which do not have any WikiProject tagged. To make classification of the resulting unclassified pages faster, I can apply |class=Stub to all pages less than 1000, 2000, 3000, etc. bytes. Please take a look at that list of ~6000 ~5600 and let me know what threshold below which to tag pages as stubs (if at all).
WP Athletics notified for input as well.   ~  Tom.Reding ( talkdgaf)  23:47, 4 May 2018 (UTC)
@ Tom.Reding: I would recommend propagation of other project's class if available, or mark as stub if under 2000 bytes. You can place importance as low by default. The project is quite well developed now, so the vast majority of important content is already tagged. These will mainly be recent articles on lower level athletes and events.
Category:Triathlon, Category:Duathlon, Category:Foot orienteers‎, Category:Athletics in ancient Greece and Category:Boston Marathon bombing need to be manually excluded. Thanks SFB 01:41, 5 May 2018 (UTC)
PetScan link updated to exclude those cats, ~400 removed. Won't start on this for a few days for possible comments.   ~  Tom.Reding ( talkdgaf)  03:15, 5 May 2018 (UTC)
Orienteers were not excluded on the previous run, and consequently a lot of orienteering articles are now tagged as being within the scope of WikiProject Athletics, even though (with some exceptions) they're actually not. Would it be possible to untag them by bot? Sideways713 ( talk) 16:20, 5 May 2018 (UTC)
Sideways713, pages < 2000 b mostly done. Will leave |class= blank for those >= 2000 b. Let me know if there's any desired change to the above guidance. Can do the untagging after.   ~  Tom.Reding ( talkdgaf)  13:16, 18 May 2018 (UTC)
  Done.
Re: Orienteering+Athletics, this scan shows 459 which are tagged as both. However, just because someone is in Orienteering doesn't mean they shouldn't be in Athletics, only that they're a candidate for removal. So it's probably best to do this manually, unless there's some rigorous exclusion criteria available?   ~  Tom.Reding ( talkdgaf)  17:28, 19 May 2018 (UTC)
If you exclude those in subcategories of Track and field athletes (at any level), and those in subcategories of Sports clubs at level 3 or lower, and possibly those in subcategories of Mountain running (I'm not entirely sure about this one - how does @ Sillyfolkboy feel?), and untag the rest, that should be good enough. (That's only a few dozen exclusions.) There will probably still be some false removals - orienteers who dabble in running enough they could be marked as runners on wiki, but aren't yet - but it's a lot less effort to happen upon those later and tag them manually than it is to untag the other 400 pages manually, and the false removals should all be of athletes whose main claim to fame is orienteering and whose articles will be more naturally developed by members of that wikiproject. Sideways713 ( talk) 22:43, 19 May 2018 (UTC)
I'm good with the above. There isn't actually a whole lot of crossover between orienteering and elite long-distance running, probably because the later is much better paying than the former so it isn't something, say, a marathon specialist would consider normally. SFB 23:43, 19 May 2018 (UTC)
Sideways713 & SFB: here is the PetScan (434 results) for these doubly-tagged pages with Category:Track and field athletes & Category:Mountain running, both fully recursed, removed. I've tried removing Category:Sports clubs at level 3 or lower via PetScan and locally via AWB's variably-recursive category utility, but both timeout at depths of 5 and greater. The tree grows very quickly, with ~13,000 unique mainspace pages at a depth of 2, to ~311,000 at a depth of 4. D2's pages subtracted from D4's pages gives a ~298,000 pool of pages to try to remove from the 434, but only 2 pages are removed ( Brit Volden & Øyvin Thon), leaving 432, so this isn't a practical approach.   ~  Tom.Reding ( talkdgaf)  14:31, 20 May 2018 (UTC)
@ Tom.Reding: On that basis, I would leave this to a manual task. Given the small article base, there aren't any major downsides to the accidental inclusion in scope, especially as WikiProject Orienteering seems inactive at the moment. SFB 14:47, 20 May 2018 (UTC)
I meant exclude levels 1, 2 and 3 but don't exclude 4 and up, rather than the opposite. Sorry if that was unclear. Sideways713 ( talk) 16:24, 20 May 2018 (UTC)
Right, but it's a distinction without a real difference. It would return a subset of the ~300k I found (since I lumped level 3 into those 300k instead of excluding them), so I decided to not be any more precise, since there's no need - the result would be either the same (i.e. I'd still find those same 2 to be removed from the 434) or worse (I'd find 0 or 1 of those same 2); basically a way for programmers to rationalize exerting least effort...   ~  Tom.Reding ( talkdgaf)  19:45, 20 May 2018 (UTC)
No, what I meant is this, which gives 420 results. Sorry if there's a communication problem, Sideways713 ( talk) 21:53, 20 May 2018 (UTC)
Sideways713, sorry for the delay. Just to be sure: those 420 results need to have {{ WikiProject Athletics}} removed?   ~  Tom.Reding ( talkdgaf)  14:25, 5 June 2018 (UTC)
SFB, can you confirm instead?   ~  Tom.Reding ( talkdgaf)  21:11, 6 June 2018 (UTC)
@ Tom.Reding: above link is down so I can't see the results, but I still think this action is better done manually, given the cross-over in the sports (i.e. just because an orienteer isn't currently in a track athlete category doesn't necessarily mean the athlete has not competed in track). Happy for you to proceed on your rationalized approach per above. SFB 22:27, 6 June 2018 (UTC)

Bot to correct common ", ". and "? typos

A very common typo I see all the time is when end quotation marks are placed before a comma (like this: ",) or a period when at the end of a sentence (like this: ".), etc. The rule is that commas, periods, and question marks are placed inside quotation marks, like this: ."/,"/?"

I see these mistakes everywhere I go, and it seems that no one bothers to correct them. Perhaps there should be a bot that swaps the quotation marks and punctuation marks to the position that they should be in. Radioactive Pixie Dust ( talk) 05:36, 26 July 2018 (UTC)

Are they really "common typo[s]"? After all, the quotation mark should only include punctuation if it is part of the quoted text and how will a bot be able to determine this? Our Manual of Style recommends using the logical quotation style, see MOS:QUOTEMARKS and MOS:LQ for more details. Regards So Why 07:21, 26 July 2018 (UTC)
@ Radioactive Pixie Dust: There are many things like this that are taught in schools as "rules" but are in reality merely recommendations preferred by the given national variety or house style that do not have a strong basis in actual usage of the language by its users (some of which are tallied here). Wikipedia, being an international project which strives for a neutral point of view and weighs heavily on community consensus, has built its own series of recommendations on style which tries to be objective and is always subject to scrutiny. Nardog ( talk) 08:20, 26 July 2018 (UTC)
  • Declined Not a good task for a bot. Per WP:CONTEXTBOT. See also MOS:LQ - if the punctuation is part of the material that is being quoted, it goes inside the quote marks; if it is not, it goes outside. -- Redrose64 🌹 ( talk) 11:51, 26 July 2018 (UTC)

Replace WikiProject History of Photography templates with WikiProject Photography

With these templates successfully approved for merging, I am request a bot that will replace any WikiProject History of Photography templates with the WikiProject Photography template with the "history=yes" parameter.

If a page already has the WikiProject Photography template, the "history=yes" parameter should be added (if it's not there already). If the page already has the WikiProject Photography template with the "history=yes" parameter, then the WikiProject History of Photography template simply needs to be removed.

If there are differing quality ratings between these two templates, the rating given by the WikiProject Photography template should be applied. If WikiProject Photography template has not given a quality rating and the WikiProject History of Photography template has, the WikiProject Photography template should inherit the WikiProject History of Photography's quality rating. Qono ( talk) 23:07, 24 July 2018 (UTC)

My bot has approval for this sort of task; I'll handle it. Primefac ( talk) 23:52, 24 July 2018 (UTC)

Replace links to AP news hosted by Google with AP website links

Can anyone create a bot to replace links matching the regex https://www.google.com/hostednews/ap/.*\?docID=([0-9a-f]{32}) with https://apnews.com/$1. There are about 2800 links to AP news hosted by Google and all the links are dead. I estimate about 20–30% of these links have the docId tag and can be rewritten to link to AP's website. This doesn't always work, but it works often enough to make this worth the effort. You'll need to download the page first and check for absence of the string "The page you’re looking for doesn’t exist. Try searching for a topic." and the presence of a non-empty div of articleBody class. You'll also have to flip the deadurl tag to no after replacement and avoid references that have already been archived. Some examples:

Gazoth ( talk) 13:09, 8 June 2018 (UTC)

Bot to tag all remaining disambiguation links.

We developed a consensus a while back to tag all remaining disambiguation links in the project with a {{ dn}} tag. In order to avoid excessive tagging, the idea is to generate a list of all links, let it sit for a few weeks, then recheck it and tag everything that has still not been fixed after that interval. Any takers? bd2412 T 22:20, 17 April 2018 (UTC)

@ BD2412: - I think I can write this one. Basically, we're looking for links to pages in Category:Disambiguation pages inside articles. Generate a list based on that, and after a couple weeks - rerun with tagging enabled for the links in that list. The query to find those pages should be: quarry:query/26624 if I understood right (Quarry's taking a long time to run it - DBeaver came back with 20,000+ hits) SQL Query me! 21:56, 23 April 2018 (UTC)
There should be fewer than 8,200 total disambiguation links at this time, per The Daily Disambig; of those, at least 2,100 should already be tagged (you can exclude pages that already have such a tag on them), although many of the articles with tags are likely to include multiple tagged links, so I would think that the task should involve no more than 6,000 links to be tagged. bd2412 T 22:29, 23 April 2018 (UTC)
Good point, I'll rewrite the query to exclude {{ dn}}. SQL Query me! 22:32, 23 April 2018 (UTC)
@ SQL: Hi, just following up on this. Cheers! bd2412 T 22:51, 9 June 2018 (UTC)

Change external links for Colombian municipalities

Recently the links to the official websites of the municipalities of Colombia have been changed, e.g. Zipaquirá (old, dead) to Zipaquirá (new, live). The only difference I saw with checking some of the links is the removal of "index.shtml". I changed it manually for Zipaquirá, but there are 1200+ municipalities to be done, so best done by a bot. Thanks in advance! Tisquesusa ( talk) 17:04, 7 August 2018 (UTC)

Y Done. I have updated the links using AWB. Rcsprinter123 (discourse) 15:17, 8 August 2018 (UTC)

Abbreviations and machine generated typos at uz.wikipedia

Can someone here please create a bot to take care of the abundance of encyclopedia abbreviations uzbek wikipedia? While some abbreviations are rather easy to figure out, other abbreviations may be challenging for unfamiliar or inexperienced readers. This task is incredibly tedious to do manually. Here are some of the most common abbreviations (or errors) and their needed replacements. Please take note of common Uzbek suffixes such as -lar, -i, -si, -da, ning, etc

  1. yanv. →‎ yanvar (and yanv.da or yanv. da →‎ yanvarda)
  2. fev. →‎ fevral (and fev.da or fev. da →‎ fevralda)
  3. apr. →‎ aprel (and apr.da or apr. da →‎ aprelda)
  4. avg. →‎ avgust (and avg.da or avg. da →‎ avgustda; Commonly miswritten by bot as "avg .")
  5. sent. →‎ sentabr (and sent.da or sent. da →‎ sentabrda)
  6. okt. →‎ oktabr (and okt.da or okt. da →‎ oktabrda)
  7. noyab. →‎ noyabr (and noyab.da or noyab. da →‎ noyabrda)
  8. dek. →‎ dekabr (and dek.da or dek. da →‎ dekabrda)
  9. -a. →‎ -asr (and -a.lar →‎ -asrlar)
  10. b-n →‎ bilan (only if "b-n" alone, NOT inside another word)
  11. va b. →‎ va boshqalar
  12. d-r → doktor
  13. f-k → fabrika (may be followed by several suffixes, but do not change if another letter directly in front of "f")
  14. f-t →‎ fakultet (may be followed by several suffixes, but do not change if another letter directly in front of "f")
  15. hoz. →‎ hozirgi
  16. FA →‎ fanlar akademiyasi (Only if "FA" by itself and capitalized)
  17. i.ch. →‎ ishlab chiqarish
  18. in-t →‎ institut (may be followed by several suffixes, but do not change if another letter directly in front of "in-t")
  19. i.t. →‎ ilmiy tadqiqot (may be followed by several suffixes, but do not change if another letter directly in front of "i.t." or if both are capitalized. Often written as i. t.)
  20. k-z → kolxoz (may be followed by several suffixes, but do not change if another letter directly in front of "k")
  21. kVt-soat -> Kilovatt-soat
  22. k-t → kombinat (may be followed by several suffixes, but do not change if another letter directly in front of "k")
  23. lab. → laboratoriya (may be followed by several suffixes, but do not change if another letter directly in front of "l")
  24. mayd. →‎ maydon ("M" will probably be capitalized and should remain so)
  25. prof. →‎ professor (may be followed by several suffixes, but do not change if another letter directly in front of "p")
  26. qad. →‎ qadimgi
  27. q.x. →‎ qishloq xoʻjaligi (Commonly miswritten by bot as "q. x.")
  28. r-n → rayon (may be followed by several suffixes, but do not change if another letter directly in front of "r")
  29. rej. → rejissyor (may be followed by several suffixes, but do not change if another letter directly in front of "r")
  30. RF → Rossiya Federatsiyasi (Only if "RF" by itself and capitalized)
  31. radiost-ya →‎ radiostansiya
  32. telest-ya →‎ telestansiya
  33. sh. →‎ shahri (only if it is "sh." alone; if written as "sh.lar", then it should be "shaharlar")
  34. s-z → sovxoz (may be followed by several suffixes, but do not change if another letter directly in front of "s")
  35. taxm. →‎ taxminan
  36. t-ra →‎ temperatura (may be followed by several suffixes, but do not change if another letter directly in front of "t", or if the "a" is followed by an "n", ie, transport)
  37. t.y. →‎ temir yoʻl (may be followed by several suffixes, but do not change if another letter directly in front of "t". DON'T change the "y." to "yil")
  38. un-t →‎ universitet (may be followed by several suffixes, but do not change if another letter directly in front of "u")
  39. y.lar →‎ yillar (Do not allow bot to perform function if the article title starts with "Y")
  40. y.da →‎ yilda (Do not allow bot to perform function if the article title starts with "Y")
  41. ya.o. →‎ yarim orol (may be followed by several suffixes, but do not change if another letter directly in front of "y". Abbreviation sometimes written as "ya. o.")
  42. z-d →‎ zavod (may be followed by several suffixes, but do not change if another letter directly in front of "z")
  43. ` → ʻ (this is the correct punctuation; do not change if in a template, infobox, or category; only change if in the main text of a page)
  44. 1-jahon urushi →‎ Birinchi jahon urushi (Sometimes comes up as 1jahon urushi ; may be followed by several suffixes)
  45. 2-jahon urushi →‎ Ikkinchi jahon urushi (Sometimes comes up as 2jahon urushi ; may be followed by several suffixes)
  46. Axolisi →‎ Aholisi (Bot error due to similarity of the cyrillic letters)
  47. jan.-sharqida →‎ janubi-sharqida
  48. shim.-sharqida →‎ shimoli-sharqida
  49. jan.gʻarbida →‎ janubi-gʻarbida
  50. shim.gʻarbida →‎ shimoli-gʻarbida
  51. jan.dagi →‎ janubidagi
  52. shim.dagi →‎ shimolidagi
  53. jan.da →‎ janubida
  54. shim.da →‎ shimolida
  55. jan.dan →‎ janubidan
  56. shim.dan →‎ shimolidan

and typos:

  1. axoliyey →‎ aholisi
  2. poytahti →‎ poytaxti
  3. shaxri →‎ shahri
  4. xalk →‎ xalq
  5. yil da →‎ yilda
  6. yil lar →‎ yillar
  7. katnashchisi →‎ qatnashchisi
  8. suyuklanish → suyuqlanish
  9. Kozogʻiston → Qozogʻiston
  10. Koraqalpogʻiston → Qoraqalpogʻiston

Please try not to change capitalization in the process.

If a bot could take care of these things, it would be absolutely fantastic. Thank you so much to anyone who can make a bot to take care of these. Thank you. If you have any questions about suffixes or anything like that, please don't hesitate to ping me.-- PlanespotterA320 ( talk) 23:50, 23 July 2018 (UTC)

I would think you need a person who is familiar with the language. It seems like a good task for AWB. I see uz:Wikipédia:AutoWikiBrowser/CheckPage is not created, but I would think that issue can be overcome. uz is a site option in the AWB menus, but won't work without the user name in the appropriate Check Page. Then it's just a matter in running a wikisearch, in time with a suitable "search and replace" - then each proposed change can be seen before you click "Save". I tried https://uz.wikipedia.org/?search=insource%3A+yanv.&title=Maxsus:Search&profile=advanced&fulltext=1&ns0=1 - looking for "yanv." in the source of any article and got 464 hits. Ronhjones   (Talk) 21:04, 28 July 2018 (UTC)
I have familiarity with the abbreviations and commons errors, and over 10,000 edits to uz.wikipedia. I've tried to download AWB but it won't work on my computer, and I can't run JWB on uz. wikipedia. We used to have a bot (Foydalanuvchi:Ximik1991Bot), but user Ximik1991 left the wiki a while ago. I can compile a complete list of words with abbreviations, including suffixes. I know nothing about bot-writing and can't use AWB...if there is a person or bot that would help overcome this issue, that would be much appreciated. I am literally fixing abbreviations and typos article by article, it's taking a long time.-- PlanespotterA320 ( talk) 23:19, 29 July 2018 (UTC)
There's no JWB either.-- PlanespotterA320 ( talk) 23:21, 29 July 2018 (UTC)

Will take this one. Can move to uzwiki. -- Edgars2007 ( talk/ contribs) 17:31, 30 July 2018 (UTC)

[r] → [ɾ] in IPA for Spanish

A consensus was reached at Help talk:IPA/Spanish#About R to change all instances of r that either occur at the end of a word or precede a consonant (i.e. any symbol except a, e, i, o, or u) to ɾ inside the first parameter of {{ IPA-es}}. There currently appear to be about 1,190 articles in need of this change. Could someone help with this task with a bot? Nardog ( talk) 19:24, 12 June 2018 (UTC)

Create list based on size of article

Hopefully a simple request. I would like a list of all pages that are tagged with {{ WikiProject Green Bay Packers}}, assessed as a stub, and assessed as low-importance listed out in a table ( Low-class stubs). The table would be two columns, one listing the article's name and the other the article size in bytes ( User:Gonzo fan2007/Stubs would be a fine place to put it). As long as the table is sortable, I don't care what order the articles are in the table. I am looking to review all of the WikiProject's stubs and reassess as start or C-class if necessary and would like to start by looking at the largest articles (and thus the most likely to no longer be a stub).

Let me know if there are any questions. Thank you for any assistance. « Gonzo fan2007 (talk) @ 18:50, 15 August 2018 (UTC)

Coding .. Green C 22:46, 15 August 2018 (UTC)
@ Gonzo fan2007: - done. I left a 1-line unix command there in case you want to try the same with other templates or arguments, looks like a good method for de-stubbing. -- Green C 23:40, 15 August 2018 (UTC)
Awesome! Appreciate the work GreenC! If it is too much of a pain, don't worry about it, but how hard would it be to wikilink the article titles in the table? « Gonzo fan2007 (talk) @ 23:41, 15 August 2018 (UTC)
Done. -- Green C 23:50, 15 August 2018 (UTC)
You're awesome GreenC! Thanks for the quick turnaround. « Gonzo fan2007 (talk) @ 02:14, 16 August 2018 (UTC)

creating minor planets articles

hi in fr.Wikipedia.org a bot creat thousands articles about minor planets with good quality please creating this articles for English Wikipedia — Preceding unsigned comment added by Amirh123 ( talkcontribs) 11:22, 10 August 2018 (UTC)

See WP:MASSCREATION, this would need a strong consensus on WP:VPR or the like. Anomie 12:25, 10 August 2018 (UTC)
We actually did the opposite - redirect a bunch of articles on minor planets into lists; see WP:DWMP: "Before 2012, when this notability guideline did not yet exist, approximately 20,000 asteroid stubs were mass-created by bots and human editors. This created a considerable backlog of articles to be cleaned up, redirected, merged, or deleted." Galobtter ( pingó mió) 12:30, 10 August 2018 (UTC)
@ Amirh123: You have made similar requests before, for a variety of topics. I refer you to some of the replies left at Wikipedia:Bot requests/Archive 75#please make bot for adding articles for footballdatabase.eu and Wikipedia:Bot requests/Archive 76#Requests from Amirh123. -- Redrose64 🌹 ( talk) 12:36, 10 August 2018 (UTC)
@ Amirh123: please stop making bot requests that have no chance of being adopted, or for which you haven't demonstrated consensus. Headbomb { t · c · p · b} 13:15, 10 August 2018 (UTC)

Move WikiProject Articles for creation to below other WikiProject templates

In Special:Diff/845715301, PRehse moved WikiProject Articles for creation to the bottom and updated the class for WikiProject Video games from "Stub" to "Start". Then, in Special:Diff/845730267, I updated the class for WikiProject Articles for creation, and moved WikiProject Articles for creation back to the top. But then, in Special:Diff/845730984, PRehse decided to move WikiProject Articles for creation to the bottom again. For consistency, we should have a bot move all {{ WikiProject Articles for creation}} templates on talk pages to below other WikiProject templates. If the WikiProject templates are within {{ WikiProject banner shell}}, then {{ WikiProject Articles for creation}} will stay within the shell along with other WikiProject templates. GeoffreyT2000 ( talk) 16:44, 14 June 2018 (UTC)

Needs wider discussion. That sounds like a lot of bot edits for questionable benefit. Seek approval at one of the village pumps. Anomie 17:35, 14 June 2018 (UTC)

This change should be fine per Wikipedia:Talk page layout. -- Magioladitis ( talk) 18:24, 14 June 2018 (UTC)

If anything, this could be bundled in AWB, assuming it has consensus, so that AWB bots make the change when they do other task. However, this very likely wouldn't get consensus to be done on its own. Headbomb { t · c · p · b} 20:08, 14 June 2018 (UTC)
There are no bots doing tasks in this direction. Unless, we decide that wikiproject tagging bots should also be doing this. Only Yobot ued to do this but right now there is no guideline to ask bot owners to perform this action. So we have two ways: Form a strategy or approve a sole task for this. I would certainly support the task to be done if ther was a discussion held somewhere about this task of similar tasks. -- Magioladitis ( talk) 22:54, 14 June 2018 (UTC)
  • I concur this needs wider discussion. Why does the order of the WikiProject banners matter? Primefac ( talk) 02:19, 15 June 2018 (UTC)
For instance, we have a lose rule that WikiProjct Biography "comes before any other WikiProject banners". At the moment, I do not see why WikiProject Articles for creation should be at the bottom of all Projects but there is a place to discuss this. If this get support we should then create bots to do it. It's about 60,000 talk pages with this template. -- Magioladitis ( talk) 07:31, 15 June 2018 (UTC)
Dedicated WP:TPL bots never had support as far as I recall. Maybe there was one shoving banners into the metabanner after a certain threshold, but that'd be the only one if it ever was a thing. I don't see what'd be different here. Headbomb { t · c · p · b} 13:05, 15 June 2018 (UTC)
There was a bot that was adding WPBS and was doing that task and Yobot was doing it as part of WikiProject tagging. My main questions are: a) whether we have a guarantee that current BAG will continue to accept this as a secondary task and b) is there a need to actully do it as a sole task? WP:TPL bots did not have much luck in the past due to not conrecte rules (which now we have; I decicated a lot of time in this direction) and not built in AWB tools (which now we have since at some point I did some thousands of edits to rename templates to standard names). -- Magioladitis ( talk) 14:12, 15 June 2018 (UTC)
BAG cannot guarantee that any specific thing will be accepted by the community. If a task is proposed and there is consensus for it (or at least a lack of objections after a call for comments/trial), it'll be approved. If there is no consensus for the task to be done, it won't be approved. Headbomb { t · c · p · b} 20:19, 18 June 2018 (UTC)

Indexing talk page

User:Legobot has stopped indexing talk pages and archives and User:HBC Archive Indexerbot is deactivated. I would like a replacement for that task. -- Tyw7  ( 🗣️ Talk to me •  ✍️ Contributions) 20:06, 12 June 2018 (UTC)

Any these work? Category:Wikipedia_archive_bots -- Green C 14:31, 17 June 2018 (UTC)
Most of them are archive bots. Looking for index bots to tame over Legobot which has developed a big and doesn't index all talk pages. -- Tyw7  ( 🗣️ Talk to me •  ✍️ Contributions) 17:52, 17 June 2018 (UTC)
User:Legobot has 33 tasks not sure which one (Task #15?). Did Legobot say why they stopped or was it abandoned without a word? -- Green C 20:41, 18 June 2018 (UTC)
it's working on random page and many people had reported it but it hadn't been fixed. See the discussion at User talk:Legobot -- Tyw7  ( 🗣️ Talk to me •  ✍️ Contributions) 21:05, 18 June 2018 (UTC)
Looks like Legobot is on vacation until July 7. They should either fix the bugs (if serious) or give permission for someone else to take it over, should anyone wish. -- Green C 21:17, 18 June 2018 (UTC)
Legobot ( talk · contribs) is not on vacation, it is still running (there would be chaos on several fronts if it had stopped completely). It is Legoktm ( talk · contribs) that is on vacation, and if you have been following both User talk:Legobot and User talk:Legoktm, you'll know that Legoktm has not been responding to questions concerning Legobot (other than one or two specifics on this page such as #Take over GAN functions from Legobot above) for well over two years. -- Redrose64 🌹 ( talk) 17:41, 19 June 2018 (UTC)

Automatically add protection templates to protected pages

I tried bringing this up on the noticeboard, but I got no response. I am now convinced that there is no bot (or maybe there used to be one but it no longer works) that automatically adds the appropriate template to a page that has been protected by an administrator. This means that the template has to be added manually, and many admins forget to do this. The bot would put the following things on the template:

  • The level of protection the page has: "semi-protected", "fully protected", "pending changes", ect. "Move protected" would only be shown if that is the only type of protection added to the page.
  • When the protection expires, if it is set to expire.
  • The reason for the protection: "vandalism", "editing disputes", "promoting policy on BLPs", ect., if applicable.

These are all tasks that the protecting admin (or someone else who is able to edit the page) has to put in themselves. I think it would be perfectly possible for a bot to do this. If there is indeed a bot that is supposed to do this it should probably be fixed. funplussmart ( talk) 17:21, 15 August 2018 (UTC)

It was originally Lowercase sigmabot, that stopped and was taken over by TheMagikBOT2 - Wikipedia:Bot_requests/Archive_73#Add_protection_templates_to_recently_protected_articles - That seems to have ceased on the 27July. ping bot owner - @ TheMagikCow:. Ronhjones   (Talk) 22:00, 15 August 2018 (UTC)
The first two bullets are redundant since the protection templates now have this functionality built in. In fact, if you use the |expiry= parameter on any prot template, it will be ignored. It is also not necessary to add a prot template to certain kinds of page, such as templates that have either {{ documentation}} or {{ collapsible option}} since those also autodetect the setting of edit protection, and add the padlock template where appropriate. -- Redrose64 🌹 ( talk) 08:03, 16 August 2018 (UTC)
Redundant See User_talk:TheMagikCow#Stalled_Bot? - Api changes have stalled bot. repair in progress. Ronhjones   (Talk) 18:33, 16 August 2018 (UTC)
@ Funplussmart: Ronhjones   (Talk) 20:42, 16 August 2018 (UTC)

Vandalism from user:194.199.4.202

Hello this anonymous user is changing verified articles left and right. This is a vandalism

User:194.199.4.202 — Preceding unsigned comment added by Heraldique21 ( talkcontribs) 16:28, 22 August 2018 (UTC)

Declined Not a good task for a bot. @ Heraldique21: I think you in the wrong place - try WP:AIV Ronhjones   (Talk) 19:37, 22 August 2018 (UTC)

Removing date headers

Many pages, such as the help desk, and all of the reference desks, have level 1 date headers for each day questions are asked. Scsbot automatically adds these headers at the beginning of each day. However, if no questions are asked a certain day, users have to manually remove the headers, which has to be done quite often for reference desks with less traffic. So I'm wondering, would it be possible to have a bot who removes these headers at the end of a day, if no questions were asked then?-- SkyGazer 512 Oh no, what did I do this time? 01:39, 21 August 2018 (UTC)

Doing... @ SkyGazer 512: It should be possible. Looks like they are all just a Heading 1 tags (single =) with a date, then a newline (from the next date addition). I'll start an in-depth look tonight. Ronhjones   (Talk) 15:25, 21 August 2018 (UTC)
@ SkyGazer 512: I've created Category:Wikpedia Help pages with dated sections - it saves having to hard code page names (makes any later updating of any new help pages a dream), I'll just get the list of pages. Is that all the (current) pages that need fixing? How quick do you actually want the date removed? I see two options, examples...
  1. August 1 (no edits); August 2 (some edits); August 3 (current day) - remove August 1 (don't have to worry about when Scsbot adds the August 3 heading
  2. August 1 (no edits); August 2 (current day) - remove August 1 - need to make sure that Scsbot has been and updated the page (I note sometimes he can be a few hours late).
Obviously I would prefer option 1 - but 2 is possible, if necessary. Option 1 can be set to be not long after 00:00 UTC. I suspect option 2 would have to be 06:00 to ensure the other bot has edited (obviously if it hasn't edited, then it would skip) Ronhjones   (Talk) 18:46, 21 August 2018 (UTC)
If Scsbot ( talk · contribs) adds them, would it not be possible to ask the botop ( Scs) to amend that bot for this new request? What I suggest is that when Scsbot is about to add a date heading, it checks to see if the page presently ends with an unused level 1 date heading and if so, removes that before adding the new heading. -- Redrose64 🌹 ( talk) 18:52, 21 August 2018 (UTC)
That might work, I've not started any coding yet, I'll wait until he comments - I think he's using a shell script to add the date - not sure how well that will work on analysing the page and removing the unwanted dates. Ronhjones   (Talk) 19:28, 21 August 2018 (UTC)
I've pinged him by e-mail as he does not appear to log on often. Ronhjones   (Talk) 19:31, 21 August 2018 (UTC)
Imo, sooner's better. I personally would support option 2 (that is, remove the August 1 header as soon as the August 2 header is added), although if the first is easier, that would certainly be better than nothing.-- SkyGazer 512 Oh no, what did I do this time? 20:43, 21 August 2018 (UTC)
No problem. We'll plan it for option 2, and see how it pans out. We'll wait for ( Scs) to comment first, in case he can kill two birds with one stone. Ronhjones   (Talk) 23:04, 21 August 2018 (UTC)
Sounds great! Thank you.-- SkyGazer 512 Oh no, what did I do this time? 23:11, 21 August 2018 (UTC)
No time for long explanations, but mods to Scsbot for this purpose are unlikely, so do carry on with Plan B. — Steve Summit ( talk) 03:59, 22 August 2018 (UTC)
Coding... Thanks, Steve. Ronhjones   (Talk) 16:28, 22 August 2018 (UTC)
BRFA filed Ronhjones   (Talk) 00:13, 23 August 2018 (UTC)
Y Done Ronhjones   (Talk) 14:39, 23 August 2018 (UTC)

Placement of cursor after a Search

I would like a modification made to the Search facility. Simply, I would like the cursor to be placed after the text of the first instance of the search. The reason for this is that it would make editing much quicker in that you don't have to search for the text (which is highlighted) and then place the cursor after it to make an update. — Preceding unsigned comment added by Ralph23 ( talkcontribs) 02:44, 11 August 2018 (UTC)

Ralph23, this has nothing at all to do with bots or bot editing. Heck, I'm not even sure that it's a Wikipedia thing; I'm pretty sure this is browser-determined. Primefac ( talk) 02:45, 11 August 2018 (UTC)

Videos

Youtube | Vimeo | Bing

Websites

Google | Yahoo | Bing

Encyclopedia

Google | Yahoo | Bing

Facebook