This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 75 | Archive 76 | Archive 77 | Archive 78 | Archive 79 | Archive 80 | → | Archive 85 |
Hello and thanks in advance for your time. Is it possible for a bot to take a list of list-articles, with articles such as List of House episodes, go to the episode section of an article from that list, and then get all values in the "title" column? Basically the name of each episode. If that is a yes, can the bot then check if an article at that name is present or not? Finally creating a redirect based on that article name. So for example:
My goal is to be able to create episode redirects fast and easy, so trying to figure out how best to do it, as manually this is taking me a very long time (there are a few more steps, but would like to know if the general idea is even possible). --
Gonnym ( talk) 08:01, 24 October 2018 (UTC)
Centralize the ~1400+ instances of references ({{cite journal ...}}) to the "Report on Lunar Nomenclature by the Working Group of Commission 17 of the IAU" by replacing them with a single template (named e.g. Template:R:LunarNomenclature). The contents of the latter should be:
{{cite journal |last1=Menzel |first1=Donald H. |authorlink1=Donald Howard Menzel |last2=Minnaert |first2=Marcel |authorlink2=Marcel Minnaert |last3=Levin |first3=Boris J. |last4=Dollfus |first4=Audouin |authorlink4=Audouin Dollfus |last5=Bell |first5=Barbara |title=Report on Lunar Nomenclature by the Working Group of Commission 17 of the IAU |doi=10.1007/BF00171763 |journal=Space Science Reviews |volume=12 |issue=2 |pages=136–186 |date=1971 |bibcode=1971SSRv...12..136M |ref=harv }}
yielding:
Menzel, Donald H.; Minnaert, Marcel; Levin, Boris J.; Dollfus, Audouin; Bell, Barbara (1971). "Report on Lunar Nomenclature by the Working Group of Commission 17 of the IAU". Space Science Reviews. 12 (2): 136–186. Bibcode:1971SSRv...12..136M. doi:10.1007/BF00171763. {{cite journal}}: Invalid |ref=harv (help)
Urhixidur ( talk) 14:43, 27 October 2018 (UTC)
{{cite journal}}). So many things run on this system. Shortcut templates can create more problems than they solve. -- Green C 06:00, 9 December 2018 (UTC)

Recently, consensus was reached to move all election and referendum articles to have the year at the front. A bot, TheSandBot, was created to move the articles (approximately 35,000) to the new titles. However, the bot did not change navboxes to use the new format. Per WP:BRINT, redirects from navigational templates should be bypassed to allow readers to see which page they are on in the template. This is a lot of simple work which would have to be done by humans if a bot were not created. Danski454 ( talk) 15:15, 9 December 2018 (UTC)
Done - If anyone wants User:RF1 Bot to run for you as well, let me know.
I'd like to have a bot that once a month creates a page at this month's subpage for talk archives, like here: User:RhinosF1/Archives_2018/10_(October), and redirects it to my main user page.
How would it be coded?
Happy to run it semi-automatic and monitored. Would not run outside my mainspace.
RhinosF1 ( talk) 14:43, 16 December 2018 (UTC)
It's showing an error: AssertUserFailedError: By default, mwclient protects you from accidentally editing without being logged in. If you actually want to edit without logging in, you can set force_login on the Site object to False. RhinosF1 ( talk) 15:58, 18 December 2018 (UTC)
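A rough Python sketch of the page-title and redirect text this task needs (the title format is taken from the example page above; actually saving the page via mwclient would require a successful site.login() first, which is what the AssertUserFailedError is pointing at):

```python
import datetime

def archive_page_title(user, date):
    """Build the monthly archive subpage title, e.g.
    User:RhinosF1/Archives_2018/10_(October)."""
    return "User:%s/Archives_%d/%02d_(%s)" % (
        user, date.year, date.month, date.strftime("%B"))

def redirect_wikitext(user):
    # The archive page is created as a plain redirect to the main user page.
    return "#REDIRECT [[User:%s]]" % user
```

A cron-style monthly run would compute the title for the current month, then save redirect_wikitext(...) to that page after logging in.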
It's surprising, given how long Wikipedia has been around and how easily automatable the task is, that no bot exists to automatically update US congressional district pages, which are almost uniformly a mess. There exists no template for how to present results: some pages go in reverse chronological order while others don't, there is zero consistency in presentation, and many pages haven't been updated since 2014.
Going through and manually editing all 435 pages would be extremely tedious, so the most logical solution is to create a bot dedicated to the task, which can not only update the pages but fix them.
The quality of the results section of congressional district pages is abysmal and is easy to fix: simply create a standard layout for displaying the results, then create a bot to automatically generate election templates and add them to the page following that standard. I'm a bit of a newb, so I don't know exactly how we would go about agreeing on a standard page, but I'm sure there is a process.
I would be open to coding the bot myself if somebody more experienced with them is willing to offer help/assistance.
Some example of poor quality pages:
-- — Preceding unsigned comment added by Zubin12 ( talk • contribs)
There are about 1000 mainspace links to Spectator; most are broken. The site changed URL schemes without redirects, but the pages still exist at a new URL. Example:
There's no obvious way to program this, but posting if anyone has ideas. -- Green C 06:27, 7 November 2018 (UTC)
1) Identify the link which is flagged as broken.
2) Remove "-.thtml" from the end of the link.
3) Add the month number and year number, separated by commas, before the last section of the URL. These are the month and year in which the article appeared; if the month is a single digit, add a zero before the month number.
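The three steps could be sketched in Python as follows. Note the exact rewritten layout is an assumption based on the description above and would need checking against live spectator.co.uk URLs before any bot run:

```python
import re

def rewrite_spectator_url(url, month, year):
    """Sketch of the three steps described above (hypothetical URL layout).
    Step 1 (identifying the broken link) is the caller's job."""
    url = re.sub(r'-\.thtml$', '', url)            # step 2: strip "-.thtml"
    head, _, slug = url.rpartition('/')
    # step 3: insert zero-padded month and year, comma-separated,
    # before the final path segment (assumed placement)
    return "%s/%02d,%d/%s" % (head, month, year, slug)
```
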
Adithyak1997 ( talk) 10:40, 7 November 2018 (UTC)
Could a bot please identify articles that are not currently tagged as unreferenced but seem not to have references? Thanks for looking at this, Boleyn ( talk) 19:12, 10 November 2018 (UTC)
BRFA filed -- Green C 04:07, 31 December 2018 (UTC)
I'd like to have a bot update the templates used to render college football schedule tables. Three old templates— Template:CFB Schedule Start, Template:CFB Schedule Entry, and Template:CFB Schedule End—which were developed in 2006, are to be replaced with two newer, module-based templates— Template:CFB schedule and Template:CFB schedule entry. The old templates remain on nearly 12,000 articles. The new templates were coded by User:Frietjes, who has also developed a process for converting the old templates to the new:
add {{subst:#invoke:CFB schedule/convert|subst| at the top of the table, before the {{CFB Schedule Start}}, and }} at the bottom, after the {{CFB Schedule End}}.
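The wrapping step of that conversion recipe is a pure text transform, so a bot's core could be as simple as this sketch (the substitution itself happens when the page is saved and MediaWiki expands the subst):

```python
def wrap_cfb_schedule(table_wikitext):
    """Apply the conversion recipe described above: wrap the old-style
    table in {{subst:#invoke:CFB schedule/convert|subst| ... }} so that
    saving the page substitutes the new module-based markup."""
    return ("{{subst:#invoke:CFB schedule/convert|subst|\n"
            + table_wikitext
            + "\n}}")
```

The harder part for a real bot is locating the span from {{CFB Schedule Start}} through {{CFB Schedule End}} in each article before wrapping it.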
The development and use of these new templates has been much discussed in the last year at Wikipedia talk:WikiProject College football and has a consensus of support.
Thanks, Jweiss11 ( talk) 00:32, 8 November 2018 (UTC)
@ BU Rob13: would you be available to take on this bot request? Thanks, Jweiss11 ( talk) 03:16, 4 December 2018 (UTC)
Working. Primefac ( talk) 15:30, 27 January 2019 (UTC)
Check 5.7 million mainspace talk pages for sections that would benefit from a {{reflist-talk}}.
Scope: for each talk page, extract each level-2 section. For each section, check for the existence of reference tags, i.e. <ref></ref>. If they exist, check for the existence of {{reflist-talk}} or <references/>. If neither exists, add {{reflist-talk}} at the end of the section (optionally in a level-3 subsection called "References").
-- Green C 16:27, 1 January 2019 (UTC)
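The wikitext side of that scope can be sketched as below. This is a simplification: it works on raw wikitext only, so GreenC's rendered-HTML check (looking for <ol class="references">, which also handles commented-out refs) would be a more robust pre-filter:

```python
import re

def add_reflist_talk(talk_wikitext):
    """For each level-2 section: if it contains <ref>...</ref> but neither
    {{reflist-talk}} nor <references/>, append {{reflist-talk}} at the end
    of the section, per the scope described above."""
    # Split on level-2 headings, keeping the headings as separate parts.
    parts = re.split(r'(^==[^=].*?==\s*$)', talk_wikitext, flags=re.M)
    out = []
    for part in parts:
        if (re.search(r'<ref[\s>]', part)
                and not re.search(r'\{\{\s*reflist-talk|<references\s*/?>',
                                  part, re.I)):
            part = part.rstrip('\n') + '\n{{reflist-talk}}\n'
        out.append(part)
    return ''.join(out)
```
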
<ol class="references"> - this will always exist if there is <ref></ref> somewhere in the page, regardless of existence of {{reflist-talk}} or <references/>, and it will account for things like <!-- <ref></ref> --> -- Green C 16:36, 1 January 2019 (UTC)
I ran a script. In 2000 Talk pages it found 11 cases:
Extrapolated, about 29,000 pages are like this. -- Green C 19:43, 1 January 2019 (UTC)
BRFA filed -- Green C 20:02, 1 January 2019 (UTC)
Done -- Green C 07:19, 11 February 2019 (UTC)
The request is to have {{ WikiProject Soil}} added to the article talk pages in 39 categories. Project notification posted. Much appreciated:
requested: -- Paleorthid ( talk) 23:10, 6 January 2019 (UTC)
In coming days, I'm going to redirect Star Sports to Fox Sports (Southeast Asian TV network). But some redirects to Star Sports need to be retargeted in advance.
(Correction: STAR Sports HD3, Star Sports HD3, STAR Sports HD4 and Star Sports HD4 did exist. JSH-alive/ talk/ cont/ mail 14:12, 21 January 2019 (UTC))
I don't know what to do with STAR Sports Network and Star Sports Network. Is it the name for Indian channels or Southeast Asian channels? JSH-alive/ talk/ cont/ mail 09:32, 20 January 2019 (UTC)
Done per User_talk:Xqt#Requesting_mass_redirect_fix @ xqt 13:50, 1 February 2019 (UTC)
This is a relatively simple query: https://quarry.wmflabs.org/query/18894
The images listed in that query ideally should be tagged with {{ Shadows Commons}} (unless already tagged as CSD F8)
As this is a repeatable and, it is felt, uncontroversial task, it would be better to let a bot do it, freeing up contributors for more complex tasks that require human skills rather than simple tagging clicks. Thanks
Given the query size, the bot would not need to run continuously; once a week should prove more than adequate.
ShakespeareFan00 ( talk) 10:58, 6 February 2019 (UTC)
BRFA filed -- Green C 15:30, 6 February 2019 (UTC)
Done -- Green C 22:40, 23 February 2019 (UTC)
per Wikipedia:Categories_for_discussion/Speedy#Current_requests (Consistency with main article's name per official renaming)
please rename all subcategories of Category:GTK+, changing "GTK+" to plain "GTK"; their number is high and I can't do it manually, thanks. -- Editor-1 ( talk) 08:19, 10 February 2019 (UTC)
* [[:Category:old name with plus]] to [[:Category:same name without plus]] – per official renaming (request by [[User:Editor-1]])
to include in Wikipedia:Categories_for_discussion/Speedy#Current_requests
thanks. -- Editor-1 ( talk) 08:46, 10 February 2019 (UTC)
Please tag all sub-cats of Category:English-language singers (except Uganda) with
{{subst:cfr-speedy|English-language singers from ...}}
i.e. the nomination is to change the word "of" to "from".
Ideally, each category's country name should replace "...", but the ellipsis would be sufficient.
I will then list them at WP:CFDS myself. – Fayenatic London 23:03, 13 February 2019 (UTC)
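Generating the tag for each subcategory is mechanical; a sketch (the "singers of <country>" naming pattern is assumed from the nomination wording):

```python
def cfr_speedy_tag(category):
    """Build the speedy-rename tag for one subcategory, turning
    '... singers of <country>' into '... singers from <country>'
    as the nomination above describes."""
    new_name = category.replace(' singers of ', ' singers from ', 1)
    if new_name.startswith('Category:'):
        new_name = new_name[len('Category:'):]
    return '{{subst:cfr-speedy|%s}}' % new_name
```

A bot would iterate the subcategories of Category:English-language singers (skipping Uganda, per the request) and prepend this tag to each category page.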
The ARKive project has ended and its website has been replaced with a single page noting that fact. Links to pages on arkive.org need to be replaced with archive.org equivalents, and citations need |archive-url= and |archive-date= attributes. I've already updated {{ARKive}}. Can someone oblige, please?
Links like the one at the foot of Bitis schneideri could usefully be replaced using {{ ARKive}}. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:23, 16 February 2019 (UTC)
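Building the Wayback Machine equivalent of a dead arkive.org link is straightforward; the open question for a bot is choosing the snapshot. A sketch (a real bot such as IABot would query the Wayback availability API for the best timestamp; the one here is a placeholder argument, not a lookup):

```python
def arkive_to_wayback(url, timestamp):
    """Build a web.archive.org URL for a dead arkive.org link.
    timestamp is a YYYYMMDD-style snapshot prefix supplied by the caller."""
    return 'https://web.archive.org/web/%s/%s' % (timestamp, url)
```
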
Hello! I would like to request to operate a bot! My idea is a bot that can revert reference blanking. In my work as a recent changes patroller, I see many people blanking references. I know that ClueBot reverts vandalism, but usually it does not revert reference blanking. Let me know what you think! Shalvey 17:10, 19 February 2019 (UTC)
Hello, I would like to know if it is possible that you guys could create a bot that would check references in articles and see if they are actual websites, not just random URLs that don't even exist. What I mean is that when you type in a website, you get a blue link, which then forwards you to the site. What I'm seeing is URLs that aren't highlighted in blue, but are just plain URLs. The bot could be run by me, but I don't know how to code a bot. Thanks! Shalvey 18:49, 19 February 2019 (UTC)
Let me know what you think! — Preceding unsigned comment added by Shalvey ( talk • contribs) 19:13, 19 February 2019 (UTC)
A bot that will, in certain situations, switch links to web.archive.org. Sun Sunris ( talk) 01:31, 23 February 2019 (UTC)
Please can someone add {{ Section sizes}} to the talk pages of ~6300 articles that are longer than 150,000 bytes (per Special:LongPages), like in this edit?
The location is not critical, but I would suggest giving preference to putting it immediately after the last WikiProject template, where possible. Omit pages that already have the template. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:59, 29 December 2018 (UTC)
This is a request for a one-shot bot run to remove
{{Copy to Commons|human=ShakespeareFan00}}
from the 8,000 or so images currently tagged with it. This is requested because both Commons and Wikipedia policy have changed in significant ways since the vast bulk of the images were tagged, and it would be tedious to remove the tag individually. ShakespeareFan00 ( talk) 17:53, 28 February 2019 (UTC)
I'd like to request moving of pages from Category:Missing U-boats to Category:Missing U-boats of World War II - it is grouping only WWII boats at the moment and is in such parent category (WWI boats were excluded to Category:Missing U-boats of World War I). Pibwl ←« 14:39, 6 March 2019 (UTC)
Hi all, Kadane very helpfully created KadaneBot for us over at Wikipedia Peer Review - it sends out automated reminders based on topic areas of interest for unanswered peer reviews. Unfortunately, Kadane's been inactive almost since creation (September 2018), and hasn't responded to my request [1]. Would anyone be so kind as to usurp this bot so we can continue to use it? -- Tom (LT) ( talk) 07:32, 22 February 2019 (UTC)
Am seeking a bot that can periodically notify volunteers at WP:PRV about new or unanswered reviews, would be very useful and (I hope) increase peer review activity levels. -- Tom (LT) ( talk) 03:55, 3 March 2019 (UTC)
{{PRV|Kadane|Computer Science|contact=monthly}}
how would a bot know which reviews are for "computer science"? Thanks for the clarification. -- Green C 22:58, 5 March 2019 (UTC)
Done, Kadane migrated to Toolforge and running via cron (ie. run automatically at set times). -- Green C 16:41, 10 March 2019 (UTC)
The Medical Translation Task Force faces an issue with respect to Content Translation. Basically the tool loses references when the metadata exists within template:infobox medical condition (new) and template:drugbox. The issue is described here and the task is supposedly not easily fixable and thus will not be fixed anytime soon. [3]
As a workaround I am proposing a bot that moves the reference metadata from these two infoboxes to the lead or body of the article in question. This will be done for these ~1200 articles: Category:RTT
An example of what such an edit will look like is this. [4]
Doc James ( talk · contribs · email) 19:14, 30 January 2019 (UTC)
Done ( Wikipedia:Bots/Requests for approval/Fz29bot) -- Green C 16:40, 10 March 2019 (UTC)
The task of the bot would be to identify all articles in Category:Articles needing translation from French Wikipedia that have a French commune infobox, then add the |topic=geo parameter to their Expand French template, to categorize them in Category:Geography articles needing translation from French Wikipedia. Its second task would be more complex, as it would need to identify each Category:Communes of departement name category and add the fitting expansion category Category:departement name communes articles needing translation from French Wikipedia
These tasks should apply to anything between four to seven thousand articles.
Knowing nothing about how bots function, there may be an easier way to do such a large scale category move that I am not seeing. Sadenar40000 ( talk) 18:47, 28 February 2019 (UTC)
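The first task is a simple conditional wikitext edit; a sketch (template and parameter names are taken from the request; real transclusion variants, e.g. redirects of {{Expand French}}, would need handling):

```python
import re

def add_geo_topic(wikitext):
    """If the article transcludes a French commune infobox and its
    {{Expand French}} tag lacks |topic=, add |topic=geo, per the
    first task described above."""
    if not re.search(r'\{\{\s*Infobox French commune', wikitext, re.I):
        return wikitext

    def fix(match):
        inner = match.group(0)
        if '|topic=' in inner:
            return inner
        return inner[:-2] + '|topic=geo}}'   # insert before closing braces

    return re.sub(r'\{\{\s*Expand French\b[^{}]*\}\}', fix, wikitext,
                  flags=re.I)
```
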
Hello,
I would like to suggest a bot that fixes dates in Category:Use mdy dates and Category:Use dmy dates.
RhinosF1 (chat) (status) (contribs) 18:09, 15 February 2019 (UTC)
access-date dates from BIGENDIAN to either 'mdy' or 'dmy' when the article has established use of BIGENDIAN for the access-date dates would be in violation of WP:DATERET and WP:CITESTYLE/ WP:CITEVAR – but how is a bot supposed to figure this out? -- IJBall ( contribs • talk) 21:45, 15 February 2019 (UTC)
There is a huge backlog within most Wikipedia Projects of unclassified articles. I've been recently assessing a number of these for the Politics Project, and have noticed a few patterns that I believe could be automated to heavily reduce this backlog.
And of course, if we can heavily reduce the backlog like this, we will make attempting the remaining tasks that must be classified by hand less daunting, and thus more likely to be done. It is true that the second part of this proposal will sometimes result in incorrect classification, but the criteria will be up for each taskforce to determine and so I don't believe that risk should prevent this bot being created - and even if they are incorrectly assessed, a few incorrect assessments are better than numerous unassessed articles.
If no one is interested in taking this up then I do intend to get around to it at some point - unless someone is able to explain why it is stupid/unnecessary, though I think the first part of this proposal would be better as a modification to the existing tag-update bot.
-- NoCOBOL (talk) 07:50, 25 January 2019 (UTC)
Hello there! The Church of Jesus Christ of Latter-day Saints has formally requested that all references to the 'Mormon Church', 'LDS Church', and 'Mormonism' be discontinued by all users in all media outlets including Wikipedia. It would be astronomically easier for someone to make this change via AWB rather than manually search out and make every single change.
This request is complicated by two items: (1) no formal replacement for these 3 informal references has been provided by the Church of Jesus Christ of Latter-day Saints, which will affect how the AWB algorithm needs to work; (2) the name of a volume of this church's scripture is 'The Book of Mormon', and eliminating all references to the 'Mormon Church' or 'Mormonism' without editing all references to 'The Book of Mormon' may be difficult for AWB. — Preceding unsigned comment added by Dpammm ( talk • contribs) 07:26, 10 March 2019 (UTC)
Whether the reliable sources written after the change is announced routinely use the new name is the criterion for changing how we refer to someone/something. There are quite a few examples of our continuing to use a name over the subject's objections because it's continued to be the name commonly used in the sources; North Korea is an obvious example that springs to mind. If you want Wikipedia to deprecate the use of these terms, you need to provide evidence that reliable sources are no longer using the terms "Mormonism", "LDS Church" etc, and then start a WP:RFC to deprecate the terms; only then is it time to start making bot requests, or even to start manually editing the articles. ‑ Iridescent 07:40, 10 March 2019 (UTC)
The Mormons have had this idea before but then they continued using Mormon themselves. I seriously doubt RS are going to follow this requested change. Legacypac ( talk) 07:57, 10 March 2019 (UTC)
Isn't that pretty obnoxious though? The Church of Jesus Christ of Latter-day Saints has made a 100% formal shift in use of its name. It has openly and formally disassociated itself from the terms "Mormon Church", "LDS Church", and "Mormonism". Of course it could take time for common usage to change, but how is that going to happen if media outlets requested to make the change refuse to do so? I'm not an expert at this, but it's fairly common sense to go along with the intended request, even though it is occurring in phases, as the 5 March 2019 re-affirmation news release and First Presidency letter state; changes have already been made (www.lds.org to www.churchofjesuschrist.org, etc., as this article describes, including "mormonnewsroom" to follow suit shortly): https://www.mormonnewsroom.org/article/church-name-alignment -- Dpammma ( talk) 08:41, 10 March 2019 (UTC)
This was also just posted at the other discussion, although I think that discussion page is dead, as it's the only comment since January: https://twitter.com/APStylebook/status/1104071713476755457. How does anything get progressed to a decision one way or the other, especially on issues where some editors are obviously bound and determined not to respect this church's request despite adoption of these changes by the largest mainstream media outlet? -- Dpammma ( talk) 08:49, 10 March 2019 (UTC)
Hi guys, I'd like to create lists of NZ heritage sites. Lists would be very similar to those at German Wikipedia, see List of monuments in New Zealand. The database with the heritage sites is available here: http://www.heritage.org.nz/the-list You can search all sites in a specific region and export CSV.
I'm not technically skilled enough to program a bot that'd help me to do that. Is there anyone keen to help out? Lists of heritage sites are quite common practice here, see eg. Listed buildings in Windermere, Cumbria (town). Regards, Podzemnik ( talk) 11:34, 22 January 2019 (UTC)
Well, the CSV contains this header and first record:
RegisterNumber,Name,RegistrationType,RegistrationStatus,DateRegistered,Address,RegisteredLegalDescription,ExtentOfRegistration,LocalAuthorityName,NZAANumbers 660,1YA Radio Station Building (Former),Historic Place Category 1,Listed,1990-02-15,"74 Shortland Street, AUCKLAND","Pt Allots 10‐11 Sec 3 City of Auckland (CT NA67C/507), Pt Allot 12 Sec 3 City of Auckland (CT NA152/135), North Auckland Land District","Extent includes the land described as Pt Allots 10‐11 Sec 3 City of Auckland defined on DP 874 (CT NA67C/507), Pt Allot 12 Sec 3 City of Auckland (CT NA152/135), North Auckland Land District, and the building known as 1YA Radio Station Building (Former) thereon.",Auckland Council (Auckland City Council),[]
The problem will be mapping the "Name" field (eg. "1YA Radio Station Building (Former)") with the Wikipedia article name ( Kenneth Myers Centre). There's no bot magic for that. -- Green C 16:34, 22 January 2019 (UTC)
so text 1,text 2,text 3,text 4 becomes | text 1 || text 2 || text 3 || text 4 followed by a |- row separator
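That CSV-to-wikitable conversion can be sketched as follows (using the csv module so that quoted fields with embedded commas, as in the sample record above, parse correctly):

```python
import csv
import io

def csv_to_wikitable(csv_text):
    """Turn CSV text into a basic wikitable: header row, then one
    '|-'-separated row per record, as described above."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    lines = ['{| class="wikitable"']
    lines.append('! ' + ' !! '.join(rows[0]))   # header row
    for row in rows[1:]:
        lines.append('|-')
        lines.append('| ' + ' || '.join(row))
    lines.append('|}')
    return '\n'.join(lines)
```
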
Alright, thanks for the inputs guys, I'll try to do it myself! Podzemnik ( talk) 07:51, 25 January 2019 (UTC)
I just found and fixed an article with two Wikipedia links to two articles that were nothing but redirects back to it, and apparently that's all they had ever been. [5] Can you make a bot to check all Wikipedia links that point to pages that are redirects, then check to see if each redirect points back to the page it's coming from, and then remove the brackets around it so it doesn't link there anymore? If the link has a | in it, then keep what's after that and ditch the rest. Dream Focus 16:29, 26 January 2019 (UTC)
(See [[Promotion (chess)#Promotion to rook or bishop|Underpromotion: Promotion to rook or bishop]] for examples ...
Redrose64 I meant a link to another article that then redirects back to the first article again. Dream Focus 18:18, 26 January 2019 (UTC)
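The unlinking step described above can be sketched like this. The redirect_target dict stands in for API lookups (a real bot would query each link's target); links whose target redirects back to the current page are unlinked, keeping the piped display text per the request:

```python
import re

def unlink_self_redirects(title, wikitext, redirect_target):
    """Unlink [[...]] links whose target is a redirect back to `title`.
    redirect_target maps page titles to their redirect target, or None
    for non-redirect pages (hypothetical stand-in for an API lookup)."""
    def repl(match):
        target, _, label = match.group(1).partition('|')
        if redirect_target.get(target.strip()) == title:
            return label or target   # keep text after the pipe, if any
        return match.group(0)
    return re.sub(r'\[\[([^\[\]]+)\]\]', repl, wikitext)
```
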
Adverts pretending to be peer-reviewed papers are cited in thousands, possibly tens of thousands, of Wikipedia articles. Articles in paid supplements to journals are generally not independent sources. See this discussion for details.
Sometimes, the citation contains the abbreviation "Suppl.". In this case, the citation could be bot-tagged with {{Unreliable medical source|sure=no|reason=sponsored supplements generally unreliable per WP:SPONSORED and WP:MEDINDY|date=30 April 2024}}
The "sure=no" parameter will add a question mark to the tag, as, rarely, the supplement might actually be a valid source. I think these exceptions would probably be rare enough to manually mark for exclusion by the bot.
This would increase awareness of this problem among editors as well as encouraging editors to scrutinize the tagged sources. HLHJ ( talk) 04:42, 27 January 2019 (UTC)
Marking as Not done per WP:CONTEXTBOT Kadane ( talk) 17:23, 15 March 2019 (UTC)
I'm thinking of making a bot that creates and/or updates the {{ Weather box}} template based on climate data from BOM for Australian articles - with the possibility on expanding to other countries.
High level pseudocode
This seems well suited to automation, especially since with climate change many of these weather boxes should change over time (assuming the longevity of Wikipedia).
This would be my first bot - and I'm not familiar with the specifics of WikiBots (yet). Do you think this task is best suited to a bot or a tool? I assume most bots start off as tools? Can you recommend a next step here?
"well sized" - measured by either quality grading (C class or above?) or number of bytes (>4000?). — Preceding unsigned comment added by Spacepine ( talk • contribs) 01:45, 11 March 2019 (UTC)
Not done - OTRS ticket needs to be filed by BOM granting permission to use copyrighted materials before bot can scrape the website. Once this is complete please open a new request if you aren't scripting the bot yourself. Kadane ( talk) 17:35, 16 March 2019 (UTC)
This will affect very few pages in WP; mostly those in any articles about the company or the games it makes.
As follows: links that begin http://www.mortalonline.com - if still live - are to be found under https://www.starvault.se; furthermore, regarding Mortal Online's official forums and its post URLs: http://www.mortalonline.com/forums/threads/<words-in-title>.<specificnumber> is no longer the current URL pattern; it is (at this time)
https://www.starvault.se/mortalforums/threads/<words-in-title>.<specificnumber>
I think a bot could fix any of the old-form URLs into ones that could work. Nlaylah ( talk) 22:58, 12 March 2019 (UTC)
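The rewrite described above is mechanical; a sketch (the forum pattern follows the request, and the plain-site case is assumed to be a straight host swap per "are to be found under https://www.starvault.se" - each rewritten URL would still want a liveness check before saving):

```python
import re

def fix_mortalonline_url(url):
    """Rewrite old mortalonline.com URLs to their starvault.se
    equivalents, forum threads first so the more specific pattern wins."""
    url = re.sub(r'^https?://www\.mortalonline\.com/forums/',
                 'https://www.starvault.se/mortalforums/', url)
    return re.sub(r'^https?://www\.mortalonline\.com',
                  'https://www.starvault.se', url)
```
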
Per this tfd, {{rnd}}, {{round}} and {{decimals}} are all to be merged. Would be great to get a bot to convert the necessary transclusions (more than 10,000). Please {{ping|zackmann08}} if you have any questions! -- Zackmann ( Talk to me/ What I been doing) 16:51, 11 March 2019 (UTC)
Convert {{round}} and {{decimals}} to {{rnd}} in the wiki text - about 22,000 cases. Then redirect {{round}} and {{decimals}} to {{rnd}}. Examine all the arguments available for the present {{round}} template and determine how to translate those to the arguments available in the {{rnd}} template (same with {{decimals}}), then make the conversion in wikitext.

Alternatively, modify {{rnd}} so that it seamlessly supports the current arguments available for {{round}} and {{decimals}} - whether this is even possible requires some investigation of the {{rnd}} source, as there might be argument name conflicts. If it is possible, it is a simple matter of redirecting {{round}} and/or {{decimals}} to {{rnd}}.

{{rnd}} is the target merge template because it has 261,105 use cases (compared to 15k and 8k for the others) and it has the most options available. Once everything is merged into {{rnd}}, {{round}} can redirect to it and the template docs can be changed to reflect the new name {{round}} going forward. It wouldn't be required to rename the 261,105 legacy instances of {{rnd}} to {{round}}. -- Green C 17:07, 12 March 2019 (UTC)
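The rename half of option 1 (leaving aside GreenC's caveat about translating conflicting arguments, which this sketch does not attempt) could look like:

```python
import re

def rename_to_rnd(wikitext):
    """Rename {{round|...}} and {{decimals|...}} transclusions to
    {{rnd|...}}. Only the template name is changed; argument
    translation would need separate handling."""
    return re.sub(r'\{\{\s*(?:[Rr]ound|[Dd]ecimals)\s*([|}])',
                  r'{{rnd\1', wikitext)
```
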
Okay. Coding... Kadane ( talk) 20:42, 12 March 2019 (UTC)
@ DannyS712: Oops. I have already programmed it and am ready to file the BRFA. Here is the source. Didn't mean to step on toes. Let me know what you want to do. Kadane ( talk) 22:16, 12 March 2019 (UTC)
BRFA filed Kadane ( talk) 23:20, 12 March 2019 (UTC)
I've withdrawn my BRFA. All of the edits were completed within the trial. The template is ready to be merged. @ Zackmann08: Task is Done Kadane ( talk) 08:05, 17 March 2019 (UTC)
Hi, it would be wonderful if we had a bot that looked for uses of a template called {{ remindme}} or something similar (with a time parameter, such as 12 hours, 1 year, etc. etc.) and duly dropped a message on your own talk page at the designated time with a link to the page on which you put the remindme tag. It would only send such reminders to the person who posted the edit containing the template in the first place. Kind of like the functionality of such bots on reddit, I guess. Fish+ Karate 13:11, 20 November 2018 (UTC)
OK, so the basic use is as follows:
User:Alice places a template (to be created, let's call it {{remind me}}) inside a thread of which they wish to be reminded. The user specifies the date/time at which the reminder should be given as an argument of the template (either as "on Monday 7th" or "in three days" - syntax to be discussed later). At the given date, a bot "notifies" Alice.
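Since the argument syntax is explicitly still to be discussed, here is only a toy parser for one candidate form, "in N days" (everything else, including "on Monday 7th", is left unhandled on purpose):

```python
import datetime
import re

def parse_reminder(arg, now):
    """Resolve a {{remind me}} argument of the form 'in N days' to an
    absolute date, relative to `now`. Other syntaxes raise ValueError."""
    match = re.match(r'in (\d+) days?$', arg.strip())
    if match:
        return now + datetime.timedelta(days=int(match.group(1)))
    raise ValueError("unsupported reminder syntax: %r" % arg)
```
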
On a policy level, I see a few questions:
Depending on the choice for each of those, this will change the amount of technical work needed, but as far as I can tell, those questions entirely define the next steps (coding/testing/approval request etc.). Please discuss here if I missed something, but below to answer the questions. Tigraan Click here to contact me 13:36, 25 November 2018 (UTC)
I made a separate section for this because I am almost sure of the questions that need asking but less sure of the answers they should get. What follows is my $0.02 for each:
(Ping: Fish and karate) Tigraan Click here to contact me 13:36, 25 November 2018 (UTC)
If your task could be controversial (e.g. (...) most bots posting messages on user talk pages), seek consensus for the task in the appropriate forums. (...) Link to this discussion from your request for approval. Again, VPP is the catch-all, but that's because I have no other idea. Maybe a link from the talk page of WP:PING as well, since the functionality is closely related.
Please, we need a bot to remove (and NOT subst; the source is non-RS) all entries of {{ TheFinalBall}} following the consensus at Wikipedia:Templates for discussion/Log/2019 February 13#Template:TheFinalBall, and save @ Zackmann08: from removing it manually (as they have been doing). Giant Snowman 20:54, 13 March 2019 (UTC)
Hey, based on this TfD the template {{ PBB Controls}} should be removed. It currently has 4395 transclusions, so a bit much for AWB. Could anyone help with a bot? Thanks. -- Gonnym ( talk) 08:24, 20 March 2019 (UTC)
This is Done. BRFA has been approved Kadane ( talk) 15:49, 20 March 2019 (UTC)
Sorry ~Jer ( Talk • Contributions) 12:17, 22 March 2019 (UTC)
This long-standing project Wikipedia:WikiProject_Abandoned_Drafts/Stale_drafts would benefit from a bot to do three things we now do manually on the 40 or so remaining numbered subpages: 1. Remove red links to deleted articles. 2. Remove links to pages that are redirects (page has been moved to mainspace, draftspace, or redirected to an article). 3. Remove links to pages that are now completely blank or have a userspace page blanked template. Once one of these three cases occurs, the project no longer cares about the name of the page or the link to it. If a bot could sweep through the pages daily or even weekly this would save a ton of time manually removing red links and checking and removing links that are redirects. Even unlinking the pages would make manually removing them from the lists much faster. If this is not clear, look at the history of any of the numbered list pages to see the process. Legacypac ( talk) 01:35, 24 February 2019 (UTC)
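The three removal rules can be sketched as a line filter over each numbered list page. The status dict is a stand-in for API lookups (deleted / redirect / blank would be determined by querying each linked page):

```python
import re

def prune_listing(list_wikitext, status):
    """Drop list lines whose linked page is deleted, a redirect, or
    blanked, per the three cases above. status maps page titles to
    'deleted', 'redirect', 'blank', or 'draft' (hypothetical lookup)."""
    kept = []
    for line in list_wikitext.splitlines():
        match = re.search(r'\[\[([^\[\]|]+)', line)
        if match and status.get(match.group(1).strip()) in (
                'deleted', 'redirect', 'blank'):
            continue
        kept.append(line)
    return '\n'.join(kept)
```
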
Wikipedia:Categories for discussion is looking for a new bot to process category deletions, mergers, and moves. User:Cydebot currently processes the main /Working page, but there is a growing list of issues that call out for a replacement bot:
At a minimum, the new bot should process the main /Working page:
Ideally, it would also do some or all of the following:
* REDIRECT [[:Category:Foo]] to [[:Category:Bar]]
Your assistance would earn the gratitude of some very tired and increasingly frustrated CfD'ers.
Thank you, -- Black Falcon ( talk) 20:48, 18 February 2019 (UTC)
The task is rather simple. Find all pages with Foobar (barfoo). If they redirect to Foobar, tag those with {{ R from unnecessary disambiguation}}. This should be case-sensitive (e.g. Foobar (barfoo) → FOOBAR should be left alone).
Could probably be done with AWB to add/streamline other redirect tags if they exist. Headbomb { t · c · p · b} 13:15, 14 November 2018 (UTC)
@ TheSandDoctor: any updates on this? Headbomb { t · c · p · b} 08:24, 13 January 2019 (UTC)
@ Headbomb: what exactly are you looking for? Do you just want someone to do a database scan, or do you want a bot to fix this? It sounds like there might be context issues with a task like this, per Adam Cuerden. Kadane ( talk) 00:53, 18 March 2019 (UTC)
\(.*(album|song|journal|magazine|publisher)\)
and the like. Headbomb { t · c · p · b} 01:02, 18 March 2019 (UTC)
@ Headbomb: I couldn't sleep tonight and ran a database query to find all redirects that end with parenthesis. I have a question about how the bot would handle a few cases.
I am assuming the bot would skip the page if any {{ R from ...}} templates were present? Should the bot ignore any characters such as ", ★, or *? If a page ends in (disambiguation) should it be tagged with {{ R from disambiguation}} or {{ R from unnecessary disambiguation}}? Once I have a better idea of the criteria I will put together a script to estimate the number of articles that will be edited. Kadane ( talk) 08:40, 18 March 2019 (UTC)
@ Headbomb: - Okay, from my understanding there are two cases. Foobar (^disambiguation) -> Foobar and Foobar (disambiguation) -> Foobar. {{ R from unnecessary disambiguation}} should be added to the ^disambiguation cases, and {{ R to disambiguation page}} to the disambiguation cases (if the template is missing). In that case there are 207 pages that need the template {{ R to disambiguation page}} and there are 55,824 pages that need {{ R from unnecessary disambiguation}}. If my understanding sounds correct I am ready to go to BRFA. Looking forward to your reply. Kadane ( talk) 05:34, 19 March 2019 (UTC)
BRFA filed @ Headbomb: Kadane ( talk) 16:13, 19 March 2019 (UTC)
{{ infobox cricketer}} used to have a |deliveries= parameter which was removed in 2009. There are 7000ish pages using the parameter, which makes up the overwhelming majority of the unknown parameter tracking category. I made a start on clearing them off with AWB but figured it may be quicker to get a bot to do them. There is a list of possibly affected pages if that helps, and I did a regexp replace of \|\s*deliveries\s*=.*\n (with nothing) as part of some more targeted clean-ups. Could a bot please remove the rest? Spike 'em ( talk) 14:42, 13 March 2019 (UTC)
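The regexp replace described above could be scripted like this (a sketch; the sample wikitext is invented). Because the pattern requires the = directly after the parameter name, the numbered |deliveries1= through |deliveries4= variants are left untouched:

```python
import re

# same pattern as the AWB clean-up: strip a |deliveries= line entirely
DELIVERIES_RE = re.compile(r"\|\s*deliveries\s*=.*\n")

def strip_deliveries(wikitext):
    """Remove the removed-in-2009 |deliveries= parameter from an infobox."""
    return DELIVERIES_RE.sub("", wikitext)

before = "{{infobox cricketer\n| deliveries = balls\n| runs = 100\n}}"
print(strip_deliveries(before))
```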
|deliveries1= ... |deliveries4= in them. This may not be a straightforward removal (which could very well be against WP:COSMETICBOT on its own). Headbomb { t · c · p · b} 14:56, 13 March 2019 (UTC)
the "administration of the encyclopedia" are substantive, which may apply here. Spike 'em ( talk) 15:41, 13 March 2019 (UTC)
deliveries parameter with a blank string? I can do it, and will file a BRFA in the next few days as long as Headbomb has no objection to the "cosmetic"-ness of this task (they haven't replied yet) -- DannyS712 ( talk) 16:03, 13 March 2019 (UTC)
|deliveries=balls so replacing those with blank string would leave few enough for me to do by hand. Spike 'em ( talk) 16:13, 13 March 2019 (UTC)
Hi bot people. I was wondering whether it might be appropriate/worthwhile/a good idea to get a bot to remove "living=yes", "living=y", "blp=yes", "blp=y", etc from the talkpages of the articles listed at Wikipedia:Database reports/Potential biographies of dead people (3). I recognize that automating such a process might result in a few errors, but I think that would be a reasonable tradeoff compared to how tedious it would be for humans to check and update all 968 articles in the list one by one. (And hopefully, for those few(?) articles where an error does occur, someone watching the article will fix it). I spot-checked a random sample of articles in the list, and for every one I checked, it would have been appropriate to remove the "living=yes", etc from the talkpage, i.e. the article had a sourced date of death. To minimize potential errors, I would suggest the bot skips any articles which cover multiple people, e.g. ones with "and" or "&" in the title and Dionne quintuplets, Clarke brothers, etc. Thoughts? DH85868993 ( talk) 12:53, 15 January 2019 (UTC)
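A minimal sketch of the removal, assuming the flags sit inside {{WikiProject ...}} banners as ordinary parameters. The regex and sample banner are illustrative; a real bot would also implement the skip rules above (multiple-person articles etc.):

```python
import re

# match |living=yes, |living=y, |blp=yes, |blp=y (any case), stopping at the
# next parameter pipe or the closing braces of the template
LIVING_RE = re.compile(r"\|\s*(?:living|blp)\s*=\s*(?:yes|y)\s*(?=\||}})", re.IGNORECASE)

def remove_living_flags(talk_wikitext):
    """Drop living=yes / blp=y style parameters from talk-page banners."""
    return LIVING_RE.sub("", talk_wikitext)

print(remove_living_flags("{{WikiProject Biography|living=yes|class=B}}"))
```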
The Austrian metadata templates storing population figures were deleted with a consensus that they should be replaced by WikiData figures. I set up a new template for that, Template:Austria population Wikidata, and now it should be implemented (as in this diff) so that the updated figures can be displayed. Hopefully a bot can help with that.-- eh bien mon prince ( talk) 19:35, 9 March 2019 (UTC)
BRFA filed -- Green C 14:42, 12 March 2019 (UTC)
A zoomable, labeled location map can be included in the articles about German districts by adding {{Germany district OSM map|parent_subdivision=QXXXX}} to the 'map' parameter of {{ Infobox District DE}}, where QXXXX is the Wikidata ID of the German state the district belongs to. A live example of the template can be see in the Nordfriesland (district) article.-- eh bien mon prince ( talk) 08:24, 12 March 2019 (UTC)
wikibase_item), and if it has no parameter, add both the parameter and the template, which is the tricky case. I'll let you know once I've filed a BRFA -- DannyS712 ( talk) 23:30, 15 March 2019 (UTC)

Hey, based on this TfD the template {{ PBB Summary}} should be removed, and based on this extended discussion the removal should keep the |summary_text= text in the article. I'm not sure if |section_title= is used. Another editor is currently manually doing {{ PBB Further reading}}, though a bot operation would be much faster. If that is included also, then just remove the outer template code, leaving the actual citation templates in the article. Could anyone help with a bot? Thanks. — Preceding unsigned comment added by Gonnym ( talk • contribs) 20:08, 20 March 2019 (UTC)
Hi, I might be able to have a bash myself, but could someone help create code for a Python bot that would get a list of users not on Wikipedia: WikiProject Apple Inc./Subscribe who have edited the WP's articles at least 10 times in the last 90 days or have added our userbox to their userpage, and add them to a mass message list at Wikipedia: WikiProject Apple Inc./To Welcome (this should be cleared at each run). It should also remove users who haven't edited in the last 5 years from the first mailing list. The bot should then create the mass message request code to send out the newsletter and a welcome message to the lists for me to submit. I'd like to be able to run the bot myself. (Pinging User: Smuckola) Thanks, RhinosF1 (chat) (status) (contribs) 20:32, 20 March 2019 (UTC)
RhinosF1, WP:VP is a good place to start. I am not the one that will judge consensus; that will be up to someone in the bot approvals group, but that is where most go to gain consensus for a bot task. Kadane ( talk) 21:33, 22 March 2019 (UTC)
I want a bot that can belong to a user, specifically me because I am making this request. Since I’m making this request, the name of my bot would be “MetricSupporter89Bot”. — Preceding unsigned comment added by MetricSupporter89 ( talk • contribs) 23:17, 21 March 2019 (UTC)
Some more things I wanted to add were that it would contribute stub articles I made that would take too long for me to contribute, edit my user page to be like other users' pages, etc. Metric Supporter 89 ( talk) 23:26, 21 March 2019 (UTC)
This bot would edit articles that need citations and that need checking over for mistakes in the information, such as "Earth is the only planet that has life" where it should be "Earth is the only known planet to have life". -- Jeriqui123 ~~ Talk 12:04, 25 March 2019 (UTC)
I would like to see a neural network capable of marking pages for speedy deletion.
Here is a list of criteria I believe the bot could handle:
Thanks InvalidOS talk 18:09, 27 March 2019 (UTC)
P1 and P2 are rare. While trying to upgrade P2 some people are saying Admins can't judge P2 as it is, so how could a bot? Legacypac ( talk) 21:50, 27 March 2019 (UTC)
There are several links to World Matchplay (darts)#Previous_incarnation and pipe links. But this section no longer exists since it has been expanded into its own page MFI World Matchplay. Can we get a bot to update the links to the new page? It exists on (probably) hundreds of dart player pages and more. DLManiac ( talk) 16:32, 29 March 2019 (UTC)
World( |_)Matchplay( |_)(darts)#Previous( |_)incarnation and didn't find any more. -- DannyS712 ( talk) 16:57, 29 March 2019 (UTC)
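One way the retargeting itself could be scripted (a sketch; it assumes every old section link should point at MFI World Matchplay, with any existing pipe text kept as the label):

```python
import re

# old section link, with an optional |label pipe
OLD = r"\[\[World Matchplay \(darts\)#Previous[ _]incarnation(\|([^\]]*))?\]\]"

def retarget(wikitext):
    """Point old 'Previous incarnation' section links at MFI World Matchplay."""
    def repl(m):
        label = m.group(2) if m.group(2) else "World Matchplay"
        return "[[MFI World Matchplay|%s]]" % label
    return re.sub(OLD, repl, wikitext)

print(retarget("won the [[World Matchplay (darts)#Previous incarnation|World Matchplay]]"))
```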
In order to reduce the load on the Signpost staff, it would really be nice if we could have a bot that would synchronize drafts with the newsroom.
If something exists at Wikipedia:Wikipedia Signpost/Next issue/Foobar, make a correspondence between the parameters of {{ Signpost draft}} present on the draft page and those present at Wikipedia:Wikipedia Signpost/Newsroom#Article status.
Specifically
Draft parameter → Newsroom parameter
|title=foobar → |Has-title=yes
|blurb=foobar → |Has-blurb=yes
|Ready-for-Copyedit=foobar → |Ready-for-Copyedit=foobar
|Copyedit-done=foobar → |Copyedit-done=foobar
|Final-approval=foobar → |Final-approval=foobar
The second thing the bot should do is, if an irregular column is found to exist at Wikipedia:Wikipedia Signpost/Next issue/Foobar, then copy the corresponding item from ...Newsroom#Irregular columns and paste it at the bottom of ...Newsroom#Article status. And then keep it synchronized like the other things in ...Newsroom#Article status.
The bot could review the relevant pages every 15 minutes or so (or whatever time interval people think is best). Headbomb { t · c · p · b} 18:09, 2 April 2019 (UTC)
lua magic is to have the page update automatically without the need for a bot. {{3x|p}}ery ( talk) 21:56, 2 April 2019 (UTC)
Lua magic implemented, this is no longer needed. I'll be archiving this. Headbomb { t · c · p · b} 23:56, 2 April 2019 (UTC)
To revert vandalism and disruptive behaviour and help users. — Preceding unsigned comment added by Hurricane Bunter ( talk • contribs) 12:24, 5 April 2019 (UTC)
/info/en/?search=Wikipedia:Database_reports/Unused_file_redirects
Contains a small number of images that were renamed, but where article links were not updated.
Although not essential, updating image links helps avoid conflicts with Commons, and of the 'wrong image' being displayed in articles.
Would it be possible for a bot to do this kind of repetitive check, update, refresh cycle, until there are no image links to redirects in the File: namespace from articles or other important pages? ShakespeareFan00 ( talk) 11:49, 16 March 2019 (UTC)
A bot that reports broken ref tags to a user, so he/she can fix it. — Preceding unsigned comment added by Darkwolfz ( talk • contribs) 04:48, 6 February 2019 (UTC)
Sure
DannyS712: For example, if an article has a <ref> and the editors used the source editor, a stray backspace or enter inside the ref tag can break it, or editors give wrong parameters. For example, I found an article today where the url was entered correctly, but instead of giving the website name, they added the url. So if there's a bot which can detect broken ref tags or hyperlinks and report them to me, I can fix them. Darkwolfz ( talk) 05:02, 6 February 2019 (UTC)
DannyS712 /info/en/?search=Formby_Hall In its recent history, I fixed an error like that, and maybe we should scan for sources that show in red between <ref>...</ref>, or for a missing opening <ref> or closing </ref>, and also for references with missing titles.
<ref> or </ref>: Yes, it helps a bit, but is it possible to find articles which don't belong to the category, as in a new error made by someone accidentally? And filter missing <ref> tags?
{{ sfn}} template isn't Harvard-style references, it's Shortened footnotes. Harvard-style references are parenthetical, as used on pages like Actuary. However, the two methods have a number of common features, primarily the separation of page number information from the long-form citation, with the association between the two being by means of a link formed from up to four surnames and a year. From my reading of the above, it is these links that need to be tested; and we have a script to do that, see User:Ucucha/HarvErrors. -- Redrose64 🌹 ( talk) 20:38, 11 February 2019 (UTC)

Could someone generate a list of values used for Template:Tooltip (the redirect, not Template:Abbr) in table form, so it would be easier to see what needs to be converted to {{ abbr}} per the result of this discussion? -- Gonnym ( talk) 14:20, 15 February 2019 (UTC)
Hi. MOS:ACCESS#Text / MOS:FONTSIZE are clear. We are to "avoid using smaller font sizes in elements that already use a smaller font size, such as infoboxes, navboxes and reference sections." However, many infoboxes use {{ small}} or the html code, especially around degrees earned ( here's one example I corrected yesterday). I used AWB to remove small font from many U.S. politician infoboxes of presidents, senators, and governors, but there are so many more articles that have them. Here's an example for a TV station. I've noticed many movies and TV shows have small text in the infobox as well. Since I cannot calculate how many articles violate this particular rule of MOS, I would like someone to automate a bot to remove small text from infoboxes of all kinds. – Muboshgu ( talk) 22:04, 20 December 2018 (UTC)
<small>...</small> tags within infoboxes, along with small tags wrapping multiple lines, both of which cause Linter errors, so it may be possible to get a bot approved to remove tags as long as fixing Linter errors is in the bot's scope. I welcome corrections on the four things I got wrong in these four sentences. – Jonesey95 ( talk) 23:58, 20 December 2018 (UTC)
@ Jonesey95 and Muboshgu: Hello. Although the 85% font-size is defined, the computed value of the font-size is below 11.9px (it is 10.4667px). This is because font-size percentages work based on the parent container, not the document (see 1 under percentages). In this case the infobox has already decreased the font-size to 88% of the document, so the font-size computed from the {{ small}} tag will be 74.8% of the rest of the document (0.88 * 0.85 = 0.748). This is the case in Firefox, Chrome, Edge (10.4px), Opera and Internet Explorer. This behaviour is the standard and so will be experienced in all browsers. Dreamy Jazz 🎷 talk to me | my contributions 10:46, 23 December 2018 (UTC)
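The arithmetic above can be checked directly. This assumes MediaWiki's 14px content font size (87.5% of the 16px browser default), which is consistent with the quoted values:

```python
# MediaWiki's default content font-size is 14px (87.5% of the 16px browser default)
body_px = 16 * 0.875          # 14.0
infobox_px = body_px * 0.88   # infobox text: 88% of the body font
small_px = infobox_px * 0.85  # {{small}} inside the infobox: 85% of that

# roughly 10.47px, below the 11.9px floor (85% of 14px) that MOS:FONTSIZE implies
print(round(small_px, 4))
```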
<small>...</small> and {{ small}} (and its size-reducing siblings) from infoboxes, both in Template space and in article space. – Jonesey95 ( talk) 14:31, 23 December 2018 (UTC)
The Wikipedia:Good articles/mismatches page details some conflicts with good articles and usually indicates a mistake of some sort that needs to be sorted out. Category:Good articles means that an article has the green spot that indicates it is classified as good, while Category:Wikipedia good articles contains articles which have undergone a review. So the "In Category:Good articles but not Category:Wikipedia good articles" listing indicates that a good article symbol may be present on an article that has not actually undergone a review.
Wikipedia:Good articles/all is a list of all good articles and is manually updated. The last two headings usually indicate articles that have not been added after passing a review or removed after being delisted.
This page was originally created by JJMC89 a year ago using AWB after I requested it. At the time it contained thousands of mismatches [11]. We have just resolved all those, mainly through the efforts of DepressedPer. I was hoping there could be a bot that would update the page periodically so we can keep on top of any further mismatches. I have tried running it myself through AWB, but the number of articles is too large to do in one hit. There was also an issue that articles that had been moved would show up as a mismatch if the name was different at the Wikipedia:Good articles/all page. Maybe there is a better workaround for this, the last time I just renamed the articles at the GA list but that was quite time consuming. Regards AIRcorn (talk) 04:47, 13 April 2019 (UTC)
See background (pardon the pun). The idea is to change the css element background to background-color (and other similar attributes) in sortable tables ( example). Headbomb { t · c · p · b} 19:14, 5 March 2019 (UTC)
background is shorthand for a number of attributes. Otherwise seems like a good idea. -- Izno ( talk) 22:44, 5 March 2019 (UTC)

background to background-style would break all existing uses, because background-style is not a defined property. See CSS Backgrounds and Borders Module Level 3 for examples of valid property names. -- Redrose64 🌹 ( talk) 13:03, 7 March 2019 (UTC)
Stop Predatory Journals maintains a list of hijacked journals. Could someone search wikipedia for the presence of hijacked URLs and produce a daily/weekly/whateverly report? Maybe have a WP:WCW task for it too? Headbomb { t · c · p · b} 00:09, 4 February 2019 (UTC)
Extended content:
https://scholarlyoa.com/other-pages/hijacked-journals/u
http://www.bnas.org/
http://acjournal.in/journal-of-renewable-natural-resources-bhutan
@ Headbomb: can post the report on a regular basis if there is a page. Script takes less than 20 seconds to complete so not expensive on resources. -- Green C 17:02, 4 February 2019 (UTC)
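GreenC's script isn't shown; as a sketch, the core of such a scan is just a domain match over page text, here using two domains from the sample above as a stand-in for the full Stop Predatory Journals list:

```python
import re

# stand-in for the full hijacked-journal domain list
HIJACKED = ["www.bnas.org", "acjournal.in"]

def find_hijacked(wikitext):
    """Return the hijacked domains that appear in a page's external links."""
    hits = []
    for domain in HIJACKED:
        if re.search(r"https?://%s\b" % re.escape(domain), wikitext):
            hits.append(domain)
    return hits

print(find_hijacked("See http://www.bnas.org/about for details."))
```

A real report would run this over a dump or search-API results and post the per-page hits.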
This would be useful for New Page Patrol: it would save us sending multiple messages about an editor's creations (which can cause upset) and show clearly what the problem is and what articles have been identified as needing improvements. This has been requested more than once of me by an editor and I've had to find and list them manually. It would also benefit other editors - I would love to look over which of my creations have tags and improve them. This would give creators (if they want to) the chance to make improvements and bring down the backlogs. Is it feasible? Thanks for looking into this, Boleyn ( talk) 08:43, 9 March 2019 (UTC)
There was a request to move categories with "eSports" to "esports" per WP:C2D at WT:VG, but that list is sizable. Is there someone here who can take care of the listing and tagging? (Avoid the WikiProject assessment categories.) -- Izno ( talk) 18:04, 31 March 2019 (UTC)
I imagine it's fairly confusing for IP users to have to scroll through lots of old warnings from previous users of their IP before getting to their actual message. We have Template:Old IP warnings top (and its partner), but it's rarely used—thoughts on writing a bot to automatically apply it to everything more than a yearish ago? Gaelan 💬 ✏️ 16:21, 10 January 2019 (UTC)
It seems like there is community support to implement this from the discussions. Should be open another discussion to iron out the implementation details? If there is consensus to do this task with a bot, I am willing to do it. Kadane ( talk) 05:45, 15 March 2019 (UTC)
Data to be taken from Wikidata to give the year of publication of a taxon and create "Category:Taxa described in ()" within the (English) Wikipedia taxon entry, if a Wikipedia entry has been created. MargaretRDonald ( talk) 22:55, 22 January 2019 (UTC)
The bot would use the Wikidata taxon entry to find the author of a taxon, and then use it again to find the corresponding author article to find the appropriate author category. (This will not always work, but will work in a large number of cases. Thus, the English article for "Edward Rudge" corresponds to the category "Category:Taxa named by Edward Rudge", and the simple strategy outlined here would work for Edward Rudge, Stephen Hopper and ....) The category created would be an entry in the article. MargaretRDonald ( talk) 23:08, 22 January 2019 (UTC)
the category created would be an entry in the article, and do you want "described by" or "named by"? -- DannyS712 ( talk) 06:05, 1 March 2019 (UTC)
|authority parameter in {{ Speciesbox}} and its ilk? That would make this a lot simpler... -- DannyS712 ( talk) 06:51, 1 March 2019 (UTC)
|authority in {{ Speciesbox}}. The year is not. It is found associated with the basionym in the Wikidata entry (an entry which is often missing from Wikidata, but if it exists that would be the safest place to take it from). Most articles show the author of the basionym (the name in the brackets), but have no taxonomy section and even when they do it is unstructured text... So probably the year of the description is in the too-hard basket. (But as I indicated, I find the year category somewhat less important..) MargaretRDonald ( talk) 07:07, 1 March 2019 (UTC)

And if we were to do this the result would be that we would get, e.g., a list of accepted taxa named by John Lindley, and not a whole ragtag list of plants where the assigning of the initial genus is now considered incorrect. In achieving that we could be a far better resource than IPNI. MargaretRDonald ( talk) 06:57, 1 March 2019 (UTC)
About 6 months ago Batting average was split into a short parent article about the concept of batting average across sports and 2 child articles Batting average (cricket) and Batting average (baseball) dealing with the specifics of the metric in the individual sports. Articles related to each sport still point to the parent article but should generally point to the sport specific one. After some searches using AWB, I found just over 15k links to Batting average. Using a recursive category search, I found that Category:Cricketers, Category:Seasons in cricket and Category:Years in cricket account for about 3k links and Category:Baseball players, Category:Seasons in baseball, Category:Years in baseball about 12k. There are about 300 remaining links in none of these categories, I am working through those manually with AWB. As an aside, a lot of the baseball players have a link in both an infobox and in article text. I had the cricketer infobox changed already, as that had a hardcoded link to the parent article.
The plan would be to replace
[[Batting average]] with [[Batting average (cricket)|]]
[[Batting average|foo]] with [[Batting average (cricket)|foo]]
in the first set of categories and
[[Batting average]] with [[Batting average (baseball)|]]
[[Batting average|foo]] with [[Batting average (baseball)|foo]]
in the second set. A lot of the non-piped links use lower-case, so don't know if that needs another set of rules. I'm also assuming that the pipe trick works in bot edits, otherwise the replacement text will need to be slightly expanded. I can provide the lists I created of the links to the article, of the categories and then intersections if this helps. Spike 'em ( talk) 20:27, 1 April 2019 (UTC)
"pipe trick works in bot edits" It does, outside of references and other tags. -- Izno ( talk) 20:42, 1 April 2019 (UTC)
[[Batting average]] with [[Batting average (cricket)|Batting average]]
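The two replacement rules could be sketched as follows; the sport would be chosen from the category-intersection lists described above, and lowercase [[batting average]] links would need an extra, case-preserving rule:

```python
import re

def retarget_batting_average(wikitext, sport):
    """Point [[Batting average]] links at the sport-specific article.
    sport is 'cricket' or 'baseball', chosen from the category scan."""
    new = "Batting average (%s)" % sport
    # piped links keep their label (do these first so the bare rule
    # never fires inside a piped link)
    wikitext = re.sub(r"\[\[Batting average\|([^\]]*)\]\]",
                      lambda m: "[[%s|%s]]" % (new, m.group(1)), wikitext)
    # bare links keep 'Batting average' as the visible label
    wikitext = re.sub(r"\[\[Batting average\]\]",
                      "[[%s|Batting average]]" % new, wikitext)
    return wikitext

print(retarget_batting_average("His [[Batting average]] was high.", "cricket"))
```

Writing the pipe out explicitly, as here, also sidesteps the question of whether the pipe trick expands in bot edits.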
Request to add " List of Medal of Honor recipients in non-combat incidents" to the 185 recipient articles that still link to the old main article's title. — Preceding unsigned comment added by XXzoonamiXX ( talk • contribs) 04:02, 14 April 2019 (UTC)
The Church of Jesus Christ of Latter-day Saints recently gave an announcement about the correct name of the church [1]. Because of this announcement, the church site has been changed from lds.org to ChurchofJesusChrist.org, and the newsroom from mormonnewsroom.com to newsroom.ChurchofJesusChrist.org. Most wiki pages still have the old site linked. I need a bot to go through and change all the links. The only thing to be changed is the domain. The rest of the URLs are the same.
Thanks, The 2nd Red Guy ( talk) 14:50, 23 April 2019 (UTC)
References
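A sketch of the domain swap, keeping paths intact as requested. The domain list is taken from the request above (with .org accepted for the newsroom as a guard; treat the exact list as an assumption to verify before running):

```python
import re

# old domain -> new domain; everything after the host is preserved
REWRITES = [
    (re.compile(r"https?://(?:www\.)?lds\.org"),
     "https://www.ChurchofJesusChrist.org"),
    (re.compile(r"https?://(?:www\.)?mormonnewsroom\.(?:com|org)"),
     "https://newsroom.ChurchofJesusChrist.org"),
]

def update_church_links(wikitext):
    """Swap only the domain; the rest of each URL stays the same."""
    for pattern, replacement in REWRITES:
        wikitext = pattern.sub(replacement, wikitext)
    return wikitext

print(update_church_links("https://www.lds.org/scriptures"))
```

In practice this kind of request goes through WP:URLREQ, where the bots also verify the new URLs resolve.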
I've noticed that a lot of articles are not in compliance with MOS:SURNAME, especially in Category:Living people. I've manually changed a few pages, but as a programmer, I think this could be greatly automated. Any repeats of the full name, or the first name, beyond the title, first sentence, and infobox should not be allowed and replaced with the last name. I can help out in creating a bot that can accomplish this. InnovativeInventor ( talk) 01:21, 21 March 2019 (UTC)
Just bumped into this: Wikipedia_talk:Manual_of_Style/Biography#Second_mention_of_forenames, so there should be detection of other people with the same last name. Additionally, this bot should aim to provide support for humans, not to automate the whole thing (as context is important). InnovativeInventor ( talk) 03:57, 21 March 2019 (UTC)
When an AfD discussion ends with no discussion, WP:NOQUORUM indicates that the closing admin should treat the article as one would treat an expired PROD. One mundane part of this process is specifically checking whether the article is eligible for PROD ("the page is not a redirect, never previously proposed for deletion, never undeleted, and never subject to a deletion discussion"). It would be really nice, when an AfD listing is reaching full term (seven days) with no discussion, if a bot could check the subject's page history and leave a comment on, say, the beginning of the listing's seventh day as to whether the article is eligible for PROD (a simple yes/no). If impossible to check each aspect of PROD eligibility, it would at least be helpful to know whether the article has been proposed for deletion before, rather than having to scour the page history. A bot here could help the closing admin more easily determine whether to relist or soft delete. More discussion here. czar 21:12, 23 March 2019 (UTC)
Most articles on settlements in India (eg. Bambolim) still use 2001 census data. They need to be updated to use the 2011 census data. SD0001 ( talk) 18:10, 29 March 2019 (UTC)
Thousands of articles about music artists, albums and songs reference the source in the body text (example: OnePointFive). Such references belong in a <ref> block at the end of the page and not in the body text. Most of these references follow a common pattern, so I hope this kind of edit can be made by a bot.
I suggest making a bulk replacement from
==Track listing==
Credits adapted from [[Tidal (service)|Tidal]].<ref name="Tidal">{{cite web|url=https://listen.tidal.com/album/93301143|title=ONEPOINTFIVE / Aminé on TIDAL|publisher=Tidal|accessdate=August 15, 2018}}</ref>
to
==Track listing<ref name="Tidal">{{cite web|url=https://listen.tidal.com/album/93301143|title=ONEPOINTFIVE / Aminé on TIDAL|publisher=Tidal|accessdate=August 15, 2018}}</ref>==
Different sources: Tidal (service), "the album notes", "the album sleeve", "the liner notes of XXX". Different heading names, including "Track listing", "Personnel", "Credits and personnel". Variants: "Credits adapted from XXX", "All credits adapted from XXX", "All personnel credits adapted from XXX"
Does this sound feasible/sensible? -- C960657 ( talk) 17:14, 28 February 2019 (UTC)
Citations should not be placed within, or on the same line as, section headings. (WP:CITEFOOT) — JJMC89 ( T· C) 03:38, 1 March 2019 (UTC)
Section headings should: ... Not contain links, especially where only part of a heading is linked. Unless you use pure plain-text parenthetical referencing, refs always generate a link. -- Redrose64 🌹 ( talk) 12:41, 1 March 2019 (UTC)
Adequately sourced population figures for all Spanish municipalities can be deployed by using {{ Spain metadata Wikidata}}, as was recently done for Austria. See this diff for an example of the change.-- eh bien mon prince ( talk) 11:35, 11 April 2019 (UTC)
Category:Pages using deprecated image syntax has over 89k pages listed, making manually fixing these not possible. Could a bot be created to handle this? -- Gonnym ( talk) 06:18, 12 April 2019 (UTC)
{{#invoke:InfoboxImage|InfoboxImage|image={{{image|}}}|size={{{image_size|}}}|sizedefault=frameless|upright={{{image_upright|1}}}|alt={{{alt|}}}}} style that pass to the |image= field an image syntax in the format |image=File:Example.jpg. However, as per usual when dealing with templates, the exact parameters used and their names will differ between the templates. So for example:
{{#invoke:InfoboxImage|InfoboxImage|image={{{image|}}}|size={{{image_size|}}}|sizedefault=frameless|upright={{{image_upright|1.13}}}<!-- 1.13 is the most common size used in TV articles. -->|alt={{{image_alt|{{{alt|}}}}}}}}
{{#invoke:InfoboxImage|InfoboxImage|image={{{image|}}}|size={{{image_size|{{{imagesize|}}}}}}|sizedefault=frameless|upright={{{image_upright|1}}}|alt={{{image_alt|{{{alt|}}}}}}}}
{{#invoke:InfoboxImage|InfoboxImage|image={{{image|}}}|size={{{image_size|}}}|sizedefault=frameless|alt={{{alt|}}}}}
Also, an image isn't the only value that can be passed in |image=, but it sometimes is combined with an image size and caption, which will need to be extracted and passed through the correct parameters. -- Gonnym ( talk) 06:37, 12 April 2019 (UTC)
image=[[File:West Wing S3 DVD.jpg|250px]]. Instead it should be |image=West Wing S3 DVD.jpg and |image_size=250px (it can also be without "px" as the module does that automatically).
image=[[File:Red Dwarf X logo.jpg|alt=Logo for the tenth series of ''Red Dwarf''|250px]]. Instead it should be |image=Red Dwarf X logo.jpg, |image_size=250px and |image_alt=Logo for the tenth series of Red Dwarf.
|image#_size= parameter. The number "#" needs to match the image# parameter, e.g. |image2= gets |image2_size=. Drop me a line if this is confusing; I feel like it's a lot to explain in a short paragraph.
|image_size=250px (or equivalent) may simply be omitted, because most infoboxes are set up to use a default size where none has been set ( example). In my opinion, falling back to the default is preferable since it gives a consistent look between articles. -- Redrose64 🌹 ( talk) 12:46, 12 April 2019 (UTC)
|flag_image=, a 300px for |map_image#= and no default for |image#=, which then defaults to frameless (which I'm not sure what it is). If there is a correct size that the template should use, then the template should probably be edited to handle it. -- Gonnym ( talk) 14:02, 12 April 2019 (UTC)

|image1=[[File:Soleiman Eskandari.jpg|150x150px]] format it puts the page into Category:Pages using deprecated image syntax, because the parameter is intended for a bare filename and nothing else, as in |image1=Soleiman Eskandari.jpg. -- Redrose64 🌹 ( talk) 14:05, 12 April 2019 (UTC)
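A parser for the deprecated values could look like this (a sketch; captions are ignored here and would need their own handling, as noted in the discussion):

```python
import re

def split_image_value(value):
    """Break |image=[[File:X.jpg|alt=...|250px]] into the bare parameters
    the InfoboxImage module expects (image, image_size, image_alt)."""
    m = re.fullmatch(r"\[\[(?:File|Image):([^\|\]]+)(?:\|([^\]]*))?\]\]", value.strip())
    if not m:
        return {"image": value.strip()}  # already a bare filename
    result = {"image": m.group(1).strip()}
    for part in (m.group(2) or "").split("|"):
        part = part.strip()
        if not part:
            continue
        if part.startswith("alt="):
            result["image_alt"] = part[4:]
        elif re.fullmatch(r"\d+(?:x\d+)?px", part):  # 250px or 150x150px
            result["image_size"] = part
    return result

print(split_image_value("[[File:West Wing S3 DVD.jpg|250px]]"))
```

Anything that is neither an alt text nor a size (e.g. a caption) is silently dropped by this sketch, so a real bot would have to flag those pages for manual review.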
All of our articles and categories on transport "accidents and incidents" use that phrasing, as opposed to "incidents and accidents" (which is a line from " You Can Call Me Al"). However, there are a lot of section heads that are "== Incidents and accidents". I would like a bot to search articles for the phrasing "== Incidents and accidents ==" and replace it with "== Accidents and incidents ==". Can that be done?-- Mike Selinker ( talk) 19:13, 20 April 2019 (UTC)
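This is a straightforward find-and-replace; as a sketch, a single multiline regex handles any heading level and optional spaces inside the heading markers:

```python
import re

# match "== Incidents and accidents ==" at any heading level (==, ===, ...)
HEADING_RE = re.compile(r"^(=+)\s*Incidents and accidents\s*\1\s*$", re.MULTILINE)

def fix_heading(wikitext):
    """Normalize section heads to 'Accidents and incidents'."""
    return HEADING_RE.sub(
        lambda m: "%s Accidents and incidents %s" % (m.group(1), m.group(1)),
        wikitext)

print(fix_heading("== Incidents and accidents =="))
```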
I recently started patrolling newly created redirects and have realized that certain common types of redirects could be approved through an automated process where a bot would just have to parse the target article and carry out some trivial string manipulation to determine if the redirect is appropriate. A working list of such uncontroversial redirects:
Potentially more controversial tasks could include automated RfD nomination for clearly unnecessary redirects, such as redirects with specific patterns of incorrect spacing. I also think it would be a good idea to include an attack filter, so that if a redirect contains profanity or other potentially attackish content the bot will not automatically patrol them even if it appears to meet the above criteria. I anticipate that if this bot were to be implemented, it would cut necessary human work for the redirect backlog by more than half. I've never written a Wikipedia bot before, but I am a software engineer so I anticipate that if people think that this is a good idea I could do a lot of the coding myself, but obviously the idea needs to be workshopped first. There's also potential extensions that could be written, such as detecting common abbreviations or alternate titles (e.g. USSR space program --> Soviet space program, OTAN --> NATO) signed, Rosguill talk 22:17, 28 April 2019 (UTC)
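One of the trivial checks in the working list (case and diacritic variants of the target title) could be sketched like this; the function name is illustrative:

```python
import unicodedata

def is_trivial_variant(redirect_title, target_title):
    """Heuristic from the proposal: autopatrol a redirect when it is just
    a case or diacritic variant of its target."""
    def norm(s):
        s = unicodedata.normalize("NFKD", s)
        # drop combining marks (accents), then fold case
        s = "".join(c for c in s if not unicodedata.combining(c))
        return s.casefold()
    return norm(redirect_title) == norm(target_title)

print(is_trivial_variant("Ceske Budejovice", "České Budějovice"))  # True
```

Abbreviation and alternate-title cases (USSR space program, OTAN) would need separate, more conservative rules, as noted above.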
If there's not going to be any further discussion here, is there anywhere else I should post or things I should do before implementing this bot? The Help:Creating a bot page has a flowchart including the steps for writing a specification and making a proposal for the bot, but it's not clear to me which forums I should be using for that (or if the above discussion was sufficient). An additional concern is that while I believe that from a technical perspective this shouldn't be a terribly difficult bot to implement, I would need an admin to give the bot NPP permissions in order to run the bot. signed, Rosguill talk 23:04, 5 May 2019 (UTC)
Please change all occurrences of "Astana" in all articles to new name "Nur-Sultan". Also please move all articles with "Astana" to "Nur-Sultan". Thanks! -- Patriccck ( talk) 18:16, 7 May 2019 (UTC)
I would like a bot that could search all the articles listed under Category:WikiProject Mountains articles and its child categories that are also listed in Category:Articles with dead external links. Or maybe there's an existing tool that can do this? RedWolf ( talk) 21:19, 16 May 2019 (UTC)
The New England Wild Flower Society [17] changed its name and web presence to the Native Plant Trust [18]. And in the process broke most of its old URLs. Only insecure http requests to the old web site get an HTTP 301 redirect. https links time out. I suspect a firewall misconfiguration on their end, but I emailed about the problem and it hasn't been fixed.
I am requesting a bot find all the instances of DOMAIN.newenglandwild.org/PATH (http or https) and rewrite to DOMAIN.nativeplanttrust.org/PATH (https only, optionally only if that new URL returns a 2xx or 3xx status code).
I don't have a count of edits to make. Here is a sample page: Vaccinium caesariense. As I write this, reference 2 links to https://gobotany.newenglandwild.org/species/vaccinium/caesariense/ (a timeout error). It should link to https://gobotany.nativeplanttrust.org/species/vaccinium/caesariense/.
Vox Sciurorum ( talk) 17:51, 17 May 2019 (UTC)
{{dead link}} exists if needed. It is quite complex. Everything should be checked and tested. There are bots designed for making URL changes; see WP:URLREQ. -- Green C 21:51, 17 May 2019 (UTC)
I find myself regularly using the excellent User:Anomie/unsignedhelper.js to document unsigned comments in talk page discussions. This looks like a perfect task for a bot, and I wonder whether there are any reasons it has not been done earlier. Could a kind contributor take up this uncontroversial and useful task? The process should work similarly to rescuing orphaned references in article space, as performed by User:AnomieBOT. — JFG talk 15:55, 25 May 2019 (UTC)
Hi. Could somebody move all userboxes with the word "expat" in the name from Category:Residence user templates to its subcategory Category:Expat user templates? — andrybak ( talk) 08:23, 17 June 2019 (UTC)
Hi, I would like to request a bot to add Template:WPEUR10k to all articles that appear in the lists of created articles at Wikipedia:The 2500 Challenge (Nordic) and Wikipedia:The 10,000 Challenge. I think it would be very helpful so all the articles receive the template tag. I suggest this as there are literally thousands of articles in need of the tag.-- BabbaQ ( talk) 13:37, 26 April 2019 (UTC)
If someone could find all URLs (found across any namespace) that match this pattern, that would be great:
https?:\/\/(.+)\/handle\/.+
Sorting the results by domain ($1) would be even greater.
Headbomb { t · c · p · b} 23:22, 24 June 2019 (UTC)
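A sketch of the scan-and-group step, assuming the candidate URLs have already been extracted from a dump or the external-links table:

```python
import re
from collections import defaultdict

def group_handle_urls(urls):
    """Sketch: keep only URLs matching the /handle/ pattern above and
    group them by the captured domain ($1)."""
    by_domain = defaultdict(list)
    for url in urls:
        m = re.match(r'https?://(.+?)/handle/.+', url)
        if m:
            by_domain[m.group(1)].append(url)
    return dict(by_domain)
```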
This is what I see to be a rather uncontroversial request which I have been doing manually for about a month or so now. In order to better identify pages that use bare URL(s) in reference(s) in an effort to get the URLs fixed, I am requesting that the {{ Cleanup bare URLs}} tag be added to all pages by a bot which meet the following conditions:
a <ref> tag immediately followed by http: and/or https:, followed by any combination of keystrokes and a </ref> closing tag, when there are no instances of spaces between the <ref> and </ref> tags (underscores are okay).
...From my experiences recently with tagging these pages, tagging the pages with the aforementioned parameters will avoid most, if not all, false positives.
I am requesting this run only once so that it doesn't need constant checks, and this should adequately provide an assessment on how many pages need reference url correction. Steel1943 ( talk) 17:54, 22 April 2019 (UTC)
I believe GreenC could do a fast scan (a little bit off-topic, but could that awk solution work with .bz2 files?). For the lvwiki scan, I use this regex (more or less the same conditions as the OP asked for), which works pretty well: <ref>\s*\[?\s*(https?:\/\/[^][<>\s"]+)\s*\]?\s*<\/ref>. For actually fixing those URLs, we can use this tool. It can be used both manually and with a bot (it has a pretty nice API). -- Edgars2007 ( talk/ contribs) 15:36, 23 April 2019 (UTC)
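For illustration, the quoted regex can drive a simple candidate check (a sketch only; the real bot would layer the tagging logic on top):

```python
import re

# The regex quoted above: a <ref> whose entire content is a bare URL,
# optionally wrapped in single brackets
BARE_URL_REF = re.compile(
    r'<ref>\s*\[?\s*(https?:\/\/[^][<>\s"]+)\s*\]?\s*<\/ref>')

def has_bare_url_ref(wikitext):
    """Sketch: flag pages containing at least one bare-URL reference,
    i.e. candidates for {{Cleanup bare URLs}}."""
    return bool(BARE_URL_REF.search(wikitext))
```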
I recently made a bot that looks for articles that need {{unreferenced}}, and this is basically the same thing other than a change to the core regex statement, which User:Edgars2007 just helpfully provided. So this could be up and running quickly. It runs on Toolforge and uses the API to download each of 5.5M articles sequentially. The only question is which method: > 50%, or max size of the tracking category, or maybe both (anything over 50% is exempted from the category max size). The mixed method has the advantage of filling up the category with the worst cases foremost; lesser cases will only make it there once the worst cases are fixed. -- Green C 17:51, 23 April 2019 (UTC)
MarnetteD, yes understand what you are saying. Was thinking, what about an 'on demand' system where you can specify when to add more, and how many to add - and it only works if the category is mostly empty, and maxes at 200 (or less). This is more technically challenging as it would require some kind of basic authentication to prevent abuse, but I have an idea how to do it. It would be done all on-Wiki similar to a bot stop page. This gives participants the freedom to fill the queue whenever they are ready, and it could keep a log page. Would that be useful? -- Green C 19:19, 25 April 2019 (UTC)
Centralize the ~1400+ instances of references ({{cite journal ...}}) to the "Report on Lunar Nomenclature by the Working Group of Commission 17 of the IAU" by replacing them with a single template (named e.g. Template:R:LunarNomenclature). The contents of the latter should be:
{{cite journal |last1=Menzel |first1=Donald H. |authorlink1=Donald Howard Menzel |last2=Minnaert |first2=Marcel |authorlink2=Marcel Minnaert |last3=Levin |first3=Boris J. |last4=Dollfus |first4=Audouin |authorlink4=Audouin Dollfus |last5=Bell |first5=Barbara |title=Report on Lunar Nomenclature by the Working Group of Commission 17 of the IAU |doi=10.1007/BF00171763 |journal=Space Science Reviews |volume=12 |issue=2 |pages=136–186 |date=1971 |bibcode=1971SSRv...12..136M |ref=harv }}
yielding:
Menzel, Donald H.; Minnaert, Marcel; Levin, Boris J.; Dollfus, Audouin; Bell, Barbara (1971). "Report on Lunar Nomenclature by the Working Group of Commission 17 of the IAU". Space Science Reviews. 12 (2): 136–186. Bibcode:1971SSRv...12..136M. doi:10.1007/BF00171763. {{cite journal}}: Invalid |ref=harv (help)
Urhixidur ( talk) 14:43, 27 October 2018 (UTC)
{{cite journal}}). So many things run on this system. Shortcut templates can create more problems than they solve. -- Green C 06:00, 9 December 2018 (UTC)
Recently, consensus was reached to move all election and referendum articles to have the year at the front. A bot, TheSandBot, was created to move the articles (approximately 35,000) to the new titles. However, the bot did not change navboxes to use the new format. Per WP:BRINT, redirects from navigational templates should be bypassed to allow readers to see which page they are on in the template. This is a lot of simple work which would have to be done by humans if a bot were not created. Danski454 ( talk) 15:15, 9 December 2018 (UTC)
Done - If anyone wants User:RF1 Bot to run for you as well, let me know.
I'd like to have a bot that, once a month, creates a page at this month's subpage for talk archives, like User:RhinosF1/Archives_2018/10_(October), and redirects it to my main user page.
How would it be coded?
Happy to run it semi-automatic and monitored. Would not run outside my mainspace.
RhinosF1 ( talk) 14:43, 16 December 2018 (UTC)
It's showing an error :AssertUserFailedError: By default, mwclient protects you from accidentally editing without being logged in. If you actually want to edit without logging in, you can set force_login on the Site object to False. RhinosF1 ( talk) 15:58, 18 December 2018 (UTC)
It's surprising, given how long Wikipedia has been around and how easily automatable the task is, that no bot exists to automatically update US congressional district pages, which are almost uniformly a mess. There exists no template for how to present results, with some pages going in reverse chronological order unlike others and zero consistency in presentation; many pages haven't been updated since 2014.
Going through and manually editing all 435 pages would be extremely tedious, so the most logical solution is to create a bot dedicated to the task, which can not only update the pages but fix them.
The quality of the results section of congressional district pages is abysmal and is easy to fix: simply create a standard congressional district page format for displaying the results, and then create a bot to automatically generate election templates and add them to the page following that standard. I'm a bit of a newb, so I don't know exactly how we would go about agreeing on a standard page, but I'm sure there is a process.
I would be open to coding the bot myself if somebody more experienced with them is willing to offer help/assistance.
Some example of poor quality pages:
-- — Preceding unsigned comment added by Zubin12 ( talk • contribs)
There are about 1000 mainspace links to Spectator, most are broken. They changed URL schemes without redirects. The pages still exist at a new URL. Example:
There's no obvious way to program this, but posting if anyone has ideas. -- Green C 06:27, 7 November 2018 (UTC)
1) Identify the link that is flagged as broken.
2) Remove "-.thtml" from the last portion of the link.
3) Add the month number and year number before the last section of the URL, each separated by commas. These are the year and month in which the article appeared. If the month is only one digit, add a zero before it.
Adithyak1997 (
talk) 10:40, 7 November 2018 (UTC)
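The steps above can be sketched as follows. The final URL layout is an assumption (slash-separated /YYYY/MM/ segments are used here, since the request is vague about the exact separator), so this is illustrative only:

```python
import re

def fix_spectator_url(url, year, month):
    """Hypothetical rewrite following the numbered steps above.
    The exact Spectator URL scheme is an assumption."""
    url = re.sub(r'-?\.thtml$', '', url)   # step 2: drop "-.thtml"
    head, slug = url.rsplit('/', 1)        # step 3: insert year/month
    return f"{head}/{year}/{month:02d}/{slug}"
```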
Could a bot please identify articles that are not currently tagged as unreferenced but seem not to have references? Thanks for looking at this, Boleyn ( talk) 19:12, 10 November 2018 (UTC)
BRFA filed -- Green C 04:07, 31 December 2018 (UTC)
I'd like to have a bot update the templates used to render college football schedule tables. Three old templates (Template:CFB Schedule Start, Template:CFB Schedule Entry, and Template:CFB Schedule End), developed in 2006, are to be replaced with two newer, module-based templates: Template:CFB schedule and Template:CFB schedule entry. The old templates remain on nearly 12,000 articles. The new templates were coded by User:Frietjes, who has also developed a process for converting the old templates to the new:
add {{subst:#invoke:CFB schedule/convert|subst| at the top of the table, before the {{CFB Schedule Start}}, and }} at the bottom, after the {{CFB Schedule End}}.
The development and use of these new templates has been much discussed in the last year at Wikipedia talk:WikiProject College football and has a consensus of support.
Thanks, Jweiss11 ( talk) 00:32, 8 November 2018 (UTC)
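The conversion described above might be scripted roughly like this. It is a sketch: it assumes {{CFB Schedule End}} carries no nested templates in its parameters, which a production bot would need to handle properly:

```python
import re

def wrap_cfb_table(wikitext):
    """Sketch: wrap each old-style schedule table in the subst'd
    converter module, per the process described above."""
    pattern = re.compile(
        r'(\{\{CFB Schedule Start.*?\{\{CFB Schedule End[^}]*\}\})',
        re.DOTALL)
    return pattern.sub(
        r'{{subst:#invoke:CFB schedule/convert|subst|\1}}', wikitext)
```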
@ BU Rob13: would you be available to take on this bot request? Thanks, Jweiss11 ( talk) 03:16, 4 December 2018 (UTC)
Working. Primefac ( talk) 15:30, 27 January 2019 (UTC)
Check 5.7 million mainspace talk pages for sections that would benefit from a {{reflist-talk}}.
Scope: for each talk page, extract each level-2 section. For each section, check for the existence of reference tags, i.e. <ref></ref>. If present, check for the existence of {{reflist-talk}} or <references/>. If neither exists, add {{reflist-talk}} at the end of the section (optionally in a level-3 subsection called "References").
-- Green C 16:27, 1 January 2019 (UTC)
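The scope above can be sketched as follows (a simplification: it ignores a trailing heading with no body, and uses plain regexes rather than a full wikitext parser):

```python
import re

def add_reflist_talk(wikitext):
    """Sketch of the scope above: split a talk page into level-2
    sections and append {{reflist-talk}} to any section that contains
    <ref> tags but no {{reflist-talk}} or <references/>."""
    parts = re.split(r'(^==[^=].*==\s*$)', wikitext, flags=re.M)
    out = [parts[0]]
    for heading, body in zip(parts[1::2], parts[2::2]):
        if (re.search(r'<ref[ >]', body)
                and not re.search(r'\{\{\s*reflist-talk|<references\s*/>',
                                  body, re.I)):
            body = body.rstrip() + '\n{{reflist-talk}}\n'
        out += [heading, body]
    return ''.join(out)
```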
<ol class="references"> - this will always exist if there is <ref></ref> somewhere in the page, regardless of the existence of {{reflist-talk}} or <references/>, and it will account for things like <!-- <ref></ref> --> -- Green C 16:36, 1 January 2019 (UTC)
I ran a script. In 2000 Talk pages it found 11 cases:
Extrapolated, about 29,000 pages are like this. -- Green C 19:43, 1 January 2019 (UTC)
BRFA filed -- Green C 20:02, 1 January 2019 (UTC)
Done -- Green C 07:19, 11 February 2019 (UTC)
The request is to have {{ WikiProject Soil}} added to the article talk pages in 39 categories. Project notification posted. Much appreciated. -- Paleorthid ( talk) 23:10, 6 January 2019 (UTC)
In coming days, I'm going to redirect Star Sports to Fox Sports (Southeast Asian TV network). But some redirects to Star Sports need to be retargeted in advance.
(Correction: STAR Sports HD3, Star Sports HD3, STAR Sports HD4 and Star Sports HD4 did exist. JSH-alive/ talk/ cont/ mail 14:12, 21 January 2019 (UTC))
I don't know what to do with STAR Sports Network and Star Sports Network. Is it the name for Indian channels or Southeast Asian channels? JSH-alive/ talk/ cont/ mail 09:32, 20 January 2019 (UTC)
Done per User_talk:Xqt#Requesting_mass_redirect_fix @ xqt 13:50, 1 February 2019 (UTC)
This is a relatively simple query: https://quarry.wmflabs.org/query/18894
The images listed in that query ideally should be tagged with {{ Shadows Commons}} (unless already tagged as CSD F8)
As this is a repeatable, and felt to be uncontroversial task, It would be better to let a bot do it, freeing up contributors for more complex tasks that require human skills rather than simple tagging clicks. Thanks
Given the query size, the bot would not need to be run continuously, but once a week should prove to be more than adequate.
ShakespeareFan00 (
talk) 10:58, 6 February 2019 (UTC)
BRFA filed -- Green C 15:30, 6 February 2019 (UTC)
Done -- Green C 22:40, 23 February 2019 (UTC)
per Wikipedia:Categories_for_discussion/Speedy#Current_requests (consistency with the main article's name per official renaming),
please list all subcategories of Category:GTK+ for renaming from "GTK+" to plain "GTK"; their number is high and I can't do it manually, thanks. -- Editor-1 ( talk) 08:19, 10 February 2019 (UTC)
* [[:Category:old name with plus]] to [[:Category:same name without plus]] – per official renaming (request by [[User:Editor-1]])
to include at Wikipedia:Categories_for_discussion/Speedy#Current_requests
thanks. -- Editor-1 ( talk) 08:46, 10 February 2019 (UTC)
Please tag all sub-cats of Category:English-language singers (except Uganda) with
{{subst:cfr-speedy|English-language singers from ...}}
i.e. the nomination is to change the word "of" to "from".
Ideally, each category's country name should replace "...", but the ellipsis would be sufficient.
I will then list them at WP:CFDS myself. – Fayenatic London 23:03, 13 February 2019 (UTC)
The ARKive project has ended and its website has been replaced with a single page noting that fact. Links to pages on arkive.org need to be replaced with archive.org equivalents, and citations need |archive-url= and |archive-date= attributes. I've already updated {{ARKive}}. Can someone oblige, please?
Links like the one at the foot of Bitis schneideri could usefully be replaced using {{ ARKive}}. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:23, 16 February 2019 (UTC)
Hello! I would like to request to operate a bot! My idea is a bot that can revert reference blanking. In my work as a recent changes patroller, I see many people blanking references. I know that ClueBot reverts vandalism, but usually ClueBot does not revert reference blanking. Let me know what you think! Shalvey 17:10, 19 February 2019 (UTC)
Hello, I would like to know if it is possible that you guys could create a bot that would check references in articles, and see if they are actually websites, not just a random URL that isn't even existent. What I mean is that, when you type in a website, you have a blue outline, which then forwards you to the site. What I'm seeing is URLs that aren't highlighted in blue, but just bare URLs. The bot could be run by me, but I don't know how to code a bot. Thanks! Shalvey 18:49, 19 February 2019 (UTC)
Let me know what you think! — Preceding
unsigned comment added by
Shalvey (
talk •
contribs) 19:13, 19 February 2019 (UTC)
A bot that will, in certain situations, switch links to web.archive.org. Sun Sunris ( talk) 01:31, 23 February 2019 (UTC)
Please can someone add {{ Section sizes}} to the talk pages of ~6300 articles that are longer than 150,000 bytes (per Special:LongPages), like in this edit?
The location is not critical, but I would suggest giving preference to putting it immediately after the last Wikiproject template, where possible. Omit pages that already have the template. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:59, 29 December 2018 (UTC)
This is a request for a ONE-Shot bot run to remove -
{{Copy to Commons|human=ShakespeareFan00}}
From the 8000 or so images currently tagged with it. This is requested because both Commons and Wikipedia policy has changed in significant ways since the vast bulk of the images were tagged. It would be tedious to remove the tag individually. ShakespeareFan00 ( talk) 17:53, 28 February 2019 (UTC)
I'd like to request moving of pages from Category:Missing U-boats to Category:Missing U-boats of World War II - it is grouping only WWII boats at the moment and is in such parent category (WWI boats were excluded to Category:Missing U-boats of World War I). Pibwl ←« 14:39, 6 March 2019 (UTC)
Hi all, Kadane very helpfully created KadaneBot for us over at Wikipedia Peer Review - it sends out automated reminders based on topic areas of interest for unanswered peer reviews. Unfortunately, Kadane's been inactive almost since creation (September 2018), and hasn't responded to my request [1]. Would anyone be so kind as to usurp this bot so we can continue to use it? -- Tom (LT) ( talk) 07:32, 22 February 2019 (UTC)
Am seeking a bot that can periodically notify volunteers at WP:PRV about new or unanswered reviews, would be very useful and (I hope) increase peer review activity levels. -- Tom (LT) ( talk) 03:55, 3 March 2019 (UTC)
Given {{PRV|Kadane|Computer Science|contact=monthly}}, how would a bot know which reviews are for "computer science"? Thanks for the clarification. -- Green C 22:58, 5 March 2019 (UTC)
Done, Kadane migrated to Toolforge and running via cron (ie. run automatically at set times). -- Green C 16:41, 10 March 2019 (UTC)
The Medical Translation Task Force faces an issue with respect to Content Translation. Basically the tool loses references when the metadata exists within template:infobox medical condition (new) and template:drugbox. The issue is described here and the task is supposedly not easily fixable and thus will not be fixed anytime soon. [3]
As a work around I am proposing a bot that moves the metadata for references from these two infoboxes to the lead or body of the article in question. Will be done for these ~1200 articles. Category:RTT
An example of what such an edit will look like is this. [4]
Doc James ( talk · contribs · email) 19:14, 30 January 2019 (UTC)
Done ( Wikipedia:Bots/Requests for approval/Fz29bot) -- Green C 16:40, 10 March 2019 (UTC)
The task of the bot would be to identify all articles in Category:Articles needing translation from French Wikipedia that have a French commune infobox, then add the |topic=geo parameter to their Expand French template to categorize them in Category:Geography articles needing translation from French Wikipedia. Its second task would be more complex, as it should identify each Category:Communes of departement name category and add the fitting expansion category Category:departement name communes articles needing translation from French Wikipedia
These tasks should apply to anything between four to seven thousand articles.
Knowing nothing about how bots function, I may be missing an easier way to do such a large-scale category move. Sadenar40000 ( talk) 18:47, 28 February 2019 (UTC)
Hello,
I would like to suggest a bot that fixes dates in Category:Use mdy dates and Category:Use dmy dates.
RhinosF1 (chat) (status) (contribs) 18:09, 15 February 2019 (UTC)
Converting |access-date= dates from BIGENDIAN to either 'mdy' or 'dmy' when the article has established use of BIGENDIAN for the |access-date= dates would be in violation of WP:DATERET and WP:CITESTYLE/ WP:CITEVAR – but how is a bot supposed to figure this out? -- IJBall ( contribs • talk) 21:45, 15 February 2019 (UTC)
There is a huge backlog within most Wikipedia Projects of unclassified articles. I've been recently assessing a number of these for the Politics Project, and have noticed a few patterns that I believe could be automated to heavily reduce this backlog.
And of course, if we can heavily reduce the backlog like this, we will make attempting the remaining tasks that must be classified by hand less daunting, and thus more likely to be done. It is true that the second part of this proposal will sometimes result in incorrect classification, but the criteria will be up for each taskforce to determine and so I don't believe that risk should prevent this bot being created - and even if they are incorrectly assessed, a few incorrect assessments are better than numerous unassessed articles.
If no one is interested in taking this up then I do intend to get around to it at some point - unless someone is able to explain why it is stupid/unnecessary, though I think the first part of this proposal would be better as a modification to the existing tag-update bot.
-- NoCOBOL (talk) 07:50, 25 January 2019 (UTC)
Hello there! The Church of Jesus Christ of Latter-day Saints has formally requested that all references to the 'Mormon Church', 'LDS Church', and 'Mormonism' be discontinued by all users in all media outlets including Wikipedia. It would be astronomically easier for someone to make this change via AWB rather than manually search out and make every single change.
This request is complicated by two items: (1) no formal replacement for these 3 informal references has been provided by the Church of Jesus Christ of Latter-day Saints, which will affect how the AWB algorithm needs to work; (2) the name of a volume of this church's scripture is 'The Book of Mormon', and eliminating all references to the 'Mormon Church' or 'Mormonism' without editing all references to 'The Book of Mormon' may be difficult for AWB. — Preceding unsigned comment added by Dpammm ( talk • contribs) 07:26, 10 March 2019 (UTC)
Whether the reliable sources written after the change is announced routinely use the new name is the criterion for changing how we refer to someone/something. There are quite a few examples of our continuing to use a name over the subject's objections because it's continued to be the name commonly used in the sources; North Korea is an obvious example that springs to mind. If you want Wikipedia to deprecate the use of these terms, you need to provide evidence that reliable sources are no longer using the terms "Mormonism", "LDS Church" etc., and then start a WP:RFC to deprecate the terms; only then is it time to start making bot requests, or even to start manually editing the articles. ‑ Iridescent 07:40, 10 March 2019 (UTC)
The Mormons have had this idea before but then they continued using Mormon themselves. I seriously doubt RS are going to follow this requested change. Legacypac ( talk) 07:57, 10 March 2019 (UTC)
Isn't that pretty obnoxious, though? The Church of Jesus Christ of Latter-day Saints has made a 100% formal shift in the use of its name. It has openly and formally disassociated itself from the terms "Mormon Church", "LDS Church", and "Mormonism". Of course it could take time for common usage to change, but how is that going to happen if media outlets requested to make the change refuse to do so? I'm not an expert at this, but it seems common sense to go along with the intended request even though it is occurring in phases, as the 5 March 2019 re-affirmation news release and First Presidency letter state; changes have already been made (www.lds.org to www.churchofjesuschrist.org, etc., as this article describes, including "mormonnewsroom" to follow suit shortly): https://www.mormonnewsroom.org/article/church-name-alignment -- Dpammma ( talk) 08:41, 10 March 2019 (UTC)
This was also just posted at the other discussion, although I think that discussion page is dead as it's the only comment since January: https://twitter.com/APStylebook/status/1104071713476755457. How does anything get progressed to a decision one way or the other, especially on issues where some editors are obviously bound and determined not to respect this church's request despite adoption of these changes by the largest mainstream media outlet? -- Dpammma ( talk) 08:49, 10 March 2019 (UTC)
Hi guys, I'd like to create lists of NZ heritage sites. Lists would be very similar to those at German Wikipedia, see List of monuments in New Zealand. The database with the heritage sites is available here: http://www.heritage.org.nz/the-list You can search all sites in a specific region and export CSV.
I'm not technically skilled enough to program a bot that'd help me do that. Is there anyone keen to help out? Lists of heritage sites are quite common practice here, see e.g. Listed buildings in Windermere, Cumbria (town). Regards, Podzemnik ( talk) 11:34, 22 January 2019 (UTC)
Well, the CSV contains this header and first record:
RegisterNumber,Name,RegistrationType,RegistrationStatus,DateRegistered,Address,RegisteredLegalDescription,ExtentOfRegistration,LocalAuthorityName,NZAANumbers
660,1YA Radio Station Building (Former),Historic Place Category 1,Listed,1990-02-15,"74 Shortland Street, AUCKLAND","Pt Allots 10‐11 Sec 3 City of Auckland (CT NA67C/507), Pt Allot 12 Sec 3 City of Auckland (CT NA152/135), North Auckland Land District","Extent includes the land described as Pt Allots 10‐11 Sec 3 City of Auckland defined on DP 874 (CT NA67C/507), Pt Allot 12 Sec 3 City of Auckland (CT NA152/135), North Auckland Land District, and the building known as 1YA Radio Station Building (Former) thereon.",Auckland Council (Auckland City Council),[]
The problem will be mapping the "Name" field (eg. "1YA Radio Station Building (Former)") with the Wikipedia article name ( Kenneth Myers Centre). There's no bot magic for that. -- Green C 16:34, 22 January 2019 (UTC)
so text 1,text 2,text 3,text 4 becomes | text 1 || text 2 || text 3 || text 4 |-
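That per-record conversion can be sketched as:

```python
import csv, io

def csv_to_wikitable_rows(csv_text):
    """Sketch of the conversion above: each CSV record becomes one
    wikitable row followed by the |- row separator."""
    rows = []
    for record in csv.reader(io.StringIO(csv_text)):
        rows.append('| ' + ' || '.join(record) + '\n|-')
    return '\n'.join(rows)
```

Using the csv module (rather than a plain split on commas) matters here because the heritage export quotes fields that themselves contain commas.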
Alright, thanks for the inputs guys, I'll try to do it myself! Podzemnik ( talk) 07:51, 25 January 2019 (UTC)
I just found and fixed an article with two Wikipedia links to two articles that were nothing but redirects back to it, and apparently that's all they had ever been. [5] Can you make a bot to check all Wikipedia links that point to pages that are redirects, then check to see if the redirect points back to the page it's coming from, and then remove the brackets around it so it doesn't link there anymore? If the link has a | in it, then keep what's after that and ditch the rest. Dream Focus 16:29, 26 January 2019 (UTC)
(See [[Promotion (chess)#Promotion to rook or bishop|Underpromotion: Promotion to rook or bishop]] for examples ...
Redrose64 I meant a link to another article that then redirects back to the first article again. Dream Focus 18:18, 26 January 2019 (UTC)
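A sketch of the check-and-unlink step. The redirect map would come from the API in a real bot; here it is assumed as input, and section-targeted redirects like the chess example above are ignored for simplicity:

```python
import re

def unlink_self_redirects(title, wikitext, redirect_target):
    """Sketch of the request above: for every [[link]] or [[link|label]]
    whose target is a redirect straight back to this page, drop the
    brackets, keeping the part after '|' when present.
    redirect_target maps redirect titles to their targets (assumed)."""
    def repl(m):
        target, label = m.group(1), m.group(2) or m.group(1)
        if redirect_target.get(target) == title:
            return label  # remove the brackets, keep the display text
        return m.group(0)
    return re.sub(r'\[\[([^\[\]|]+)(?:\|([^\[\]]+))?\]\]', repl, wikitext)
```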
Adverts pretending to be peer-reviewed papers are cited in thousands, possibly tens of thousands, of Wikipedia articles. Articles in paid supplements to journals are generally not independent sources. See this discussion for details.
Sometimes, the citation contains the abbreviation "Suppl.". In this case, the citation could be bot-tagged with {{ Unreliable medical source|sure=no|reason=sponsored supplements generally unreliable per WP:SPONSORED and WP:MEDINDY|date=30 April 2024}}
The "sure=no" parameter will add a question mark to the tag, as, rarely, the supplement might actually be a valid source. I think these exceptions would probably be rare enough to manually mark for exclusion by the bot.
This would increase awareness of this problem among editors as well as encouraging editors to scrutinize the tagged sources. HLHJ ( talk) 04:42, 27 January 2019 (UTC)
Marking as Not done per WP:CONTEXTBOT Kadane ( talk) 17:23, 15 March 2019 (UTC)
I'm thinking of making a bot that creates and/or updates the {{ Weather box}} template based on climate data from BOM for Australian articles - with the possibility on expanding to other countries.
High level pseudocode
This seems well suited to automation, especially since with climate change many of these weather boxes should change over time (assuming the longevity of Wikipedia).
This would be my first bot - and I'm not familiar with the specifics of WikiBots (yet). Do you think this task is best suited to a bot or a tool? I assume most bots start off as tools? Can you recommend a next step here?
"well sized" - measured by either quality grading (C class or above?) or number of bytes (>4000?). — Preceding unsigned comment added by Spacepine ( talk • contribs) 01:45, 11 March 2019 (UTC)
Not done - OTRS ticket needs to be filed by BOM granting permission to use copyrighted materials before bot can scrape the website. Once this is complete please open a new request if you aren't scripting the bot yourself. Kadane ( talk) 17:35, 16 March 2019 (UTC)
This will affect very few pages in WP; mostly those in any articles about the company or the games it makes.
As follows: links that begin http://www.mortalonline.com (if still live) are to be found under https://www.starvault.se; furthermore, regarding Mortal Online's official forums and its post URLs, http://www.mortalonline.com/forums/threads/<words-in-title>.<specificnumber> is no longer the current URL pattern; it is (at this time)
https://www.starvault.se/mortalforums/threads/<words-in-title>.<specificnumber>
I think a bot could fix any of the old-form URLs into ones that could work. Nlaylah ( talk) 22:58, 12 March 2019 (UTC)
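A sketch of the two rewrites described above (forum URLs first, then the plain domain swap):

```python
import re

def fix_mortalonline_url(url):
    """Sketch: apply the forum-specific pattern change first, then the
    general domain change, per the request above."""
    url = re.sub(r'^https?://www\.mortalonline\.com/forums/',
                 'https://www.starvault.se/mortalforums/', url)
    return re.sub(r'^https?://www\.mortalonline\.com',
                  'https://www.starvault.se', url)
```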
Per this tfd, {{rnd}}, {{round}} and {{decimals}} are all to be merged. Would be great to get a bot to convert the necessary transclusions (more than 10,000). Please {{ping|zackmann08}} if you have any questions! -- Zackmann ( Talk to me/ What I been doing) 16:51, 11 March 2019 (UTC)
1. Convert {{round}} and {{decimals}} to {{rnd}} in the wikitext (about 22,000 cases), then redirect {{round}} and {{decimals}} to {{rnd}}. Examine all the arguments available for the present {{round}} template and determine how to translate those to the arguments available in the {{rnd}} template (same with {{decimals}}), then make the conversion in wikitext.
2. Modify {{rnd}} so that it seamlessly supports the current arguments available for {{round}} and {{decimals}}. Whether this is even possible requires some investigation of the {{rnd}} source, as there might be argument-name conflicts. If it is possible, it is a simple matter of redirecting {{round}} and/or {{decimals}} to {{rnd}}.
{{rnd}} is the target merge template because it has 261,105 use cases (compared to 15k and 8k for the others) and it has the most options available. Once everything is merged into {{rnd}}, {{round}} can redirect to it and the template docs can be changed to reflect the new name {{round}} going forward. It wouldn't be required to rename the 261,105 legacy instances of {{rnd}} to {{round}}. -- Green C 17:07, 12 March 2019 (UTC)
Okay. Coding... Kadane ( talk) 20:42, 12 March 2019 (UTC)
@ DannyS712: Oops. I have already programmed it and am ready to file the BRFA. Here is the source. Didn't mean to step on toes. Let me know what you want to do. Kadane ( talk) 22:16, 12 March 2019 (UTC)
BRFA filed Kadane ( talk) 23:20, 12 March 2019 (UTC)
I've withdrawn my BRFA. All of the edits were completed within the trial. The template is ready to be merged. @ Zackmann08: Task is Done Kadane ( talk) 08:05, 17 March 2019 (UTC)
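For editors curious what the rename step of such a conversion looks like, here is a minimal, hypothetical sketch (not the code from the BRFA above); as GreenC notes, the real task must first map each template's arguments onto {{rnd}}'s, which this sketch deliberately leaves out.

```python
import re

# Rename {{round}} / {{decimals}} invocations to {{rnd}}, leaving the
# arguments untouched. Argument translation (the hard part discussed above)
# is out of scope here.
RENAME = re.compile(r"\{\{\s*(?:[Rr]ound|[Dd]ecimals)\s*(\||\}\})")

def merge_templates(wikitext: str) -> str:
    return RENAME.sub(r"{{rnd\1", wikitext)
```

The trailing capture group ensures only exact template names match, so a template like {{rounded}} is left alone.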
Hi, it would be wonderful if we had a bot that looked for uses of a template called {{ remindme}} or something similar (with a time parameter, such as 12 hours, 1 year, etc. etc.) and duly dropped a message on your own talk page at the designated time with a link to the page on which you put the remindme tag. It would only send such reminders to the person who posted the edit containing the template in the first place. Kind of like the functionality of such bots on reddit, I guess. Fish+ Karate 13:11, 20 November 2018 (UTC)
OK, so the basic use is as follows:
User:Alice places a template (to be created, let's call it {{remind me}}) inside a thread of which they wish to be reminded. The user specifies the date/time at which the reminder should be given as an argument of the template (either as "on Monday 7th" or "in three days" - syntax to be discussed later). At the given date, a bot "notifies" Alice.
On a policy level, I see a few questions:
Depending on the choice for each of those, this will change the amount of technical work needed, but as far as I can tell, those questions entirely define the next steps (coding/testing/approval request etc.). Please discuss here if I missed something, but below to answer the questions. Tigraan Click here to contact me 13:36, 25 November 2018 (UTC)
I made a separate section for this because I am almost sure of the questions that need asking but less sure of the answers they should get. What follows is my $0.02 for each:
(Ping: Fish and karate) Tigraan Click here to contact me 13:36, 25 November 2018 (UTC)
If your task could be controversial (e.g. (...) most bots posting messages on user talk pages), seek consensus for the task in the appropriate forums. (...) Link to this discussion from your request for approval. Again, VPP is the catch-all, but that's because I have no other idea. Maybe a link from the talk page of WP:PING as well, since the functionality is closely related.
We please need a bot to remove (and NOT subst; the source is non-RS) all entries of {{ TheFinalBall}} following the consensus at Wikipedia:Templates for discussion/Log/2019 February 13#Template:TheFinalBall, and save @ Zackmann08: from removing it manually (as they have been doing). Giant Snowman 20:54, 13 March 2019 (UTC)
Hey, based on this TfD the template {{ PBB Controls}} should be removed. It currently has 4395 transclusions, so a bit much for AWB. Could anyone help with a bot? Thanks. -- Gonnym ( talk) 08:24, 20 March 2019 (UTC)
This is Done. BRFA has been approved Kadane ( talk) 15:49, 20 March 2019 (UTC)
Sorry ~Jer ( Talk • Contributions) 12:17, 22 March 2019 (UTC)
This long standing project Wikipedia:WikiProject_Abandoned_Drafts/Stale_drafts would benefit from a bot to do three things we now do manually on the 40 or so remaining numbered subpages. 1. Remove red links to deleted articles. 2. Remove links to pages that are redirects (page has been moved to mainspace, draftspace, or redirected to an article). 3. Remove links to pages that are now completely blank or have a userspace page blanked template. Once one of these three cases occurs the project no longer cares about the name of the page or the link to it. If a bot could sweep through the pages daily or even weekly this would save a ton of time manually removing red links and checking and removing links that are redirects. Even unlinking pages would make manually removing them from the lists much faster. If this is not clear look at the history of any of the numbered list pages to see the process. Legacypac ( talk) 01:35, 24 February 2019 (UTC)
Wikipedia:Categories for discussion is looking for a new bot to process category deletions, mergers, and moves. User:Cydebot currently processes the main /Working page, but there is a growing list of issues that call out for a replacement bot:
At a minimum, the new bot should process the main /Working page:
Ideally, it would also do some or all of the following:
* REDIRECT [[:Category:Foo]] to [[:Category:Bar]]
.Your assistance would earn the gratitude of some very tired and increasingly frustrated CfD'ers.
Thank you, -- Black Falcon ( talk) 20:48, 18 February 2019 (UTC)
The task is rather simple. Find all pages with Foobar (barfoo). If they redirect to Foobar, tag those with {{ R from unnecessary disambiguation}}. This should be case-sensitive (e.g. Foobar (barfoo) → FOOBAR should be left alone).
Could probably be done with AWB to add/streamline other redirect tags if they exist. Headbomb { t · c · p · b} 13:15, 14 November 2018 (UTC)
@ TheSandDoctor: any updates on this? Headbomb { t · c · p · b} 08:24, 13 January 2019 (UTC)
@ Headbomb: what exactly are you looking for? Do you just want someone to do a database scan, or do you want a bot to fix this? It sounds like there might be context issues with a task like this, per Adam Cuerden. Kadane ( talk) 00:53, 18 March 2019 (UTC)
\(.*(album|song|journal|magazine|publisher)\) and the like. Headbomb { t · c · p · b} 01:02, 18 March 2019 (UTC)
@ Headbomb: I couldn't sleep tonight and ran a database query to find all redirects that end with parenthesis. I have a question about how the bot would handle a few cases.
I am assuming the bot would skip the page if any {{ R from ...}} templates were present? Should the bot ignore any characters such as ", ★, or *? If a page ends in (disambiguation) should it be tagged with {{ R from disambiguation}} or {{ R from unnecessary disambiguation}}? Once I have a better idea of the criteria I will put together a script to estimate the number of articles that will be edited. Kadane ( talk) 08:40, 18 March 2019 (UTC)
@ Headbomb: - Okay, from my understanding there are two cases: Foobar (^disambiguation) -> Foobar and Foobar (disambiguation) -> Foobar. {{R from unnecessary disambiguation}} should be added to the ^disambiguation cases, and {{R to disambiguation page}} to the disambiguation cases (if the template is missing). In that case there are 207 pages that need the template {{R to disambiguation page}} and there are 55,824 pages that need {{R from unnecessary disambiguation}}. If my understanding sounds correct I am ready to go to BRFA. Looking forward to your reply. Kadane ( talk) 05:34, 19 March 2019 (UTC)
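The decision rule described above can be sketched as follows (illustrative only, not the BRFA code; the case-sensitivity requirement and template names are taken from the thread, and the real bot would also skip pages already carrying an {{R from ...}} template):

```python
import re

# Classify a redirect "Foobar (qualifier)" -> target per the two cases above.
PAREN = re.compile(r"^(.*) \(([^()]+)\)$")

def pick_template(redirect_title: str, target_title: str):
    m = PAREN.match(redirect_title)
    if not m or m.group(1) != target_title:  # case-sensitive, so FOOBAR is skipped
        return None
    if m.group(2) == "disambiguation":
        return "{{R to disambiguation page}}"
    return "{{R from unnecessary disambiguation}}"
```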
BRFA filed @ Headbomb: Kadane ( talk) 16:13, 19 March 2019 (UTC)
{{infobox cricketer}} used to have a |deliveries= parameter which was removed in 2009. There are 7000ish pages using the parameter, which makes up the overwhelming majority of the unknown parameter tracking category. I made a start on clearing them off with AWB but figured it may be quicker to get a bot to do them. There is a list of possibly affected pages if that helps, and I did a regexp replace of \|\s*deliveries\s*=.*\n (with nothing) as part of some more targeted clean-ups. Could a bot please remove the rest? Spike 'em ( talk) 14:42, 13 March 2019 (UTC)
|deliveries1= ... |deliveries4= in them. This may not be a straightforward removal (which could very well be against WP:COSMETICBOT on its own). Headbomb { t · c · p · b} 14:56, 13 March 2019 (UTC)
the "administration of the encyclopedia" are substantive, which may apply here. Spike 'em ( talk) 15:41, 13 March 2019 (UTC)
deliveries parameter with a blank string? I can do it, and will file a BRFA in the next few days as long as Headbomb has no objection to the "cosmetic"-ness of this task (they haven't replied yet) -- DannyS712 ( talk) 16:03, 13 March 2019 (UTC)
|deliveries=balls so replacing those with blank string would leave few enough for me to do by hand. Spike 'em ( talk) 16:13, 13 March 2019 (UTC)
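Spike 'em's regexp, wrapped in a minimal script for anyone wanting to reproduce the clean-up locally (a sketch, not the approved bot code):

```python
import re

# Drop whole "|deliveries=..." parameter lines; |deliveries1= .. |deliveries4=
# are not touched, because a digit rather than "=" follows the parameter name.
DELIVERIES = re.compile(r"\|\s*deliveries\s*=.*\n")

def strip_deliveries(wikitext: str) -> str:
    return DELIVERIES.sub("", wikitext)
```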
Hi bot people. I was wondering whether it might be appropriate/worthwhile/a good idea to get a bot to remove "living=yes", "living=y", "blp=yes", "blp=y", etc from the talkpages of the articles listed at Wikipedia:Database reports/Potential biographies of dead people (3). I recognize that automating such a process might result in a few errors, but I think that would be a reasonable tradeoff compared to how tedious it would be for humans to check and update all 968 articles in the list one by one. (And hopefully, for those few(?) articles where an error does occur, someone watching the article will fix it). I spot-checked a random sample of articles in the list, and for every one I checked, it would have been appropriate to remove the "living=yes", etc from the talkpage, i.e. the article had a sourced date of death. To minimize potential errors, I would suggest the bot skips any articles which cover multiple people, e.g. ones with "and" or "&" in the title and Dionne quintuplets, Clarke brothers, etc. Thoughts? DH85868993 ( talk) 12:53, 15 January 2019 (UTC)
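The requested text transformation can be sketched as below, assuming the flags appear as ordinary banner parameters (a template-aware parser would be safer than a regex in practice, and the skip rules above would sit around this):

```python
import re

# Remove "|living=yes/y" and "|blp=yes/y" from talk-page banner wikitext.
# The lookahead keeps the following "|" or "}}" intact.
FLAG = re.compile(r"\|\s*(?:living|blp)\s*=\s*(?:yes|y)\s*(?=\||\}\})",
                  re.IGNORECASE)

def strip_living_flags(wikitext: str) -> str:
    return FLAG.sub("", wikitext)
```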
The Austrian metadata templates storing population figures were deleted with a consensus that they should be replaced by WikiData figures. I set up a new template for that, Template:Austria population Wikidata, and now it should be implemented (as in this diff) so that the updated figures can be displayed. Hopefully a bot can help with that.-- eh bien mon prince ( talk) 19:35, 9 March 2019 (UTC)
BRFA filed -- Green C 14:42, 12 March 2019 (UTC)
A zoomable, labeled location map can be included in the articles about German districts by adding {{Germany district OSM map|parent_subdivision=QXXXX}} to the 'map' parameter of {{ Infobox District DE}}, where QXXXX is the Wikidata ID of the German state the district belongs to. A live example of the template can be see in the Nordfriesland (district) article.-- eh bien mon prince ( talk) 08:24, 12 March 2019 (UTC)
wikibase_item), and if it has no parameter, add both the parameter and the template, which is the tricky case. I'll let you know once I've filed a BRFA -- DannyS712 ( talk) 23:30, 15 March 2019 (UTC)
Hey, based on this TfD the template {{ PBB Summary}} should be removed, and based on this extended discussion the removal should keep the |summary_text= text in the article. I'm not sure if |section_title= is used. Another editor is currently manually doing {{ PBB Further reading}}, though a bot operation would be much faster. If that is included also, then just remove the outer template code, leaving the actual citation templates in the article. Could anyone help with a bot? Thanks. — Preceding unsigned comment added by Gonnym ( talk • contribs) 20:08, 20 March 2019 (UTC)
Hi, I might be able to have a bash myself, but could someone help create code for a Python bot that would get a list of users not on Wikipedia:WikiProject Apple Inc./Subscribe that have edited the WikiProject's articles at least 10 times in the last 90 days or have added our userbox to their userpage, and add them to a mass message list at Wikipedia:WikiProject Apple Inc./To Welcome (this should be cleaned at each run). It should also remove users who haven't edited in the last 5 years from the first mailing list. The bot should then create the mass message request code to send out the newsletter and a welcome message to the lists for me to submit. I'd like to be able to run the bot myself. (Pinging User: Smuckola) Thanks, RhinosF1 (chat) (status) (contribs) 20:32, 20 March 2019 (UTC)
RhinosF1 WP:VP is a good place to start. I am not the one that will judge consensus. That will be up to someone in the bot approvals group, but that is where most go to gain consensus for a bot task. Kadane ( talk) 21:33, 22 March 2019 (UTC)
I want a bot that can belong to a user, specifically me because I am making this request. Since I’m making this request, the name of my bot would be “MetricSupporter89Bot”. — Preceding unsigned comment added by MetricSupporter89 ( talk • contribs) 23:17, 21 March 2019 (UTC)
Some more things I wanted to add: it would contribute stub articles I made that would take too long for me to contribute, edit my user page to be like other users' pages, etc. Metric Supporter 89 ( talk) 23:26, 21 March 2019 (UTC)
This bot would edit articles that would need citations & that would need checking over for mistakes in the information, such as "Earth is the only planet that has life" where it should be "Earth is the Only known planet to have life"-- Jeriqui123 ~~ Talk 12:04, 25 March 2019 (UTC)
I would like to see a neural network capable of marking pages for speedy deletion.
Here is a list of criteria I believe the bot could handle:
Thanks InvalidOS talk 18:09, 27 March 2019 (UTC)
P1 and P2 are rare. While trying to upgrade P2 some people are saying Admins can't judge P2 as it is, so how could a bot? Legacypac ( talk) 21:50, 27 March 2019 (UTC)
There are several links to World Matchplay (darts)#Previous_incarnation and pipe links. But this section no longer exists since it has been expanded into its own page MFI World Matchplay. Can we get a bot to update the links to the new page? It exists on (probably) hundreds of dart player pages and more. DLManiac ( talk) 16:32, 29 March 2019 (UTC)
World( |_)Matchplay( |_)(darts)#Previous( |_)incarnation and didn't find any more. -- DannyS712 ( talk) 16:57, 29 March 2019 (UTC)
In order to reduce the load on the Signpost staff, it would really be nice if we could have a bot that would synchronize drafts with the newsroom.
If something exists at Wikipedia:Wikipedia Signpost/Next issue/Foobar, make a correspondence between the parameters of {{ Signpost draft}} present on the draft page and those present at Wikipedia:Wikipedia Signpost/Newsroom#Article status.
Specifically
Draft parameters → Newsroom parameters
|title=foobar → |Has-title=yes
|blurb=foobar → |Has-blurb=yes
|Ready-for-Copyedit=foobar → |Ready-for-Copyedit=foobar
|Copyedit-done=foobar → |Copyedit-done=foobar
|Final-approval=foobar → |Final-approval=foobar
The second thing the bot should do is, if an irregular column is found to exist at Wikipedia:Wikipedia Signpost/Next issue/Foobar, copy the corresponding item from ...Newsroom#Irregular columns and paste it at the bottom of ...Newsroom#Article status. And then keep it synchronized like the other things in ...Newsroom#Article status.
The bot could review the relevant pages every 15 minutes or so (or whatever time interval people think is best). Headbomb { t · c · p · b} 18:09, 2 April 2019 (UTC)
lua magic is to have the page update automatically without the need for a bot. {{3x|p}}ery ( talk) 21:56, 2 April 2019 (UTC)
Lua magic implemented, this is no longer needed. I'll be archiving this. Headbomb { t · c · p · b} 23:56, 2 April 2019 (UTC)
To revert vandalism and disruptive behaviour and help users. — Preceding unsigned comment added by Hurricane Bunter ( talk • contribs) 12:24, 5 April 2019 (UTC)
/info/en/?search=Wikipedia:Database_reports/Unused_file_redirects
Contains a small number of images that were renamed, but where article links were not updated.
Although not essential, updating image links helps avoid conflicts with Commons, and of the 'wrong image' being displayed in articles.
Would it be possible for a bot to do this kind of repetitive check, update, refresh cycle, until there are no image links to redirects in the File: namespace from articles or other important pages? ShakespeareFan00 ( talk) 11:49, 16 March 2019 (UTC)
A bot that reports broken ref tags to a user, so he/she can fix it. — Preceding unsigned comment added by Darkwolfz ( talk • contribs) 04:48, 6 February 2019 (UTC)
Sure
DannyS712: For example, if an article has a <ref> and the editors maybe used the source editor, and that caused a backspace or enter inside the ref tag, which will make it broken; or editors giving wrong parameters - for example, I found an article today where they entered the url correctly, but instead of giving the website name, they added the url. So if there's a bot which can detect broken ref tags or hyperlinks, and report it to me, I can fix them. Darkwolfz ( talk) 05:02, 6 February 2019 (UTC)
DannyS712: /info/en/?search=Formby_Hall In its recent history, I fixed an error like that, and maybe we should scan for sources that show in red between <ref>...</ref>, or a missing opening <ref> or closing </ref>, also ones with the reference title missing.
Yes it helps a bit, but is it possible to find articles which don't belong to the category, as in a new error made by someone accidentally? And filter missing <ref> or </ref> tags?
The {{sfn}} template isn't Harvard-style references, it's Shortened footnotes. Harvard-style references are parenthetical, as used on pages like Actuary. However, the two methods have a number of common features, primarily the separation of page number information from the long-form citation, with the association between the two being by means of a link formed from up to four surnames and a year. From my reading of the above, it is these links that need to be tested; and we have a script to do that, see User:Ucucha/HarvErrors. -- Redrose64 🌹 ( talk) 20:38, 11 February 2019 (UTC)
Could someone generate a list of values used for Template:Tooltip (the redirect, not Template:Abbr) in a table form, so it would be easier to see what needs to be converted to {{ abbr}} per the result of this discussion? -- Gonnym ( talk) 14:20, 15 February 2019 (UTC)
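For the tooltip request, a minimal extraction sketch (regex-based, so it only handles simple two-parameter uses; nested templates would need a real wikitext parser):

```python
import re

# Collect (text, hover) pairs from simple {{tooltip|text|hover}} uses so
# they can be listed in a table for review.
TOOLTIP = re.compile(r"\{\{\s*[Tt]ooltip\s*\|([^|{}]*)\|([^|{}]*)\}\}")

def tooltip_values(wikitext: str):
    return TOOLTIP.findall(wikitext)
```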
Hi. MOS:ACCESS#Text / MOS:FONTSIZE are clear. We are to "avoid using smaller font sizes in elements that already use a smaller font size, such as infoboxes, navboxes and reference sections." However, many infoboxes use {{ small}} or the html code, especially around degrees earned ( here's one example I corrected yesterday). I used AWB to remove small font from many U.S. politician infoboxes of presidents, senators, and governors, but there are so many more articles that have them. Here's an example for a TV station. I've noticed many movies and TV shows have small text in the infobox as well. Since I cannot calculate how many articles violate this particular rule of MOS, I would like someone to automate a bot to remove small text from infoboxes of all kinds. – Muboshgu ( talk) 22:04, 20 December 2018 (UTC)
<small>...</small> tags within infoboxes, along with small tags wrapping multiple lines, both of which cause Linter errors, so it may be possible to get a bot approved to remove tags as long as fixing Linter errors is in the bot's scope. I welcome corrections on the four things I got wrong in these four sentences. – Jonesey95 ( talk) 23:58, 20 December 2018 (UTC)
@ Jonesey95 and Muboshgu: Hello. Although the 85% font-size is defined, the computed value of the font-size is below 11.9px (it is 10.4667px). This is because font-size percentages work based on the parent container, not the document (see 1 under percentages). In this case the infobox has already decreased the font-size to 88% of the document, the font-size computed from the {{ small}} tag will be 74.8% smaller than the rest of the document (0.88 * 0.85 = 0.748). This is the case in Firefox, Chrome, Edge (10.4px), Opera and Internet Explorer. This behaviour is the standard and so will be experienced in all browsers. Dreamy Jazz 🎷 talk to me | my contributions 10:46, 23 December 2018 (UTC)
<small>...</small> and {{small}} (and its size-reducing siblings) from infoboxes, both in Template space and in article space. – Jonesey95 ( talk) 14:31, 23 December 2018 (UTC)
The Wikipedia:Good articles/mismatches page details some conflicts with good articles and usually indicates a mistake of some sort that needs to be sorted out. Category:Good articles means that an article has the green spot that indicates it is classified as good, while Category:Wikipedia good articles contains articles which have undergone a review. So the "In Category:Good articles but not Category:Wikipedia good articles" section indicates that a good article symbol may be present on an article that has not actually undergone a review.
Wikipedia:Good articles/all is a list of all good articles and is manually updated. The last two headings usually indicate articles that have not been added after passing a review or removed after being delisted.
This page was originally created by JJMC89 a year ago using AWB after I requested it. At the time it contained thousands of mismatches [11]. We have just resolved all those, mainly through the efforts of DepressedPer. I was hoping there could be a bot that would update the page periodically so we can keep on top of any further mismatches. I have tried running it myself through AWB, but the number of articles is too large to do in one hit. There was also an issue that articles that had been moved would show up as a mismatch if the name was different at the Wikipedia:Good articles/all page. Maybe there is a better workaround for this, the last time I just renamed the articles at the GA list but that was quite time consuming. Regards AIRcorn (talk) 04:47, 13 April 2019 (UTC)
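The comparison itself is just a set difference once the two category membership lists have been fetched (however that is done, e.g. via the API or AWB); a minimal sketch:

```python
# Report titles that sit in one good-article category but not the other.
# Inputs are iterables of page titles.
def mismatches(good_articles, wikipedia_good_articles):
    badge_only = sorted(set(good_articles) - set(wikipedia_good_articles))
    review_only = sorted(set(wikipedia_good_articles) - set(good_articles))
    return badge_only, review_only
```

Moved pages would still show as mismatches under different names, as noted above; resolving redirects before comparing would be the fix for that.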
See background (pardon the pun). The idea is to change the css element background to background-color (and other similar attributes) in sortable tables (example). Headbomb { t · c · p · b} 19:14, 5 March 2019 (UTC)
background is shorthand for a number of attributes. Otherwise seems like a good idea. -- Izno ( talk) 22:44, 5 March 2019 (UTC)
Changing background to background-style would break all existing uses, because background-style is not a defined property. See CSS Backgrounds and Borders Module Level 3 for examples of valid property names. -- Redrose64 🌹 ( talk) 13:03, 7 March 2019 (UTC)
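To address Izno's shorthand concern, a conservative sketch that only rewrites declarations whose value is a bare colour token, leaving genuine shorthand uses (images, positions, repeats) untouched; this is hypothetical, not an approved bot task:

```python
import re

# Rewrite "background:<colour>" to "background-color:<colour>" only when the
# value is a single hex colour or colour keyword followed by ; " ' or }.
SIMPLE = re.compile(r"background\s*:\s*(#[0-9A-Fa-f]{3,8}|[A-Za-z]+)\s*(?=[;\"'}])")

def fix_background(style_text: str) -> str:
    return SIMPLE.sub(r"background-color:\1", style_text)
```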
Stop Predatory Journals maintains a list of hijacked journals. Could someone search wikipedia for the presence of hijacked URLs and produce a daily/weekly/whateverly report? Maybe have a WP:WCW task for it too? Headbomb { t · c · p · b} 00:09, 4 February 2019 (UTC)
Extended content:
https://scholarlyoa.com/other-pages/hijacked-journals/u
http://www.bnas.org/
http://acjournal.in/journal-of-renewable-natural-resources-bhutan
@ Headbomb: can post the report on a regular basis if there is a page. Script takes less than 20 seconds to complete so not expensive on resources. -- Green C 17:02, 4 February 2019 (UTC)
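The scan amounts to substring-matching a domain blocklist against page text; a toy version (the domains below are taken from the sample output above, not a complete list, and GreenC's actual script may work differently):

```python
# Flag text containing any domain from the hijacked-journal list.
HIJACKED_DOMAINS = ["bnas.org", "acjournal.in"]

def find_hijacked(text: str):
    return [d for d in HIJACKED_DOMAINS if d in text]
```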
This would be useful for New Page Patrol: it would save us sending multiple messages about an editor's creations (which can cause upset) and show clearly what the problem is and what articles have been identified as needing improvements. This has been requested more than once of me by an editor and I've had to find and list them manually. It would also benefit other editors - I would love to look over which of my creations have tags and improve them. This would give creators (if they want to) the chance to make improvements and bring down the backlogs. Is it feasible? Thanks for looking into this, Boleyn ( talk) 08:43, 9 March 2019 (UTC)
There was a request to move categories with "eSports" to "esports" per WP:C2D at WT:VG, but that list is sizable. Is there someone here who can take care of the listing and tagging? (Avoid the WikiProject assessment categories.) -- Izno ( talk) 18:04, 31 March 2019 (UTC)
I imagine it's fairly confusing for IP users to have to scroll through lots of old warnings from previous users of their IP before getting to their actual message. We have Template:Old IP warnings top (and its partner), but it's rarely used—thoughts on writing a bot to automatically apply it to everything more than a yearish ago? Gaelan 💬 ✏️ 16:21, 10 January 2019 (UTC)
It seems like there is community support to implement this from the discussions. Should be open another discussion to iron out the implementation details? If there is consensus to do this task with a bot, I am willing to do it. Kadane ( talk) 05:45, 15 March 2019 (UTC)
Data to be taken from Wikidata to give the year of publication of a taxon and create "Category:Taxa described in ()" within the (English) Wikipedia taxon entry, if a Wikipedia entry has been created. MargaretRDonald ( talk) 22:55, 22 January 2019 (UTC)
The bot would use the Wikidata taxon entry to find the author of a taxon, and then use it again to find the corresponding author article to find the appropriate author category. (This will not always work - but will work in a large number of cases. Thus, the English article for "Edward Rudge" corresponds to the category "Category:Taxa named by Edward Rudge", and the simple strategy outlined here would work for Edward Rudge, Stephen Hopper and ....) The category created would be an entry in the article. MargaretRDonald ( talk) 23:08, 22 January 2019 (UTC)
the category created would be an entry in the article, and do you want "described by" or "named by"? -- DannyS712 ( talk) 06:05, 1 March 2019 (UTC)
|authority parameter in {{Speciesbox}} and its ilk? That would make this a lot simpler... -- DannyS712 ( talk) 06:51, 1 March 2019 (UTC)
|authority in {{Speciesbox}}. The year is not. It is found associated with the basionym in the Wikidata entry (an entry which is often missing from Wikidata, but if it exists that would be the safest place to take it from). Most articles show the author of the basionym (the name in the brackets), but have no taxonomy section, and even when they do it is unstructured text... So probably the year of the description is in the too-hard basket. (But as I indicated, I find the year category somewhat less important..) MargaretRDonald ( talk) 07:07, 1 March 2019 (UTC)
And if we were to do this the result would be that we would get, e.g., a list of accepted taxa named by John Lindley, and not a whole ragtag list of plants where the assigning of the initial genus is now considered incorrect. In achieving that we could be a far better resource than IPNI. MargaretRDonald ( talk) 06:57, 1 March 2019 (UTC)
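The category-building step of the proposal is mechanical once the author name (and, where available, the year) has been resolved from Wikidata; a sketch using the naming patterns from this thread (fetching the Wikidata claims themselves is left out):

```python
# Build the categories described above from values resolved via Wikidata.
# The "Taxa named by X" / "Taxa described in Y" patterns are taken from the
# discussion; the hard part (resolving author and basionym year) is omitted.
def author_category(author_name: str) -> str:
    return f"[[Category:Taxa named by {author_name}]]"

def year_category(year: int) -> str:
    return f"[[Category:Taxa described in {year}]]"
```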
About 6 months ago Batting average was split into a short parent article about the concept of batting average across sports and 2 child articles Batting average (cricket) and Batting average (baseball) dealing with the specifics of the metric in the individual sports. Articles related to each sport still point to the parent article but should generally point to the sport specific one. After some searches using AWB, I found just over 15k links to Batting average. Using a recursive category search, I found that Category:Cricketers, Category:Seasons in cricket and Category:Years in cricket account for about 3k links and Category:Baseball players, Category:Seasons in baseball, Category:Years in baseball about 12k. There are about 300 remaining links in none of these categories, I am working through those manually with AWB. As an aside, a lot of the baseball players have a link in both an infobox and in article text. I had the cricketer infobox changed already, as that had a hardcoded link to the parent article.
The plan would be to replace
[[Batting average]] with [[Batting average (cricket)|]]
[[Batting average|foo]] with [[Batting average (cricket)|foo]]
in the first set of categories, and
[[Batting average]] with [[Batting average (baseball)|]]
[[Batting average|foo]] with [[Batting average (baseball)|foo]]
in the second set. A lot of the non-piped links use lower-case, so I don't know if that needs another set of rules. I'm also assuming that the pipe trick works in bot edits, otherwise the replacement text will need to be slightly expanded. I can provide the lists I created of the links to the article, of the categories and then intersections if this helps. Spike 'em ( talk) 20:27, 1 April 2019 (UTC)
"pipe trick works in bot edits" It does outside of references and other tags. -- Izno ( talk) 20:42, 1 April 2019 (UTC)
[[Batting average]] with [[Batting average (cricket)|Batting average]]
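The two replacement rules can be sketched as follows (cricket shown; the baseball category set uses the other disambiguator). This relies on the pipe trick for the unpiped case, which per Izno works outside references; it is a sketch, not the bot's code:

```python
import re

# Retarget [[Batting average]] links to the sport-specific article.
# Piped links keep their label; unpiped links use the pipe trick.
def retarget(wikitext: str, sport: str = "cricket") -> str:
    wikitext = re.sub(r"\[\[Batting average\]\]",
                      f"[[Batting average ({sport})|]]", wikitext)
    wikitext = re.sub(r"\[\[Batting average\|([^\]]+)\]\]",
                      rf"[[Batting average ({sport})|\1]]", wikitext)
    return wikitext
```

The lower-case [[batting average]] links mentioned above would need an extra pair of rules.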
Request to update links to "List of Medal of Honor recipients in non-combat incidents" in the 185 recipient articles that still use the old main article's title. — Preceding unsigned comment added by XXzoonamiXX ( talk • contribs) 04:02, 14 April 2019 (UTC)
The Church of Jesus Christ of Latter-Day Saints recently gave an announcement about the correct name of the church [1]. Because of this announcement, the church site has been changed from lds.org to ChurchofJesusChrist.org, and the newsroom from mormonnewsroom.com to newsroom.ChurchofJesusChrist.org. Most wiki pages still have the old site linked. I need a bot to go through and change all the links. The only thing to be changed is the domain; the rest of the URLs are the same.
Thanks, The 2nd Red Guy ( talk) 14:50, 23 April 2019 (UTC)
References
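The request above is a pure domain swap; a sketch using the mappings from the post (the regex anchors on "//" so unrelated domains that merely end in "lds.org" are not caught; link-archiving bots would do this more carefully):

```python
import re

# Swap the old church domains for the new ones, keeping URL paths intact.
def update_links(text: str) -> str:
    text = re.sub(r"//(?:www\.)?lds\.org",
                  "//www.churchofjesuschrist.org", text)
    text = re.sub(r"//(?:www\.)?mormonnewsroom\.com",
                  "//newsroom.churchofjesuschrist.org", text)
    return text
```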
I've noticed that a lot of articles are not in compliance with MOS:SURNAME, especially in Category:Living people. I've manually changed a few pages, but as a programmer, I think this could be greatly automated. Any repeats of the full name, or the first name, beyond the title, first sentence, and infobox should not be allowed and should be replaced with the last name. I can help out in creating a bot that can accomplish this. InnovativeInventor ( talk) 01:21, 21 March 2019 (UTC)
Just bumped into this: Wikipedia_talk:Manual_of_Style/Biography#Second_mention_of_forenames, so there should be detection of other people with the same last name. Additionally, this bot should intend to provide support for humans, not to automate the whole thing (as context is important). InnovativeInventor ( talk) 03:57, 21 March 2019 (UTC)
When an AfD discussion ends with no discussion, WP:NOQUORUM indicates that the closing admin should treat the article as one would treat an expired PROD. One mundane part of this process is specifically checking whether the article is eligible for PROD ("the page is not a redirect, never previously proposed for deletion, never undeleted, and never subject to a deletion discussion"). It would be really nice, when an AfD listing is reaching full term (seven days) with no discussion, if a bot could check the subject's page history and leave a comment on, say, the beginning of the listing's seventh day as to whether the article is eligible for PROD (a simple yes/no). If impossible to check each aspect of PROD eligibility, it would at least be helpful to know whether the article has been proposed for deletion before, rather than having to scour the page history. A bot here could help the closing admin more easily determine whether to relist or soft delete. More discussion here. czar 21:12, 23 March 2019 (UTC)
Most articles on settlements in India (eg. Bambolim) still use 2001 census data. They need to be updated to use the 2011 census data. SD0001 ( talk) 18:10, 29 March 2019 (UTC)
Thousands of articles about music artists, albums and songs reference the source in the body text (example: OnePointFive). Such references belong in a <ref> block at the end of the page and not in the body text. Most of these references follow a common pattern, so I hope this kind of edit can be made by a bot.
I suggest making a bulk replacement from
==Track listing==
Credits adapted from [[Tidal (service)|Tidal]].<ref name="Tidal">{{cite web|url=https://listen.tidal.com/album/93301143|title=ONEPOINTFIVE / Aminé on TIDAL|publisher=Tidal|accessdate=August 15, 2018}}</ref>
to
==Track listing<ref name="Tidal">{{cite web|url=https://listen.tidal.com/album/93301143|title=ONEPOINTFIVE / Aminé on TIDAL|publisher=Tidal|accessdate=August 15, 2018}}</ref>==
Different sources: Tidal (service), "the album notes", "the album sleeve", "the liner notes of XXX". Different heading names, including "Track listing", "Personnel", "Credits and personnel". Variants: "Credits adapted from XXX", "All credits adapted from XXX", "All personnel credits adapted from XXX".
Does this sound feasible/sensible? -- C960657 ( talk) 17:14, 28 February 2019 (UTC)
"Citations should not be placed within, or on the same line as, section headings." ( WP:CITEFOOT) — JJMC89 ( T· C) 03:38, 1 March 2019 (UTC)
"Section headings should: ... Not contain links, especially where only part of a heading is linked." Unless you use pure plain-text parenthetical referencing, refs always generate a link. -- Redrose64 🌹 ( talk) 12:41, 1 March 2019 (UTC)
Adequately sourced population figures for all Spanish municipalities can be deployed by using {{ Spain metadata Wikidata}}, as was recently done for Austria. See this diff for an example of the change.-- eh bien mon prince ( talk) 11:35, 11 April 2019 (UTC)
Category:Pages using deprecated image syntax has over 89k pages listed, making manually fixing these not possible. Could a bot be created to handle this? -- Gonnym ( talk) 06:18, 12 April 2019 (UTC)
The infoboxes use the
{{#invoke:InfoboxImage|InfoboxImage|image={{{image|}}}|size={{{image_size|}}}|sizedefault=frameless|upright={{{image_upright|1}}}|alt={{{alt|}}}}}
style, which passes to the |image= field an image in the format |image=File:Example.jpg. However, as per usual when dealing with templates, the exact parameters used and their names will differ between the templates. So for example:
{{#invoke:InfoboxImage|InfoboxImage|image={{{image|}}}|size={{{image_size|}}}|sizedefault=frameless|upright={{{image_upright|1.13}}}<!-- 1.13 is the most common size used in TV articles. -->|alt={{{image_alt|{{{alt|}}}}}}}}
{{#invoke:InfoboxImage|InfoboxImage|image={{{image|}}}|size={{{image_size|{{{imagesize|}}}}}}|sizedefault=frameless|upright={{{image_upright|1}}}|alt={{{image_alt|{{{alt|}}}}}}}}
{{#invoke:InfoboxImage|InfoboxImage|image={{{image|}}}|size={{{image_size|}}}|sizedefault=frameless|alt={{{alt|}}}}}
Also, an image isn't the only value that can be passed in |image=; it is sometimes combined with an image size and caption, which will need to be extracted and passed through the correct parameters. -- Gonnym ( talk) 06:37, 12 April 2019 (UTC)
For example, the deprecated syntax is |image=[[File:West Wing S3 DVD.jpg|250px]]. Instead it should be |image=West Wing S3 DVD.jpg and |image_size=250px (it can also be without "px" as the module adds that automatically). Another example: |image=[[File:Red Dwarf X logo.jpg|alt=Logo for the tenth series of ''Red Dwarf''|250px]]. Instead it should be |image=Red Dwarf X logo.jpg, |image_size=250px and |image_alt=Logo for the tenth series of Red Dwarf. Multiple images use the |image#_size= parameter; the number "#" needs to match the image# parameter, e.g. |image2= gets |image2_size=. Drop me a line if this is confusing; I feel like it's a lot to explain in a short paragraph.
The |image_size=250px (or equivalent) may simply be omitted, because most infoboxes are set up to use a default size where none has been set ( example). In my opinion, falling back to the default is preferable since it gives a consistent look between articles. -- Redrose64 🌹 ( talk) 12:46, 12 April 2019 (UTC)
That template has a default for |flag_image=, a 300px default for |map_image#=, and no default for |image#=, which then falls back to frameless (which I'm not sure what it is). If there is a correct size that the template should use, then the template should probably be edited to handle it. -- Gonnym ( talk) 14:02, 12 April 2019 (UTC)
When a page uses the |image1=[[File:Soleiman Eskandari.jpg|150x150px]] format it puts the page into Category:Pages using deprecated image syntax, because the parameter is intended for a bare filename and nothing else, as in |image1=Soleiman Eskandari.jpg. -- Redrose64 🌹 ( talk) 14:05, 12 April 2019 (UTC)
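Mechanically, a fix-up bot for this category would mostly be splitting the bracketed |image= value into the separate parameters discussed above. A rough Python sketch, with the parameter names taken from the examples in this thread; caption handling is simplified to "any leftover positional parameter":

```python
import re

def split_image_value(value):
    """Split a deprecated |image=[[File:...|...]] value into bare parameters."""
    m = re.fullmatch(r"\[\[(?:File|Image):([^|\]]+)(?:\|([^\]]*))?\]\]",
                     value.strip())
    if not m:
        return None  # already a bare filename, or something unexpected
    result = {"image": m.group(1).strip()}
    for part in (m.group(2) or "").split("|"):
        part = part.strip()
        if not part:
            continue
        if part.startswith("alt="):
            result["image_alt"] = part[4:]
        elif re.fullmatch(r"\d+(x\d+)?px", part):
            result["image_size"] = part
        else:
            result["caption"] = part  # simplification: assume it's a caption
    return result
```

A real run would still need per-template mapping of the output keys onto each infobox's actual parameter names, as noted above.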
All of our articles and categories on transport "accidents and incidents" use that phrasing, as opposed to "incidents and accidents" (which is a line from " You Can Call Me Al"). However, there are a lot of section heads that are "== Incidents and accidents". I would like a bot to search articles for the phrasing "== Incidents and accidents ==" and replace it with "== Accidents and incidents ==". Can that be done?-- Mike Selinker ( talk) 19:13, 20 April 2019 (UTC)
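The substitution itself is a one-line regex; a Python sketch that tolerates optional spaces inside the == markers and leaves deeper heading levels alone:

```python
import re

def fix_heading(wikitext):
    # Swap the heading wording without touching anything else on the page.
    return re.sub(r"^==\s*Incidents and accidents\s*==\s*$",
                  "== Accidents and incidents ==",
                  wikitext, flags=re.MULTILINE)
```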
I recently started patrolling newly created redirects and have realized that certain common types of redirects could be approved through an automated process where a bot would just have to parse the target article and carry out some trivial string manipulation to determine if the redirect is appropriate. A working list of such uncontroversial redirects:
Potentially more controversial tasks could include automated RfD nomination for clearly unnecessary redirects, such as redirects with specific patterns of incorrect spacing. I also think it would be a good idea to include an attack filter, so that if a redirect contains profanity or other potentially attackish content the bot will not automatically patrol them even if it appears to meet the above criteria. I anticipate that if this bot were to be implemented, it would cut necessary human work for the redirect backlog by more than half. I've never written a Wikipedia bot before, but I am a software engineer so I anticipate that if people think that this is a good idea I could do a lot of the coding myself, but obviously the idea needs to be workshopped first. There's also potential extensions that could be written, such as detecting common abbreviations or alternate titles (e.g. USSR space program --> Soviet space program, OTAN --> NATO) signed, Rosguill talk 22:17, 28 April 2019 (UTC)
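For reference, the "trivial string manipulation" could look something like the sketch below; the specific transformations (case folding, diacritic stripping, hyphen/space swaps, simple plurals) are illustrative assumptions, not a final criteria list:

```python
import unicodedata

def strip_diacritics(s):
    """Remove combining marks, e.g. 'Besançon' -> 'Besancon'."""
    return "".join(c for c in unicodedata.normalize("NFKD", s)
                   if not unicodedata.combining(c))

def is_plausible_redirect(redirect, target):
    """Is the redirect title a mechanical variant of the target title?"""
    r, t = redirect.strip(), target.strip()
    candidates = {
        t.lower(),            # case variant
        strip_diacritics(t),  # diacritic-free variant
        t.replace("-", " "),  # hyphen/space variant
        t + "s",              # simple plural
    }
    return r == t or r.lower() in {c.lower() for c in candidates}
```

Anything this check approves would still pass through the attack filter mentioned above before being auto-patrolled.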
If there's not going to be any further discussion here, is there anywhere else I should post or things I should do before implementing this bot? The Help:Creating a bot has a flowchart including the steps for writing a specification and making a proposal for the bot, but it's not clear to me which forums I should be using for that (or if the above discussion was sufficient). An additional concern is that while I believe that from a technical perspective this shouldn't be a terribly difficult bot to implement, I would need an admin to give the bot NPP permissions in order to run the bot. signed, Rosguill talk 23:04, 5 May 2019 (UTC)
Please change all occurrences of "Astana" in all articles to new name "Nur-Sultan". Also please move all articles with "Astana" to "Nur-Sultan". Thanks! -- Patriccck ( talk) 18:16, 7 May 2019 (UTC)
Would like a bot that could search all the articles listed under Category:WikiProject Mountains articles and its children categories that are also listed in Category:Articles with dead external links? Or maybe there's an existing tool that can do this? RedWolf ( talk) 21:19, 16 May 2019 (UTC)
The New England Wild Flower Society [17] changed its name and web presence to the Native Plant Trust [18]. And in the process broke most of its old URLs. Only insecure http requests to the old web site get an HTTP 301 redirect. https links time out. I suspect a firewall misconfiguration on their end, but I emailed about the problem and it hasn't been fixed.
I am requesting a bot find all the instances of DOMAIN.newenglandwild.org/PATH (http or https) and rewrite to DOMAIN.nativeplanttrust.org/PATH (https only, optionally only if that new URL returns a 2xx or 3xx status code).
I don't have a count of edits to make. Here is a sample page: Vaccinium caesariense. As I write this, reference 2 links to https://gobotany.newenglandwild.org/species/vaccinium/caesariense/ (a timeout error). It should link to https://gobotany.nativeplanttrust.org/species/vaccinium/caesariense/.
Vox Sciurorum ( talk) 17:51, 17 May 2019 (UTC)
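A sketch of the rewrite in Python, stdlib only; the `check` hook implements the optional "only if the new URL resolves" condition and can be stubbed out for dry runs:

```python
import re
from urllib.request import Request, urlopen

OLD = re.compile(r'https?://([a-z0-9.-]+)\.newenglandwild\.org(/[^\s\]<>"|}]*)')

def resolves(url):
    """HEAD-request the candidate URL; treat any 2xx/3xx as success."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=10) as resp:
            return resp.status < 400
    except OSError:
        return False

def rewrite_links(wikitext, check=resolves):
    def repl(m):
        new_url = f"https://{m.group(1)}.nativeplanttrust.org{m.group(2)}"
        return new_url if check(new_url) else m.group(0)  # keep old URL on failure
    return OLD.sub(repl, wikitext)
```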
{{dead link}} exists if needed. It is quite complex. Everything should be checked and tested. There are bots designed for making URL changes; see WP:URLREQ. -- GreenC 21:51, 17 May 2019 (UTC)
I find myself regularly using the excellent User:Anomie/unsignedhelper.js to document unsigned comments in talk page discussions. This looks like a perfect task for a bot, and I wonder whether there are any reasons it has not been done earlier. Could a kind contributor take up this uncontroversial and useful task? The process should work similarly to rescuing orphaned references in article space, as performed by User:AnomieBOT. — JFG talk 15:55, 25 May 2019 (UTC)
Hi. Could somebody move all userboxes with the word "expat" in the name from Category:Residence user templates to its subcategory Category:Expat user templates? — andrybak ( talk) 08:23, 17 June 2019 (UTC)
Hi, I would like to request a bot to add Template:WPEUR10k to all articles that appear in the list of created articles at Wikipedia:The 2500 Challenge (Nordic) and Wikipedia:The 10,000 Challenge. I think it would be very helpful so all the articles receive the template tag. I suggest this as there are literally thousands of articles in need of the tag. -- BabbaQ ( talk) 13:37, 26 April 2019 (UTC)
If someone could find all URLs (found across any namespace) that have this pattern in them, that would be great:
https?:\/\/(.+)\/handle\/.+
Sorting the results by domain ($1) would be even greater. Headbomb { t · c · p · b} 23:22, 24 June 2019 (UTC)
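If the scan is run over a dump or search results, grouping by the captured domain is a few lines of Python; the pattern is the one above, tightened slightly to stop at whitespace and markup:

```python
import re
from collections import defaultdict

PATTERN = re.compile(r'https?://([^/\s]+)/handle/[^\s\]<>"]+')

def group_by_domain(texts):
    """Collect every matching URL, keyed by the domain captured as $1."""
    groups = defaultdict(set)
    for text in texts:
        for m in PATTERN.finditer(text):
            groups[m.group(1)].add(m.group(0))
    return dict(groups)
```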
This is what I see to be a rather uncontroversial request which I have been doing manually for about a month or so now. In order to better identify pages that use bare URL(s) in reference(s) in an effort to get the URLs fixed, I am requesting that the {{ Cleanup bare URLs}} tag be added to all pages by a bot which meet the following conditions:
A <ref> tag immediately followed by http: and/or https:, followed by any combination of keystrokes and a </ref> closing tag, when there are no instances of spaces between the <ref> and </ref> tags (underscores are okay).
...From my experiences recently with tagging these pages, tagging the pages with the aforementioned parameters will avoid most, if not all, false positives.
I am requesting this run only once so that it doesn't need constant checks, and this should adequately provide an assessment on how many pages need reference url correction. Steel1943 ( talk) 17:54, 22 April 2019 (UTC)
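One reading of those conditions as a regex, sketched in Python; "no spaces" is enforced by matching only non-whitespace between the tags:

```python
import re

# A <ref> immediately followed by http:/https:, then an unbroken run of
# non-whitespace characters (underscores are fine), then </ref>.
BARE_REF = re.compile(r"<ref>https?:[^\s<]+</ref>")

def has_bare_url_ref(wikitext):
    return bool(BARE_REF.search(wikitext))
```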
I believe GreenC could do a fast scan (a little bit offtopic, but could that awk solution work with .bz2 files?). For the lvwiki scan, I use this regex (more or less the same conditions as the OP asked for), which works pretty well: <ref>\s*\[?\s*(https?:\/\/[^][<>\s"]+)\s*\]?\s*<\/ref>. For actually fixing those URLs, we can use this tool. It can be used both manually and with a bot (it has a pretty nice API). -- Edgars2007 ( talk/ contribs) 15:36, 23 April 2019 (UTC)
I recently made a bot that looks for articles that need {{unreferenced}}, and this is basically the same thing other than a change to the core regex statement, which User:Edgars2007 just helpfully provided. So this could be up and running quickly. It runs on Toolforge and uses the API to download each of 5.5M articles sequentially. The only question is which method: > 50%, or max size of the tracking category, or maybe both (anything over 50% is exempted from the category max size). The mixed method has the advantage of filling up the category with the worst cases foremost, and lesser cases will only make it there once the worst cases are fixed. -- GreenC 17:51, 23 April 2019 (UTC)
MarnetteD, yes understand what you are saying. Was thinking, what about an 'on demand' system where you can specify when to add more, and how many to add - and it only works if the category is mostly empty, and maxes at 200 (or less). This is more technically challenging as it would require some kind of basic authentication to prevent abuse, but I have an idea how to do it. It would be done all on-Wiki similar to a bot stop page. This gives participants the freedom to fill the queue whenever they are ready, and it could keep a log page. Would that be useful? -- Green C 19:19, 25 April 2019 (UTC)
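The refill rule described above could be as simple as the following sketch; the cap of 200 is from the comment above, while the "mostly empty" threshold is an assumed number:

```python
CAP = 200          # maximum size of the tracking category (from above)
MOSTLY_EMPTY = 50  # assumed: refill only when 50 or fewer items remain

def pages_to_add(current_size, requested):
    """How many new pages an 'on demand' request may actually enqueue."""
    if current_size > MOSTLY_EMPTY:
        return 0                        # category not mostly empty yet
    return min(requested, CAP - current_size)
```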