This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 75 | ← | Archive 78 | Archive 79 | Archive 80 | Archive 81 | Archive 82 | → | Archive 85 |
Per a request at Wikipedia:Village pump (technical)#Popular user scripts, SD0001 created a script to update the table at Wikipedia:User scripts/Most imported scripts. While it's great that the table can now be updated without too much effort, the ongoing work of keeping this table up to date on a periodic basis seems like a task better suited to a bot than a human. Plus, bots have higher query limits, and shouldn't mind waiting a bit longer for results in order to lessen the server load of ~1700-1900 API calls (e.g. by making sequential API calls and/or using an appropriate maxlag setting). - Evad37 [ talk 13:45, 18 November 2019 (UTC)
Is there a way the bot could differentiate active from inactive users, Evad37? (I suppose differentiating admins from editors would be trivial.)
Also, if the bot could also spit out the change in number of imports, so that we could rank by trending scripts, that'd be a bonus! Guarapiranga ( talk) 01:46, 20 November 2019 (UTC)
— diff
list=allusers
with auactiveusers
. I believe that's just 1 edit in the last 30 days, but it's something. It returns the number of actions for each user, so it could also be subsequently filtered for higher definitions of active. I think you can also limit the results to just autoconfirmed or extendedconfirmed users. ~ Amory (
u •
t •
c) 11:07, 22 November 2019 (UTC)
auactiveusers
only lists all active users; there is no option to restrict the listing to a given set of users. We would need to first get the list of 138,000 active users (28 queries), then pull in the full search results (1 query for each script, except the top 2 scripts, which need 2 because of >5000 installations) and check how many of those users are in our list of active users. This does seem just about practical enough.
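For reference, a rough sketch of the two-step approach described above (page through list=allusers with auactiveusers, then intersect with each script's installers). The query parameters are real MediaWiki API options, but the helper functions around them are only illustrative:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wikipedia.org/w/api.php"

def active_users():
    """Page through list=allusers&auactiveusers; at a bot's aulimit of 5000,
    ~138,000 active users come back in roughly 28 queries."""
    users = set()
    params = {"action": "query", "list": "allusers", "auactiveusers": 1,
              "aulimit": "max", "format": "json", "formatversion": 2}
    while True:
        with urlopen(API + "?" + urlencode(params)) as resp:
            data = json.load(resp)
        users.update(u["name"] for u in data["query"]["allusers"])
        if "continue" not in data:
            return users
        params.update(data["continue"])  # API-supplied continuation token

def count_active(installers, active):
    """How many of a script's installers are in the active-user set."""
    return sum(1 for user in installers if user in active)
```

The set intersection at the end is where the per-script search results would be checked against the one-time active-user snapshot.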
SD0001 (
talk) 17:57, 23 November 2019 (UTC)
Hello, I was wondering if it might be possible to construct a bot to streamline the otherwise tedious task of fixing thousands of Wikipedia articles that use certain medical jargon terms, revising them to their more widely understood counterparts. For example, I would propose a bot that changes the phrase "renal failure" to kidney failure (except on pages where it is part of a quote or of the title of a research article being cited, if a bot can be programmed to screen for those exceptions). There are several other examples, and programming is foreign to me. Please let me know if this is a viable idea for an otherwise tedious (and herculean) task. Thank you! TylerDurden8823 ( talk) 00:46, 26 January 2020 (UTC)
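On the screening question: a first-pass sketch could split out quoted text and citation templates before substituting. The term map here is a hypothetical hand-curated list seeded with the "renal failure" example, and the guard patterns are deliberately simplistic (they don't handle nested templates), so this is only an illustration of the approach:

```python
import re

# Hypothetical curated term map; the request names "renal failure" -> "kidney failure".
REPLACEMENTS = {r"\brenal failure\b": "kidney failure"}

# Rough guards: leave text inside quotation marks or citation templates untouched.
PROTECTED = re.compile(r'("[^"]*"|\{\{\s*[Cc]ite[^{}]*\}\})')

def delint_jargon(wikitext):
    parts = PROTECTED.split(wikitext)
    for i, part in enumerate(parts):
        if i % 2 == 0:  # even chunks lie outside the protected spans
            for pattern, repl in REPLACEMENTS.items():
                part = re.sub(pattern, repl, part)
            parts[i] = part
    return "".join(parts)
```

Quoted occurrences and citation titles pass through unchanged, while running prose is rewritten.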
This bot is unfortunately down again, and both of the users who maintained it have departed the project. Without the bot there is no on-wiki record of UTRS appeals, leaving the system ripe for WP:FORUMSHOP abuses. At the very least, if the "notify user" functionality could be replicated, that would be great. Beeblebrox ( talk) 21:25, 4 December 2019 (UTC)
Hello. I wasn't sure where to put this, as this is a request regarding a live website that's going down in a few months. SR/Olympics will be closing by March 2020. On Wikipedia, there are 945 articles using this website's URL, with 2 here and 391 more here (might be duplicates). I feel that neither InternetArchiveBot nor WaybackMedic would be suitable for this request, as the links aren't dead yet. Should a bot archive these links before they break, or wait? -- MrLinkinPark333 ( talk) 19:25, 6 January 2020 (UTC)
Is it possible for someone to search the English Wikipedia and create a list of articles that do not follow
WP:BOLDAVOID? Specifically, the search would need to find if there is any use of linking ([[ ]]
) within the bolded (''' '''
) portion of the first sentence of the article. If this is possible, would you be able to provide an output list of the linked article names here:
User:Gonzo_fan2007/BOLD. Cheers,
« Gonzo fan2007
(talk) @ 21:50, 27 January 2020 (UTC)
The assassination of Archduke Franz Ferdinand of Austria...(which is incorrect), in which you would need to remove the link, but then still find the best place to add a link to Archduke Franz Ferdinand of Austria somewhere in the lead. « Gonzo fan2007 (talk) @ 16:30, 28 January 2020 (UTC)
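For the search itself, a rough regex over each article's first sentence could flag candidates. This pattern is only a heuristic (it ignores bold-italics and apostrophes inside the bolded span), so it would still need the manual review described above:

```python
import re

# A wikilink ([[...]]) nested anywhere inside a bolded ('''...''') span.
BOLD_LINK = re.compile(r"'''[^']*\[\[[^\]]+\]\][^']*'''")

def violates_boldavoid(first_sentence):
    """True if the bolded portion of the first sentence contains a link."""
    return bool(BOLD_LINK.search(first_sentence))
```

A bot could run this over the lead sentence of each article and append matches to the requested output list.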
Geregen2 has added hundreds of "proposed deletion" messages to User talk:WildCherry06 and other user talk pages without signing any of them with four tildes. I suggest that a bot add a signature (Geregen2's signature, not the bot's signature) and timestamp to the end of all those messages using {{subst: unsigned}}. GeoffreyT2000 ( talk) 15:41, 19 December 2019 (UTC)
Hello, I'd appreciate it if someone could help me with placing the following templates on the relevant TV task force categories found here Category:WikiProject Television task forces, with {{ WikiProject Television task force assessment quality category}} and {{ WikiProject Television task force assessment importance category}}. There was a recent discussion which led to WikiProjects being converted to TV task forces, and this is part of the clean up. Using the templates on these categories will categorize them in their correct place. An example can be seen here Category:A-Class Avatar: The Last Airbender articles (which uses the template) and Category:A-Class Holby articles (which does not). As can be seen by the example, usually only the template will be needed, without any other category or text being used on the page. -- Gonnym ( talk) 01:45, 24 December 2019 (UTC)
This is a multi-part request.
The first part should be relatively uncontroversial: it is to generate a page (for instance,
User:Tigraan/Exxx redirects) containing a list of all pages which are redirects and whose title matches the regexp E1?[0-9]{3}[a-j]?
. (If there is an easy way that I could do it myself, please enlighten me.) Bonus points if the page contains the current redirect targets as well. I estimate this would be around 1000 pages.
The second part would be, after manual inspection of the redirects to clear up false positives, to mass-tag those redirects for a WP:RFD bundled nomination. That certainly requires consensus but I got mostly ignored when asking at the places I would think to ask: I posted at Wikipedia_talk:WikiProject_Food_and_drink#Food_additives_codes_redirect_to_chemical_compounds_instead_of_E_number_article (where you can read a sketch of the RfD nomination rationale) and Wikipedia_talk:Redirects_for_discussion#Nominating_lots_of_related_redirects, both of which combined attracted a whole one other comment (supporting the proposed RfD) after a week. (If you want to see more solid consensus, please tell me where to ask for it.)
The third part would be to clean up after the RfD, either by untagging and leaving things in place if rejected, or by retargeting the redirects according to a relatively simple scheme. Tigraan Click here to contact me 17:49, 23 January 2020 (UTC)
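For part one, the title test is mechanical; a sketch of the filter (the check that a matching page is actually a redirect, and its target, would come from the API's page properties, which this omits) might be:

```python
import re

# The regexp from the request: E, an optional leading 1, three digits, optional a-j suffix.
E_NUMBER = re.compile(r"E1?[0-9]{3}[a-j]?")

def is_e_number_title(title):
    """True only when the whole title matches the E-number pattern."""
    return E_NUMBER.fullmatch(title) is not None
```

Running this over an all-pages title dump and keeping only redirects should yield roughly the ~1000 pages estimated above.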
Hi, I want to add this template ( Template:Ash'ari) to all the pages/articles that are listed/linked. Thanks in advance!-- TheEagle107 ( talk) 01:45, 22 January 2020 (UTC)
— Preceding unsigned comment added by TheEagle107 ( talk • contribs) 03:47, 22 January 2020 (UTC)
After a discussion here a couple weeks ago, there was a rough local consensus that it might be beneficial to merge the majority of Russian rural locality articles (95% of which are two-line permastubs) into list articles (currently these lists are by first-level division, such as List of rural localities in Vologda Oblast). As you can see on that article, Fram is in the process of merging the pertinent information from the individual stubs into tables, but it's tedious work and there are on the order of 10,000 such articles.
I was wondering if it's possible/plausible to create a bot that could automate any part of that process? ♠ PMC♠ (talk) 04:04, 29 November 2019 (UTC)
@ Premeditated Chaos, Fram, Ymblanter, and Jo-Jo Eumerus: I took a pass at parsing out population data into User:AntiCompositeNumber/rustubs, trying to get data from the infobox, {{ ru-census}}, and string matching. The character count is also included. (It's lower than the MW byte count because of UTF-8 character encoding.) -- AntiCompositeNumber ( talk) 01:38, 1 January 2020 (UTC)
Among moth articles, and I suspect many others, there are sometimes template links to Wikispecies and Wikimedia Commons but there's nothing at the target location in the sister project. I'd love to see a bot which could go through and check these and remove the deceptive templates.
Even better if it could remove links to Commons if the only file in Commons is already in use in the article.
Another refinement would be to change from a general Commons link to a Commons category link when that exists.
Examples:
Thank you. SchreiberBike | ⌨ 03:59, 16 December 2019 (UTC)
The parameters were changed 9 July 2018 per this discussion: Template talk:WikiProject Christianity#Parameter Correction. church-of-the-nazarene was changed to holiness-movement, as was the -importance parameter. However, per the discussion above, and as I've seen, it wasn't updated everywhere. Jerod Lycett ( talk) 04:24, 21 January 2020 (UTC)
I know this is going to be quite a bit of work; however, I feel it will have significant value once the process has caught up.
I refer to the error cat "Tidy bug affecting font tags wrapping links (4,275,998 errors)" as of today, some of which date back to 2006.
As an example, the following signature:
[[User:AndonicO|<font face="Papyrus" color="Black">'''A'''</font><font face="Papyrus" color="DarkSlateGray">ndonic</font><font face="Papyrus" color="Black" size="2">'''O'''</font>]] <small><sup><font face="Times New Roman" color="Tan">[[User talk:AndonicO|''Talk'']]</font> | <font face="Times New Roman" color="Tan">[[User:AndonicO/My Autograph Book|''Sign Here'']]</font></sup></small>
has various errors that may cross several error categories and will never be fixed under the current methodology. A bot that does a simple find and replace, with something like:
[[User:AndonicO|talk]] signature adjusted by lint bot for lint errors.
would fix every instance of each signature as identified, and could cover many instances in one pass. This is especially important for aged and non-active users; it could also be used to identify current user signatures with errors, so we could offer a reformatted signature solution. Thoughts?
121.99.108.78 (
talk) 00:03, 28 January 2020 (UTC)
<b>...</b>
to '''...'''
. Lint errors are a clear criterion. This would probably have consensus, although that's still not a guarantee. Basically, take it to
WP:VPR and see how the dice lands.
Headbomb {
t ·
c ·
p ·
b} 01:42, 28 January 2020 (UTC)
<font>[[link]]</font>
works just as well as [[link|<font>link</font>]]
. Yes, I know that font tags are deprecated, but there are literally millions of pages that use the former format. --
Ahecht (
TALK) 121.99.108.78 ( talk) 10:07, 28 January 2020 (UTC)
<font>[[link]]</font>
did work like [[link|<font>link</font>]]
. That is, <font color="x">[[link]]</font>
, and also <font style="color:x">[[link]]</font>
both were processed by Tidy into [[link|<font...>link</font>]]
(piped appropriately, of course). The font tag had to immediately wrap the Wikilink or External link, otherwise it was ignored. The font color
, but not the font style
, is detected as the Tidy font link bug, but the font style
version of the Tidy font link bug is quite rare. Tidy has been replaced, so now font coloring tags immediately wrapping a Wikilink or external link are overridden, as you would logically expect, by default link colors. The replacement parser is an
HTML 5-compatible upgrade from Tidy and we are not going back.
Wikipedia:Linter#How you can help was written November 23, 2017, and since it was first written it has always said that it is OK to fix lint errors, including on talk pages, but one should "[t]ry to preserve the appearance." So, for more than two years, it has officially been OK to de-lint user signatures, preserving the appearance, and this has never been officially challenged or disputed; it is the consensus. (However, I don't think there's a consensus on systematic lint fixing by bot.) The Tidy font bug is a high priority lint error and I would favor fixing these lint errors in a systematic way by bot, taking care, of course, to exclude talk page discussions where fixing an instance of this error would confuse a question about this exact behavior. —
Anomalocaris (
talk) 02:22, 3 February 2020 (UTC)
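For the "preserve the appearance" fix described above, one mechanical transformation is to move the font tag inside the link's label, which restores the old Tidy-era rendering. This regex sketch handles only the simple case of a font tag immediately wrapping one wikilink (nested tags and external links would need more care):

```python
import re

# <font ...>[[Target|Label]]</font>  ->  [[Target|<font ...>Label</font>]]
FONT_LINK = re.compile(r'<font\b([^>]*)>\s*\[\[([^|\]]+)(?:\|([^\]]+))?\]\]\s*</font>')

def fix_tidy_font_bug(text):
    def repl(m):
        attrs, target = m.group(1), m.group(2)
        label = m.group(3) or target  # unpiped links reuse the target as label
        return '[[' + target + '|<font' + attrs + '>' + label + '</font>]]'
    return FONT_LINK.sub(repl, text)
```

Applied to the AndonicO signature quoted earlier, each colored link fragment would be rewritten with the font tag piped inside the link.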
Hi all, I saw that " peer reviews" are now included on WP:Article alerts (yay!). Unfortunately it turns out there are more than a few reviews that haven't been opened properly. These will clog up article alert lists, and I was wondering if I could have some help with a bot to process them (or even generate a list for me).
In short:
Thanks for your help, -- Tom (LT) ( talk) 08:47, 10 February 2020 (UTC)
We're in the process at WikiProject New York (state) of converting 5 WikiProjects to taskforces. Specifically the following are being consolidated, under the statewide banner:
|Capital=yes
|Hudson=yes
|LI=yes
|Syracuse=yes
|Western=yes
However, we can't just convert the existing templates to wrappers and have AnomieBOT substitute them without creating a mess of duplicates, because some pages are tagged by more than one subproject, are already tagged with {{ WikiProject New York (state)}} in addition, or both, e.g. Talk:Albany, New York.
I don't want to take the time to do the scripting just yet unless it's necessary, going off the assumption that this has been done frequently enough that there's already a working version for project mergers. I'll just take a quick minute here to give some basic examples to avoid confusion.
Examples
Without loss of generality:
Case: {{WikiProject Capital District|class=c|importance=mid}}
Output: {{WikiProject New York (state)|class=c|importance=|Capital=yes|Capital-importance=mid}}
Case: {{WikiProject New York (state)|class=c|importance=low}} {{WikiProject Capital District|class=c|importance=mid}}
Output: {{WikiProject New York (state)|class=c|importance=low|Capital=yes|Capital-importance=mid}}
Case: {{WikiProject New York (state)|class=c|importance=low}} {{WikiProject Capital District|class=c|importance=mid}} {{WikiProject Hudson Valley|class=c|importance=high}}
Output: {{WikiProject New York (state)|class=c|importance=low|Capital=yes|Capital-importance=mid|Hudson=yes|Hudson-importance=high}}
Case: {{WikiProject Capital District|class=c|importance=mid}} {{WikiProject Hudson Valley|class=c|importance=high}}
Output: {{WikiProject New York (state)|class=c|importance=|Capital=yes|Capital-importance=mid|Hudson=yes|Hudson-importance=high}}
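A parameter-level sketch of these merge rules, using a simplified (name, params) representation rather than real wikitext parsing; the task-force map is abbreviated to two entries and the helper is only illustrative:

```python
# Subproject banner name -> task-force parameter prefix (abbreviated).
TASKFORCE = {"WikiProject Capital District": "Capital",
             "WikiProject Hudson Valley": "Hudson"}

def merge_banners(banners):
    """banners: list of (template_name, params_dict) tuples from one talk page,
    in page order; returns the merged statewide banner as wikitext."""
    merged = {"class": "", "importance": ""}
    for name, params in banners:
        if name == "WikiProject New York (state)":
            merged.update(params)  # keep the existing statewide ratings
        else:
            tf = TASKFORCE[name]
            merged["class"] = merged["class"] or params.get("class", "")
            merged[tf] = "yes"
            merged[tf + "-importance"] = params.get("importance", "")
    fields = "|".join(k + "=" + v for k, v in merged.items())
    return "{{WikiProject New York (state)|" + fields + "}}"
```

The first two cases above fall out directly; a real bot would additionally have to parse the banners out of the talk page and handle parameters beyond class/importance.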
I'm not particularly active around here, but I should be around for an hour or two more today; I will try to find time at least once every 48 hours this week to log in and do some work, so hopefully I'll be able to respond to any inquiries reasonably promptly, thank you. (please ping on reply)
𝒬 𝔔 23:42, 25 February 2020 (UTC)
Hi there! I’ve been editing the Duolingo Wikipedia article to keep it up to date with the number of learners on each course. I was wondering if there’s a bot that could update the lists daily, rather than having to do it myself, or how I could create such a bot? Thanks! :-) — Preceding unsigned comment added by CcfUk2018 ( talk • contribs) 03:28, 22 January 2020 (UTC)
The bot ( approved here) for updating vital articles counts, icons, and corresponding talk pages has been inoperable for a while, per this discussion. Could one of you please look into fixing it? Thanks! Sdkb ( talk) 19:03, 3 January 2020 (UTC)
There are about 2,000 transclusions: Special:WhatLinksHere/Template:Distinguish&namespace=14&limit=500
For an example of the change, see change history of Category:Literature:
Thanks. fgnievinski ( talk) 21:18, 26 January 2020 (UTC)
Hi there, re:
this permalinked discussion, could you stellar bot handlers please remove the |residence=
parameter and subsequent content from articles using {{
Infobox person}}? Per some of the discussions,
Category:Infobox person using residence might list most of the pages using this template. And
RexxS said:
hastemplate:"infobox person" insource:/residence *= *[A-Za-z\[]/
) shows 36,844 results, but it might have missed a few (like {{
plainlist}}); there are at least 766 uses of the parameter with a blank value." I don't know if this helps. This is not my exact area of expertise. Thanks! Cyphoidbomb ( talk) 05:31, 27 December 2019 (UTC)
This is simple and can be handled by just about any bot.
The Federal Telecommunications Institute (IFT) of Mexico made a one-character change in document URLs that will need updating. Hundreds of Mexican radio articles cite its technical and other authorizations.
They added a "v" to the URL, so URLs that were formerly
https://rpc.ift.org.mx/rpc/pdfs/96255_181211120729_7489.pdf
changed to
https://rpc.ift.org.mx/vrpc/pdfs/96255_181211120729_7489.pdf
Is this possible to have done as a bot task? The articles that need it are mostly in Category:Radio stations in Mexico or Category:Television stations in Mexico. Raymie ( t • c) 20:10, 4 February 2020 (UTC)
@
Raymie: In addition they now serve https only, but left no http->https redirect, and most all of the links on WP are http. This should be done by URL-specific bot because of archive URLs and {{
dead link}}
tags (some may already be marked dead and/or archived that need to be unwound once corrected). Could you post/copy the request to URLREQ, there is a backlog but I will get to it. --
Green
C 20:02, 8 February 2020 (UTC)
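For the plain-wikitext links, the rewrite described above (rpc → vrpc, plus http → https since the site no longer serves http) is a one-line substitution. As noted, archived copies and {{ dead link}} tags need separate URL-aware handling that this sketch deliberately ignores:

```python
import re

# Old IFT document links: http(s)://rpc.ift.org.mx/rpc/... -> https://rpc.ift.org.mx/vrpc/...
IFT_LINK = re.compile(r"https?://rpc\.ift\.org\.mx/rpc/")

def fix_ift_links(wikitext):
    return IFT_LINK.sub("https://rpc.ift.org.mx/vrpc/", wikitext)
```

This is why the URLREQ queue, with its archive-unwinding machinery, is the right place for the actual run.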
Thread moved to Wikipedia:Link_rot/URL_change_requests#Request_for_change_of_(soon_to_be)_broken_links_to_LPSN and poster notified. -- Green C 03:26, 14 February 2020 (UTC)
Hello. I want a bot, because as a student I'm unable to be active on Wikipedia as much as is required. If I get a bot, then when I'm unable to add a template to about 100 pages, my bot could do that instead of me. — Preceding unsigned comment added by Tanisha priyadarshini ( talk • contribs) 16:07, 7 April 2020 (UTC)
There are many articles (or at least enough that it would be tedious to go through and check each and every one) in the backlog (en.wikipedia.org/wiki/Category:Wikipedia_infobox_backlog) that actually do have infoboxes. I think a bot could be given a category of 200-500 pages, read the wikitext of each one, and, if the page has {{infobox in it, go to the talk page of that article and remove the needs-infobox=yes parameter.
Firestarforever (
talk) 13:39, 28 March 2020 (UTC)
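A minimal sketch of that check; the banner-parameter regex is deliberately loose, and it assumes the talk banner uses needs-infobox=yes literally:

```python
import re

# Drop |needs-infobox=yes from a talk-page banner when the article already
# transcludes an infobox.
NEEDS = re.compile(r"\|\s*needs-infobox\s*=\s*yes\s*")

def has_infobox(article_text):
    return "{{infobox" in article_text.lower()

def clean_talk(talk_text, article_text):
    if has_infobox(article_text):
        return NEEDS.sub("", talk_text)
    return talk_text
```

A bot run would fetch each article/talk pair in the supplied category and save the cleaned talk page only when something changed.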
While it is likely impossible to automate all of the guidelines at Wikipedia:Writing_about_women, things like using the last name or mentioning relationships in the lede are systematic bias which can have systematic solutions. A bot attempting to fix this would be AMAZING (where exceptions like Icelandic folks would be an opt-out rather than opt-in). — Preceding unsigned comment added by Icy13 ( talk • contribs) 21:39, 25 February 2020 (UTC)
When using footnoted referencing, the task of assessing what source supports what text is complicated. A reference may be tagged e.g. {{
self-published source}}, {{
self-published inline}}, {{
deprecated inline}}, {{
dubious}} and other tags which may be applied to the footnoted reference but these are not linked to the readable content. When using <ref>
tags, by contrast, we can use <ref>{{cite [...] | publisher=$VANITYPRESS [...] {{self-published source}}</ref>{{self-published inline}}
to flag both the reference and the inline citation.
I would like a maintenance tag bot to add, e.g., {{ self-published inline}} after the {{ sfn}}/{{ harv}} instances matching footnoted citations that are flagged as self-published, deprecated or otherwise dubious. Guy ( help!) 09:15, 25 February 2020 (UTC)
|journal=
, but a similar bot could be coded to look for domains found in |url=
and |publisher/website/magazine/journal/work/...=
Headbomb {
t ·
c ·
p ·
b} 22:22, 25 February 2020 (UTC)
Greetings. I'm here once again to bother you all about Files!
Tagging a file {{Non-free reduce}} places it in Category:Wikipedia non-free file size reduction requests, where User:DatBot performs the file size reduction automatically if the file is in .png or .jpg format. However, DatBot doesn't process any other format, and therefore files in other formats need manual processing.
I am requesting a bot to, once daily, check all files in Category:Wikipedia non-free file size reduction requests and, if the file format is not .png or .jpg, change {{Non-free reduce}} into {{Non-free manual reduce}}, so that they're more readily processed.
Thanks! The Squirrel Conspiracy ( talk) 02:31, 23 March 2020 (UTC)
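A read-only sketch of the daily sweep: pull the category members from the API and report which files fall outside DatBot's png/jpg coverage (the actual retagging edit is left out, and the API call itself is illustrative):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wikipedia.org/w/api.php"
RASTER = (".png", ".jpg", ".jpeg")

def needs_manual_tag(filename):
    """DatBot only reduces png/jpg, so anything else should be retagged."""
    return not filename.lower().endswith(RASTER)

def files_to_retag():
    """Category members in the File: namespace needing {{Non-free manual reduce}}."""
    params = {"action": "query", "list": "categorymembers",
              "cmtitle": "Category:Wikipedia non-free file size reduction requests",
              "cmnamespace": 6, "cmlimit": "max", "format": "json"}
    with urlopen(API + "?" + urlencode(params)) as resp:
        members = json.load(resp)["query"]["categorymembers"]
    return [m["title"] for m in members if needs_manual_tag(m["title"])]
```

The edit step would then swap {{Non-free reduce}} for {{Non-free manual reduce}} on each reported file page.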
Done — Preceding unsigned comment added by The Squirrel Conspiracy ( talk • contribs) 19:50, 27 March 2020 (UTC)
I sometimes put queries on article talkpages, some get answered quickly, some stick around indefinitely and occasionally old ones get resolved. My suspicion is that my experience is not unusual, but I hope that this is a software issue and that a lot more article queries could be resolved if the relevant editors knew of them. Would it be possible to have a bot produce reports for each Wikiproject of open/new talk page threads that are on pages tagged to that project? Ϣere SpielChequers 09:49, 10 February 2020 (UTC)
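One crude way a bot could spot such queries is to count signatures per section: a section whose body contains exactly one UTC timestamp has probably never received a reply. A heuristic sketch (section and timestamp patterns are simplifications of real talk-page markup):

```python
import re

SECTION = re.compile(r"^==[^=].*?==$", re.M)          # level-2 headings
TIMESTAMP = re.compile(r"\d{2}:\d{2}, \d{1,2} \w+ \d{4} \(UTC\)")

def open_threads(talk_text):
    """Titles of sections that contain exactly one signature timestamp."""
    bodies = SECTION.split(talk_text)[1:]   # text following each heading
    headings = SECTION.findall(talk_text)
    return [h.strip("= ") for h, body in zip(headings, bodies)
            if len(TIMESTAMP.findall(body)) == 1]
```

Cross-referencing the flagged pages against WikiProject banners would then produce the per-project reports.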
I'm in need of help replacing all instances of a set of WikiProject templates as taskforces of the one unified template: {{
WikiProject Molecular Biology}}
. Unfortunately a simple transclusion of the new template wrapped in the old templates isn't enough, since some pages have multiple WikiProject templates, so will need to be marked with multiple taskforces. It's therefore similar to when
Neurology was merged into WP:MED.
Replacing
{{WikiProject Molecular and Cellular Biology|class=GA|importance=high|peer-review=yes}} {{WikiProject Computational Biology|importance=mid|class=GA}}
With
{{WikiProject Molecular Biology|class=GA|importance=high|peer-review=yes |MCB=yes |MCB-imp=high |COMPBIO=yes |COMPBIO-imp=mid }}
Broadly, I think the necessary bot steps would be:
{{
WikiProject Molecular and Cell Biology}}
OR {{
WikiProject Genetics}}
OR {{
WikiProject Computational Biology}}
OR {{
WikiProject Biophysics}}
OR {{
WikiProject Gene Wiki}}
OR {{
WikiProject Cell Signaling}}
{{
WikiProject Molecular Biology}}
{{WikiProject Molecular and Cell Biology}}
AND {{WikiProject Genetics}}
AND {{WikiProject Computational Biology}}
AND {{WikiProject Biophysics}}
AND {{WikiProject Gene Wiki}}
{{WikiProject MCB/COMPBIO/Genetics/Biophysics/Gene Wiki|importance=X|quality=y}}
|MCB/COMPBIO/genetics/biophysics/Gene Wiki=yes
+ |MCB-imp/COMPBIO-imp/genetics-imp/biophysics-imp/GW-imp=X
(note: GW → Gene Wiki)
|importance=
and |quality=
, add that as the overall |importance=
and |quality=
to {{WikiProject Molecular Biology}}
|signaling=yes
(i.e., replace {{
WikiProject Cell Signaling}}
on pages that transclude it with {{WikiProject Molecular Biology|...|signaling=yes}}
)
|genewiki=yes
|metabolism=yes
Thank you in advance! T.Shafee(Evo&Evo) talk 07:09, 12 January 2020 (UTC) (refactored/edited by Seppi333 ( Insert 2¢) 05:48, 18 January 2020 (UTC))
{{
Subst only|auto=yes}}
template to merge one banner into another, but is there any support for merging multiple banners on a single page into 1? If not, are there any bots that have been approved to merge multiple project banners on talk pages (particularly where 2+ banners occur on a single page) into a single parent banner? Asking because I could likely modify the source code of a bot designed to merge the banners of another project's task forces for this purpose, especially if there's one written in python.
Seppi333 (
Insert 2¢) 03:45, 16 January 2020 (UTC)
{{
WikiProject Gene Wiki}}
&
Category:Gene Wiki articles) which is currently present on ~1800 pages. I think we're probably just going to go with the current task force listing in the {{
WPMOLBIO}}
template. The edge case is when two taskforces currently indicate different importance levels (e.g. Talk:DNA_gyrase). In such cases it might be safest to use the median rounded up for the overall importance (high+low→mid, high+mid→high), but maybe that's overcomplicating things. It wouldn't be that technical to encode. Programmatically, one just needs to ordinally encode low→1, mid→2, high→3, top→4 (NB: this method implicitly assumes an equal "importance distance", in a mathematical/statistical sense, between importance ratings, which might not necessarily be true; it depends on how people go about rating importance on average), then use round(median(list of ratings)) or round(average(list of ratings)), then remap whatever number it returns back to an importance rating. E.g., the average rating of task forces that rate an article as low, high, and top is (1+3+4)/3, which would be rounded to 3 → high importance. Seppi333 ( Insert 2¢) 02:56, 18 January 2020 (UTC)
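The encode/round/remap idea above can be written out directly; note that Python's built-in round() rounds halves to even, so a round-half-up is used here so that high+mid gives high as intended:

```python
import math
from statistics import median

# Ordinal encoding from the comment above; assumes equal "importance distance"
# between adjacent levels, which may not hold in practice.
LEVELS = {"low": 1, "mid": 2, "high": 3, "top": 4}
NAMES = {v: k for k, v in LEVELS.items()}

def overall_importance(ratings):
    """Median of the task-force ratings, with halves rounded up."""
    scores = [LEVELS[r.lower()] for r in ratings]
    return NAMES[math.floor(median(scores) + 0.5)]
```

This reproduces the worked cases: high+low → mid, high+mid → high, and (low, high, top) → high.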
@ Evolution and evolvability: I refactored the request in Special:Diff/936327774/936342626 to reflect the changes to the template. You might want to look it over just to make sure nothing seems off. Seppi333 ( Insert 2¢) 05:48, 18 January 2020 (UTC)
{{ resolved}} Revisiting this March discussion for a new owner
When an AfD discussion ends with no discussion, WP:NOQUORUM indicates that the closing admin should treat the article as an expired PROD ( "soft delete"). As a courtesy/aid for the closer, it would be really helpful for a bot to note the article's PROD eligibility ("the page is not a redirect, never previously proposed for deletion, never undeleted, and never subject to a deletion discussion"). Cribbing from the last discussion, it could look like this:
This would greatly speed up the processing of these nominations. Eventually would be great to have this done automatically, but even a user script would be helpful for now. czar 19:26, 29 December 2019 (UTC)
@ Czar: For Wikipedia:Articles for deletion/Log/2020 February 3, I extract information like this: report. Is the information enough? -- Kanashimi ( talk) 10:06, 4 February 2020 (UTC)
posting something like this to the AfD discussion when no one else has !voted
{{ resolved}} I've been using User:Ucucha/HarvErrors.js for a few days now, and it's a pretty nice little script. However, the issues it highlights should be flagged for everyone to see and become part of regular cleanup. For example, in Music of India, two {{ harv}}-family templates are used to generate reference to anchors, designed to point to a full citation.
{{Harvnb|MacDonell|2004|pp=29–39}}, pointing to Music of India#CITEREFMacDonell2004
{{Harvnb|Radhakrishnan|Moore|1957|p=3}}, pointing to Music of India#CITEREFRadhakrishnanMoore1957
However, inspecting the page reveals those anchors aren't found anywhere on the page. Even a manual search won't find the corresponding citations there, because this isn't an issue of someone having forgotten a |ref=harv in a citation template; they just aren't there to begin with.
A bot should flag those problems, probably with a new template {{ broken footnote}}, or possibly on the talk page.
Headbomb { t · c · p · b} 15:13, 21 February 2020 (UTC)
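Detection could work by rebuilding each {{harv}}-family call's CITEREF anchor and checking it against the anchors actually present on the rendered page. A simplified sketch, ignoring {{sfnref}} overrides and other name variants:

```python
import re

# {{harv|...}}, {{harvnb|...}}, {{harvtxt|...}}, {{harvcol|...}}
HARV = re.compile(r"\{\{\s*[Hh]arv(?:nb|txt|col)?\s*\|([^{}]+)\}\}")

def harv_anchor(params):
    """CITEREF + surnames + year, skipping named params like pp=29-39."""
    parts = [p.strip() for p in params.split("|") if "=" not in p]
    return "CITEREF" + "".join(parts)

def broken_footnotes(wikitext, page_anchors):
    """Harv-family calls whose anchor has no matching citation on the page."""
    return [m.group(0) for m in HARV.finditer(wikitext)
            if harv_anchor(m.group(1)) not in page_anchors]
```

The resulting list is exactly what a bot would wrap in {{ broken footnote}} or report on the talk page.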
|ref=harv
automatically though. That would kill a great deal of those errors (although certainly not all).
Headbomb {
t ·
c ·
p ·
b} 20:12, 21 February 2020 (UTC)
<ref>Fischer 2008: p. 149</ref>
Lot of permutations for Harvard reference problems that a specialized bot could become expert on. --
Green
C 20:09, 21 February 2020 (UTC)
{{ resolved}} Per this conversation, the automated essay assessment system has fallen badly out of date since BernsteinBot stopped updating it in 2012. It would be useful to revive it so that essay readers could have a better indication as to whether the essay they are reading is more likely to represent a widespread norm or just a minority viewpoint. MZMcBride has provided the original code, but it will need to be updated. Your help would be much appreciated. Regards, Sdkb ( talk) 20:19, 22 March 2020 (UTC)
{{ resolved}} Greetings, I'm here to bother you all about File namespace nonsense again.
User:RonBot, which was disabled because its operator went inactive a year ago, had an approved task to reduce the display size of SVGs ( BRFA). In its absence, there's quite a pile-up of SVGs awaiting reduction (over 100 currently). I tried to reduce them manually and failed, so now I'm here asking someone else to take up the task themselves. The source code for the task is here.
Many thanks, The Squirrel Conspiracy ( talk) 23:41, 5 April 2020 (UTC)
{{
resolved}}
I have found 1,575 articles that have an identical referencing error. The author name listed in {{
sfn}} does not match the author's name as listed in the |ref=
parameter in the matching full {{
cite book}} citation template, which causes a non-working link from the short reference to the full reference. It also causes a red error message if you have the relevant script enabled.
I have performed a sample fix here. Is there a kind AWB editor or bot operator who would be willing to fix the rest?
The list of articles that need fixing is here. Thanks. – Jonesey95 ( talk) 15:23, 9 April 2020 (UTC)
{{
sfn}}
template. The book is *{{cite book
|last1=Gröner
|first1=Erich
|author-link1=
|author-mask1=
|last2=Jung
|first2=Dieter
|display-authors=
|last-author-amp=
|last3=Maass
|first3=Martin
|translator-last1=Thomas
|translator-first1=Keith
|translator-last2=Magowan
|translator-first2=Rachel
|year=1991
|title=U-boats and Mine Warfare Vessels
|volume=2
|work=German Warships 1815–1945
|location=London
|publisher=Conway Maritime Press
|isbn=0-85177-593-4
|ref=CITEREFGröner1991
}}
{{
sfn|Gröner|1991|p=...}}
should actually be {{
sfn|Gröner|Jung|Maass|1991|p=...}}
and that |ref=CITEREFGröner1991
(or |ref=CITEREFGr.C3.B6ner1991
if not yet modified) should be |ref=harv
. --
Redrose64 🌹 (
talk) 19:14, 9 April 2020 (UTC)
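Mechanically, those two corrections could be scripted like this; the patterns are specific to the Gröner case discussed here, and the |ref=harv target follows the suggestion above:

```python
import re

def fix_sfn(text):
    # {{sfn|Gröner|1991|p=...}} -> {{sfn|Gröner|Jung|Maass|1991|p=...}}
    text = re.sub(r"\{\{sfn\|Gröner\|1991", "{{sfn|Gröner|Jung|Maass|1991", text)
    # |ref=CITEREFGröner1991 (either encoding) -> |ref=harv
    return re.sub(r"\|ref=CITEREFGr(?:öner|\.C3\.B6ner)1991", "|ref=harv", text)
```

An AWB run over the 1,575 listed articles with these two replacements would cover the bulk of the cases.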
|ref={{
SfnRef|Gröner|1991}}
over |ref=CITEREFGröner1991
. I think the SfnRef way is a bit cleaner (since it matches the {{
sfn}} invocation), so would probably use that. --
AntiCompositeNumber (
talk) 02:11, 10 April 2020 (UTC)
{{
sfn|Gröner|Jung|Maass|1991|p=...}}
is the way, and people can change it to {{
sfn|Gröner et al.|1991|p=...}}
+ |ref=Gröner et al.
if they want to manually shorten the list of authors (most style guides say keep 3, so that's why the default is up to 3 named authors, and 4+ gets truncated to et al.).
Headbomb {
t ·
c ·
p ·
b} 02:22, 10 April 2020 (UTC)
Done I took the easy way and made the suggested change, if anyone wants to alter the way that the linkage is made then go ahead. There are 3 articles, October 1918, SM U-10 (Austria-Hungary) and SM U-11 (Austria-Hungary), that need further investigation as there were 2 substitutions in them. Keith D ( talk) 14:45, 10 April 2020 (UTC)
{{ resolved}} Listeria bot has been blocked due to it not complying with our non-free content policy, and having someone knowledgeable in PHP fork the bot using the original code and implement a fix would be greatly appreciated. Extensive discussion has occurred at Wikipedia:Bots/Noticeboard#Re-examination of ListeriaBot and Wikipedia:Administrators' noticeboard#ListeriaBot blocked; an urgent resolution is needed. ‑‑ Trialpears ( talk) 20:08, 13 April 2020 (UTC)
My kingdom for a bot that compiles new articles in a new subject area (e.g., added to a WikiProject's scope). @ PresN currently runs a script that does this manually (see one of the "New Articles" threads at WT:VG) but would love to be able to do this for other projects, so that new editors get visibility/help and the project can see the fruits of its efforts. (Also discussed at PresN's talk page.) Special:Contributions/InceptionBot currently finds articles that might be within scope, but this proposal is instead a log of recent additions to a topic area (similar to how the 1.0 project compiles). It could be useful if delivered directly to a WikiProject/noticeboard page or, alternatively, updated on a single page and transcluded à la WP:Article alerts. czar 20:07, 15 December 2019 (UTC)
Extended content
def parse_lists(lists, headers, assessments, new_cats, dates, dates_needed):
    NULL_ASSESSMENT = '----'
    max_lists = dates_needed * 4
    extra_headers = get_extra_headers(headers)  # Note "Renamed" headers
    # Initial assessment
    for index, list in enumerate(lists):
        if index <= max_lists:
            for item in list.find_all('li'):
                contents = _.join(item.contents, ' ')
                offset = count_less_than(extra_headers, index) - 1
                date = dates[int(max((index-(1 + offset)), 0)/3)]  #TODO: handles 3+ sections
                assess_type = assessment_type(contents)
                # Assessment
                if assess_type == ASSESSMENT:
                    namespaced_title = get_title(item, ASSESSMENT)
                    title = clean_title(namespaced_title)
                    old_klass = NULL_ASSESSMENT
                    new_klass = get_newly_assessed_class(item, namespaced_title)
                    if (not is_file(namespaced_title)
                            and not is_redirect_class(new_klass)
                            and not (title in assessments and was_later_deleted(assessments[title]))):  # ignore files, redirects, and mayflies
                        if is_category(namespaced_title):
                            init_cat_if_not_present(new_cats, namespaced_title)
                        else:
                            init_if_not_present(assessments, title)
                            assessments[title]['creation_class'] = new_klass
                            assessments[title]['creation_date'] = date
                if assess_type == REASSESSMENT:
                    namespaced_title = get_title(item, REASSESSMENT)
                    title = clean_title(namespaced_title)
                    old_klass = get_reassessment_class(item, 'OLD')
                    new_klass = get_reassessment_class(item, 'NEW')
                    if not is_file(namespaced_title):
                        init_if_not_present(assessments, title)
                        if is_redirect_class(new_klass):  # tag redirect updates as removals, unless later recreated
                            if not (is_draft_class(old_klass) and 'creation_class' in assessments[title]):  # Ignore if this is a draft -> mainspace move in 2 lines
                                assessments[title]['was_removed'] = 'yes'
                        elif is_redirect_class(old_klass):  # treat redirect -> non-redirect as a creation
                            assessments[title]['creation_class'] = old_klass
                            assessments[title]['updated_class'] = new_klass
                            assessments[title]['creation_date'] = date
                        else:  # only add the latest change, and only if there's no newer deletion
                            if 'updated_class' not in assessments[title] and not was_later_deleted(assessments[title]):
                                assessments[title]['updated_class'] = new_klass
                # Rename
                if assess_type == RENAME:
                    namespaced_old_title = get_rename_title(item, 'OLD')
                    namespaced_new_title = get_rename_title(item, 'NEW')
                    if not is_file(namespaced_new_title) and not is_category(namespaced_new_title):
                        new_title = clean_title(namespaced_new_title)
                        if is_draft(namespaced_old_title) and not is_draft(namespaced_new_title):
                            init_if_not_present(assessments, new_title)
                            if not was_later_updated(assessments[new_title]) and not was_later_deleted(assessments[new_title]):
                                assessments[new_title]['creation_class'] = DRAFT_CLASS
                                assessments[new_title]['updated_class'] = "Unassessed"
                                assessments[new_title]['creation_date'] = date
                        if is_draft(namespaced_new_title) and not is_draft(namespaced_old_title):
                            init_if_not_present(assessments, new_title)
                            if not was_later_updated(assessments[new_title]) and not was_later_deleted(assessments[new_title]):
                                assessments[new_title]['creation_class'] = "Unassessed"
                                assessments[new_title]['updated_class'] = DRAFT_CLASS
                                assessments[new_title]['creation_date'] = date
                # Removal
                if assess_type == REMOVAL:
                    namespaced_title = get_title(item, REMOVAL)
                    # Articles
                    if not is_file(namespaced_title):
                        title = clean_title(namespaced_title)
                        if title not in assessments:  # don't tag if there's a newer re-creation
                            assessments[title] = { 'was_removed': 'yes' }
                            if is_category(namespaced_title):
                                assessments[title]['creation_class'] = CATEGORY_CLASS
                            if is_draft(namespaced_title):
                                assessments[title]['creation_class'] = DRAFT_CLASS
                    # Categories
                    if is_category(namespaced_title) and namespaced_title not in new_cats:
                        new_cats[namespaced_title] = 'was_removed'
    return {'assessments': assessments, 'new_cats': new_cats}
I think I made this kind of request several years ago, but I can't find it in the archives.
Occasionally people add text like [citation needed] or (reference needed) to articles, and these articles don't end up in maintenance categories because they're plain text instead of templates. Could someone write a bot that would go around making edits like this, or could an existing maintenance-bot operator add this task? I'm guessing that it would be rather simple — give it a list of phrases, tell it to look for them inside parentheses and brackets, and let it loose. Of course, this isn't a one-time problem, so if this is a good idea, it ought to be made an ongoing task. Nyttend backup ( talk) 16:35, 20 April 2020 (UTC)
\[?\[citation needed\]\]?
with {{subst:cn}}
. --
AntiCompositeNumber (
talk) 15:49, 22 April 2020 (UTC)
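The phrase-list approach described above could be sketched roughly as follows. This is a minimal illustration, not ACN's actual regex run: the phrase list and function name are my own, and a real bot would also need to skip matches inside quotations, source code, and comments.

```python
import re

# Hypothetical list of plain-text tags a bot might look for; extend as needed.
PHRASES = ["citation needed", "reference needed", "cite needed"]

def tag_plaintext_cn(wikitext: str) -> str:
    """Replace plain-text '[citation needed]'-style notes with {{subst:cn}}."""
    pattern = re.compile(
        r"[\[(]\[?\s*(?:%s)\s*\]?[\])]" % "|".join(map(re.escape, PHRASES)),
        re.IGNORECASE,
    )
    return pattern.sub("{{subst:cn}}", wikitext)
```

The substitution then leaves the article in the normal maintenance-category workflow, since {{subst:cn}} expands to the real template.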
Please see Help_talk:Citation_Style_1/Archive 69#Cite book Harv warning where the suggestion was made: not a formal BOTREQ yet, but might help if bot operators can give some advise about how this could best be addressed, so that a more formal BOTREQ can follow (if that is the best option). -- Francis Schonken ( talk) 07:38, 19 April 2020 (UTC)
|ref=harv
parameter is no longer needed one could run a bot task to remove it.
Jo-Jo Eumerus (
talk) 08:58, 19 April 2020 (UTC)
I occasionally see this (and have done it once or twice), and it's annoying to have to un-archive when it happens. Could we get a bot to find instances of people using something like {{
DNAU|47}}
and switch them to {{
subst:DNAU|47}}
? (I know there are a few bots already running that substitute accidental transclusions, so perhaps one of them could be tasked to this without too much effort.) {{u|
Sdkb}}
talk 04:16, 28 April 2020 (UTC)
Hello! I asked over at WikiProject Council if someone could have a bot remove all appearances of Portal:Pandemic from articles, and was advised to ask here. Could anyone here help? --- Another Believer ( Talk) 20:20, 22 April 2020 (UTC)
Right now there are 1065 images tagged with F8 that need deleting (specifically, these two categories are severely backlogged). Is it possible for a bot to perform this kind of maintenance, based on transwiki checks to ensure that the Wikipedia and Commons versions of each file match, down to maximum resolution and file size? ToThAc ( talk) 17:33, 24 April 2020 (UTC)
{{
Now Commons}}
has a |reviewer=
parameter which will
categorize files accordingly; fill that in for each file you've finished reviewing/fixing. -
FASTILY 04:08, 25 April 2020 (UTC)
Basically the title. There are numerous articles with images (and other content) that should have alt text but do not.
MOS:ALT says that we should try to ensure that images have alt text for accessibility reasons, which is especially important for people using screen readers who cannot physically see the images. In a nutshell, the bot would check articles that have an embedded file such as a video, audio clip, or image. It would then add the article to a maintenance category depending on whether or not the embed has alt text, and possibly tag the article to let readers (including those using screen readers) know that alt text is missing.
A related idea would be the same as the above but for math markup, which should probably be tagged/categorized separately due to the technical knowledge required to translate it into English.
Chess
(talk) Ping when replying 01:52, 5 March 2020 (UTC)
|alt=
and feel that they are justified in removing the tag. No |alt=
parameter is better than having a repeat of the caption. --
Redrose64 🌹 (
talk) 14:52, 5 March 2020 (UTC)
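The detection half of this (categorizing articles with alt-less embeds, without trying to write alt text) could be sketched like so. This is only an illustration assuming simple [[File:...]] syntax; a real bot would use a wikitext parser such as mwparserfromhell and also honor infobox image parameters, and per the objection above it should only report, never auto-fill |alt=.

```python
import re

# Simple file-link matcher; does not handle links nested inside captions.
FILE_LINK = re.compile(r"\[\[(?:File|Image):(?P<body>[^\[\]]*)\]\]", re.IGNORECASE)

def files_missing_alt(wikitext: str) -> list[str]:
    """Return the names of embedded files that have no |alt= parameter."""
    missing = []
    for m in FILE_LINK.finditer(wikitext):
        parts = m.group("body").split("|")
        name = parts[0].strip()
        if not any(p.strip().lower().startswith("alt=") for p in parts[1:]):
            missing.append(name)
    return missing
```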
\and
and \or
are deprecated in favour of \land
and \lor
, which obviously can cause problems with screenreaders.
Help:Latex has a lot of examples, and if you look at some of the LaTeX source for them you can see how it might be incomprehensible for a screen reader. Formatting instructions would presumably also be a pain to hear, especially if there's a lot of them.

It's not uncommon that inexperienced editors will add piped inline interlanguage links to articles that exist on a different Wikipedia in order to avoid red links. This is a contravention of the MOS, as it surprises the reader, and prevents links to valid articles once they are created. Such piped links should be replaced with the {{ Interlanguage link}} template, e.g. Special:Diff/866719019.
Is this something that could feasibly be done by a bot? Are there valid intentional uses that shouldn't be changed? (I guess it's clearer with languages that use non-Latin script, since I can't think of a good reason to pipe a foreign name under English text, but I'm not sure about those which use the Latin alphabet.) -- Paul_012 ( talk) 02:48, 13 March 2020 (UTC)
[[:th:ลมซ่อนรัก (ละครโทรทัศน์)|Hidden Love]]
, which would need to be converted to {{ill|Hidden Love (TV series)|lt=Hidden Love|th|ลมซ่อนรัก (ละครโทรทัศน์)}}
. --
Paul_012 (
talk) 18:58, 16 March 2020 (UTC)

There are a lot of URLs in sources that have Facebook tracking parameters attached; they should be deleted. ( https://en.wikipedia.org/?search=fbclid&title=Special:Search&fulltext=1&ns0=1) I think that would be a fine job for a bot, and as it's probably happening unintentionally when editors copy and paste links without much thought, it should probably be done once per day or week or so. The same probably goes for Google Analytics extensions with UTM: https://en.wikipedia.org/?title=Spezial:Suche&limit=500&offset=0&ns0=1&search=utm_source&advancedSearch-current={} Grüße vom Sänger ♫ ( talk) 15:02, 22 February 2020 (UTC)
|url=
they are now mismatched and look like different URLs, other bots might pick up on that and restore the archive URL version of the source URL, since it is the authority (once the link is dead). Personally, I would bypass any citation that involves an archive URL too many complications. --
Green
C 16:23, 22 February 2020 (UTC)
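Stripping the tracking parameters themselves is straightforward with the standard library; a minimal sketch (the parameter list is an assumption, and per GreenC's caution this would only be applied to |url= values with no matching archive URL):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Query parameters that exist only for click tracking (assumed list).
TRACKING = {"fbclid", "gclid", "utm_source", "utm_medium", "utm_campaign",
            "utm_term", "utm_content"}

def strip_tracking(url: str) -> str:
    """Remove known tracking query parameters from a URL."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in TRACKING]
    return urlunsplit(parts._replace(query=urlencode(kept)))
```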
Hello,
I have a working bot; its purpose is to give readers and editors alike information regarding the presence of different content in other Wikipedia language editions for the same article they are reading. This information can be used to guide the reader to content which will add to their study, and/or highlight that content in another language happens to be biased. I hope that such a bot would be used to ratchet up the level of discourse across language editions and spread useful knowledge between them.
My proposal is that the bot be allowed to add a small phrase to the 'See Also' section of a given article, such as, "The Russian edition of this article is 70% different from this edition. You can view it here."
As I was working on this bot, there was an ongoing discussion at the Idea Lab. You can view it at Wikipedia Edition Article Similarity Bot.
I assert that the bot works: its most limiting factor right now is that I only have access to 2 million characters of translation capability per month for article comparisons, which limits the bot to a relative handful of articles in output per month. You can see the code here.
This is not a bot request--it is a request for the bot to have edit capabilities. If there is a more appropriate place for this request, please let me know.
Theory42 ( talk) 16:16, 26 March 2020 (UTC)
To add a merge template to the other page where only one page has had the merge template added; that is, to add reciprocal tags. This has been proposed before, and developed consensus, but doesn't seem to have been finished or the scope has been expanded too far until it becomes controversial. Rather than starting from scratch, it might be possible to resurrect Mutleybot or to add this as a Merge bot task, something wbm1058 has suggested before ( Wikipedia:Bot requests/Archive 70#Removing bad merge requests). I suggest that the scope of the bot be simple, and that it not be designed to interpret merge consensus (or not), something that has been controversial in the past. Klbrain ( talk) 07:50, 10 April 2020 (UTC)
Hi. The main resource for sourcing basic biography data on Olympians, Sports Reference, has now been switched off. I started a recent thread about this at the Olympic Project. There are tens of thousands of articles that source Sports Ref. However, there's quite a simple fix that can be done to stop the links from going dead: just changing "cite web" to "cite sports-reference" in the ref, as per this example, adds the web archive link. This is per the recent change made to the cite template by Zyxw.
So therefore, please can a bot change anything from "cite web" to "cite sports-reference" where this is used on WP? Many thousands of articles already use the latter, but even more do not. Please ping me if you need any more info. Thanks. Lugnuts Fire Walk with Me 13:58, 17 May 2020 (UTC)
{{
cite sports-reference}}
this will create problems. --
Green
C 14:43, 17 May 2020 (UTC)
Done Around 150k links archived in around 100k articles. -- Green C 02:39, 24 May 2020 (UTC)
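For the record, the conditional swap this thread converged on could be sketched as below. This is my own illustration, not GreenC's implementation: it renames the template only when the citation actually points at sports-reference.com and skips citations that already carry an archive URL, per the problem flagged above.

```python
import re

CITE_WEB = re.compile(r"\{\{\s*cite web\b", re.IGNORECASE)

def retarget_citation(template_text: str) -> str:
    """Switch {{cite web}} to {{cite sports-reference}} when safe to do so."""
    if "sports-reference.com" not in template_text:
        return template_text
    if re.search(r"\|\s*archive-?url\s*=\s*\S", template_text):
        return template_text  # leave already-archived citations alone
    return CITE_WEB.sub("{{cite sports-reference", template_text)
```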
I think it would be effective to have a bot that condenses multiple “article issue” templates, such as “more citations needed” or “Missing information” into the “This article has multiple issues” so it appears as one notice instead of several consecutive notices. Users might forget to do this, or add to previously existing issue templates and forget to condense it using the ‘multiple issues’ template. I propose that this bot would apply the condensing template to any article with more than 2 notices at the top, or whatever the official guidelines are for this according to the Manual of Style as I’m not yet sure what they say about the number of templates allowed to appear. This would be fully automated as opposed to the semi-automation of the AutoWikiBrowser that already has this capability.
This might already exist or have been discussed, so forgive me if I’m wrong.
Thanks! MrSwagger21 ( talk) 10:50, 7 May 2020 (UTC)
There are many bots whose job involves making regular updates or are otherwise anticipated to make edits frequently. When such bots stop operating, it might just be because they're no longer needed and have been retired, or it might be indicative of a problem. I propose a bot that monitors the edits of other bots known to make frequent edits (those bots could be added to a category, or to one of several categories based on level of activity expected), and sends an automated alert to a noticeboard if the bot makes no edits within the expected timeframe. At the noticeboard, editors could review the alerts, marking some as no issue and placing others into a queue for repairs. (This is somewhat a follow-up to my brainstorming from March; feel free to lmk if it's just as non-viable, but I wanted to at least throw it out here.) {{u| Sdkb}} talk 17:20, 1 May 2020 (UTC)
meta=featureusage
, although doing so is tedious and would require knowing the agent for each bot. I'm skeptical of the utility of this in general, but in theory such a tool could check bots with no edits, and those known to make few or none, that way. In practice,
User:Joe's Null Bot/source does not list a custom useragent, so it'll be MediaWiki::API/0.41
or whatever version it's using. Trivial to add. ~ Amory (
u •
t •
c) 15:14, 2 May 2020 (UTC)
Throwing ideas out: we could simply have a table of bots sortable by bot name, operator name, number of edits made, and date of last edit. Then have a disclaimer at the top that several bots, like null bots, will not make edits. It would give a good idea at a glance of which bots are active and which aren't. Headbomb { t · c · p · b} 14:56, 2 May 2020 (UTC)
<center>—</center>
instead of hyphens to indicate an inexistent entry.
Headbomb {
t ·
c ·
p ·
b} 18:14, 2 May 2020 (UTC)
@ Majavah: I made some tweaks [6]. The class="center" thing messes with column widths, so I went with <center> </center> tags. The final ' of diffs should be done with {{ '}} (or you could just make use of {{ '}} everywhere instead of '). But this table looks pretty good to me. Headbomb { t · c · p · b} 01:12, 5 May 2020 (UTC)
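The alerting logic for the original monitoring proposal is simple once each bot's last edit is known (e.g. via list=usercontribs with uclimit=1). A sketch of just that logic, with per-bot expected intervals as assumed inputs:

```python
from datetime import datetime, timedelta, timezone

def overdue_bots(last_edits: dict[str, datetime],
                 expected: dict[str, timedelta],
                 now: datetime) -> list[str]:
    """Names of bots whose last edit is older than their expected interval.

    Bots absent from `expected` fall back to a default 7-day window
    (an arbitrary assumption for this sketch).
    """
    return sorted(name for name, ts in last_edits.items()
                  if now - ts > expected.get(name, timedelta(days=7)))
```

A noticeboard bot would run this on a schedule and post the returned names for human review.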
This task was previously handled by Acebot (BFRA here), but it stopped functioning in November 2019. The manual updates done by several editors since then indicate that there is continued demand for the information in the table. Its operator appears to have retired. {{u| Sdkb}} talk 17:05, 1 May 2020 (UTC)
Doing... - an opportunity to test out tabular data on Commons with a Lua template. If it works, the Lua module can be rolled out to other wiki languages without needing bot perms or bot edits. -- Green C 18:24, 1 May 2020 (UTC)
Done, new system working with Commons tabular data. Installed on 60+ wikis. -- Green C 02:41, 24 May 2020 (UTC)
I would like to generate a list of Wikipedia Editors on the Luganda Wikipedia by Article Count https://lg.wikipedia.org/wiki/Olupapula_Olusooka
To be able to generate something like this /info/en/?search=Wikipedia:List_of_Wikipedians_by_article_count — Preceding unsigned comment added by Kateregga1 ( talk • contribs) 19:54, 19 April 2020 (UTC)
Kateregga1, if you want the bot that generates Wikipedia:List_of_Wikipedians_by_article_count to also run for Lgwiki, post a request on the talk page of the list. I recently set it up on Trwiki for example. -- Green C 00:10, 23 April 2020 (UTC)
GreenC bot by @ GreenC: has a job that detects when a file on Wikipedia has the same name as one on Commons but is a different image, and tags the local file with Template:Shadows Commons, which puts it in Category:Wikipedia files that shadow a file on Wikimedia Commons.
I've been processing the files in that category, and many of the files on Commons are copyright violations, which are deleted within hours/days of upload. It would be useful for a bot to review the files tagged with Template:Shadows Commons and remove that template if there is no longer a file on Commons with the same name.
At any given time there are only a small number of files in that category, 30 or so, so this could potentially be done more than once a day without being very resource intensive, though once a day would be plenty useful. The Squirrel Conspiracy ( talk) 06:43, 21 March 2020 (UTC)
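The check itself is a single API query per file, e.g. https://commons.wikimedia.org/w/api.php?action=query&titles=File:X.jpg&format=json — a page entry carrying the "missing" key means no Commons file of that name remains, so the {{Shadows Commons}} tag can be removed. A sketch of the response-interpreting half (the HTTP fetch is omitted):

```python
def commons_file_exists(api_response: dict) -> bool:
    """Interpret an action=query result for a single File: title.

    MediaWiki marks nonexistent titles with a "missing" key on the
    page object.
    """
    pages = api_response["query"]["pages"]
    page = next(iter(pages.values()))
    return "missing" not in page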
This is a simpler multi-article move than the last one I requested and withdrew, since the targets are all redlinks. See the discussion at Wikipedia talk:WikiProject Rivers#More tributary disambiguators to update and complete list of old and new titles at User:Dicklyon/tributaries, which are listed like these examples (about 500 of them):
I appreciate your help. Dicklyon ( talk) 18:05, 5 June 2020 (UTC)
OK, new list of about 54 from Certes has been reviewed and made explicit, herebelow. Dicklyon ( talk) 05:04, 7 June 2020 (UTC)
The list of vital articles gets updated on a regular basis. Sometimes page titles are changed. I think we should have a bot update the pages to make work easier for humans. Interstellarity ( talk) 13:14, 2 June 2020 (UTC)
I noticed that a "Wiki Loves Pride" mass message to all wikiprojects from June 2015 is not being auto-archived by some of those projects that set up autoarchiving (such as WT:IRAN). It is missing a timestamp. Can a bot be set up to archive all those messages that still remain on the main talkpages to the proper archives? Or can a boit be set up to to add timestamps to all the messages that remain on the main talk pages? (June 2015 timestamp) This is 5 years out of date, and seems odd to inform people to do still some thing 5 years after it already ended.
-- 65.94.170.207 ( talk) 18:50, 28 May 2020 (UTC)
(timestamp may not be accurate) {{subst:Unsigned|Another Believer|15:13, 3 June 2015 (UTC)}}
on the end of each of them then?
Naypta ☺ |
✉ talk page | 22:34, 28 May 2020 (UTC)
Wikipedia has a very long list of section anchors that need to be repaired. To fix these broken links, it is necessary to add an {{ anchor}} whenever a section's title is changed.
User:Dexbot was designed to correct these broken links, but it hasn't corrected any of them in several years. Can Dexbot be configured to fix these links again? Jarble ( talk) 16:40, 22 May 2020 (UTC)
Done I started the bot, here's the first edit: Special:Diff/958520923 Ladsgroup overleg 08:31, 24 May 2020 (UTC)
This defunct bot removed inappropriate uses of {{ Current}} (which per its documentation is meant only for short-term use on articles receiving a high edit count) by removing it from articles that have not been edited in more than two hours. It stopped functioning I think in 2013, and since then (perhaps because it stopped) the standards have gotten increasingly lax. I propose that we bring it back (with perhaps a slightly longer edit window, at least to start). {{u| Sdkb}} talk 08:54, 19 May 2020 (UTC)
I am a bureaucrat on Real Life Villains Wikia and the other bureaucrat wanted to do a category cleanup, but changed his mind about some of the categories. Unfortunately, the user he tasked with removing the categories took his job too seriously and removed them anyway even after we decided to keep them. On any page where User:Super Poison Ivy removed the categories Anti-Semitic, Anti-Christian, Bully, Islamophobes, Fascist, and Communist, I want those categories to be restored. — Preceding unsigned comment added by Bjanderson94 ( talk • contribs)
Would it be controversial to request a bot to create redirects in the Wikipedia talk namespace to the talk pages of the targets of redirects in the Wikipedia namespace? I've typed WT:xxx, expecting it's a shortcut given WP:xxx is, only to be disappointed it doesn't exist from time to time. Nardog ( talk) 01:26, 24 February 2020 (UTC)
Should only...Why? It's not like WP: shortcuts technically exist in the main namespace, as in H:. I'd like e.g. WT:Actors to work, even though WP:Actors isn't marked as a shortcut. (I can see an argument for avoiding shortcuts to sections, though.) Nardog ( talk) 03:23, 24 February 2020 (UTC)
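The mapping the bot would compute is mechanical: for each redirect in projectspace, mirror it into the talk namespace if the talk page doesn't already exist. A sketch (function name mine; it skips targets with section anchors, reflecting the reservation above):

```python
from typing import Optional, Tuple

def talk_redirect(source: str, target: str) -> Optional[Tuple[str, str]]:
    """Given a Wikipedia: redirect source -> target, return the
    (talk source, talk target) pair the bot would create, or None."""
    prefix = "Wikipedia:"
    if not (source.startswith(prefix) and target.startswith(prefix)):
        return None
    if "#" in target:  # skip shortcuts that point at sections
        return None
    to_talk = lambda t: "Wikipedia talk:" + t[len(prefix):]
    return to_talk(source), to_talk(target)
```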
The {{
literal translation}} and {{
langnf}} templates were originally written with simple unquoted outputs of (if called with just "example text" as their argument) Spanish for example text
and lit. example text
. This isn't the best way to present a translated string, and in hundreds of articles users have very reasonably added quotemarks into the template calls (eg. {{langnf||Spanish|"Rich Port"}} and {{lit.|"Free Associated State of Puerto Rico"}} on the
Puerto Rico article).
Last week User:Ravenpuff updated the two templates to include apostrophes around the translated phrase. This resulted in some articles displaying nested quotation marks, such as:-
Puerto Rico (Spanish for '"Rich Port"'; abbreviated PR), officially the Commonwealth of Puerto Rico (Spanish: Estado Libre Asociado de Puerto Rico, lit. '"Free Associated State of Puerto Rico"')
I suggested adding a {{ trim quotes}} to the templates to avoid this, and Ravenpuff suggested fixing all of the hundreds or thousands of template calls in articles instead. Which sounds like a job for a bot, so here I am. A bot would simply be tasked with checking all usages of the {{ literal translation}} and {{ langnf}} templates (and their synonyms), to look for any argument that starts and ends with a quotation mark, and remove those marks. If an argument contained more than two quotemarks, which is perhaps plausible where an editor offers multiple translations, that should be flagged somehow.
Is this worth creating a bot for, or is {{ trim quotes}} a better solution? Or are there other options to explore? -- Lord Belbury ( talk) 10:52, 16 April 2020 (UTC)
@
Jonesey95: Best I can do offhand has been
User:Lord Belbury/sandbox, which needs another pass to ignore all italics markup, and I'm stumped by Lua's handling of curly quotes (I've never used Lua before): it's beyond me why a match of s:match([[([“])]])
is returning true for a single curly apostrophe. Will take another look later, would appreciate any feedback (or a pointer to a better talk page to ask for templating help).
While this is being worked on, should {{ literal translation}} and {{ langnf}} be left as they are (with Ravenpuff's simple " put quotemarks around every string, even if it already has them" update) or reverted to leaving the string unchanged? -- Lord Belbury ( talk) 10:50, 23 April 2020 (UTC)
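(On the Lua puzzle above: Lua patterns operate on bytes, not characters, so the class [“] matches any one of the three UTF-8 bytes 0xE2 0x80 0x9C; the right single quote ' encodes as 0xE2 0x80 0x99, whose first byte matches, which is why the match "succeeds".) For the bot route, the quote-trimming rule could be sketched as below; the function name and the flag-for-review convention are my own, and straight apostrophes are deliberately not trimmed so wiki italics markup is left alone.

```python
# One pair of surrounding quotes is trimmed; anything with further
# quotemarks inside (e.g. multiple alternative translations) is flagged
# for manual review instead.
QUOTES = {'"': '"', '\u201c': '\u201d', '\u2018': '\u2019'}

def trim_arg_quotes(arg: str):
    """Return (trimmed_arg, needs_review)."""
    s = arg.strip()
    if len(s) >= 2 and s[0] in QUOTES and s[-1] == QUOTES[s[0]]:
        inner = s[1:-1]
        if any(q in inner for q in '"\u201c\u201d\u2018\u2019'):
            return arg, True  # extra quotemarks inside: flag for a human
        return inner, False
    return arg, False
```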
As the result of a move request, 2019–20 coronavirus pandemic was moved to COVID-19 pandemic. Unfortunately, there are a metric tonne of articles (and templates and categories) that have "2019–20 coronavirus pandemic" (or "2020 coronavirus pandemic") in the name. Accordingly, it would be appreciated if we could get a bot that would move all of these to the consistent "COVID-19 pandemic" name. This matter was briefly discussed in the move request, with unanimous support for consistency, and it's quite obvious that all these titles should be in line with the main article, named so only because of the previous name.
While this is a one-time request, I believe this is too time-consuming with AWB as these are title changes. But happy to be told otherwise. -- tariqabjotu 03:14, 4 May 2020 (UTC)
Following up from this conversation, I think it would be helpful to have a bot automatically apply the appropriate padlock icon to pages after they become protected. {{u| Sdkb}} talk 09:39, 21 May 2020 (UTC)
Could I also mention that it would be useful to have a bot which fixes incorrect protection templates? MusikBot removes incorrect ones, as I pointed out here, but it doesn't replace them (and could be the cause of some of these missing templates). This seems like a related subject. RandomCanadian ( talk / contribs) 18:29, 21 May 2020 (UTC)
Coding... — MusikAnimal talk 17:34, 8 June 2020 (UTC)
BRFA filed — MusikAnimal talk 01:04, 11 June 2020 (UTC)
There is already an existing score parameter that determines whether a team wins or loses a match. The w/l parameter is deemed dubious and redundant; hence the score parameter should be used to assess the win/loss logic instead.
The requested actions, by scenario:
- Both w/l and score parameters are empty (|w/l= |score=): remove the w/l parameter.
- w/l value is empty and score is just a dash (|score=- with a hyphen, or |score=– with an en dash): remove the w/l parameter.
- w/l is either w or l and score contains dash-separated numbers (|w/l=w |score=100-90 with a hyphen, |w/l=l |score=90–100 with an en dash, including scores wrapped in links such as |score=[http://www.game.com/boxscore/game/1 100–90] or |score=[[Duke–Michigan men's basketball rivalry|90–100]]): remove the w/l parameter.
- w/l is either w or l and score contains an HTML &ndash; between the scores (|w/l=w |score=100&ndash;90, |w/l=l |score=90&ndash;100, and the same linked variants): remove the w/l parameter.
- w/l is either w or l and score contains any other value or is empty, i.e. the winner/loser of the match is known but the final score is not available (|w/l=w or |w/l=l with |score=Default, |score=Forfeit, or |score= empty): rename the parameter to status, keeping the w or l value.
- w/l value is p (|w/l=p): rename the parameter to status.
- w/l parameter not found: do nothing.
Let me know if I miss any other scenarios. – McVahl ( talk) 07:09, 11 June 2020 (UTC)
Some scores use the HTML entity &ndash; instead of "–" (for example, |score=90&ndash;100). Sorry for the late notice. I only observed this today when PrimeBot made some edits, as this case was not covered in the initial 25 amendments the other day. –
McVahl (
talk) 06:27, 21 June 2020 (UTC)
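The decision rules in the scenario list above can be condensed into a small function. This is only a sketch of the decision logic (the action strings are my own shorthand); parsing and rewriting the actual template parameters is left to the bot framework.

```python
import re

# "-" must come before "&" alternatives so the entity is tried on "&".
DASH_ONLY = re.compile(r"^\s*(?:-|\u2013|&ndash;)\s*$")
HAS_SCORE = re.compile(r"\d\s*(?:-|\u2013|&ndash;)\s*\d")

def wl_action(wl: str, score: str) -> str:
    """Map a (w/l, score) parameter pair to the requested bot action."""
    wl, score = wl.strip().lower(), score.strip()
    if wl == "p":
        return "rename to |status="
    if wl in ("w", "l"):
        if HAS_SCORE.search(score):
            return "remove |w/l="          # score already encodes the result
        return "rename to |status="        # result known but no numeric score
    if not wl and (not score or DASH_ONLY.match(score)):
        return "remove |w/l="
    return "do nothing"
```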
Example. A missing "}}" is a not-uncommon problem. It can't be tracked by CS1|2 itself because the template is never invoked. I would caution against attempting an automated fix, because when "}}" doesn't exist there are often other structural problems, and there might be embedded templates etc. -- Green C 15:29, 18 June 2020 (UTC)
{{
malformed template}}
) is great because it could be visible in the wikitext, produce a red warning message, allow for a tracking cat, and have argument options for the bot name and date, plus whatever future requirements. --
Green
C 17:05, 18 June 2020 (UTC)
About 106 pages in article and template space contain wikilinks that begin with w:en:
, which is redundant, and
VPT consensus was that this extra code can interfere with various tools and scripts that expect links to be in a certain form. Would it be possible for an AWB-wielding editor to go through and remove those prefixes, at least in article space? The edits in template space would need manual inspection to see if they are intentional for some reason. Pinging @
Redrose64,
Trialpears,
Xaosflux,
Johnuniq, and
BrownHairedGirl:, who attended that VPT discussion. –
Jonesey95 (
talk) 03:55, 1 May 2020 (UTC)
[[w:en:Foo|Bar]]
, which has now been changed to [[Foo|Bar]]
. That's fine ... however, many of the links were of the form [[w:en:Foo|Foo]]
, and that first run has left them as [[Foo|Foo]]
, which needs to be consolidated as [[Foo]]
. So I will do a second run through the set, just applying genfixes. --
BrownHairedGirl
(talk) • (
contribs) 05:08, 1 May 2020 (UTC)
w:en:
. I will leave to others the manual inspection and possible cleanup of those templates. @
Jonesey95,
Redrose64,
Trialpears,
Xaosflux, and
Johnuniq: do any of you want to do the templates? --
BrownHairedGirl
(talk) • (
contribs) 05:31, 1 May 2020 (UTC)
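The two passes BrownHairedGirl describes (strip the w:en: prefix, then collapse the resulting [[Foo|Foo]] links) amount to a pair of regex substitutions. A sketch, with an extra guard of my own that leaves File:/Image:/Category: targets untouched, since removing the prefix there would transclude rather than link:

```python
import re

WEN = re.compile(r"\[\[\s*w:en:(?!\s*(?:File|Image|Category)\s*:)", re.IGNORECASE)

def strip_wen(wikitext: str) -> str:
    """Remove redundant w:en: prefixes and collapse [[Foo|Foo]] to [[Foo]]."""
    out = WEN.sub("[[", wikitext)
    return re.sub(r"\[\[([^\[\]|]+)\|\1\]\]", r"[[\1]]", out)
```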
:" in case the parameter is File/Category? any other bad side effects?)

If you have short citations like
{{harvnb|Smith|2001|pp=13}}
{{harvnb|Smith|2001|p=1-3}}
Those will appear like
Those are obviously wrong, and should be fixed so they would appear like this
{{harvnb|Smith|2001|p=13}}
{{harvnb|Smith|2001|pp=1–3}}
Those will appear like
Those should be an easy fix for an AWB bot or similar. Those should cover all {{ harv}}/{{ sfn}}-like templates. Headbomb { t · c · p · b} 13:02, 11 April 2020 (UTC)
{{harvnb|Smith|2001|p=p. 13}} → {{harvnb|Smith|2001|p=13}}
{{harvnb|Smith|2001|p=p. 1–3}} → {{harvnb|Smith|2001|pp=1–3}}
{{harvnb|Smith|2001|pp=pp. 13}} → {{harvnb|Smith|2001|p=13}}
{{harvnb|Smith|2001|pp=pp. 1–3}} → {{harvnb|Smith|2001|pp=1–3}}
{{harvnb|Smith|2001|p=pp. 13}} → {{harvnb|Smith|2001|p=13}}
{{harvnb|Smith|2001|p=pp. 1–3}} → {{harvnb|Smith|2001|pp=1–3}}
{{harvnb|Smith|2001|pp=p. 13}} → {{harvnb|Smith|2001|p=13}}
{{harvnb|Smith|2001|pp=p. 1–3}} → {{harvnb|Smith|2001|pp=1–3}}
Headbomb { t · c · p · b} 13:05, 11 April 2020 (UTC)
p=3-1
, for a document where page 1 of part 3 is called "3-1". –
Jonesey95 (
talk) 05:00, 23 April 2020 (UTC)
|page=3{{hyphen}}1
in those cases in CS1/CS2 templates. But if that's somehow not an acceptable solution here, the bot could take care of the rest. Or assume that |p=p. 3-4
should be converted to |p=3-4
and not |pp=3–4
.
Headbomb {
t ·
c ·
p ·
b} 15:21, 2 May 2020 (UTC)
While working on a stub recently, I noticed the US Navy's Naval History and Heritage Command has updated the syntax of links to entries in the important reference Dictionary of American Naval Fighting Ships. This means that many outside links to the dictionary and tools like Template:DANFS (which is transcluded on hundreds if not thousands of US Navy ship articles) now have incorrect HTML targets. Here are three examples of repairs I've performed personally: [12], [13], [14]. As those examples reveal, the new webpage structure isn't complicated, and while I suppose I could go through all the articles by hand and rapidly improve my edit count, this is exactly the sort of thing an automated editor is best suited to solve. I've never before requested a bot, so I'm asking meekly for advice. BusterD ( talk) 15:54, 2 May 2020 (UTC)
Moved to WP:URLREQ#BFI. -- Izno ( talk) 12:32, 3 July 2020 (UTC)
After editing a lot of music articles that had no album cover in Template:Infobox_album ( Category:Album_infoboxes_lacking_a_cover), I realized that it was a very repetitive process that could be streamlined by having a bot that:
I looked in the rejected ideas and bots, and it seems like none really tried to attack this. My programming knowledge is okay at best, but I couldn't get any of the Java frameworks working so I'm out of luck doing this myself. ⠀TOMÁSTOMÁSTOMÁS⠀ TALK⠀ 00:49, 22 June 2020 (UTC)
... a separate, specific non-free use rationale for each use of the item, as explained at Wikipedia:Non-free use rationale guideline. The rationale is presented in clear, plain language and is relevant to each use.This is not possible for a bot to do except by means of boilerplated text, and that would imply that little or no thought has been put into the wording of the FUR. -- Redrose64 🌹 ( talk) 11:21, 23 June 2020 (UTC)
Could a bot run a SQL query or similar to compile COVID-19 data into an editable data sheet that another (or the same) bot could import to the Wikipedia COVID-19 pandemic update map/graph? — Preceding unsigned comment added by 80.41.138.48 ( talk) 16:24, 19 June 2020 (UTC)
I hope that someone can help me by making a bot add the template noting that articles have been added to Wikipedia:WikiProject Europe/The 10,000 Challenge, for example. There are several Challenge pages, and there are templates to be added to the articles' talk pages noting that the articles have been added to the Challenge project page, but the bot that did this task stopped a long time ago. Please ping me if this can be done. BabbaQ ( talk) 17:17, 27 April 2020 (UTC)
Coding... I did a quick sample/proof of concept of going in and reviewing pages for eligibility; here's a random sampling of pages that appear to be eligible. Adding the template to the talk page is easy compared to unwinding the list.
@ BabbaQ: Was there a consensus discussion about applying {{ WPEUR10k}} to these talk pages? I suspect this isn't controversial, but it might be needed when I go to file the BRFA. Hasteur ( talk) 19:45, 7 June 2020 (UTC)
If any bot could take data from https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6 and https://www.worldometers.info/coronavirus/#countries and edit Template:Cases in 2019–20 coronavirus pandemic and Template:Territories affected by the 2019-20 coronavirus pandemic automatically with the latest information that would be great. Sam1370 ( talk) 00:21, 8 April 2020 (UTC)
Covid-19 is undoubtedly testing our public health, medical, and economic systems. But it's also testing our ability to process so much frightening and imminently consequential data. All these data add up to the Covid-19 "infowhelm," the term I use to describe the phenomenon of being overwhelmed by a constant flow of sometimes conflicting information.-- Green C 16:56, 6 May 2020 (UTC)
There are thousands of file talk pages in Category:File-Class United States articles for files that were moved to Commons and deleted in 2011 or 2012. These talk pages contain no content except a transclusion of {{ WikiProject United States}} (or one of its redirects) and should have been deleted long ago. These transclusions are of no use to the WikiProject and should be removed; however, simply removing them would leave these talk pages blank and mislead a viewer seeing a blue link into thinking there is something there. More broadly, there is no reason for these Commons files to be project-tagged on en.wikipedia—local talk pages for Commons files generally lead to split discussions or invite occasional comments that no one sees or answers.
I asked about these talk pages at the WikiProject's talk page (see Wikipedia talk:WikiProject United States#Categorizing files on Commons), and was told to "go with [my] own instincts on this". Any page in Category:File-Class United States articles that (1) does not have a corresponding file on en.wikipedia and (2) contains no content other than a transclusion of {{ WikiProject United States}} (or a redirect), should be speedily deleted under criterion G6 (routine housekeeping). Given the sheer number of pages involved, I am hoping a bot could take on the task. Thanks, -- Black Falcon ( talk) 23:21, 19 April 2020 (UTC)
Very easy typo task (no proper rights to do it myself): == Referencias == -> == References == and ==Referencias== -> == References == -- Emptywords ( talk) 09:02, 20 July 2020 (UTC)
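In case it helps whoever picks this up, the heading replacement above can be sketched as a one-line regex pass over the wikitext (illustrative Python, not an existing tool; it tolerates the optional spaces shown in both variants of the request):

```python
import re

def fix_referencias(wikitext: str) -> str:
    """Replace the Spanish section heading 'Referencias' with 'References',
    tolerating an optional space inside the heading markers."""
    return re.sub(r"^== ?Referencias ?==$", "== References ==",
                  wikitext, flags=re.MULTILINE)
```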
For all citations to pages under westmidlandbirdclub.com, please add |url-status, thus, as the domain has been cyber-squatted. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 19:54, 14 July 2020 (UTC)
Done -- Green C 00:44, 18 July 2020 (UTC)
Hello!
I checked this category, which is for so-called valid SVG files tagged with {{ Valid SVG}}, however I noticed that many were in fact invalid. I would like a bot to check all files in the category to see if they are actually valid or if the files are mistagged. Steps:
1. Check whether the file validates at http://validator.w3.org/check?uri=http:{{urlencode:{{filepath:{{#titleparts:{{PAGENAME}}}}}}}}; if yes, ignore; if no, see 2.
Pinging @ JJMC89: who is familiar with the File: namespace.
I think this is quite important to do since now probably hundreds of files are lying about their validity which isn't good.
Thanks! Jonteemil ( talk) 07:21, 7 May 2020 (UTC)
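As a cheap first pass before hitting the W3C validator for each file, a bot could filter out files that are not even well-formed XML. This is only a sketch of that pre-filter (well-formedness is necessary but not sufficient for SVG validity, so files that pass would still need the full validator check):

```python
import xml.etree.ElementTree as ET

def is_well_formed_svg(svg_bytes: bytes) -> bool:
    """First-pass check: parse the file as XML and confirm the root
    element is <svg>. Catches grossly invalid files; full SVG
    validation still needs the W3C validator."""
    try:
        root = ET.fromstring(svg_bytes)
    except ET.ParseError:
        return False
    # A namespaced tag looks like '{http://www.w3.org/2000/svg}svg'
    return root.tag.rsplit("}", 1)[-1] == "svg"
```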
Can someone create a bot that will look at the latest date of the maps when the maps are updated and update the date automatically? I tried putting in the TODAY template, but I got reverted by Boing! said Zebedee that it would not work. I was hoping someone could work on a bot to save editors' time updating the dates on the maps. Interstellarity ( talk) 19:45, 26 May 2020 (UTC)
{{Cases in the COVID-19 pandemic|date}}, fetching a value that would be stored at the Commons file and updated by the map updater whenever they upload a new version. As an aside, thank you, Interstellarity, for all the work you've put in updating map date captions; I recognize it's a tedious task. {{u| Sdkb}} talk 19:59, 26 May 2020 (UTC)
{{wikidata|qualifier|Q81068910|P1846|P585}}, which produces - and you'd then only have to update the single qualifier on Wikidata ( wikidata:Q81068910#P1846) for it to update on all wikis. I've had a chat with a couple of admins about this and the general consensus is that it's okay to do performance-wise, but be careful with how you use this - using the wikidata template in this way can be taxing on the server, so try to use it the fewest times you can!

If you're happy with that method, I can run through and update the relevant bits on enwiki - you'll know better than I will where the bits are on the other wikis. Naypta ☺ | ✉ talk page | 18:26, 28 May 2020 (UTC)
Comma-separated values like A, B, C can instead be converted into or {{hlist|A|B|C}}. This is usually found in infoboxes. Additionally, values separated by a <br/> can also be converted into a list. I'mFeistyIncognito 16:39, 14 June 2020 (UTC)
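The core conversion being requested can be sketched very simply (illustrative Python; deliberately naive, since real infobox values can contain commas inside links or templates, which is exactly why a bot would need more care than this):

```python
def to_hlist(value: str) -> str:
    """Convert a comma-separated infobox value like 'A, B, C'
    into {{hlist|A|B|C}}. Naive: commas inside [[piped links]]
    or nested templates would be split incorrectly."""
    items = [item.strip() for item in value.split(",")]
    return "{{hlist|" + "|".join(items) + "}}"
```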
{{ hlist}} uses <div>...</div> tags, and so cannot be wrapped by any tags or templates that use <span>...</span> tags, like {{ nowrap}}. If an infobox wraps a parameter with {{ nowrap}}, converting that parameter's contents to use {{ hlist}} will lead to invalid HTML output. – Jonesey95 ( talk) 22:17, 14 June 2020 (UTC)
== List of your created articles that are in [[:Category:Harv and Sfn no-target errors]] ==

A few articles you created are in need of some reference cleanup. Basically, some short references created via {{tl|sfn}} and {{tl|harvnb}} and similar templates have missing full citations or have some other problems. This is ''usually'' caused by copy-pasting a short reference from another article without adding the full reference, or because a full reference is not making use of citation templates like {{tl|cite book}} (see [[Help:CS1]]) or {{tl|citation}} (see [[Help:CS2]]). See [[Category:Harv and Sfn template errors#Resolving errors|how to resolve issues]].

To easily see which citation is in need of cleanup, you can check '''[[:Category:Harv and Sfn template errors#Displaying error messages|these instructions]]''' to enable error messages ('''Svick's script''' is the simplest to use, but '''Trappist the monk's script''' is a bit more refined if you're interested in doing deeper cleanup). The following articles could use some of your attention:

{{columns-list|colwidth=30em|
#[[Ancient 1]]
#[[Article 2]]
...
}}

If you could add the full references to those articles, that would be great. Again, the easiest way to deal with those is to install Svick's script per [[:Category:Harv and Sfn template errors#Displaying error messages|these instructions]]. If after installing the script, you do not see an error, that means it was either taken care of, or was a false positive, and you don't need to do anything else. Also note that the use of {{para|ref|harv}} is no longer needed to generate anchors. ~~~~
(The bot should skip users that already have "List of your created articles that are in [[:Category:Harv and Sfn no-target errors]]" in headers, since they already have such a report.)
Headbomb { t · c · p · b} 23:18, 18 May 2020 (UTC)
Let's make a bot that creates each page that day at midnight. 95.49.166.194 ( talk) 13:10, 17 June 2020 (UTC)
I want a bot to do all of my editing. It is hard to do editing. It may help with deleting pages if you want to. Having a bot also puts less stress on editing. Was an explorer —Preceding undated comment added 14:21, 4 August 2020
Virtually every one of the 3000-ish places listed in the 132 sub-lists of National Register of Historic Places listings in Virginia has an article, and with very few exceptions, both lists and articles have coordinates for every place, but the source database has lots of errors, so I've gone through all the lists and manually corrected the coords. As a result, the lists are a lot more accurate, but because I haven't had time to fix the articles, tons of them (probably over 2000) now have coordinates that differ between article and list. For example, the article about the John Miley Maphis House says that its location is 38°50′20″N 78°35′55″W / 38.83889°N 78.59861°W, but the manually corrected coords on the list are 38°50′21″N 78°35′52″W / 38.83917°N 78.59778°W. Like most of the affected places, the Maphis House has coords that differ only a small bit, but (1) ideally there should be no difference at all, and (2) some places have big differences, and either we should fix everything, or we'll have to have a rather pointless discussion of which errors are too little to fix.
Therefore, I'm looking for someone to write a bot to copy coords from each place's NRHP list to the coordinates section of {{ infobox NRHP}} in each place's article. A few points to consider:
I've copied this request from an archive three years ago; an off-topic discussion happened, but no bot operators offered any opinions. Neither then nor now has any discussion has yet been conducted for this idea; it's just something I've thought of. I've come here basically just to see if someone's willing to try this route, and if someone says "I think I can help", I'll start the discussion at WT:NRHP and be able to say that someone's happy to help us. Of course, I wouldn't ask you actually to do any coding or other work until after consensus is reached at WT:NRHP. Nyttend ( talk) 15:53, 12 February 2020 (UTC)
=whatever function, e.g. in cell L4 you type =B4 so that L4 displays whatever's in B4; is that right? If so, I don't think it would be useful unless it were immediately followed by whatever's analogous to Excel's "Paste Values". Is that what you mean by having a bot doing the swap? Since there are 3000+ entries, I'm sure there are a few errors somewhere, but I trust they're over 99% accurate. Nyttend ( talk) 02:57, 13 February 2020 (UTC)
wins and nominations values from the infobox at the "list of awards", which means the main article doesn't need to be updated every time the list is changed.

I propose that a bot along the lines of {{ Brazil municipality}} be created to develop our stubs like Jacaré dos Homens, which have been lying around for up to 14 years in some cases. There are 5570 municipality articles, mostly poorly developed or inconsistent in data and formatting, even within different states. A bot would bring much-needed information and consistency to the articles and leave them in a half-decent state for the time being. Igaci, which Aymatth2 expanded, is an example of what is planned and would happen to stubs like Jacaré dos Homens. Some municipalities have infoboxes and some information, but hopefully this bot will iron out the current inconsistencies and dramatically improve the average article quality. It would be far too tedious to do it manually and would take years, and they've already been like this for up to 14 years! So support on this would be appreciated. † Encyclopædius 12:09, 20 May 2020 (UTC)
Greetings. At WP:DYKN, the image size is based on the orientation of the image; vertical images at 120px, square at 140, and horizontal at 160. However there is no way to set the resolution during nomination, which means that even experienced editors often forget to fix the size of the image, and new editors don't know that they should.
I am proposing that a bot do a daily check and update the resolution where needed. In order to cut down on the amount of resources required, it needs only look at recent additions.
It would, I'm guessing, work something like this:
Sincerely, The Squirrel Conspiracy ( talk) 00:31, 7 June 2020 (UTC)
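The width-selection rule described above is simple enough to sketch. A minimal illustration (the exact aspect-ratio band treated as "square" is my assumption; the 120/140/160 values come from the request):

```python
def dyk_image_width(width: int, height: int) -> int:
    """Pick the DYK nomination display width from the image's
    orientation: vertical 120px, square 140px, horizontal 160px.
    The tolerance band for 'square' is an assumption."""
    ratio = width / height
    if ratio < 0.9:      # clearly taller than wide
        return 120
    if ratio <= 1.1:     # roughly square
        return 140
    return 160           # clearly wider than tall
```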
Per Wikipedia talk:WikiProject Pharmacology/Archive 16#Molecular weights in drugboxes, I am requesting bot attention to remove the following regexp line:
/\| *molecular_weight *= *[0-9.]+ *g\/mol\n/
in articles that transclude Template:Infobox drug. There are a few rare variations that I can remove by hand or that require a manual decision about whether to remove, but this seems to cover the vast majority, and it is a conservative regex. This is a one-time cleanup pass that I started doing with WP:JWB before I realized it was possibly the majority of the 12K articles in that transcluders list. DMacks ( talk) 19:18, 17 June 2020 (UTC)
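For anyone scripting this, the removal can be sketched directly with the regex given in the request (illustrative Python; the function name is mine, not an existing bot):

```python
import re

# The conservative pattern from the request, verbatim.
MOLWT = re.compile(r"\| *molecular_weight *= *[0-9.]+ *g/mol\n")

def strip_molecular_weight(wikitext: str) -> str:
    """Remove exact '|molecular_weight = <number> g/mol' lines
    from an article's wikitext."""
    return MOLWT.sub("", wikitext)
```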
/\| *C *= *\d/ i.e. "|C=\d". DMacks ( talk) 05:10, 18 June 2020 (UTC)
This task might be better for semi-automated editing than a straight bot, but I'll throw it out here. I often come across hatnotes and see also sections that link to an old title for a page, e.g. this sort of fix or this one. Would it be possible to create a bot or a tool that lists or fixes instances where hatnotes or see also sections include a redirect to a page that has been moved to a new title? {{u| Sdkb}} talk 05:30, 7 July 2020 (UTC)
SD0001 ( talk) 17:57, 23 November 2019 (UTC)

Hello, I was wondering if it might be possible to construct a bot to streamline the otherwise tedious task of fixing numerous (thousands of) Wikipedia articles that use certain medical jargon terms, revising them to their more widely understood counterparts. For example, I would propose a bot that changes the phrase "renal failure" to kidney failure (except on pages where they are part of quotes or in the title of a research article that is being cited, if a bot can be programmed to screen for those exceptions). There are several other examples, and programming is foreign to me. Please let me know if this is a viable idea for an otherwise tedious (and herculean) task. Thank you! TylerDurden8823 ( talk) 00:46, 26 January 2020 (UTC)
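The screening the request asks for is the hard part. As a rough sketch, a bot could at least avoid replacing inside <ref>...</ref> blocks (a crude proxy for cited research titles; quotations in running prose would still need human review, so this is an illustration, not a drop-in solution):

```python
import re

def replace_jargon(wikitext: str) -> str:
    """Replace 'renal failure' with 'kidney failure' outside of
    <ref>...</ref> blocks. Splitting with a capturing group keeps
    the <ref> blocks at odd indices, untouched."""
    parts = re.split(r"(<ref[^>]*>.*?</ref>)", wikitext, flags=re.DOTALL)
    out = []
    for i, part in enumerate(parts):
        if i % 2 == 0:  # text outside <ref> blocks
            part = re.sub(r"\brenal failure\b", "kidney failure", part)
        out.append(part)
    return "".join(out)
```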
This bot is unfortunately down again and both of the users who maintained it have departed the project. Without the bot there is no on-wiki record of UTRS appeals, leaving the system ripe for WP:FORUMSHOP abuses. At the very least, if the "notify user" functionality could be replicated, that would be great. Beeblebrox ( talk) 21:25, 4 December 2019 (UTC)
Hello. I wasn't sure where to put this, as this is a request regarding a live website that's going down in a few months. SR/Olympics will be closing by March 2020. On Wikipedia, there are 945 articles that are using this website URL, with 2 here and 391 more here (might be duplicates). I feel that neither InternetArchiveBot nor WaybackMedic would be suitable for this request, as the links aren't dead yet. Should a bot archive these links before they break, or wait? -- MrLinkinPark333 ( talk) 19:25, 6 January 2020 (UTC)
Is it possible for someone to search the English Wikipedia and create a list of articles that do not follow WP:BOLDAVOID? Specifically, the search would need to find any use of linking ([[ ]]) within the bolded (''' ''') portion of the first sentence of the article. If this is possible, would you be able to provide an output list of the linked article names here: User:Gonzo_fan2007/BOLD. Cheers, « Gonzo fan2007 (talk) @ 21:50, 27 January 2020 (UTC)
The assassination of Archduke Franz Ferdinand of Austria...(which is incorrect), in which you would need to remove the link, but then still find the best place to add a link to Archduke Franz Ferdinand of Austria somewhere in the lead. « Gonzo fan2007 (talk) @ 16:30, 28 January 2020 (UTC)
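Detecting the pattern (as opposed to fixing it, which needs the human judgment described above) is mechanical. A minimal sketch of the detection step, assuming a single bold span in the first sentence:

```python
import re

def lead_bold_has_link(first_sentence: str) -> bool:
    """Detect a wikilink ([[...]]) inside the bold ('''...''') span
    of an article's first sentence, per WP:BOLDAVOID. Naive: assumes
    one bold span and no nested quote markup."""
    m = re.search(r"'''(.*?)'''", first_sentence)
    return bool(m) and "[[" in m.group(1)
```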
Geregen2 has added hundreds of "proposed deletion" messages to User talk:WildCherry06 and other user talk pages without signing any of them with four tildes. I suggest that a bot add a signature (Geregen2's signature, not the bot's signature) and timestamp to the end of all those messages using {{subst: unsigned}}. GeoffreyT2000 ( talk) 15:41, 19 December 2019 (UTC)
Hello, I'd appreciate it if someone could help me with placing the following templates on the relevant TV task force categories found here Category:WikiProject Television task forces: {{ WikiProject Television task force assessment quality category}} and {{ WikiProject Television task force assessment importance category}}. There was a recent discussion which led to WikiProjects being converted to TV task forces, and this is part of the clean-up. Using the templates on these categories will categorize them in their correct place. An example can be seen here: Category:A-Class Avatar: The Last Airbender articles (which uses the template) and Category:A-Class Holby articles (which does not). As can be seen from the example, usually only the template will be needed, without any other category or text being used on the page. -- Gonnym ( talk) 01:45, 24 December 2019 (UTC)
This is a multi-part request.
The first part should be relatively uncontroversial: it is to generate a page (for instance, User:Tigraan/Exxx redirects) containing a list of all pages which are redirects and whose title matches the regexp E1?[0-9]{3}[a-j]?. (If there is an easy way that I could do it myself, please enlighten me.) Bonus points if the page contains the current redirect targets as well. I estimate this would be around 1000 pages.
The second part would be, after manual inspection of the redirects to clear up false positives, to mass-tag those redirects for a WP:RFD bundled nomination. That certainly requires consensus but I got mostly ignored when asking at the places I would think to ask: I posted at Wikipedia_talk:WikiProject_Food_and_drink#Food_additives_codes_redirect_to_chemical_compounds_instead_of_E_number_article (where you can read a sketch of the RfD nomination rationale) and Wikipedia_talk:Redirects_for_discussion#Nominating_lots_of_related_redirects, both of which combined attracted a whole one other comment (supporting the proposed RfD) after a week. (If you want to see more solid consensus, please tell me where to ask for it.)
The third part would be to clean up after the RfD, either by untagging and leaving things in place if rejected, or by retargeting the redirects according to a relatively simple scheme. Tigraan Click here to contact me 17:49, 23 January 2020 (UTC)
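For the first part, the title filter is exactly the regexp quoted above; a minimal sketch of matching it against full page titles (illustrative Python, not a complete bot):

```python
import re

def is_e_code_title(title: str) -> bool:
    """True when the page title is exactly an E-number code:
    'E' + optional '1' + three digits + optional letter a-j,
    per the regexp E1?[0-9]{3}[a-j]? from the request."""
    return re.fullmatch(r"E1?[0-9]{3}[a-j]?", title) is not None
```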
Hi, I want to add this template ( Template:Ash'ari) to all the pages/articles that are listed/linked. Thanks in advance!-- TheEagle107 ( talk) 01:45, 22 January 2020 (UTC)
— Preceding unsigned comment added by TheEagle107 ( talk • contribs) 03:47, 22 January 2020 (UTC)
After a discussion here a couple weeks ago, there was a rough local consensus that it might be beneficial to merge the majority of Russian rural locality articles (95% of which are two-line permastubs) to list articles (currently these lists are by first-level division, such as List of rural localities in Vologda Oblast). As you can see on that article, Fram is in the process of merging the pertinent information from the individual stubs into tables, but it's tedious work and there's something on the order of like 10,000 or so of such articles.
I was wondering if it's possible/plausible to create a bot that could automate any part of that process? ♠ PMC♠ (talk) 04:04, 29 November 2019 (UTC)
@ Premeditated Chaos, Fram, Ymblanter, and Jo-Jo Eumerus: I took a pass at parsing out population data into User:AntiCompositeNumber/rustubs, trying to get data from the infobox, {{ ru-census}}, and string matching. The character count is also included. (It's lower than the MW byte count because of UTF-8 character encoding.) -- AntiCompositeNumber ( talk) 01:38, 1 January 2020 (UTC)
Among moth articles, and I suspect many others, there are sometimes template links to Wikispecies and Wikimedia Commons but there's nothing at the target location in the sister project. I'd love to see a bot which could go through and check these and remove the deceptive templates.
Even better if it could remove links to Commons if the only file in Commons is already in use in the article.
Another refinement would be to change from a general Commons link to a Commons category link when that exists.
Examples:
Thank you. SchreiberBike | ⌨ 03:59, 16 December 2019 (UTC)
The parameters were changed 9 July 2018 per this discussion: Template talk:WikiProject Christianity#Parameter Correction. church-of-the-nazarene was changed to holiness-movement as was the -importance parameter. However, per the discussion above, and as I've seen, it wasn't updated everywhere. Jerod Lycett ( talk) 04:24, 21 January 2020 (UTC)
I know this is going to be quite a bit of work, however I feel it will have significant value once the process has caught up.
I refer to the error cat "Tidy bug affecting font tags wrapping links (4,275,998 errors)" as at today, some of which date back to 2006.
As an example, the following signature:
[[User:AndonicO|<font face="Papyrus" color="Black">'''A'''</font><font face="Papyrus" color="DarkSlateGray">ndonic</font><font face="Papyrus" color="Black" size="2">'''O'''</font>]] <small><sup><font face="Times New Roman" color="Tan">[[User talk:AndonicO|''Talk'']]</font> | <font face="Times New Roman" color="Tan">[[User:AndonicO/My Autograph Book|''Sign Here'']]</font></sup></small>
has various errors that may cross several error categories and will never be fixed under the current methodology. A bot that does a simple find and replace, with something like:
[[User:AndonicO|talk]] signature adjusted by lint bot for lint errors.
would fix every instance of each signature as identified, and could cover many instances in order. This is especially important for these aged and non-active users, and could also be used to identify current user signatures with errors, so we could offer a reformatted signature solution. Thoughts?
121.99.108.78 (
talk) 00:03, 28 January 2020 (UTC)
<b>...</b> to '''...'''. Lint errors are a clear criterion. This would probably have consensus, although that's still not a guarantee. Basically, take it to WP:VPR and see how the dice lands. Headbomb { t · c · p · b} 01:42, 28 January 2020 (UTC)
<font>[[link]]</font> works just as well as [[link|<font>link</font>]]. Yes, I know that font tags are deprecated, but there are literally millions of pages that use the former format. -- Ahecht ( TALK)
tyi's and if you want to add anything 121.99.108.78 ( talk) 10:07, 28 January 2020 (UTC)
<font>[[link]]</font> did work like [[link|<font>link</font>]]. That is, <font color="x">[[link]]</font>, and also <font style="color:x">[[link]]</font>, were both processed by Tidy into [[link|<font...>link</font>]] (piped appropriately, of course). The font tag had to immediately wrap the wikilink or external link, otherwise it was ignored. The font color, but not the font style, is detected as the Tidy font link bug, but the font style version of the Tidy font link bug is quite rare. Tidy has been replaced, so now font coloring tags immediately wrapping a wikilink or external link are overridden, as you would logically expect, by default link colors. The replacement parser is an HTML 5-compatible upgrade from Tidy and we are not going back.
Wikipedia:Linter#How you can help was written November 23, 2017, and since it was first written it has always said that it is OK to fix lint errors, including on talk pages, but one should "[t]ry to preserve the appearance." So, for more than two years, it has officially been OK to de-lint user signatures, preserving the appearance, and this has never been officially challenged or disputed; it is the consensus. (However, I don't think there's a consensus on systematic lint fixing by bot.) The Tidy font bug is a high priority lint error and I would favor fixing these lint errors in a systematic way by bot, taking care, of course, to exclude talk page discussions where fixing an instance of this error would confuse a question about this exact behavior. —
Anomalocaris (
talk) 02:22, 3 February 2020 (UTC)
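One possible appearance-preserving rewrite, consistent with the Tidy behaviour described above, is to move the colour styling inside the link label. A sketch for the simplest unpiped case only (piped links, font faces, and sizes would need more patterns; this is an illustration, not the bot):

```python
import re

FONT_LINK = re.compile(r'<font color="([^"]+)">\[\[([^|\]]+)\]\]</font>')

def fix_font_link(text: str) -> str:
    """Rewrite <font color="x">[[Page]]</font> as
    [[Page|<span style="color:x">Page</span>]], so the link keeps
    the colour it displayed under Tidy."""
    return FONT_LINK.sub(
        lambda m: f'[[{m.group(2)}|<span style="color:{m.group(1)}">{m.group(2)}</span>]]',
        text)
```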
Hi all, I saw that " peer reviews" are now included on WP:Article alerts (yay!). Unfortunately it turns out there are more than a few reviews that haven't been opened properly. These will clog up article alert lists, and I was wondering if I could have some help with a bot to process them (or even generate a list to give me).
In short:
Thanks for your help, -- Tom (LT) ( talk) 08:47, 10 February 2020 (UTC)
We're in the process at WikiProject New York (state) of converting 5 WikiProjects to taskforces. Specifically the following are being consolidated, under the statewide banner:
|Capital=yes
|Hudson=yes
|LI=yes
|Syracuse=yes
|Western=yes
However, we can't just convert the existing templates to wrappers and have AnomieBOT substitute them without creating a mess of duplicates, because some pages are tagged by more than one subproject or are already tagged with {{ WikiProject New York (state)}} in addition or both, e.g. Talk:Albany, New York.
I don't want to take the time to do the scripting just yet unless it's necessary, going off the assumption that this has been done frequently enough that there's already a working version for project mergers. I'll just take a quick minute here to give some basic examples to avoid confusion.
Examples

Without loss of generality:

Case: {{WikiProject Capital District|class=c|importance=mid}}
Output: {{WikiProject New York (state)|class=c|importance=|Capital=yes|Capital-importance=mid}}

Case: {{WikiProject New York (state)|class=c|importance=low}} {{WikiProject Capital District|class=c|importance=mid}}
Output: {{WikiProject New York (state)|class=c|importance=low|Capital=yes|Capital-importance=mid}}

Case: {{WikiProject New York (state)|class=c|importance=low}} {{WikiProject Capital District|class=c|importance=mid}} {{WikiProject Hudson Valley|class=c|importance=high}}
Output: {{WikiProject New York (state)|class=c|importance=low|Capital=yes|Capital-importance=mid|Hudson=yes|Hudson-importance=high}}

Case: {{WikiProject Capital District|class=c|importance=mid}} {{WikiProject Hudson Valley|class=c|importance=high}}
Output: {{WikiProject New York (state)|class=c|importance=|Capital=yes|Capital-importance=mid|Hudson=yes|Hudson-importance=high}}
I'm not particularly active around here, but I should be around for an hour or two more today; I will try to find time at least once every 48 hours this week to log in and do some work, so hopefully I'll be able to respond to any inquiries reasonably promptly, thank you. (please ping on reply)
𝒬 𝔔 23:42, 25 February 2020 (UTC)
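The single-banner case from the examples above can be sketched as a regex substitution (a sketch only, handling one subproject banner with no pre-existing state banner; merging into an existing {{WikiProject New York (state)}}, as in the other cases, needs real parameter parsing):

```python
import re

# Map each retired subproject banner to its task-force flag.
# Only one entry shown; the other four would be added the same way.
SUBPROJECTS = {"WikiProject Capital District": "Capital"}

def merge_banner(talk_wikitext: str) -> str:
    """Fold a standalone subproject banner into the statewide banner,
    turning its |importance= into <flag>-importance per the examples."""
    def repl(m):
        flag = SUBPROJECTS[m.group(1)]
        return ("{{WikiProject New York (state)|class=%s|importance=|%s=yes|%s-importance=%s}}"
                % (m.group(2), flag, flag, m.group(3)))
    pattern = r"\{\{(%s)\|class=([^|}]*)\|importance=([^|}]*)\}\}" % "|".join(
        map(re.escape, SUBPROJECTS))
    return re.sub(pattern, repl, talk_wikitext)
```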
Hi there! I’ve been editing the Duolingo Wikipedia article to keep it up to date with the number of learners on each course. I was wondering if there’s a bot that could update the lists daily, rather than having to do it myself, or how I could create such a bot? Thanks! :-) — Preceding unsigned comment added by CcfUk2018 ( talk • contribs) 03:28, 22 January 2020 (UTC)
The bot ( approved here) for updating vital articles counts, icons, and corresponding talk pages has been inoperable for a while, per this discussion. Could one of you please look into fixing it? Thanks! Sdkb ( talk) 19:03, 3 January 2020 (UTC)
There are about 2,000 transclusions: Special:WhatLinksHere/Template:Distinguish&namespace=14&limit=500
For an example of the change, see change history of Category:Literature:
Thanks. fgnievinski ( talk) 21:18, 26 January 2020 (UTC)
Hi there, re: this permalinked discussion, could you stellar bot handlers please remove the |residence= parameter and subsequent content from articles using {{ Infobox person}}? Per some of the discussions, Category:Infobox person using residence might list most of the pages using this template. And RexxS said: "... hastemplate:"infobox person" insource:/residence *= *[A-Za-z\[]/ ) shows 36,844 results, but it might have missed a few (like {{ plainlist}}); there are at least 766 uses of the parameter with a blank value." I don't know if this helps. This is not my exact area of expertise. Thanks! Cyphoidbomb ( talk) 05:31, 27 December 2019 (UTC)
This is simple and can be handled by just about any bot.
The Federal Telecommunications Institute (IFT) of Mexico made a one-character change in document URLs that will need updating. Hundreds of Mexican radio articles cite its technical and other authorizations.
They added a "v" to the URL, so URLs that were formerly
https://rpc.ift.org.mx/rpc/pdfs/96255_181211120729_7489.pdf
changed to
https://rpc.ift.org.mx/vrpc/pdfs/96255_181211120729_7489.pdf
Is this possible to have done as a bot task? The articles that need it are mostly in Category:Radio stations in Mexico or Category:Television stations in Mexico. Raymie ( t • c) 20:10, 4 February 2020 (UTC)
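The URL rewrite itself is a one-character path insertion, plus the protocol bump GreenC mentions in the reply below. A minimal sketch (the function name is illustrative; archive URLs and {{ dead link}} unwinding are out of scope here):

```python
def fix_ift_url(url: str) -> str:
    """Rewrite old IFT document URLs: /rpc/ becomes /vrpc/, and
    http becomes https since the host now serves https only."""
    url = url.replace("rpc.ift.org.mx/rpc/", "rpc.ift.org.mx/vrpc/")
    if url.startswith("http://"):
        url = "https://" + url[len("http://"):]
    return url
```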
@ Raymie: In addition, they now serve https only, but left no http->https redirect, and almost all of the links on WP are http. This should be done by a URL-specific bot because of archive URLs and {{ dead link}} tags (some may already be marked dead and/or archived, and that needs to be unwound once corrected). Could you post/copy the request to URLREQ? There is a backlog but I will get to it. -- Green C 20:02, 8 February 2020 (UTC)
Thread moved to Wikipedia:Link_rot/URL_change_requests#Request_for_change_of_(soon_to_be)_broken_links_to_LPSN and poster notified. -- Green C 03:26, 14 February 2020 (UTC)
Hello. I want a bot, because as a student I'm unable to be active on Wikipedia as much as is required. So I think if I get a bot, then when I'm unable to apply a template to about 100 pages, my bot will do that instead of me. — Preceding unsigned comment added by Tanisha priyadarshini ( talk • contribs) 16:07, 7 April 2020 (UTC)
There are many articles (or at least enough that it would be tedious to check each and every one by hand) in the backlog (en.wikipedia.org/wiki/Category:Wikipedia_infobox_backlog) that actually do have infoboxes. I think a bot could be given a category of 200-500 pages, read the wikitext of each one, and, if the page has {{infobox
in it, go to the talk page of that article and remove the needs-infobox=yes
parameter.
Firestarforever (
talk) 13:39, 28 March 2020 (UTC)
While it is likely impossible to automate all of the guidelines at Wikipedia:Writing_about_women, things like using the last name or relationships in the lede are systematic bias which can have systematic solutions. A bot attempting to do this would be AMAZING (where exceptions, like Icelandic names, would be opt-out rather than opt-in). — Preceding unsigned comment added by Icy13 ( talk • contribs) 21:39, 25 February 2020 (UTC)
When using footnoted referencing, the task of assessing what source supports what text is complicated. A reference may be tagged e.g. {{
self-published source}}, {{
self-published inline}}, {{
deprecated inline}}, {{
dubious}} and other tags which may be applied to the footnoted reference but these are not linked to the readable content. When using <ref>
tags, by contrast, we can use <ref>{{cite [...] | publisher=$VANITYPRESS [...] {{self-published source}}</ref>{{self-published inline}}
to flag both the reference and the inline citation.
I would like a maintenance tag bot to add, e.g., {{ self-published inline}} after the {{ sfn}}/{{ harv}} instances matching footnoted citations that are flagged as self-published, deprecated or otherwise dubious. Guy ( help!) 09:15, 25 February 2020 (UTC)
|journal=
, but a similar bot could be coded to look for domains found in |url=
and |publisher/website/magazine/journal/work/...=
Headbomb {
t ·
c ·
p ·
b} 22:22, 25 February 2020 (UTC)
Greetings. I'm here once again to bother you all about Files!
Tagging a file {{Non-free reduce}} places it in Category:Wikipedia non-free file size reduction requests, where User:DatBot performs the file size reduction automatically if the file is in .png or .jpg format. However, DatBot doesn't process any other format, and therefore files in other formats need manual processing.
I am requesting a bot to, once daily, check all files in Category:Wikipedia non-free file size reduction requests and, if the file format is not .png or .jpg, change {{Non-free reduce}} into {{Non-free manual reduce}}, so that they're more readily processed.
Thanks! The Squirrel Conspiracy ( talk) 02:31, 23 March 2020 (UTC)
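The retagging step is a simple text substitution; a sketch, with deliberately simplified template matching (a real bot would also handle template redirects and any parameters):

```python
import re

# Formats DatBot can reduce automatically; everything else needs a human.
AUTO_FORMATS = ('.png', '.jpg', '.jpeg')

def retag_for_manual_reduction(filename, wikitext):
    """Swap {{Non-free reduce}} for {{Non-free manual reduce}} on
    file pages DatBot won't process."""
    if filename.lower().endswith(AUTO_FORMATS):
        return wikitext  # DatBot will handle it
    return re.sub(r'\{\{\s*[Nn]on-free reduce\s*\}\}',
                  '{{Non-free manual reduce}}', wikitext)
```

A once-daily run over Category:Wikipedia non-free file size reduction requests would then apply this to each member and save only when the text changed.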
Done — Preceding unsigned comment added by The Squirrel Conspiracy ( talk • contribs) 19:50, 27 March 2020 (UTC)
I sometimes put queries on article talkpages, some get answered quickly, some stick around indefinitely and occasionally old ones get resolved. My suspicion is that my experience is not unusual, but I hope that this is a software issue and that a lot more article queries could be resolved if the relevant editors knew of them. Would it be possible to have a bot produce reports for each Wikiproject of open/new talk page threads that are on pages tagged to that project? Ϣere SpielChequers 09:49, 10 February 2020 (UTC)
I'm in need of help replacing all instances of a set of WikiProject templates as taskforces of the one unified template: {{
WikiProject Molecular Biology}}
. Unfortunately a simple transclusion of the new template wrapped in the old templates isn't enough, since some pages have multiple WikiProject templates, so will need to be marked with multiple taskforces. It's therefore similar to when
Neurology was merged into WP:MED.
Replacing
{{WikiProject Molecular and Cellular Biology|class=GA|importance=high|peer-review=yes}} {{WikiProject Computational Biology|importance=mid|class=GA}}
With
{{WikiProject Molecular Biology|class=GA|importance=high|peer-review=yes |MCB=yes |MCB-imp=high |COMPBIO=yes |COMPBIO-imp=mid }}
Broadly, I think the necessary bot steps would be:
{{
WikiProject Molecular and Cell Biology}}
OR {{
WikiProject Genetics}}
OR {{
WikiProject Computational Biology}}
OR {{
WikiProject Biophysics}}
OR {{
WikiProject Gene Wiki}}
OR {{
WikiProject Cell Signaling}}
{{
WikiProject Molecular Biology}}
{{WikiProject Molecular and Cell Biology}}
AND {{WikiProject Genetics}}
AND {{WikiProject Computational Biology}}
AND {{WikiProject Biophysics}}
AND {{WikiProject Gene Wiki}}
{{WikiProject MCB/COMPBIO/Genetics/Biophysics/Gene Wiki|importance=X|quality=y}}
|MCB/COMPBIO/genetics/biophysics/Gene Wiki=yes
+ |MCB-imp/COMPBIO-imp/genetics-imp/biophysics-imp/GW-imp=X
(note: GW → Gene Wiki)|importance=
and |quality=
, add that as the overall |importance=
and |quality=
to {{WikiProject Molecular Biology}}
|signaling=yes
(i.e., replace {{
WikiProject Cell Signaling}}
on pages that transclude it with {{WikiProject Molecular Biology|...|signaling=yes}}
)|genewiki=yes
|metabolism=yes
Thank you in advance! T.Shafee(Evo&Evo) talk 07:09, 12 January 2020 (UTC) (refactored/edited by Seppi333 ( Insert 2¢) 05:48, 18 January 2020 (UTC))
{{
Subst only|auto=yes}}
template to merge one banner into another, but is there any support for merging multiple banners on a single page into 1? If not, are there any bots that have been approved to merge multiple project banners on talk pages (particularly where 2+ banners occur on a single page) into a single parent banner? Asking because I could likely modify the source code of a bot designed to merge the banners of another project's task forces for this purpose, especially if there's one written in python.
Seppi333 (
Insert 2¢) 03:45, 16 January 2020 (UTC)
{{
WikiProject Gene Wiki}}
&
Category:Gene Wiki articles) which is currently present on ~1800 pages. I think we're probably just going to go with the current task force listing in the {{
WPMOLBIO}}
template.
The edge case is when two taskforces currently indicate different importance levels (e.g. Talk:DNA_gyrase). In such cases it might be safest to use the median rounded up for the overall importance (high+low→mid, high+mid→high), but maybe that's overcomplicating things. It wouldn't be that technical to encode. Programmatically, one just needs to ordinally encode low→1, mid→2, high→3, top→4 (NB: this method implicitly assumes that there's an equal "importance distance" in a mathematical/statistical sense between importance ratings, which might not necessarily be true - it depends on how people go about rating importance on average), then use round(median(list of ratings)) or round(average(list of ratings)), then remap whatever number it returns back to an importance rating. E.g., the average rating of task forces that rate an article as low, high, and top is (1+3+4)/3, which would be rounded to 3 → high importance. Seppi333 ( Insert 2¢) 02:56, 18 January 2020 (UTC)
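The ordinal-encoding scheme described above can be sketched directly. One caveat: Python's built-in round() uses banker's rounding (round(2.5) == 2), so math.ceil of the median is used here to get the "rounded up" tie-break (high+mid → high):

```python
import math
from statistics import median

RANKS = {'low': 1, 'mid': 2, 'high': 3, 'top': 4}
NAMES = {v: k for k, v in RANKS.items()}

def overall_importance(ratings):
    """Aggregate several task-force importance ratings into one overall
    rating: take the median and round up on ties."""
    return NAMES[math.ceil(median(RANKS[r] for r in ratings))]
```

For example, high+low gives median 2.0 → mid, while high+mid gives median 2.5, which rounds up to high, matching the tie-break suggested above.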
@ Evolution and evolvability: I refactored the request in Special:Diff/936327774/936342626 to reflect the changes to the template. You might want to look it over just to make sure nothing seems off. Seppi333 ( Insert 2¢) 05:48, 18 January 2020 (UTC)
{{ resolved}} Revisiting this March discussion for a new owner
When an AfD discussion ends with no discussion, WP:NOQUORUM indicates that the closing admin should treat the article as an expired PROD ( "soft delete"). As a courtesy/aid for the closer, it would be really helpful for a bot to note the article's PROD eligibility ("the page is not a redirect, never previously proposed for deletion, never undeleted, and never subject to a deletion discussion"). Cribbing from the last discussion, it could look like this:
This would greatly speed up the processing of these nominations. Eventually would be great to have this done automatically, but even a user script would be helpful for now. czar 19:26, 29 December 2019 (UTC)
@ Czar: For Wikipedia:Articles for deletion/Log/2020 February 3, I extract information like this: report. Is the information enough? -- Kanashimi ( talk) 10:06, 4 February 2020 (UTC)
Extended content:
posting something like this to the AfD discussion when no one else has !voted
{{ resolved}} I've been using User:Ucucha/HarvErrors.js for a few days now, and it's a pretty nice little script. However, the issues it highlights should be flagged for everyone to see and become part of regular cleanup. For example, in Music of India, two {{ harv}}-family templates are used to generate reference to anchors, designed to point to a full citation.
{{Harvnb|MacDonell|2004|pp=29–39}}
, pointing to
Music of India#CITEREFMacDonell2004
{{Harvnb|Radhakrishnan|Moore|1957|p=3}}
, pointing to
Music of India#CITEREFRadhakrishnanMoore1957
However, inspecting the page reveals those anchors aren't found anywhere on the page. Even a manual search won't find the corresponding citations on that page, because this isn't an issue of someone having forgotten a |ref=harv
in a citation template, they just aren't there to begin with.
A bot should flag those problems, probably with a new template {{ broken footnote}}, or possibly on the talk page.
Headbomb { t · c · p · b} 15:13, 21 February 2020 (UTC)
|ref=harv
automatically though. That would kill a great deal of those errors (although certainly not all).
Headbomb {
t ·
c ·
p ·
b} 20:12, 21 February 2020 (UTC)
<ref>Fischer 2008: p. 149</ref>
Lot of permutations for Harvard reference problems that a specialized bot could become expert on. --
Green
C 20:09, 21 February 2020 (UTC)
{{ resolved}} Per this conversation, the automated essay assessment system has fallen badly out of date since BernsteinBot stopped updating it in 2012. It would be useful to revive it so that essay readers could have a better indication as to whether the essay they are reading is more likely to represent a widespread norm or just a minority viewpoint. MZMcBride has provided the original code, but it will need to be updated. Your help would be much appreciated. Regards, Sdkb ( talk) 20:19, 22 March 2020 (UTC)
{{ resolved}} Greetings, I'm here to bother you all about File namespace nonsense again.
User:RonBot, which was disabled because its operator went inactive a year ago, had an approved task to reduce the display size of SVGs ( BRFA). In its absence, there's quite a pile-up of SVGs awaiting reduction (over 100 currently). I tried to reduce them manually and failed, so now I'm here asking someone else to take up the task themselves. The source code for the task is here.
Many thanks, The Squirrel Conspiracy ( talk) 23:41, 5 April 2020 (UTC)
{{
resolved}}
I have found 1,575 articles that have an identical referencing error. The author name listed in {{
sfn}} does not match the author's name as listed in the |ref=
parameter in the matching full {{
cite book}} citation template, which causes a non-working link from the short reference to the full reference. It also causes a red error message if you have the relevant script enabled.
I have performed a sample fix here. Is there a kind AWB editor or bot operator who would be willing to fix the rest?
The list of articles that need fixing is here. Thanks. – Jonesey95 ( talk) 15:23, 9 April 2020 (UTC)
{{
sfn}}
template. The book is *{{cite book
|last1=Gröner
|first1=Erich
|author-link1=
|author-mask1=
|last2=Jung
|first2=Dieter
|display-authors=
|last-author-amp=
|last3=Maass
|first3=Martin
|translator-last1=Thomas
|translator-first1=Keith
|translator-last2=Magowan
|translator-first2=Rachel
|year=1991
|title=U-boats and Mine Warfare Vessels
|volume=2
|work=German Warships 1815–1945
|location=London
|publisher=Conway Maritime Press
|isbn=0-85177-593-4
|ref=CITEREFGröner1991
}}
{{
sfn|Gröner|1991|p=...}}
should actually be {{
sfn|Gröner|Jung|Maass|1991|p=...}}
and that |ref=CITEREFGröner1991
(or |ref=CITEREFGr.C3.B6ner1991
if not yet modified) should be |ref=harv
. --
Redrose64 🌹 (
talk) 19:14, 9 April 2020 (UTC)
|ref={{
SfnRef|Gröner|1991}}
over |ref=CITEREFGröner1991
. I think the SfnRef way is a bit cleaner (since it matches the {{
sfn}} invocation), so would probably use that. --
AntiCompositeNumber (
talk) 02:11, 10 April 2020 (UTC)
{{
sfn|Gröner|Jung|Maass|1991|p=...}}
is the way, and people can change it to {{
sfn|Gröner et al.|1991|p=...}}
+ |ref=Gröner et al.
if they want to manually shorten the list of authors (most style guides say keep 3, so that's why the default is up to 3 named authors, and 4+ gets truncated to et al.).
Headbomb {
t ·
c ·
p ·
b} 02:22, 10 April 2020 (UTC)
Done I took the easy way and made the suggested change, if anyone wants to alter the way that the linkage is made then go ahead. There are 3 articles, October 1918, SM U-10 (Austria-Hungary) and SM U-11 (Austria-Hungary), that need further investigation as there were 2 substitutions in them. Keith D ( talk) 14:45, 10 April 2020 (UTC)
{{ resolved}} Listeria bot has been blocked due to it not complying with our non-free content policy and having someone knowledgeable in PHP fork the bot using the original code and implement a fix would be greatly appreciated. Extensive discussion has occurred at Wikipedia:Bots/Noticeboard#Re-examination of ListeriaBot and Wikipedia:Administrators' noticeboard#ListeriaBot blocked an urgent resolution is needed. ‑‑ Trialpears ( talk) 20:08, 13 April 2020 (UTC)
My kingdom for a bot that compiles new articles in a new subject area (e.g., added to a WikiProject's scope). @ PresN currently runs a script that does this manually (see one of the "New Articles" threads at WT:VG) but would love to be able to do this for other projects so that new editors get visibility/help and the project can see the fruits of its efforts. (Also discussed at PresN's talk page.) Special:Contributions/InceptionBot currently finds articles that might be within scope, but this proposal is instead for a log of recent additions to a topic area (similar to how the 1.0 project compiles). It could be useful if delivered directly to a WikiProject/noticeboard page or, alternatively, updated on a single page and transcluded à la WP:Article alerts. czar 20:07, 15 December 2019 (UTC)
Extended content:
def parse_lists(lists, headers, assessments, new_cats, dates, dates_needed):
    NULL_ASSESSMENT = '----'
    max_lists = dates_needed * 4
    extra_headers = get_extra_headers(headers)  # Note "Renamed" headers
    # Initial assessment
    for index, list in enumerate(lists):
        if index <= max_lists:
            for item in list.find_all('li'):
                contents = ' '.join(map(str, item.contents))
                offset = count_less_than(extra_headers, index) - 1
                date = dates[int(max((index - (1 + offset)), 0) / 3)]  # TODO: handles 3+ sections
                assess_type = assessment_type(contents)
                # Assessment
                if assess_type == ASSESSMENT:
                    namespaced_title = get_title(item, ASSESSMENT)
                    title = clean_title(namespaced_title)
                    old_klass = NULL_ASSESSMENT
                    new_klass = get_newly_assessed_class(item, namespaced_title)
                    if (not is_file(namespaced_title)
                            and not is_redirect_class(new_klass)
                            and not (title in assessments and was_later_deleted(assessments[title]))):  # ignore files, redirects, and mayflies
                        if is_category(namespaced_title):
                            init_cat_if_not_present(new_cats, namespaced_title)
                        else:
                            init_if_not_present(assessments, title)
                            assessments[title]['creation_class'] = new_klass
                            assessments[title]['creation_date'] = date
                if assess_type == REASSESSMENT:
                    namespaced_title = get_title(item, REASSESSMENT)
                    title = clean_title(namespaced_title)
                    old_klass = get_reassessment_class(item, 'OLD')
                    new_klass = get_reassessment_class(item, 'NEW')
                    if not is_file(namespaced_title):
                        init_if_not_present(assessments, title)
                        if is_redirect_class(new_klass):  # tag redirect updates as removals, unless later recreated
                            if not (is_draft_class(old_klass) and 'creation_class' in assessments[title]):  # ignore if this is a draft -> mainspace move in 2 lines
                                assessments[title]['was_removed'] = 'yes'
                        elif is_redirect_class(old_klass):  # treat redirect -> non-redirect as a creation
                            assessments[title]['creation_class'] = old_klass
                            assessments[title]['updated_class'] = new_klass
                            assessments[title]['creation_date'] = date
                        else:  # only add the latest change, and only if there's no newer deletion
                            if 'updated_class' not in assessments[title] and not was_later_deleted(assessments[title]):
                                assessments[title]['updated_class'] = new_klass
                # Rename
                if assess_type == RENAME:
                    namespaced_old_title = get_rename_title(item, 'OLD')
                    namespaced_new_title = get_rename_title(item, 'NEW')
                    if not is_file(namespaced_new_title) and not is_category(namespaced_new_title):
                        new_title = clean_title(namespaced_new_title)
                        if is_draft(namespaced_old_title) and not is_draft(namespaced_new_title):
                            init_if_not_present(assessments, new_title)
                            if not was_later_updated(assessments[new_title]) and not was_later_deleted(assessments[new_title]):
                                assessments[new_title]['creation_class'] = DRAFT_CLASS
                                assessments[new_title]['updated_class'] = "Unassessed"
                                assessments[new_title]['creation_date'] = date
                        if is_draft(namespaced_new_title) and not is_draft(namespaced_old_title):
                            init_if_not_present(assessments, new_title)
                            if not was_later_updated(assessments[new_title]) and not was_later_deleted(assessments[new_title]):
                                assessments[new_title]['creation_class'] = "Unassessed"
                                assessments[new_title]['updated_class'] = DRAFT_CLASS
                                assessments[new_title]['creation_date'] = date
                # Removal
                if assess_type == REMOVAL:
                    namespaced_title = get_title(item, REMOVAL)
                    # Articles
                    if not is_file(namespaced_title):
                        title = clean_title(namespaced_title)
                        if title not in assessments:  # don't tag if there's a newer re-creation
                            assessments[title] = {'was_removed': 'yes'}
                            if is_category(namespaced_title):
                                assessments[title]['creation_class'] = CATEGORY_CLASS
                            if is_draft(namespaced_title):
                                assessments[title]['creation_class'] = DRAFT_CLASS
                    # Categories
                    if is_category(namespaced_title) and namespaced_title not in new_cats:
                        new_cats[namespaced_title] = 'was_removed'
    return {'assessments': assessments, 'new_cats': new_cats}
I think I made this kind of request several years ago, but I can't find it in the archives.
Occasionally people add text like [citation needed] or (reference needed) to articles, and these articles don't end up in maintenance categories because they're plain text instead of templates. Could someone write a bot that would go around making edits like this, or could an existing maintenance-bot operator add this task? I'm guessing that it would be rather simple — give it a list of phrases, tell it to look for them inside parentheses and brackets, and let it loose. Of course, this isn't a one-time problem, so if this is a good idea, it ought to be made an ongoing task. Nyttend backup ( talk) 16:35, 20 April 2020 (UTC)
\[?\[citation needed\]\]?
with {{subst:cn}}
. --
AntiCompositeNumber (
talk) 15:49, 22 April 2020 (UTC)
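The phrase-replacement step described above could be sketched as follows; the phrase list and regex are illustrative only and would need community vetting before any bot run:

```python
import re

# Phrases sometimes typed as plain text instead of a template;
# this list is illustrative, not exhaustive.
PHRASES = ['citation needed', 'reference needed', 'source needed']
PATTERN = re.compile(
    r'[\[\(]\[?(?:' + '|'.join(PHRASES) + r')\]?[\]\)]',
    re.IGNORECASE)

def tag_plaintext_cn(wikitext):
    """Replace hand-typed [citation needed]-style notes with {{subst:cn}},
    covering single brackets, double brackets, and parentheses."""
    return PATTERN.sub('{{subst:cn}}', wikitext)
```

As noted above, substituting the template (rather than transcluding it) also stamps the date, so the tagged articles land in the dated maintenance categories.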
Please see Help_talk:Citation_Style_1/Archive 69#Cite book Harv warning where the suggestion was made: not a formal BOTREQ yet, but might help if bot operators can give some advise about how this could best be addressed, so that a more formal BOTREQ can follow (if that is the best option). -- Francis Schonken ( talk) 07:38, 19 April 2020 (UTC)
|ref=harv
parameter is no longer needed one could run a bot task to remove it.
Jo-Jo Eumerus (
talk) 08:58, 19 April 2020 (UTC)
I occasionally see this (and have done it once or twice), and it's annoying to have to un-archive when it happens. Could we get a bot to find instances of people using something like {{
DNAU|47}}
and switch them to {{
subst:DNAU|47}}
? (I know there are a few bots already running that substitute accidental transclusions, so perhaps one of them could be tasked to this without too much effort.) {{u|
Sdkb}}
talk 04:16, 28 April 2020 (UTC)
Hello! I asked over at WikiProject Council if someone could have a bot remove all appearances of Portal:Pandemic from articles, and was advised to ask here. Could anyone here help? --- Another Believer ( Talk) 20:20, 22 April 2020 (UTC)
As of right now, there are 1065 images tagged with F8 that need deleting (specifically, these two categories are severely backlogged). Is it possible for a bot to perform this kind of maintenance based on transwiki checks to ensure that the Wikipedia and Commons versions of each file match, down to maximum resolution and filesize? ToThAc ( talk) 17:33, 24 April 2020 (UTC)
{{
Now Commons}}
has a |reviewer=
parameter which will
categorize files accordingly; fill that in for each file you've finished reviewing/fixing. -
FASTILY 04:08, 25 April 2020 (UTC)
Basically the title. There are numerous articles with images (and other content) that should have alt text but do not.
MOS:ALT says that we should try to ensure that images have alt text for accessibility reasons, which is especially important for people using screen readers who cannot physically see the images. In a nutshell, said bot would check articles that have an embedded file such as a video, audio clip, or image. It would then add the article to a maintenance category depending on whether the embed has alt text, and possibly tag the article to let readers (including those using screen readers) know that alt text is missing.
A related idea would be the same as the above but for math markup, which should probably be tagged/categorized separately due to the technical knowledge required to translate it into English.
Chess
(talk) Ping when replying 01:52, 5 March 2020 (UTC)
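The detection step could be sketched as a wikitext scan. This regex-only version is illustrative; nested links inside captions would need a real wikitext parser (e.g. mwparserfromhell) to handle properly:

```python
import re

# Matches simple [[File:...]] / [[Image:...]] embeds.
FILE_LINK = re.compile(r'\[\[(?:File|Image):(.*?)\]\]',
                       re.IGNORECASE | re.DOTALL)

def files_missing_alt(wikitext):
    """Return the file links that carry no |alt= parameter."""
    missing = []
    for match in FILE_LINK.finditer(wikitext):
        if 'alt=' not in match.group(1):
            missing.append(match.group(0))
    return missing
```

A bot would then add the page to a tracking category whenever this list is non-empty.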
|alt=
and feel that they are justified in removing the tag. No |alt=
parameter is better than having a repeat of the caption. --
Redrose64 🌹 (
talk) 14:52, 5 March 2020 (UTC)
\and
and \or
are deprecated in favour of \land
and \lor
, which obviously can cause problems with screenreaders.
Help:Latex has a lot of examples, and if you look at some of the LaTeX source for them you can see how it might be incomprehensible for a screen reader. Formatting instructions would presumably also be a pain to hear, especially if there are a lot of them.
It's not uncommon that inexperienced editors will add piped inline interlanguage links to articles that exist on a different Wikipedia in order to avoid red links. This is a contravention of the MOS, as it surprises the reader, and prevents links to valid articles once they are created. Such piped links should be replaced with the {{ Interlanguage link}} template, e.g. Special:Diff/866719019.
Is this something that could feasibly be done by a bot? Are there valid intentional uses that shouldn't be changed? (I guess it's clearer with languages that use non-Latin script, since I can't think of a good reason to pipe a foreign name under English text, but I'm not sure about those which use the Latin alphabet.) -- Paul_012 ( talk) 02:48, 13 March 2020 (UTC)
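The mechanical part of the conversion (without guessing a better English target title, which still needs human judgement) might look like this sketch:

```python
import re

# [[:xx:Foreign title|Label]] -> {{ill|Label|xx|Foreign title}}
ILL = re.compile(r'\[\[:([a-z\-]+):([^|\]]+)\|([^\]]+)\]\]')

def pipe_to_ill(wikitext):
    """Convert piped inline interlanguage links to the {{ill}} template.
    This simple form reuses the visible label as the presumed English
    title and omits the |lt= display override."""
    return ILL.sub(r'{{ill|\3|\1|\2}}', wikitext)
```

A human pass would still be needed to pick a proper disambiguated English title (and add |lt= where the label differs), as in the Hidden Love example below.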
[[:th:ลมซ่อนรัก (ละครโทรทัศน์)|Hidden Love]]
, which would need to be converted to {{ill|Hidden Love (TV series)|lt=Hidden Love|th|ลมซ่อนรัก (ละครโทรทัศน์)}}
. --
Paul_012 (
talk) 18:58, 16 March 2020 (UTC)
There are a lot of URLs in sources that have Facebook tracking parameters attached; they should be deleted. ( https://en.wikipedia.org/?search=fbclid&title=Special:Search&fulltext=1&ns0=1) I think that would be a fine job for a bot, and as it probably happens unintentionally when editors copy and paste without much thinking, it should probably be done once per day or week or so. The same probably goes for Google Analytics extensions with UTM: https://en.wikipedia.org/?title=Spezial:Suche&limit=500&offset=0&ns0=1&search=utm_source&advancedSearch-current={} Grüße vom Sänger ♫ ( talk) 15:02, 22 February 2020 (UTC)
|url=
they are now mismatched and look like different URLs; other bots might pick up on that and restore the archive URL version of the source URL, since it is the authority (once the link is dead). Personally, I would bypass any citation that involves an archive URL; too many complications. --
Green
C 16:23, 22 February 2020 (UTC)
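Stripping the tracking parameters from a bare URL can be sketched with the standard library (and, per the caveat above, a bot would skip citations that carry archive URLs rather than risk a mismatch):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Facebook and Google Analytics tracking parameters.
TRACKERS = ('fbclid', 'utm_source', 'utm_medium', 'utm_campaign',
            'utm_term', 'utm_content')

def strip_tracking(url):
    """Remove known tracking parameters from a URL, leaving the rest
    of the query string untouched."""
    parts = urlsplit(url)
    kept = [(k, v)
            for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKERS]
    return urlunsplit(parts._replace(query=urlencode(kept)))
```

When all parameters are trackers, the `?` disappears along with the empty query string.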
Hello,
I have a working bot; its purpose is to give readers and editors alike information regarding the presence of different content in other Wikipedia language editions for the same article they are reading. This information can be used to guide the reader to content which will add to their study, and/or highlight that content in another language happens to be biased. I hope that such a bot would be used to ratchet up the level of discourse across language editions and spread useful knowledge between them.
My proposal is that the bot be allowed to add a small phrase to the 'See Also' section of a given article, such as, "The Russian edition of this article is 70% different from this edition. You can view it here."
As I was working on this bot, there was an ongoing discussion at the Idea Lab. You can view it at Wikipedia Edition Article Similarity Bot.
I assert that the bot works: its most limiting factor right now is that I only have access to 2 million characters of translation capability per month for article comparisons, which limits the bot to a relative handful of articles in output per month. You can see the code here.
This is not a bot request--it is a request for the bot to have edit capabilities. If there is a more appropriate place for this request, please let me know.
Theory42 ( talk) 16:16, 26 March 2020 (UTC)
To add a merge template to the other page where only one page has had the merge template added; that is, to add reciprocal tags. This has been proposed before, and developed consensus, but doesn't seem to have been finished or the scope has been expanded too far until it becomes controversial. Rather than starting from scratch, it might be possible to resurrect Mutleybot or to add this as a Merge bot task, something wbm1058 has suggested before ( Wikipedia:Bot requests/Archive 70#Removing bad merge requests). I suggest that the scope of the bot be simple, and that it not be designed to interpret merge consensus (or not), something that has been controversial in the past. Klbrain ( talk) 07:50, 10 April 2020 (UTC)
Hi. The main resource for sourcing basic biography data on Olympians, Sports Reference, has now been switched off. I started a recent thread about this at the Olympic Project. There are tens of thousands of articles that source Sports Ref. However, there's quite a simple fix that can be done to stop the links from going dead. Just change "cite web" to "cite sports-reference" in the ref, which, as per this example, adds the web archive link. This is per the recent change made to the cite template by Zyxw.
So therefore, please can a bot change anything from "cite web" to "cite sports-reference" where this is used on WP? Many thousands of articles already use the latter, but even more do not. Please ping me if you need any more info. Thanks. Lugnuts Fire Walk with Me 13:58, 17 May 2020 (UTC)
{{
cite sports-reference}}
this will create problems. --
Green
C 14:43, 17 May 2020 (UTC)
Done Around 150k links archived in around 100k articles. -- Green C 02:39, 24 May 2020 (UTC)
I think it would be effective to have a bot that condenses multiple “article issue” templates, such as “more citations needed” or “Missing information” into the “This article has multiple issues” so it appears as one notice instead of several consecutive notices. Users might forget to do this, or add to previously existing issue templates and forget to condense it using the ‘multiple issues’ template. I propose that this bot would apply the condensing template to any article with more than 2 notices at the top, or whatever the official guidelines are for this according to the Manual of Style as I’m not yet sure what they say about the number of templates allowed to appear. This would be fully automated as opposed to the semi-automation of the AutoWikiBrowser that already has this capability.
This might already exist or have been discussed, so forgive me if I’m wrong.
Thanks! MrSwagger21 ( talk) 10:50, 7 May 2020 (UTC)
There are many bots whose job involves making regular updates or are otherwise anticipated to make edits frequently. When such bots stop operating, it might just be because they're no longer needed and have been retired, or it might be indicative of a problem. I propose a bot that monitors the edits of other bots known to make frequent edits (those bots could be added to a category, or to one of several categories based on level of activity expected), and sends an automated alert to a noticeboard if the bot makes no edits within the expected timeframe. At the noticeboard, editors could review the alerts, marking some as no issue and placing others into a queue for repairs. (This is somewhat a follow-up to my brainstorming from March; feel free to lmk if it's just as non-viable, but I wanted to at least throw it out here.) {{u| Sdkb}} talk 17:20, 1 May 2020 (UTC)
meta=featureusage
, although doing so is tedious and would require knowing the agent for each bot. I'm skeptical of the utility of this in general, but in theory such a tool could check no-edit or minimal-edit bots that way. In practice,
User:Joe's Null Bot/source does not list a custom useragent, so it'll be MediaWiki::API/0.41
or whatever version it's using. Trivial to add. ~ Amory (
u •
t •
c) 15:14, 2 May 2020 (UTC)
Throwing out ideas: we could simply have a table of bots sortable by bot name, operator name, number of edits made, and date of last edit, then have a disclaimer at the top that several bots, like null bots, will not make edits. It would give a good idea at a glance of which bots are active and which aren't. Headbomb { t · c · p · b} 14:56, 2 May 2020 (UTC)
<center>—</center>
instead of hyphens to indicate a nonexistent entry.
Headbomb {
t ·
c ·
p ·
b} 18:14, 2 May 2020 (UTC)
@ Majavah: I made some tweaks [6]. The class="center" thing messes with column widths, so I went with <center> </center> tags. The final ' of diffs should be done with {{ '}} (or you could just make use of {{ '}} everywhere instead of '). But this table looks pretty good to me. Headbomb { t · c · p · b} 01:12, 5 May 2020 (UTC)
This task was previously handled by Acebot (BFRA here), but it stopped functioning in November 2019. The manual updates done by several editors since then indicate that there is continued demand for the information in the table. Its operator appears to have retired. {{u| Sdkb}} talk 17:05, 1 May 2020 (UTC)
Doing... - an opportunity to test out tabular data on Commons with a Lua template. If it works, the Lua module can be rolled out to other wiki languages without needing bot perms or bot edits. -- Green C 18:24, 1 May 2020 (UTC)
Done, new system working with Commons tabular data. Installed on 60+ wikis. -- Green C 02:41, 24 May 2020 (UTC)
I would like to generate a list of Wikipedia Editors on the Luganda Wikipedia by Article Count https://lg.wikipedia.org/wiki/Olupapula_Olusooka
To be able to generate something like this /info/en/?search=Wikipedia:List_of_Wikipedians_by_article_count — Preceding unsigned comment added by Kateregga1 ( talk • contribs) 19:54, 19 April 2020 (UTC)
Kateregga1, if you want the bot that generates Wikipedia:List_of_Wikipedians_by_article_count to also run for Lgwiki, post a request on the talk page of the list. I recently set it up on Trwiki for example. -- Green C 00:10, 23 April 2020 (UTC)
GreenC bot by @ GreenC: has a job that detects when a file on Wikipedia has the same name as one on Commons but is a different image, and tags the local file with Template:Shadows Commons, which puts it in Category:Wikipedia files that shadow a file on Wikimedia Commons.
I've been processing the files in that category, and many of the files on Commons are copyright violations, which are deleted within hours/days of upload. It would be useful for a bot to review the files tagged with Template:Shadows Commons and remove that template if there is no longer a file on Commons with the same name.
At any given time there are only a small number of files in that category, 30 or so, so this could potentially be done more than once a day without being very resource intensive, though once a day would be plenty useful. The Squirrel Conspiracy ( talk) 06:43, 21 March 2020 (UTC)
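The Commons-existence check is a single API query per file; a sketch of the decision step, assuming the standard action=query JSON response shape, where a page that does not exist carries a "missing" key:

```python
def commons_file_missing(api_response):
    """Decide from a Commons API response (action=query&titles=File:X,
    format=json) whether the file no longer exists there, i.e. whether
    {{Shadows Commons}} can be removed from the local copy."""
    pages = api_response.get('query', {}).get('pages', {})
    return any('missing' in page for page in pages.values())
```

With ~30 files in the category at a time, a daily run is only a few dozen queries.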
This is a simpler multi-article move than the last one I requested and withdrew, since the targets are all redlinks. See the discussion at Wikipedia talk:WikiProject Rivers#More tributary disambiguators to update and complete list of old and new titles at User:Dicklyon/tributaries, which are listed like these examples (about 500 of them):
I appreciate your help. Dicklyon ( talk) 18:05, 5 June 2020 (UTC)
OK, new list of about 54 from Certes has been reviewed and made explicit, herebelow. Dicklyon ( talk) 05:04, 7 June 2020 (UTC)
The list of vital articles gets updated on a regular basis. Sometimes page titles are changed. I think we should have a bot update the pages to make work easier for humans. Interstellarity ( talk) 13:14, 2 June 2020 (UTC)
I noticed that a "Wiki Loves Pride" mass message sent to all wikiprojects in June 2015 is not being auto-archived by some of those projects that set up autoarchiving (such as WT:IRAN). It is missing a timestamp. Can a bot be set up to archive all those messages that still remain on the main talk pages to the proper archives? Or can a bot be set up to add timestamps to all the messages that remain on the main talk pages? (June 2015 timestamp) This is 5 years out of date, and it seems odd to inform people to still do something 5 years after it already ended.
-- 65.94.170.207 ( talk) 18:50, 28 May 2020 (UTC)
Presumably a bot could put {{subst:Unsigned|Another Believer|15:13, 3 June 2015 (UTC)}} (timestamp may not be accurate) on the end of each of them then? Naypta ☺ | ✉ talk page | 22:34, 28 May 2020 (UTC)
Wikipedia has a very long list of section anchors that need to be repaired. To fix these broken links, it is necessary to add an {{ anchor}} whenever a section's title is changed.
User:Dexbot was designed to correct these broken links, but it hasn't corrected any of them in several years. Can Dexbot be configured to fix these links again? Jarble ( talk) 16:40, 22 May 2020 (UTC)
Done I started the bot, here's the first edit: Special:Diff/958520923 Ladsgroup overleg 08:31, 24 May 2020 (UTC)
This defunct bot removed inappropriate uses of {{ Current}} (which per its documentation is meant only for short-term use on articles receiving a high rate of edits) by removing it from articles that had not been edited in more than two hours. It stopped functioning, I think, in 2013, and since then (perhaps because it stopped) the standards have gotten increasingly lax. I propose that we bring it back (with perhaps a slightly longer edit window, at least to start). {{u| Sdkb}} talk 08:54, 19 May 2020 (UTC)
I am a bureaucrat on Real Life Villains Wikia and the other bureaucrat wanted to do a category cleanup, but changed his mind about some of the categories. Unfortunately, the user he tasked with removing the categories took his job too seriously and removed them anyway even after we decided to keep them. On any page where User:Super Poison Ivy removed the categories Anti-Semitic, Anti-Christian, Bully, Islamophobes, Fascist, and Communist, I want those categories to be restored. — Preceding unsigned comment added by Bjanderson94 ( talk • contribs)
Would it be controversial to request a bot to create redirects in the Wikipedia talk namespace to the talk pages of the targets of redirects in the Wikipedia namespace? I've typed WT:xxx, expecting it's a shortcut given WP:xxx is, only to be disappointed it doesn't exist from time to time. Nardog ( talk) 01:26, 24 February 2020 (UTC)
Should only...Why? It's not like WP: shortcuts technically exist in the main namespace, as in H:. I'd like e.g. WT:Actors to work, even though WP:Actors isn't marked as a shortcut. (I can see an argument for avoiding shortcuts to sections, though.) Nardog ( talk) 03:23, 24 February 2020 (UTC)
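If it helps scope the task, the mapping itself is mechanical. A minimal sketch, assuming the bot would drop section anchors rather than carry them over (per the hesitation about section shortcuts above); the function name is mine:

```python
def talk_shortcut_target(wp_target):
    # Given the target of a WP:xxx shortcut, derive the WT:xxx target:
    # "Wikipedia:Foo#Bar" -> "Wikipedia talk:Foo" (anchor dropped)
    page = wp_target.split("#", 1)[0]
    if not page.startswith("Wikipedia:"):
        raise ValueError("not a project-space target")
    return page.replace("Wikipedia:", "Wikipedia talk:", 1)
```

For example, a WP: shortcut pointing at Wikipedia:Manual of Style#Dashes would yield a WT: redirect to Wikipedia talk:Manual of Style.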
The {{ literal translation}} and {{ langnf}} templates were originally written with simple unquoted outputs of (if called with just "example text" as their argument) Spanish for example text and lit. example text. This isn't the best way to present a translated string, and in hundreds of articles users have very reasonably added quotemarks into the template calls (eg. {{langnf||Spanish|"Rich Port"}} and {{lit.|"Free Associated State of Puerto Rico"}} on the Puerto Rico article).
Last week User:Ravenpuff updated the two templates to include apostrophes around the translated phrase. This resulted in some articles displaying nested quotation marks, such as:
Puerto Rico (Spanish for '"Rich Port"'; abbreviated PR), officially the Commonwealth of Puerto Rico (Spanish: Estado Libre Asociado de Puerto Rico, lit. '"Free Associated State of Puerto Rico"')
I suggested adding a {{ trim quotes}} to the templates to avoid this, and Ravenpuff suggested fixing all of the hundreds or thousands of template calls in articles instead. Which sounds like a job for a bot, so here I am. A bot would simply be tasked with checking all usages of the {{ literal translation}} and {{ langnf}} templates (and their synonyms), to look for any argument that starts and ends with a quotation mark, and remove those marks. If an argument contained more than two quotemarks, which is perhaps plausible where an editor offers multiple translations, that should be flagged somehow.
Is this worth creating a bot for, or is {{ trim quotes}} a better solution? Or are there other options to explore? -- Lord Belbury ( talk) 10:52, 16 April 2020 (UTC)
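Whichever way it goes, the per-argument decision might look something like this (a sketch; the function name and the choice to flag multi-quote arguments for human review are my own, not the actual {{trim quotes}} logic):

```python
def trim_quotes(arg):
    # Strip one pair of surrounding straight quotemarks from a template
    # argument; return None to flag arguments with additional internal
    # quotemarks (e.g. multiple translations) for manual review.
    if len(arg) >= 2 and arg.startswith('"') and arg.endswith('"'):
        inner = arg[1:-1]
        if '"' in inner:
            return None  # more than two quotemarks: needs a human
        return inner
    return arg
```

So {{lit.|"Free Associated State of Puerto Rico"}} would become {{lit.|Free Associated State of Puerto Rico}}, while an argument offering two quoted translations would be flagged rather than edited.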
@ Jonesey95: Best I can do offhand has been User:Lord Belbury/sandbox, which needs another pass to ignore all italics markup, and I'm stumped by Lua's handling of curly quotes (I've never used Lua before): it's beyond me why a match of s:match([[([“])]]) is returning true for a single curly apostrophe. Will take another look later, would appreciate any feedback (or a pointer to a better talk page to ask for templating help).
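For what it's worth, the likely culprit: Lua patterns are byte-oriented, so a character class like [“] is really a class over the three UTF-8 bytes of “, and ’ shares its first two bytes with “, so the class matches part of the apostrophe. A quick Python illustration of the byte overlap (not the template's actual code):

```python
# UTF-8 encodings of the left curly quote and the curly apostrophe
ldquo = "\u201c".encode("utf-8")   # “ -> b'\xe2\x80\x9c'
rsquo = "\u2019".encode("utf-8")   # ’ -> b'\xe2\x80\x99'

# They share their first two bytes, so a byte-wise character class
# built from “ will match inside ’ as well.
assert ldquo[:2] == rsquo[:2]
```

The usual Lua workaround is to match the full multi-byte sequence literally (e.g. s:match("\226\128\156")) rather than putting it in a character class.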
While this is being worked on, should {{ literal translation}} and {{ langnf}} be left as they are (with Ravenpuff's simple "put quotemarks around every string, even if it already has them" update) or reverted to leaving the string unchanged? -- Lord Belbury ( talk) 10:50, 23 April 2020 (UTC)
As the result of a move request, 2019–20 coronavirus pandemic was moved to COVID-19 pandemic. Unfortunately, there are a metric tonne of articles (and templates and categories) that have "2019–20 coronavirus pandemic" (or "2020 coronavirus pandemic") in the name. Accordingly, it would be appreciated if we could get a bot that would move all of these to the consistent "COVID-19 pandemic" name. This matter was briefly discussed in the move request, with unanimous support for consistency, and it's quite obvious that all these titles should be in line with the main article, named so only because of the previous name.
While this is a one-time request, I believe this is too time-consuming with AWB as these are title changes. But happy to be told otherwise. -- tariqabjotu 03:14, 4 May 2020 (UTC)
Following up from this conversation, I think it would be helpful to have a bot automatically apply the appropriate padlock icon to pages after they become protected. {{u| Sdkb}} talk 09:39, 21 May 2020 (UTC)
Could I also mention that it would be useful to have a bot which fixes incorrect protection templates? MusikBot removes incorrect ones, as I pointed out here, but it doesn't replace them (and could be the cause of some of these missing templates). This seems like a related subject. RandomCanadian ( talk / contribs) 18:29, 21 May 2020 (UTC)
Coding... — MusikAnimal talk 17:34, 8 June 2020 (UTC)
BRFA filed — MusikAnimal talk 01:04, 11 June 2020 (UTC)
There is already an existing score parameter that can determine whether a team won or lost a match. The w/l parameter is deemed dubious and redundant, so the score parameter should be used to derive the win/loss logic instead.
Scenarios (scenario description, sample parameter usage, requested bot action):
1. Both w/l and score parameters are empty (|w/l= |score=): remove w/l parameter usage.
2. w/l value is empty and score value is a dash, whether hyphen/minus sign (|w/l= |score=-) or en dash (|w/l= |score=–): remove w/l parameter usage.
3. w/l is either W or L and score contains dash-separated numbers, e.g. |w/l=w |score=100-90 (hyphen/minus sign), |w/l=l |score=90–100 (en dash), |w/l=w |score=[http://www.game.com/boxscore/game/1 100–90], or |w/l=l |score=[[Duke–Michigan men's basketball rivalry|90–100]]: remove w/l parameter usage.
4. w/l is either W or L and score contains an HTML &ndash; between the scores, e.g. |w/l=w |score=100&ndash;90, |w/l=l |score=90&ndash;100, |w/l=w |score=[http://www.game.com/boxscore/game/1 100&ndash;90], or |w/l=l |score=[[Duke–Michigan men's basketball rivalry|90&ndash;100]]: remove w/l parameter usage.
5. w/l value is either w or l, and score contains any other value or is empty (i.e. the winner/loser of the match is known, but the final score is not available), e.g. |w/l=w or |w/l=l with |score=Default, |score=Forfeit, or |score=: rename the parameter to status.
6. w/l value is p (|w/l=p): rename the parameter to status.
7. w/l parameter not found: do nothing.
Let me know if I miss any other scenarios. – McVahl ( talk) 07:09, 11 June 2020 (UTC)
One more scenario: some score values use &ndash; instead of "–" (for example, |score=90&ndash;100). Sorry for late notice. I only observed it today when PrimeBot made some edits, as this case was not covered in the initial 25 amendments the other day. – McVahl ( talk) 06:27, 21 June 2020 (UTC)
Example. Missing "}}" is a not-too-uncommon problem. They can't be tracked by CS1|2 itself because the template is never invoked. I would caution against attempting an automated fix because when "}}" doesn't exist there are often other structural problems, and there might be embedded templates etc. -- Green C 15:29, 18 June 2020 (UTC)
A tracking template ({{ malformed template}}) is great because it could be visible in the wikitext, produce a red warning message, allow for a tracking cat, and have argument options for the bot name and date, plus whatever future requirements. -- Green C 17:05, 18 June 2020 (UTC)
About 106 pages in article and template space contain wikilinks that begin with w:en:, which is redundant, and VPT consensus was that this extra code can interfere with various tools and scripts that expect links to be in a certain form. Would it be possible for an AWB-wielding editor to go through and remove those prefixes, at least in article space? The edits in template space would need manual inspection to see if they are intentional for some reason. Pinging @ Redrose64, Trialpears, Xaosflux, Johnuniq, and BrownHairedGirl:, who attended that VPT discussion. – Jonesey95 ( talk) 03:55, 1 May 2020 (UTC)
Some links were of the form [[w:en:Foo|Bar]], which has now been changed to [[Foo|Bar]]. That's fine ... however, many of the links were of the form [[w:en:Foo|Foo]], and that first run has left them as [[Foo|Foo]], which needs to be consolidated as [[Foo]]. So I will do a second run through the set, just applying genfixes. -- BrownHairedGirl (talk) • ( contribs) 05:08, 1 May 2020 (UTC)
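The two passes described here reduce to a small substitution; a regex sketch (mine, not the actual AWB genfix code):

```python
import re

def strip_wen_prefix(wikitext):
    # Pass 1: drop the redundant w:en: prefix from wikilinks.
    text = re.sub(r"\[\[w:en:", "[[", wikitext)
    # Pass 2: consolidate [[Foo|Foo]] into [[Foo]], as genfixes would.
    return re.sub(r"\[\[([^|\]]+)\|\1\]\]", r"[[\1]]", text)
```

A real run would also need to handle case-insensitive first letters and piped links whose label differs only in whitespace, which this sketch ignores.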
Article space is done; those links no longer begin with w:en:. I will leave to others the manual inspection and possible cleanup of those templates. @ Jonesey95, Redrose64, Trialpears, Xaosflux, and Johnuniq: do any of you want to do the templates? -- BrownHairedGirl (talk) • ( contribs) 05:31, 1 May 2020 (UTC)
(should the bot keep a leading ":" in case the parameter is File/Category? any other bad side effects?)

If you have short citations like
{{harvnb|Smith|2001|pp=13}}
{{harvnb|Smith|2001|p=1-3}}
those will appear as "Smith 2001, pp. 13" and "Smith 2001, p. 1-3". Those are obviously wrong, and should be fixed so they would appear like this:
{{harvnb|Smith|2001|p=13}}
{{harvnb|Smith|2001|pp=1–3}}
i.e. "Smith 2001, p. 13" and "Smith 2001, pp. 1–3". Those should be an easy fix for an AWB bot or similar. Those should cover all {{ harv}}/{{ sfn}}-like templates. Headbomb { t · c · p · b} 13:02, 11 April 2020 (UTC)
{{harvnb|Smith|2001|p=p. 13}} → {{harvnb|Smith|2001|p=13}}
{{harvnb|Smith|2001|p=p. 1–3}} → {{harvnb|Smith|2001|pp=1–3}}
{{harvnb|Smith|2001|pp=pp. 13}} → {{harvnb|Smith|2001|p=13}}
{{harvnb|Smith|2001|pp=pp. 1–3}} → {{harvnb|Smith|2001|pp=1–3}}
{{harvnb|Smith|2001|p=pp. 13}} → {{harvnb|Smith|2001|p=13}}
{{harvnb|Smith|2001|p=pp. 1–3}} → {{harvnb|Smith|2001|pp=1–3}}
{{harvnb|Smith|2001|pp=p. 13}} → {{harvnb|Smith|2001|p=13}}
{{harvnb|Smith|2001|pp=p. 1–3}} → {{harvnb|Smith|2001|pp=1–3}}
Headbomb { t · c · p · b} 13:05, 11 April 2020 (UTC)
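For what it's worth, the whole conversion table reduces to a small substitution. A sketch (regex and naming are mine; it deliberately leaves hyphens untouched rather than converting them to en dashes, in case a hyphen is a legitimate page label):

```python
import re

def strip_page_prefix(wikitext):
    # Drop a redundant "p."/"pp." prefix inside |p=/|pp=, and pick the
    # parameter name from the cleaned value: en-dash range -> pp,
    # otherwise -> p. Hyphens are left as-is (they may be page labels).
    def repl(m):
        pages = m.group("pages")
        name = "pp" if "\u2013" in pages else "p"
        return f"|{name}={pages}"
    return re.sub(r"\|\s*pp?\s*=\s*pp?\.\s*(?P<pages>[\d\u2013-]+)", repl, wikitext)
```

This reproduces all eight rows above, e.g. |pp=pp. 13 becomes |p=13 and |p=p. 1–3 becomes |pp=1–3.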
Note that there are valid uses of p=3-1, for a document where page 1 of part 3 is called "3-1". – Jonesey95 ( talk) 05:00, 23 April 2020 (UTC)
One could use |page=3{{hyphen}}1 in those cases in CS1/CS2 templates. But if that's somehow not an acceptable solution here, the bot could take care of the rest. Or assume that |p=p. 3-4 should be converted to |p=3-4 and not |pp=3–4. Headbomb { t · c · p · b} 15:21, 2 May 2020 (UTC)

While working on a stub recently, I noticed the US Navy's Naval History and Heritage Command has updated the syntax of links to entries in the important reference Dictionary of American Naval Fighting Ships. This means that many outside links to the dictionary and tools like Template:DANFS (which is transcluded on hundreds if not thousands of US Navy ship articles) now have incorrect html targets. Here are three examples of repairs I've performed personally: [12], [13], [14]. As those examples reveal, the new webpage structure isn't complicated and while I suppose I could go through all the articles by hand and rapidly improve my edit count, this is exactly the sort of thing that an automated performer of edits would be best to solve. I've never before requested a bot, so I'm asking meekly for advice. BusterD ( talk) 15:54, 2 May 2020 (UTC)
Moved to WP:URLREQ#BFI. -- Izno ( talk) 12:32, 3 July 2020 (UTC)
After editing a lot of music articles that had no album cover in the Template:Infobox_album ( Category:Album_infoboxes_lacking_a_cover), I realized that it was a very repetitive process that could be streamlined by having a bot that:
I looked in the rejected ideas and bots, and it seems like none really tried to attack this. My programming knowledge is okay at best, but I couldn't get any of the Java frameworks working so I'm out of luck doing this myself. ⠀TOMÁSTOMÁSTOMÁS⠀ TALK⠀ 00:49, 22 June 2020 (UTC)
... a separate, specific non-free use rationale for each use of the item, as explained at Wikipedia:Non-free use rationale guideline. The rationale is presented in clear, plain language and is relevant to each use.
This is not possible for a bot to do except by means of boilerplated text, and that would imply that little or no thought has been put into the wording of the FUR. -- Redrose64 🌹 ( talk) 11:21, 23 June 2020 (UTC)
Could a bot run a SQL query or similar to compile COVID-19 data into an editable data sheet that another (or the same) bot could import to the Wikipedia COVID-19 pandemic update map/graph? — Preceding unsigned comment added by 80.41.138.48 ( talk) 16:24, 19 June 2020 (UTC)
I hope that someone can help me by making a bot add the template noting that an article has been added to Wikipedia:WikiProject Europe/The 10,000 Challenge, for example. There are several Challenge pages, and there are templates to be added to the talk pages of articles that have been added to a Challenge project page, but the bot stopped doing this task a long time ago. Please ping me if this can be done. BabbaQ ( talk) 17:17, 27 April 2020 (UTC)
Coding... I did a quick sample/proof of concept of going in and reviewing pages for eligibility; here's a random sampling of pages that appear to be eligible. Adding the template to the talk page is easy compared to unwinding the list.
@ BabbaQ: Was there a consensus discussion about applying {{ WPEUR10k}} to these talk pages? I suspect this isn't controversial, but it might be needed when I go to file the BRFA. Hasteur ( talk) 19:45, 7 June 2020 (UTC)
If any bot could take data from https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6 and https://www.worldometers.info/coronavirus/#countries and edit Template:Cases in 2019–20 coronavirus pandemic and Template:Territories affected by the 2019-20 coronavirus pandemic automatically with the latest information that would be great. Sam1370 ( talk) 00:21, 8 April 2020 (UTC)
Covid-19 is undoubtedly testing our public health, medical, and economic systems. But it's also testing our ability to process so much frightening and imminently consequential data. All these data add up to the Covid-19 "infowhelm," the term I use to describe the phenomenon of being overwhelmed by a constant flow of sometimes conflicting information.-- Green C 16:56, 6 May 2020 (UTC)
There are thousands of file talk pages in Category:File-Class United States articles for files that were moved to Commons and deleted in 2011 or 2012. These talk pages contain no content except a transclusion of {{ WikiProject United States}} (or one of its redirects) and should have been deleted long ago. These transclusions are of no use to the WikiProject and should be removed; however, simply removing them would leave these talk pages blank and mislead a viewer seeing a blue link into thinking there is something there. More broadly, there is no reason for these Commons files to be project-tagged on en.wikipedia—local talk pages for Commons files generally lead to split discussions or invite occasional comments that no one sees or answers.
I asked about these talk pages at the WikiProject's talk page (see Wikipedia talk:WikiProject United States#Categorizing files on Commons), and was told to "go with [my] own instincts on this". Any page in Category:File-Class United States articles that (1) does not have a corresponding file on en.wikipedia and (2) contains no content other than a transclusion of {{ WikiProject United States}} (or a redirect), should be speedily deleted under criterion G6 (routine housekeeping). Given the sheer number of pages involved, I am hoping a bot could take on the task. Thanks, -- Black Falcon ( talk) 23:21, 19 April 2020 (UTC)
Very easy typo task (no proper rights to do it myself): == Referencias == -> == References == and ==Referencias== -> == References == -- Emptywords ( talk) 09:02, 20 July 2020 (UTC)
For all citations to pages under westmidlandbirdclub.com, please add |url-status, thus, as the domain has been cyber-squatted. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 19:54, 14 July 2020 (UTC)
Done -- Green C 00:44, 18 July 2020 (UTC)
Hello!
I checked this category which is for so-called valid SVG files tagged with {{ Valid SVG}} however I noticed that many in fact were invalid. I would like a bot to check all files in the category to see if they are in fact valid or if the files are mistagged. Steps:
1. Check whether the file validates at http://validator.w3.org/check?uri=http:{{urlencode:{{filepath:{{#titleparts:{{PAGENAME}}}}}}}}; if yes, ignore; if no, see 2.
Pinging @ JJMC89: who is familiar with the File: namespace.
I think this is quite important to do since now probably hundreds of files are lying about their validity which isn't good.
Thanks! Jonteemil ( talk) 07:21, 7 May 2020 (UTC)
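A bot could build the same check URL that the wikitext pattern above describes; a sketch (the validator's query interface is assumed from the URL quoted in the request, and the function name is mine):

```python
from urllib.parse import quote

def validator_check_url(file_url):
    # Mirror the {{urlencode:{{filepath:...}}}} pattern: percent-encode
    # the file URL and pass it to the W3C validator's uri parameter.
    return "http://validator.w3.org/check?uri=" + quote(file_url, safe="")
```

The bot would then fetch that URL for each file in the category and retag files that fail validation.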
Can someone create a bot that will look at the latest date of the maps when the maps are updated and update the date automatically? I tried putting in the TODAY template, but I got reverted by Boing! said Zebedee that it would not work. I was hoping someone could work on a bot to save editors' time updating the dates on the maps. Interstellarity ( talk) 19:45, 26 May 2020 (UTC)
One option would be something like {{ Cases in the COVID-19 pandemic|date}}, fetching a value that would be stored at the Commons file and updated by the map updater whenever they upload a new version. As an aside, thank you, Interstellarity, for all the work you've put in updating map date captions; I recognize it's a tedious task. {{u| Sdkb}} talk 19:59, 26 May 2020 (UTC)
You could use {{wikidata|qualifier|Q81068910|P1846|P585}}, which produces the stored date, and you'd then only have to update the single qualifier on Wikidata ( wikidata:Q81068910#P1846) for it to update on all wikis. I've had a chat with a couple of admins about this and the general consensus is that it's okay to do performance-wise, but be careful with how you use this - using the wikidata template in this way can be taxing on the server, so try and use it the fewest amount of times you can! If you're happy with that method, I can run through and update the relevant bits on enwiki - you'll know better than I will where the bits are on the other wikis. Naypta ☺ | ✉ talk page | 18:26, 28 May 2020 (UTC)
Comma-separated values like A, B, C can instead be converted into a list, e.g. {{hlist|A|B|C}}. This is usually found in infoboxes. Additionally, values separated by a <br/> can also be converted into a list. I'mFeistyIncognito 16:39, 14 June 2020 (UTC)
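The conversion itself is straightforward to sketch (the separator regex is my guess at the common cases; real infobox values with nested templates or references would need more care):

```python
import re

def to_hlist(value):
    # Split on commas or <br/> variants and rebuild as {{hlist|...}}
    parts = [p for p in re.split(r"\s*(?:,|<br\s*/?>)\s*", value.strip()) if p]
    return "{{hlist|" + "|".join(parts) + "}}"
```

For example, "A, B, C" becomes {{hlist|A|B|C}}, and the same code handles <br/>-separated values.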
Note that lists, which produce <div>...</div> tags, cannot be wrapped by any tags or templates that use <span>...</span> tags, like {{ nowrap}}. If an infobox wraps a parameter with {{ nowrap}}, converting that parameter's contents to use {{ hlist}} will lead to invalid HTML output. – Jonesey95 ( talk) 22:17, 14 June 2020 (UTC)
== List of your created articles that are in [[:Category:Harv and Sfn no-target errors]] ==
A few articles you created are in need of some reference cleanup. Basically, some short references created via {{tl|sfn}} and {{tl|harvnb}} and similar templates have missing full citations or have some other problems. This is ''usually'' caused by copy-pasting a short reference from another article without adding the full reference, or because a full reference is not making use of citation templates like {{tl|cite book}} (see [[Help:CS1]]) or {{tl|citation}} (see [[Help:CS2]]). See [[Category:Harv and Sfn template errors#Resolving errors|how to resolve issues]]. To easily see which citation is in need of cleanup, you can check '''[[:Category:Harv and Sfn template errors#Displaying error messages|these instructions]]''' to enable error messages ('''Svick's script''' is the simplest to use, but '''Trappist the monk's script''' is a bit more refined if you're interested in doing deeper cleanup). The following articles could use some of your attention:
{{columns-list|colwidth=30em|
#[[Ancient 1]]
#[[Article 2]]
...
}}
If you could add the full references to those articles, that would be great. Again, the easiest way to deal with those is to install Svick's script per [[:Category:Harv and Sfn template errors#Displaying error messages|these instructions]]. If after installing the script, you do not see an error, that means it was either taken care of, or was a false positive, and you don't need to do anything else. Also note that the use of {{para|ref|harv}} is no longer needed to generate anchors. ~~~~
The bot should skip talk pages that already have "List of your created articles that are in [[:Category:Harv and Sfn no-target errors]]" in headers, since they already have such a report.
Headbomb { t · c · p · b} 23:18, 18 May 2020 (UTC)
Let's make a bot that creates each page that day at midnight. 95.49.166.194 ( talk) 13:10, 17 June 2020 (UTC)
I want a bot to do all of my editing. It is hard to do editing. It may help with deleting pages if you want to. Having a bot also puts less stress on editing. Was an explorer —Preceding undated comment added 14:21, 4 August 2020
Virtually every one of the 3000-ish places listed in the 132 sub-lists of National Register of Historic Places listings in Virginia has an article, and with very few exceptions, both lists and articles have coordinates for every place, but the source database has lots of errors, so I've gone through all the lists and manually corrected the coords. As a result, the lists are a lot more accurate, but because I haven't had time to fix the articles, tons of them (probably over 2000) now have coordinates that differ between article and list. For example, the article about the John Miley Maphis House says that its location is 38°50′20″N 78°35′55″W / 38.83889°N 78.59861°W, but the manually corrected coords on the list are 38°50′21″N 78°35′52″W / 38.83917°N 78.59778°W. Like most of the affected places, the Maphis House has coords that differ only a small bit, but (1) ideally there should be no difference at all, and (2) some places have big differences, and either we should fix everything, or we'll have to have a rather pointless discussion of which errors are too little to fix.
Therefore, I'm looking for someone to write a bot to copy coords from each place's NRHP list to the coordinates section of {{ infobox NRHP}} in each place's article. A few points to consider:
I've copied this request from an archive three years ago; an off-topic discussion happened, but no bot operators offered any opinions. Neither then nor now has any discussion has yet been conducted for this idea; it's just something I've thought of. I've come here basically just to see if someone's willing to try this route, and if someone says "I think I can help", I'll start the discussion at WT:NRHP and be able to say that someone's happy to help us. Of course, I wouldn't ask you actually to do any coding or other work until after consensus is reached at WT:NRHP. Nyttend ( talk) 15:53, 12 February 2020 (UTC)
You mean something like Excel's =whatever function, e.g. in cell L4 you type =B4 so that L4 displays whatever's in B4; is that right? If so, I don't think it would be useful unless it were immediately followed by whatever's analogous to Excel's "Paste Values". Is that what you mean by having a bot doing the swap? Since there are 3000+ entries, I'm sure there are a few errors somewhere, but I trust they're over 99% accurate. Nyttend ( talk) 02:57, 13 February 2020 (UTC)
A template could fetch the wins and nominations values from the infobox at the "list of awards", which means the main article doesn't need to be updated every time the list is changed.

I propose that a bot along the lines of {{ Brazil municipality}} be created to develop our stubs like Jacaré dos Homens, which have been lying around for up to 14 years in some cases. There are 5570 municipality articles, mostly poorly developed or inconsistent in data and formatting, even within different states. A bot would bring much needed information and consistency to the articles and leave them in a half-decent state for the time being. Igaci, which Aymatth2 expanded, is an example of what is planned and would happen to stubs like Jacaré dos Homens. Some municipalities have infoboxes and some information, but hopefully this bot will iron out the current inconsistencies and dramatically improve the average article quality. It would be far too tedious to do it manually, would take years, and they've already been like this for up to 14 years! So support on this would be appreciated. † Encyclopædius 12:09, 20 May 2020 (UTC)
Greetings. At WP:DYKN, the image size is based on the orientation of the image; vertical images at 120px, square at 140, and horizontal at 160. However there is no way to set the resolution during nomination, which means that even experienced editors often forget to fix the size of the image, and new editors don't know that they should.
I am proposing that a bot do a daily check and update the resolution where needed. In order to cut down on the amount of resources required, it needs only look at recent additions.
It would, I'm guessing, work something like this:
Sincerely, The Squirrel Conspiracy ( talk) 00:31, 7 June 2020 (UTC)
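The sizing rule itself could look something like this (the aspect-ratio cutoffs for "square" are my own assumption; the DYK convention above only specifies the three target widths):

```python
def dyk_image_width(width_px, height_px):
    # Vertical -> 120px, roughly square -> 140px, horizontal -> 160px.
    ratio = width_px / height_px
    if ratio < 0.9:       # clearly taller than wide
        return 120
    if ratio <= 1.1:      # roughly square (assumed tolerance)
        return 140
    return 160
```

The bot would fetch each nominated image's dimensions from the API and rewrite the nomination's image parameter when the width differs.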
Per Wikipedia talk:WikiProject Pharmacology/Archive 16#Molecular weights in drugboxes, I am requesting bot attention to remove the following regexp line:
/\| *molecular_weight *= *[0-9.]+ *g\/mol\n/
in articles that transclude Template:Infobox drug. There are a few rare variations that I can remove by hand or that require manual decision whether to remove, but this seems to be the vast majority and a conservative regex for it. This is a one-time cleanup pass that I started doing it with WP:JWB before I realized it was possibly the majority of the 12K articles in that transcluders list. DMacks ( talk) 19:18, 17 June 2020 (UTC)
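The requested removal, applied with Python's re module rather than JWB (same conservative pattern as above; function name is mine):

```python
import re

MW_LINE = re.compile(r"\| *molecular_weight *= *[0-9.]+ *g/mol\n")

def strip_molecular_weight(wikitext):
    # Remove plain "NNN g/mol" molecular_weight lines; anything fancier
    # (comments, templates in the value, etc.) is left for manual review.
    return MW_LINE.sub("", wikitext)
```

Lines that don't exactly match the conservative pattern are deliberately left alone, matching the plan to handle rare variations by hand.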
There is also a related pattern, /\| *C *= *\d/, i.e. "|C=\d". DMacks ( talk) 05:10, 18 June 2020 (UTC)
This task might be better for semi-automated editing than a straight bot, but I'll throw it out here. I often come across hatnotes and see also sections that link to an old title for a page, e.g. this sort of fix or this one. Would it be possible to create a bot or a tool that lists or fixes instances where hatnotes or see also sections include a redirect to a page that has been moved to a new title? {{u| Sdkb}} talk 05:30, 7 July 2020 (UTC)